[**Gravitational Lensing Effects of Fermion-Fermion Stars:\ I. Strong Field Case** ]{}\ Ke-Jian Jin$^{1,3}$, Yuan-Zhong Zhang$^{2,1^{*},4}$ and Zong-Hong Zhu$^{5,1}$\ > [We investigate a two-component model for gravitational lenses, i.e., the fermion-fermion star: a self-gravitating dark-matter system made of two kinds of fermions with different masses. We calculate deflection angles ranging from arcseconds up to degrees. There is one Einstein ring. In particular, we find three radial critical curves for the radial magnification and four or five images of a point source. These features differ from those of one-component models such as fermion stars and boson stars, because the fermion-fermion star is a two-component concentric sphere. Our results suggest that any observation of more than three images could imply a polytropic distribution of the mass inside the lens.\ > PACS numbers: 98.62.Sb, 95.35.+d, 04.40.-b ]{} It is suggested that most of the matter in the universe may be dark. Several types of dark matter distribution, such as local dark matter, galaxy dark matter, cluster dark matter and background dark matter, exist in the universe \[1\]. The dark matter may consist of bosons and/or fermions \[2\]. Many authors have studied self-gravitating dark-matter systems, e.g., fermion stars \[3\], boson stars \[4\], boson-fermion stars \[5\] and fermion-fermion stars \[6\]. Such dark matter stars could be formed by ejecting part of the dark matter, which carries away the excess kinetic energy \[7\]. This may also be a mechanism for the formation of such dark matter stars, although the finite-temperature case still needs to be studied. It is assumed that the only coupling of the dark matter stars to ordinary matter and radiation is gravitational, so the stars are transparent and allow light to pass through them. 
General relativity predicts the deflection of light in gravitational fields, which underlies the use of compact objects as gravitational lenses. The basic theory of gravitational lensing was developed by Liebes and others \[8\]. The first example of gravitational lensing, the twin images QSO 0957+561 A,B separated by 5.7 arcseconds at the same redshift $z_{s}=1.405$ and mag $\approx$ 17, was discovered in 1979 \[9\]. In 1988 Hewitt et al. \[10\] observed the first Einstein ring, MG1131+0456, at redshift $z_{s}=1.13$. Schwarzschild gravitational lensing in the weak-field region is well known \[11\]. Recently, gravitational lensing effects in the strong-field regions of black holes, neutron stars and boson stars were discussed \[12\]. However, all these stars have a smooth mass distribution. In the present paper, we give a two-component model for gravitational lenses: we calculate the gravitational lensing effects of relativistic fermion-fermion stars (for the case of weak gravitational fields, see \[13\]). We find that there are typically four or five images of a point source, different from the cases of both fermion stars and boson stars. This is because the mass distribution inside both boson stars and fermion stars is smooth, while the fermion-fermion star is a two-component concentric sphere (a polytropic mass distribution). We also obtain one tangential critical curve (Einstein ring) for the tangential magnification and three radial critical curves for the radial magnification. Consider a two-component model consisting of two types of fermions with different masses $m_1$ and $m_2$. Each of the two types is assumed to be a Fermi gas. For the system, Einstein’s field equations (in G=c=1 units) read $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R =-8\pi\left[ T_{(1)\mu\nu}+T_{(2) \mu\nu} \right]. 
\eqno (1)$$ The assumption that the only interaction inside the system is gravitational implies the covariant conservation equations, $$T_{(i)~~;\nu}^{~\mu\nu}=0, ~~~~(i=1,2), \eqno (2)$$ where $$T_{(i)\mu}^{~~~\nu}={\rm diag}(-\rho_{i},~ p_{i},~ p_{i},~p_{i}) \eqno (3)$$ is the energy–momentum tensor for the [*i*]{}-th Fermi gas. The equations of state, in parametric form, are $$\rho_{i}=K_{i} ({\rm sh} t_{i} -t_{i}), \eqno(4)$$ $$p_{i}=\frac{1}{3}K_{i} \left({\rm sh}t_{i}-8 {\rm sh} \frac{1}{2}t_{i}+3t_{i} \right), \eqno(5)$$ where $$K_i \equiv \pi m_{i}^{4}/4h^{3}. \eqno(6)$$ Let $$K_{2}=\frac{1}{4\pi}, \eqno(7)$$ then the unit of length is $$\xi =\frac{1}{\pi}\left(\frac{h}{m_{2}c}\right)^{3/2}\frac{c}{( m_{2}G)^{1/2}} =\frac{392}{[m_{2}({\rm eV})]^{2}}{\rm kpc}. \eqno(8)$$ Correspondingly the unit of mass is $$\eta =\frac{c^2 \xi}{G}=\frac{8.18\times 10^{18}}{[m_{2} ({\rm eV})]^2}{\rm M_{\odot}}. \eqno(9)$$ For the polytropic sphere model, the general metric takes the form $$ds^2 =e^{\nu}dt^2 -e^{\mu}dr^2 -r^2 (d\theta ^2 +\sin^2 \theta d\phi^2) \eqno(10)$$ with ${\nu}$ and ${\mu}$ being functions of the radial distance $r$ from the center of the star. Finally, we get the basic equations as follows: $$\frac{dt_{i}}{dr}=- 4\frac{M +4\pi r^3 p}{r(r-2M)} {\rm cth}\frac{1} {4}t_{i}, ~~~ ~~~~ \frac{dM_{i}}{dr}=k_{i}r^2 ( {\rm sh}t_{i}-t_{i}), \eqno(11)$$ with $k_{i}\equiv 4\pi K_{i}$, and $$e^{\mu}=(1-2M/r)^{-1},~~~ ~~~ \frac{de^{\nu}/dr}{2e^{\nu}}= \frac{M +4\pi r^3 p}{r(r-2M)}, \eqno(12)$$ where $$M =M_{1}+M_{2}, ~~~~p=p_{1}+p_{2}, \eqno(13)$$ with the boundary conditions: $$t_{i}(r=0)=t_{i0}> 0, ~~~~ M_{i}(r=0)=0, \eqno(14)$$ and $$t_{i}(r=R_i )=0, \eqno(15)$$ where $R_i $ is the radius of the $i$-th fermion sphere. 
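As a sanity check on the unit conversions (8) and (9), the quoted numerical coefficients can be reproduced directly from standard SI constant values; a short Python sketch (the constants below are textbook values, not taken from the paper):

```python
import math

# Physical constants (SI units, standard values)
h = 6.62607e-34      # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
eV = 1.78266e-36     # mass equivalent of 1 eV/c^2 [kg]
kpc = 3.0857e19      # kiloparsec [m]
Msun = 1.989e30      # solar mass [kg]

def length_unit_kpc(m2_eV):
    """Length unit xi of Eq. (8) for a fermion of mass m2 (in eV), in kpc."""
    m2 = m2_eV * eV
    xi = (1.0/math.pi) * (h/(m2*c))**1.5 * c / math.sqrt(m2*G)
    return xi / kpc

def mass_unit_Msun(m2_eV):
    """Mass unit eta = c^2 xi / G of Eq. (9), in solar masses."""
    xi = length_unit_kpc(m2_eV) * kpc
    return c**2 * xi / G / Msun

print(length_unit_kpc(1.0))  # ~392 kpc, as in Eq. (8)
print(mass_unit_Msun(1.0))   # ~8.18e18 solar masses, as in Eq. (9)
```

Both coefficients agree with (8) and (9) at the sub-percent level.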
A light ray is deflected by the gravitational field; its deflection angle is given by \[14\] $$\hat{\alpha}(r_0)=\Delta\phi (r_0)-\pi, \eqno (16)$$ with $$\Delta\phi (r_0)=2\int^{\infty}_{r_0}\frac{e^{\mu/2}}{\sqrt {\left(r^4/b^2\right)e^{-\nu}-r^2 }} dr \eqno (17)$$ where $r_{0}$ is the closest distance between the light ray and the center of the gravitational force, and the impact parameter $b$ is defined by $$b=r_{0}\exp[-\nu (r_{0})/2]. \eqno(18)$$ For the case of a light ray deflected by a fermion-fermion star, equation (17) becomes $$\Delta\phi (r_0)=2\int^{R}_{r_0} \frac{e^{\mu/2}}{\sqrt{ \left(r^4/b^2\right)e^{-\nu}-r^2}}dr + 2\int^{\infty}_{R} \frac{1}{\sqrt{\left(r^4/b^2\right) - r^2\left[1-2M(R)/r\right]}}dr, \eqno(19)$$ where $R$ is the radius of the star. An observer O and a point source S are assumed to be located in an asymptotically flat spacetime far away from a fermion-fermion star (as a lens L). Let $D_{ol}$ denote the distance from the observer to the center of the lens, $D_{ls}$ the distance between the lens and the source, and $D_{os}$ the distance between the observer and the source. An image position is specified by the angle $\theta$ between OL and the tangent to the null geodesic at the observer. $\beta$ stands for the true angular position of the source. The lens equation may be expressed as \[12\] $$\sin (\theta -\beta )=\frac{D_{ls}}{D_{os}}\sin\hat{\alpha}. \eqno (20)$$ From the geometry of the lens we have $$\sin\theta = \frac{b}{D_{ol}} = \frac{r_0}{D_{ol}}e^{-\nu(r_0)/2}. \eqno(21)$$ The magnification of images is given by $$\mu =\left(\frac{\sin\beta}{\sin\theta}\frac{d\beta}{d\theta} \right)^{-1}. \eqno(22)$$ The tangential and radial critical curves (TCC and RCC, respectively) follow from the singularities of the tangential magnification $$\mu_{t}\equiv \left(\frac{\sin\beta}{\sin\theta} \right)^{-1} \eqno(23)$$ and the radial magnification $$\mu_{r}\equiv \left(\frac{d\beta}{d\theta} \right)^{-1}. 
\eqno(24)$$ Then we have from (20) $$\mu_{r}^{-1} =\frac{d\beta}{d\theta}=1-\frac{D_{ls}}{D_{os}} \frac{\cos\hat{\alpha}}{\cos(\theta -\beta)}\frac{d\hat{\alpha}} {dr_0}\frac{dr_0}{d\theta}, \eqno(25)$$ where $\cos(\theta -\beta)$ can be found from (20), $dr_{0}/d\theta$ can be obtained by differentiating (21) with respect to $r_0$, $$\frac{dr_0}{d\theta}=\frac{2D_{ol}e^{\nu(r_0)/2}\sqrt{1- \frac{r_{0}^{2}}{D_{ol}^{2}}e^{-\nu(r_0)}}}{2-r_{0}\frac{d\nu(r_0) }{dr_0}}, \eqno(26)$$ and $d\hat{\alpha}/dr_{0}$ can be calculated by parametric differentiation of (16) and (17) with respect to $r_0$. We numerically calculated the angle $\beta$ as a function of the angle $\theta$, together with the tangential and radial magnifications. In the present paper, we consider the maximal fermion-fermion star. The relevant parameters of the star are: the mass ratio of the two kinds of fermions $m_{2}/m_{1}=5$, the maximal central gravitational redshift $z_{c}=1.22$, the total mass $M=1.73\eta$, and the radius $R=16.6\xi$. Using the definitions (9) and (8) for $\eta$ and $\xi$, the total mass of the maximal star is $M=1.42 \times 10^{19}{\rm M_{\odot}}/[m_{2}({\rm eV})]^2$ and the radius is $R=6.51 {\rm Mpc}/[m_{2}({\rm eV})]^{2}$. For example, in the case of $m_{2}=10{\rm eV}$ and thus $m_{1}=2{\rm eV}$, $M=1.42\times 10^{17}{\rm M_{\odot}}$ and $R=65.1~{\rm kpc}$; for $m_{2}=10{\rm GeV}$ and thus $m_{1}=2{\rm GeV}$, $M=0.14 M_{\odot}$ and $R=2.0 {\rm km}$. The angular position $\beta$ of the point source, the tangential magnification $\mu_t$, and the radial magnification $\mu_r$ are plotted against the angular position $\theta$ of the image in figures 1–4. In Fig. 1 (it is assumed that $D_{ls}/D_{os}=1/2$ and $D_{ol}=5\times 10^{5} \xi$), the continuous curve denotes the plot of the source position angle $\beta$ against the image position angle $\theta$, and the lines of $\beta=$ constants are given by the dashed straight lines. 
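As a check on the kind of numerics entering these curves, the exterior (vacuum Schwarzschild) piece of the deflection integral (19) can be evaluated and compared with the weak-field limit $\hat{\alpha}\simeq 4M/b$. A sketch; the substitutions $u=r_0/r$ and $u=1-s^2$ are ours, introduced to remove the integrable endpoint singularity at $r=r_0$:

```python
import numpy as np
from scipy.integrate import quad

def deflection(r0, M):
    """hat_alpha(r0) for a light ray in the exterior Schwarzschild
    metric (the second integral of Eq. (19) taken over the whole ray).
    After u = r0/r and u = 1 - s**2, the radicand factorizes as
    s**2 * h(s) with h smooth and positive for r0 > 3M, so the
    integrand 2/sqrt(h) is regular on [0, 1]."""
    eps = 2.0*M/r0
    def f(s):
        h = 2.0 - 3.0*eps - s*s*(1.0 - eps*(3.0 - s*s))
        return 2.0/np.sqrt(h)
    dphi, _ = quad(f, 0.0, 1.0)
    return 2.0*dphi - np.pi       # Eq. (16)

M = 1.0
r0 = 1.0e4                        # weak-field regime, r0 >> 2M
b = r0/np.sqrt(1.0 - 2.0*M/r0)    # impact parameter, Eq. (18)
print(deflection(r0, M), 4.0*M/b)  # the two should nearly agree
```

For $M=0$ the integral evaluates to $\Delta\phi=\pi$ exactly (no deflection), and the deflection grows without bound as $r_0\to 3M$, the photon sphere.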
The intersections between the lines (with $\beta=$constants) and the continuous curve indicate the angular positions of the images; their number ranges from 1 to 5. The intersections with $\theta \not= 0$ and $\beta=0$ represent the Einstein ring, which corresponds to the tangential critical curve (TCC) coming from the singularity of $\mu_t$ in Fig. 2. The continuous curve with $\beta >0$ or $\beta <0$ shows two peaks following from the two-component concentric sphere structure of the fermion-fermion star, so that the number of images may be 4 or 5. The maximum (28 degrees) of $\beta$ occurs at about $\theta=0.25$ arcseconds, so the maximal reduced deflection angle is $|\alpha|=|\theta -\beta|\simeq 28^{\circ}$. From (20), and because $\theta\ll\beta$ and $D_{ls}/D_{os}=1/2$, we have the maximal deflection angle $\hat{\alpha}_{max} \simeq\sin^{-1}(2\sin 28^{\circ})\simeq 70^{\circ}$. In Fig. 2 the tangential magnification $\mu_{t}$ is plotted as a function of the image position $\theta$. The singularity in the magnification $\mu_{t}$ shows the angular position of the Einstein ring (TCC), which is at about $\theta=542$ arcseconds. Figure 3 gives the radial magnification $\mu_{r}$ as a function of $\theta$. There are three singularities in the magnification $\mu_{r}$, corresponding to the three extreme values (i.e., two maxima and a minimum) on the continuous curve of $\beta(\theta)>0$ in Fig. 1; they indicate the angular positions of the three radial critical curves (RCCs) at about $\theta=0.254, 0.883, 3.811$ arcseconds, respectively. Compared with one-component models such as boson stars \[12\] and fermion stars, the results in figures 1 and 3 exhibit two differences: there may be four or five images of the point source, and there are three radial critical curves. 
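The image multiplicity described above can be illustrated with a toy deflection profile possessing two maxima, as the two-component structure produces. In the small-angle form of the lens equation (20), counting solutions of $\beta(\theta)=\beta_0$ then gives one, three, or five images depending on the source position. The amplitudes and peak angles below are illustrative only, not derived from the fermion-fermion star model:

```python
import numpy as np

def beta(theta, D=0.5):
    """Toy source position beta(theta) = theta - D*alpha_hat(theta),
    the small-angle form of Eq. (20) with D = D_ls/D_os.  alpha_hat
    is modelled as two odd bumps peaking at th0 = 1 and th0 = 10
    (angles in arbitrary arcsecond-like units; A1, A2 illustrative)."""
    def bump(th, A, th0):
        x = th/th0
        return A*x*np.exp(0.5*(1.0 - x*x))   # peak value A at th = th0
    alpha = bump(theta, 40.0, 1.0) + bump(theta, 30.0, 10.0)
    return theta - D*alpha

def n_images(beta0, grid=np.linspace(-30.0, 30.0, 60001)):
    """Count solutions of beta(theta) = beta0 by sign changes on a grid."""
    vals = beta(grid) - beta0
    return int(np.sum(vals[:-1]*vals[1:] < 0.0))

print(n_images(5.5), n_images(3.0))  # 5 and 3 images for this toy profile
```

Source positions between the two local extrema of the $\beta(\theta)>0$ branch produce five images; outside that window the usual three (or one) appear, matching the qualitative behaviour of Fig. 1.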
This is because the mass distribution of fermion-fermion stars is generally not smooth but has a two-component structure. Our results suggest that any observation of more than three images could imply a polytropic distribution of the mass inside the lens. Finally, we note that the function $\beta (\theta)$ depends upon the values of $D_{ls}/D_{os}$ and $D_{ol}$, $\beta$ becoming smaller as $D_{ol}$ grows. In figure 4 we give, as an example, the source position angle $\beta$ as a function of the image position $\theta$ for the case of $D_{ol}=1.5\times 10^{11}\xi$ and $D_{ls}/D_{os}=1/2$, where the values of $\beta$ and $\theta$ are of the same order in arcseconds. This work was partially supported by the National Natural Science Foundation of China under Grants Nos. 19745008 and 19835040. [**—————————**]{} $^{*}$ Mailing address;   Electronic address: [email protected]\ or [email protected] 1. See, e.g., M.S. Turner, in the Proceedings of the Second International Workshop on Particle Physics and the Early Universe, Asilomar, USA, Nov. 15–20, 1998. 2. L. Roszkowski, in the Proceedings (see \[1\]); hep-ph/9903467. 3. M.A. Markov, Phys. Lett. [**10**]{} 122 (1964); J.G. Gao and R. Ruffini, Acta Astrophys. Sin. [**1**]{} (1981) 19; C.R. Ching, T.H. Ho and Y.Z. Zhang, Commun. Theor. Phys. (China) [**2**]{} 1145 (1983). 4. M. Colpi, S.L. Shapiro and I. Wasserman, Phys. Rev. Lett. [**57**]{} 2485 (1986). 5. A.B. Henriques, A.R. Liddle and R.G. Moorhouse, Phys. Lett. B [**233**]{} (1989) 99; Nucl. Phys. [**B337**]{} 737 (1990). 6. Y.Z. Zhang and K.J. Jin, Phys. Lett. [**A128**]{} 309 (1988); K.J. Jin and Y.Z. Zhang, Phys. Lett. [**A142**]{} 79 (1989). 7. E. Seidel and W.M. Suen, Phys. Rev. Lett. [**72**]{} 2516 (1994). 8. S. Liebes, Jr., Phys. Rev. [**B133**]{} 835 (1964); S. Refsdal and J. Surdej, MNRAS [**128**]{} 295 (1964); R.R. Bourassa and R. Kantowski, ApJ [**195**]{} 13 (1975); for a review see: R. Narayan and M. Bartelmann, Lectures on gravitational lensing, astro-ph/9606001. 9. D. Walsh, R.F. Carswell, and R.J. Weymann, Nature [**279**]{} 381 (1979). 10. J.N. Hewitt et al., Nature [**333**]{} 537 (1988). 11. P. Schneider, J. Ehlers, and E.E. Falco, 1992, Gravitational Lenses, Springer-Verlag, Berlin. 12. K.S. Virbhadra, D. Narasimha, and S.M. Chitre, Role of the scalar field in gravitational lensing, astro-ph/9801174; M.P. Dabrowski and F. Schunck, Boson stars as gravitational lenses, astro-ph/9807039; K.S. Virbhadra and G.F.R. Ellis, Schwarzschild black hole lensing, astro-ph/9904193. 13. Z.H. Zhu, K.J. Jin, and Y.Z. Zhang, Gravitational lensing effects of fermion-fermion stars: II. Weak field case, submitted. 14. S. Weinberg, 1972, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, John Wiley & Sons, NY. FIG. 1. Gravitational lensing for the maximal fermion-fermion star. The true angular position $\beta$ of the point source is plotted as a function of the image position $\theta$ with $(D_{ls}/D_{os})=1/2$ and $D_{ol}=10^{5}\xi$, given by the continuous curve. The values of $\theta$ corresponding to the intersections between the dashed lines $\beta={\rm constants}$ and the continuous curve indicate the image positions; it is shown that the number of images may range from 1 to 5. The intersections of the straight line $\beta =0$ with the curve represent the Einstein ring at $|\theta |= 542$ arcseconds. Note that the curve with $\beta>0$ or $\beta<0$ shows two peaks which come from the two-component structure of the fermion-fermion star.\ FIG. 2. Corresponding to the curve in Fig. 1, the tangential magnification $\mu_{t}$ is plotted as a function of the image position $\theta$.\ FIG. 3. The radial magnification $\mu_{r}$ as a function of $\theta$. Because of the three extreme values (i.e., two maxima and a minimum) of the continuous curve with $\beta(\theta)>0$ in Fig. 
1, there are three singularities in the magnification $\mu_{r}$.\ FIG. 4. Gravitational lensing for the maximal fermion-fermion star in the case of $(D_{ls}/D_{os})=1/2$ and $D_{ol}=1.5 \times 10^{11}\xi$. The angular positions of the point source and images are all of the same order in arcseconds.
--- abstract: 'We study the transverse spectra of hadrons in nearly central $AA$ collisions at RHIC and LHC in a broad transverse momentum range [@Eskola:2005ue]. Low-$p_T$ spectra are calculated by using boost-invariant hydrodynamics with initial energy and net-baryon densities from the EKRT [@EKRT] pQCD+saturation model. High-$p_T$ spectra are obtained from a pQCD jet calculation [@EH03] including the energy loss of the parton [@EHSW04] in the matter prior to its fragmentation into final hadrons.' address: - 'Department of Physics, P.B. 35, FIN-40014 University of Jyväskylä, Finland' - 'Helsinki Institute of Physics, P.B. 64, FIN-00014 University of Helsinki, Finland' - 'Department of Physics, University of Virginia, P.O.B. 400714, Charlottesville, VA 22904-4714, USA' author: - 'K. J. Eskola, H. Honkanen, P. V. Ruuskanen, and S. S. Räsänen.' title: Transverse Spectra of Hadrons in Central $AA$ Collisions at RHIC and LHC from pQCD+Saturation+Hydrodynamics and from pQCD+Energy Losses --- Introduction ============ Transverse momentum spectra of hadrons in ultrarelativistic nuclear collisions provide valuable information on the particle production mechanism in the collisions as well as on the dynamics and properties of the produced QCD matter. Low-$p_T$ features of the single-particle spectra are well described by hydrodynamical models, and the data are consistent with ideal-fluid behaviour of the matter. The observed azimuthal asymmetry in non-central Au+Au collisions at RHIC has been argued to result from strong collective motion and early thermalization of the produced partonic matter. The suppression observed in the high-$p_T$ tail of the spectra relative to p+p and d+Au collisions is understood as the energy loss of high-$p_T$ partons in the thermalized partonic matter. 
Our approach is to calculate the initial particle production using the EKRT model, which is based on the idea that low-$p_T$ particle production is controlled by saturation among the final-state gluons, in contrast to initial-state saturation models, where saturation is a property of the colliding nuclei. The EKRT model provides a closed framework to calculate the initial transverse energy and net-baryon number at midrapidity in nuclear collisions with sufficiently large $\sqrt{s}$ and $A$. Final low-$p_T$ hadron spectra are calculated by using hydrodynamics with initial densities from the EKRT model. Good agreement with the measured data in central Au+Au collisions at RHIC is obtained [@Eskola:2005ue; @Eskola:2002wx]. We further predict the hadron spectra in central Pb+Pb collisions at the LHC [@Eskola:2005ue]. The high-$p_T$ part of the spectra is calculated using the factorized pQCD parton model for high-$p_T$ parton production, taking into account the parton energy loss before fragmentation. We compare these two models with the RHIC data and determine the regions where each component is dominant. In particular, we find that the low-$p_T$ hydrodynamical spectrum dominates over the fragmentation spectrum in a much wider $p_T$ region at the LHC than at RHIC. We discuss the independence of the two components and conclude that they are most likely almost independent even in the cross-over region. Models ====== pQCD + saturation + hydrodynamics --------------------------------- The EKRT model [@EKRT] estimates the final-state saturation scale using the following geometric criterion: saturation becomes important when the produced gluons fill the whole available transverse area of the colliding nuclei. For central collisions this can be written as $$N_{AA}(q_0, \sqrt{s}, A) \frac{\pi}{q_0^2} = \pi R_A^2,$$ where $N_{AA}$ is the number of gluons above a transverse momentum cut-off, $q_T > q_0$, in the rapidity interval $|y|\leq 0.5$, and $R_A$ is the nuclear radius. 
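Numerically, the criterion above determines $q_0$ once a model for $N_{AA}$ is specified. A sketch, using an illustrative power-law parametrization of $N_{AA}$ whose constants and exponents are placeholders, not the published EKRT fit:

```python
from scipy.optimize import brentq

hbarc = 0.1973  # GeV fm

def p_sat(sqrt_s, A, C=1.3, a=0.38, b=0.9, g=2.9):
    """Solve the saturation condition N_AA(q0) * pi/q0**2 = pi R_A**2
    for q0 = p_sat.  N_AA is modelled by an illustrative power law
    N_AA = C * A**(2a) * sqrt_s**b * q0**(-g)  (GeV units); the
    constants C, a, b, g are placeholders, not the EKRT parametrization."""
    R_A = 1.12*A**(1.0/3.0)                  # nuclear radius [fm]
    def f(q0):
        N = C * A**(2.0*a) * sqrt_s**b * q0**(-g)
        return N/q0**2 - (R_A/hbarc)**2      # both sides in GeV^-2
    return brentq(f, 0.2, 20.0)              # LHS decreases in q0: one root

ps = p_sat(200.0, 197)    # RHIC Au+Au
tau0 = hbarc/ps           # formation time tau_0 = 1/p_sat, in fm/c
```

With any parametrization of this type, $p_{sat}$ grows with $\sqrt{s}$, so the formation time $\tau_0=1/p_{sat}$ is shorter at the LHC than at RHIC.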
This condition provides the saturation scale $p_{sat}=q_0$. If $p_{sat} \gg \Lambda_{QCD}$, pQCD can be used to estimate the number of produced partons and the amount of transverse energy at midrapidity. This approach also gives the net-baryon number at midrapidity. If we assume that the produced matter thermalizes immediately after production at $\tau_0=1/p_{sat}$, we obtain the initial energy and net-baryon density at $\tau_0$ for the hydrodynamical evolution. We use ideal-fluid hydrodynamics with boost invariance and azimuthal symmetry. In the bag-model equation of state, the hadron gas phase consists of all hadronic states with $m<2$ GeV and the QGP phase of massless gluons and three flavors of quarks. The critical temperature is chosen to be $165$ MeV. After the hydrodynamic expansion and cooling of the matter, the Cooper-Frye decoupling prescription is applied for the calculation of the low-$p_T$ spectra. Below we show the sensitivity of the resulting spectra to the decoupling condition. pQCD + fragmentation + energy loss ---------------------------------- High-$p_T$ spectra are calculated using nuclear parton distributions, pQCD parton cross sections, fragmentation functions and quenching weights for the energy losses. We use leading-order perturbative cross sections with K-factors fixed from $p+p(\bar{p})$ data [@EH03]. The K-factors are extrapolated to the LHC energy by using different parametrizations to estimate the uncertainties in the extrapolation. The magnitude of the energy losses can be expressed with one effective transport coefficient, which is fixed from the RHIC Au+Au data at $\sqrt{s_{NN}} = 200$ GeV [@EHSW04]. Since the transport coefficient is proportional to the energy density, fixing it in one collision system predicts it for others. There is quite a large uncertainty associated with the eikonal approximation used in the energy loss calculation; arbitrarily large energy losses are allowed, whereas the energy of real jets is limited. 
There are different ways to deal with this [@EHSW04], shown here as an uncertainty in our pQCD fragmentation + energy loss results. Results ======= In Fig. \[fig:charged200\]a we show our results for unidentified charged hadrons at midrapidity for the 5% most central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV. As mentioned above, we show our hydro results with two different decoupling temperatures to illustrate the sensitivity to the decoupling condition. From the figure we observe that both the normalization and the slope of the spectra are well reproduced with a single decoupling temperature $T_{dec} = 150$ MeV. The spectra from the fragmentation calculation with and without energy loss are plotted in the same figure. It is clearly seen that the measured spectra at high $p_T$ cannot be explained without energy loss. The transport coefficient that determines the energy loss is fixed from these data, so in this case the agreement is obtained by construction. The band in the energy loss results shows the uncertainty of the eikonal approximation. The spectra of identified particles can be found in [@Eskola:2005ue]. Fig. \[fig:charged200\]b shows our prediction for the 5% most central Pb+Pb collisions at the LHC. The hydrodynamical results are again shown as a band corresponding to decoupling temperatures between $120$ and $150$ MeV. The band for the pQCD fragmentation results without energy loss comes from the uncertainty in the extrapolation of the K-factors to the LHC energy. The large uncertainty in the energy loss calculation comes, as before, from the eikonal approximation. We see in Fig. \[fig:charged200\]a that at RHIC the hydrodynamical and pQCD + energy loss spectra cross at $p_T \sim 3\dots4$ GeV, and that the data start to deviate from the hydro spectrum in the same $p_T$ region. At the LHC, the crossing region moves to $p_T \sim 5 \dots6$ GeV, suggesting a wider $p_T$ region of applicability of hydrodynamics at the LHC than at RHIC. 
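The crossing of the two components can be illustrated schematically: with an exponentially falling "hydro" spectrum and a power-law "pQCD" tail (the constants below are purely illustrative, not fitted to any data), locating the crossover is a simple root-finding problem:

```python
import numpy as np
from scipy.optimize import brentq

# Schematic spectra, arbitrary normalization, p_T in GeV.  The pure
# power law also exceeds the exponential at very low p_T -- an
# artifact of the schematic forms; only the upper crossing, where the
# power-law tail overtakes the exponentially falling spectrum, is the
# analogue of the crossover discussed in the text.
def hydro(pT):
    return 1.0e3*np.exp(-2.0*pT)

def pqcd(pT):
    return 1.0e4*pT**(-8.0)

p_cross = brentq(lambda p: np.log(hydro(p)) - np.log(pqcd(p)), 4.0, 8.0)
print(p_cross)  # upper crossing, a few GeV for these toy constants
```

Raising the power-law normalization (harder spectra at higher $\sqrt{s}$) shifts the root to larger $p_T$, the qualitative trend seen between RHIC and the LHC.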
It is also interesting to note that in the cross-over region the fragmentation and the hydrodynamical components are most likely almost independent: at RHIC, 95% of the thermalized matter comes from mini-jets with partonic transverse momenta $p_{sat} < q_T < 3.6$ GeV, and the higher-$q_T$ partons contribute to the normalizations and the slopes of the hydrodynamic spectra only slightly. On the other hand, even without the energy loss the pQCD pions come dominantly from partons with $q_T \sim 1.7 p_T$, which is $5.1$ GeV for $p_T \sim 3$ GeV pions. With energy loss included they originate from even higher $q_T$. Thus the partonic origin of the fragmentation pions and of the pions from the thermalized matter is quite different, suggesting that the two contributions are quite independent even in the cross-over region and can be added without serious double counting. The same argument holds at the LHC. The crossing between the hydro and fragmentation spectra also depends on the particle species, as studied at RHIC energies in Ref. [@Hirano:2003pw]. Conclusions =========== We have calculated low-$p_T$ spectra of hadrons for central Au+Au collisions at RHIC and Pb+Pb collisions at the LHC, using the EKRT model to calculate the initial parton production and hydrodynamics to calculate the expansion of the produced matter. High-$p_T$ hadron spectra are calculated by assuming that the high-$q_T$ partons do not thermalize but fragment into hadrons after losing energy while traversing the thermal matter. Our model is shown to be in good agreement with the measured data in central collisions at RHIC. We have provided predictions for central Pb+Pb collisions at the LHC, including also the net-baryon number, see [@Eskola:2005ue]. The origins of the thermal and the pQCD fragmentation spectra are discussed, and it is argued that even in the cross-over region, where the two components are comparable, they are essentially independent. 
![ Transverse momentum spectra of charged hadrons at $y=0$ in the 5% most central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. The solid and dotted lines show our hydrodynamic results with $T_{dec}=150$ MeV and $T_{dec}=120$ MeV, respectively. The shaded band shows the pQCD fragmentation+energy loss spectrum and the dashed line pQCD fragmentation without energy loss. The data are from Refs. [@Adams:2003kv; @Adler:2003au; @Back:2003qr; @Arsene:2003yk]. [**b)**]{} As Fig. \[fig:charged200\]a but for the 5% most central Pb+Pb collisions at $\sqrt{s_{NN}}=5500$ GeV. []{data-label="fig:charged200"}](charged_200.eps){width="90mm"} ![ Transverse momentum spectra of charged hadrons at $y=0$ in the 5% most central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. The solid and dotted lines show our hydrodynamic results with $T_{dec}=150$ MeV and $T_{dec}=120$ MeV, respectively. The shaded band shows the pQCD fragmentation+energy loss spectrum and the dashed line pQCD fragmentation without energy loss. The data are from Refs. [@Adams:2003kv; @Adler:2003au; @Back:2003qr; @Arsene:2003yk]. [**b)**]{} As Fig. \[fig:charged200\]a but for the 5% most central Pb+Pb collisions at $\sqrt{s_{NN}}=5500$ GeV. []{data-label="fig:charged200"}](charged_5500.eps){width="90mm"} [9]{} K. J. Eskola, H. Honkanen, H. Niemi, P. V. Ruuskanen and S. S. Räsänen, arXiv:hep-ph/0506049. K. J. Eskola, K. Kajantie, P. V. Ruuskanen and K. Tuominen, Nucl. Phys. B [**570**]{} (2000) 379 \[arXiv:hep-ph/9909456\]. K. J. Eskola and H. Honkanen, Nucl. Phys. A [**713**]{} (2003) 167 \[arXiv:hep-ph/0205048\]. K. J. Eskola, H. Honkanen, C. A. Salgado and U. A. Wiedemann, Nucl. Phys. A [**747**]{} (2005) 511 \[arXiv:hep-ph/0406319\]. K. J. Eskola, H. Niemi, P. V. Ruuskanen and S. S. Räsänen, Phys. Lett. B [**566**]{} (2003) 187 \[arXiv:hep-ph/0206230\]. T. Hirano and Y. Nara, Phys. Rev. C [**69**]{} (2004) 034908 \[arXiv:nucl-th/0307015\]. J. Adams [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett.  [**91**]{} (2003) 172302. S. S. 
Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**69**]{} (2004) 034910. B. B. Back [*et al.*]{} \[PHOBOS Collaboration\], Phys. Lett. B [**578**]{} (2004) 297. I. Arsene [*et al.*]{} \[BRAHMS Collaboration\], Phys. Rev. Lett.  [**91**]{} (2003) 072305.
--- abstract: 'In this paper, we show that non-symmetric convex polytopes cannot serve as a window function to produce a Gabor orthonormal basis with any time-frequency set.' address: 'Department of Mathematics, San Francisco State University, San Francisco, California 94132' author: - Randolf Chung - 'Chun-kit Lai' bibliography: - 'References.bib' title: 'Non-symmetric Convex Polytopes and Gabor orthonormal bases' --- Introduction ============ Let $\Omega$ be a subset of ${{\mathbb{R}^{d}}}$ with $|\Omega|>0$ ($|\cdot|$ denotes the Lebesgue measure). If $\Gamma$ is a discrete subset of ${{\mathbb{R}^{d}}}$, we write $E_\Gamma$ for the set of exponentials $\{e_\gamma(x):\gamma\in \Gamma\}$ where $e_\gamma(x):=e^{2\pi i \langle \gamma,x\rangle}$ for $x\in{{\mathbb{R}^{d}}}$. Let $g\neq 0$ be a function in $L^2({{\mathbb{R}^{d}}})$ and let $\Lambda$ be a discrete subset of ${{\mathbb{R}^{2d}}}$ whose elements we write as $(t,\lambda)$ with $t,\lambda\in{{\mathbb{R}^{d}}}$. A [*Gabor system*]{} is a collection of translations and modulations of the function $g$ by $\Lambda$: $$\label{Gaborsystem} \mathcal{G}(g,\Lambda):=\{e_\lambda(x)g(x-t): (t,\lambda)\in\Lambda\}.$$ In particular, a measurable set $\Omega\subseteq \mathbb{R}^d$ is called a [*Gabor orthonormal basis set (GONB set)*]{} if there exists $\Lambda$ such that ${\mathcal G}(|\Omega|^{-1/2}\chi_{\Omega},\Lambda)$ forms an orthonormal basis for $L^2({{\mathbb{R}^{d}}})$. We call $g$ and $\Lambda$ the [*window function*]{} and the [*time-frequency set*]{} respectively. $\Lambda$ is said to be [*separable*]{} if there exist sets ${\mathcal J}$ and $\Gamma$ in ${{\mathbb{R}^{d}}}$ such that $\Lambda = {\mathcal J}\times \Gamma$. In recent years, determining a pair $(g,\Lambda)$ such that $\mathcal{G}(g,\Lambda)$ forms a frame or an orthonormal basis has received much attention, and many important cases have been solved. Yet, there is still an abundance of mysteries and unexpected results within this classification (for example, see [@Grochenig2001; @Grochenig2014]). 
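For orientation, the simplest example of a GONB set is the unit cube with the separable time-frequency set $\mathbb{Z}^d\times\mathbb{Z}^d$: distinct translates of $\chi_{[0,1]^d}$ have essentially disjoint supports, while distinct integer modulations are orthogonal on a common support. A quick numerical check of the modulation orthogonality in dimension $d=1$:

```python
import numpy as np

# Uniform grid on [0, 1], the common support of two elements of
# G(chi_[0,1], Z x Z) with the same translation t.
x = np.linspace(0.0, 1.0, 200001)

def inner(lam1, lam2):
    """<e_{lam1} g, e_{lam2} g> over [0, 1] for g = chi_[0,1],
    computed by the composite trapezoidal rule.  (Translates with
    t != t' are trivially orthogonal: disjoint supports.)"""
    f = np.exp(2j*np.pi*(lam1 - lam2)*x)
    h = x[1] - x[0]
    return h*(0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])

print(abs(inner(0, 0)))  # squared norm = 1
print(abs(inner(3, 1)))  # ~0: distinct integer modulations
```

The point of the non-separable examples of [@Gabardo2015] is that, for $d\ge 2$, this orthogonality can also be arranged by uncountably many $\Lambda$ that are not product sets.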
Concerning the structure of GONB sets, the following problem may give us some insight. It was recently proposed and studied by several authors [@Agora2017; @Iosevich2017; @Gabardo2015; @Lai]. Suppose $\Omega\subseteq {{\mathbb{R}^{}}}^d$ is a GONB set. Then 1. (Spectrality) there exists $\Gamma$ such that $E_\Gamma$ forms an orthonormal basis for $L^2(\Omega)$, and 2. (Tiling) there exists a discrete set ${\mathcal J}$ such that ${{\mathbb{R}^{d}}}$ is the almost disjoint union of $\Omega+t, t\in {\mathcal J}$. Equivalently, $$\sum_{t\in{\mathcal J}} \chi_{\Omega}(x-t) = 1 \ \mbox{a.e.}$$ In general, sets satisfying (1) and (2) are called [*spectral sets*]{} and [*translational tiles*]{} respectively. Historically, the first related version of the Fuglede-Gabor problem was introduced in [@Liu2003], where it was conjectured that if the window function is compactly supported and the time-frequency set is separable, then the conclusion of the Fuglede-Gabor problem holds. Under the separability condition, the problem was settled in [@Dutkay2014] for non-negative windows. Our interest is the non-separable case. In fact, even for standard objects such as the unit cube $[0,1]^d$, there exist uncountably many distinct (up to translation) non-separable time-frequency sets $\Lambda$ such that ${\mathcal G}(\chi_{[0,1]^d},\Lambda)$ forms an orthonormal basis if $d\ge 2$ (see [@Gabardo2015]). The Fuglede-Gabor problem is motivated by a related conjecture called the [*spectral set conjecture*]{}: a set $\Omega$ is a spectral set if and only if it is a translational tile. This conjecture was introduced by Fuglede [@Fuglede1974] during his studies of extensions of commuting self-adjoint differential operators to dense subspaces of $L^2(\Omega)$. His conjecture was disproven in one direction by Tao [@Tao2003] for $d\geq 5$ and then in both directions by Kolountzakis and Matolcsi [@Kolountzakis2006] for $d\geq 3$. 
Nevertheless, the conjecture has been verified in many significant cases, including the following: 1. $\Omega$ tiles by a lattice [@Fuglede1974], 2. $\Omega$ is a union of two intervals in ${\mathbb R}^1$ [@Laba2000], 3. $\Omega$ is a convex body with a point of positive Gaussian curvature [@Iosevich2001], 4. $\Omega$ is a non-symmetric convex body [@Kolountzakis2000]. The first three cases have recently been partially resolved in the Fuglede-Gabor problem (see [@Lai] for case (1), [@Agora2017] for case (2), and [@Iosevich2017] for case (3)). Each case used machinery similar to its Fuglede counterpart’s, but due to the extra consideration of the set $\Omega\cap(\Omega+t)$, none of the cases was proven in full generality. In this paper, we consider the fourth case with non-symmetric convex polytopes. Our main result is \[main1\] Let $\Omega$ be a non-symmetric convex polytope in ${\mathbb R}^{d}$. Then $\Omega$ is not a GONB set. In other words, there cannot exist a $\Lambda$ such that ${\mathcal G}(|\Omega|^{-1/2}\chi_{\Omega},\Lambda)$ forms an orthonormal basis. We are unable to generalize the proof in [@Kolountzakis2000] to obtain a more general result for convex bodies (see Remark \[remark\_end\]). Instead, we follow an approach similar to that of Greenfeld and Lev [@Greenfeld2017 Theorem 3.1] (originally from [@Kolountzakis2002]). To fully utilize the same line of thought, we first consider the intersection of the polytope $\Omega$ and its translate $\Omega+t$. We must ensure that, for a sufficiently small vector $t$, $\Omega\cap (\Omega+t)$ remains non-symmetric, with the $(d-1)$-volumes of its facets depending continuously on $t$ (Theorem \[epsi\]). After that, we apply an analogous argument of Greenfeld-Lev twice, on the frequency and time axes, to obtain a similar contradiction. Lemmas on polytopes =================== In this section, we study the structure of convex polytopes. Main references are [@Gruber2007; @Schneider2013]. 
Let us recall some terminology. Let ${\operatorname{V}_{\alpha}}$ be the $\alpha$-dimensional volume function on ${{\mathbb{R}^{d}}}$. A (closed) half-space $H$ is defined by $\{x\in {{\mathbb{R}^{d}}}: \langle a,x\rangle \le b\}$ where $a$ is the normal vector to $H$. A *convex polyhedron* is a finite intersection of closed half-spaces; thus, a convex polyhedron $\Omega$ is a closed set admitting a [*half-space representation*]{} $$\label{polyhedra} \Omega = \{x\in{\mathbb R}^d: \langle a_i,x\rangle\le b_i, \ \forall i=1,...,n\}= \bigcap_{i=1}^n H_i,$$ where $H_i = \{x\in{\mathbb R}^d: \langle a_i,x\rangle\le b_i\}.$ A [*facet*]{} $F_i$ of $\Omega$ is the intersection of $\Omega$ with the boundary of a half-space in its half-space representation; namely, $F_i = (\partial H_i)\cap \Omega$ such that ${\operatorname{V}_{d-1}}(F_i)>0$. A [*convex polytope*]{} is the convex hull of finitely many points. It is well-known that a convex polytope is equivalent to a bounded polyhedron. A convex polytope is [*(centrally) symmetric*]{} if there exists a point $x\in{{\mathbb{R}^{d}}}$ such that $$x-\Omega = \Omega-x.$$ If $F=(\partial H)\cap \Omega$ is a facet of $\Omega$, then $F'=(\partial H')\cap \Omega$ is the [*parallel*]{} of $F$ if $(\partial H)\cap(\partial H')=\emptyset$ (i.e. $H$ and $H'$ share unit normal vectors in opposing directions). By convention, we take $\emptyset$ to be the parallel facet of $F$ if a parallel facet does not exist. The following theorem fully characterizes symmetric convex polytopes in terms of parallel facets and volume (see [@Gruber2007 Corollary 18.1]): A convex polytope is symmetric if and only if for every facet $F\subset \Omega$, there exists a parallel facet $F'$ such that ${\operatorname{V}_{d-1}}(F')={\operatorname{V}_{d-1}}(F)$. Let ${\mathcal C}:={\mathcal C}[{{\mathbb{R}^{d}}}]$ be the set of compact convex sets on ${{\mathbb{R}^{d}}}$, and let $B_\delta(x)$ be the open ball of radius $\delta$ centered at $x$. 
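The parallel-facet characterization above is easy to test computationally. The following sketch (illustrative only; the function name and data layout are our own) checks the criterion for a polytope described as a list of (unit normal, $(d-1)$-volume) pairs:

```python
import math

def is_symmetric(facets, tol=1e-9):
    """Minkowski's criterion: every facet must have a parallel facet
    (opposite unit normal) with the same (d-1)-volume.
    `facets` is a list of (unit_normal, area) pairs."""
    for n, area in facets:
        # look for a facet whose normal is -n and whose area matches
        match = [a for m, a in facets
                 if all(abs(mi + ni) < tol for mi, ni in zip(m, n))]
        if not match or abs(match[0] - area) > tol:
            return False
    return True

# Unit square: two pairs of parallel facets of equal length -> symmetric.
square = [((1, 0), 1.0), ((-1, 0), 1.0), ((0, 1), 1.0), ((0, -1), 1.0)]
# Right triangle with legs on the axes: no parallel facets -> non-symmetric.
s = 1 / math.sqrt(2)
triangle = [((0, -1), 1.0), ((-1, 0), 1.0), ((s, s), math.sqrt(2))]
assert is_symmetric(square) and not is_symmetric(triangle)
```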
We will denote by ${\mathcal P}:={\mathcal P}[{{\mathbb{R}^{d}}}]$ the set of all polytopes in ${\mathcal C}$. For any $E,F\in{\mathcal C}$, the [*Hausdorff metric*]{} of $E$ and $F$ is defined as $$d_{H}(E,F) = \inf\{\delta: E\subset F^{\delta} \ \mbox{and} \ F\subset E^{\delta}\},$$ where $E^{\delta} := \bigcup_{x\in E} B_\delta(x)$ and similarly for $F^{\delta}$. The metric space $(\mathcal C,d_{H})$ is complete. We remark that the volume function is not, in general, continuous on compact sets. Let $T_0:=[v_1;v_2;v_3]$ denote a 2-simplex in ${{\mathbb{R}^{2}}}$ with unit side lengths, and let $T_n:=[v_1;v_2;(1/n)v_3+(1-1/n)v_1]$ for $n>0$. Then $T_n$ converges to the line segment $L$ joining $v_1,v_2$; of particular interest, the *non-convex* sequence $\partial T_n$ also converges to $L$. By the triangle inequality, ${\operatorname{V}_{1}}(\partial T_n)\geq 2$ while ${\operatorname{V}_{1}}(L)=1$, so ${\operatorname{V}_{1}}(\partial T_n)$ cannot converge to ${\operatorname{V}_{1}}(L)$. Nonetheless, ${\operatorname{V}_{d-1}}$ is continuous on ${\mathcal C}$. A quick way to see this can be found in [@Gruber2007 p.104-105]. In summary, up to a constant, ${\operatorname{V}_{d-1}}$ computes the surface area of a facet, and according to [@Gruber2007 p.104-105], $${\operatorname{V}_{d-1}}(C) = k_dW_1(C),$$ where $W_1$ is the quermassintegral of $C$ and $k_d>0$ is some constant dependent on the dimension. It is thus a continuous function on $({\mathcal C}, d_{H})$ (by [@Gruber2007 Theorem 6.13]); hence, $$\label{continuity} \lim_{n\rightarrow\infty}{d_H}(E_n,F)=0 \ \Rightarrow \ \lim_{n\rightarrow\infty}{\operatorname{V}_{d-1}}(E_n) ={\operatorname{V}_{d-1}}(F).$$ Our goal now is to show that $\Omega_t:=\Omega\cap (\Omega+t)$ is non-symmetric for $t$ small if $\Omega$ is non-symmetric. \[min\] Let $t\in{{\mathbb{R}^{d}}}$, and let $\Omega$ be given by .
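The discontinuity in the example above can be checked numerically on sampled boundaries: the discrete Hausdorff distance between $\partial T_n$ and $L$ shrinks like $1/n$, while the perimeter of $T_n$ stays at least $2$. A rough sketch (the sampling resolution and helper names are ours):

```python
import math

def hausdorff(A, B):
    """Discrete Hausdorff distance between finite point sets A and B."""
    h1 = max(min(math.dist(a, b) for b in B) for a in A)
    h2 = max(min(math.dist(a, b) for a in A) for b in B)
    return max(h1, h2)

def sample_segment(p, q, k=50):
    """k+1 equally spaced points on the segment from p to q."""
    return [(p[0] + (q[0] - p[0]) * i / k, p[1] + (q[1] - p[1]) * i / k)
            for i in range(k + 1)]

# Equilateral simplex T_0 with unit sides; T_n pulls v3 towards v1.
v1, v2, v3 = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
L = sample_segment(v1, v2)
for n in (2, 10, 100):
    w = (v3[0] / n, v3[1] / n)   # (1/n) v3 + (1 - 1/n) v1
    boundary = (sample_segment(v1, v2) + sample_segment(v2, w)
                + sample_segment(w, v1))
    perim = math.dist(v1, v2) + math.dist(v2, w) + math.dist(w, v1)
    # boundary collapses onto L, yet its length stays >= 2 > 1 = len(L)
    assert perim >= 2.0 and hausdorff(boundary, L) <= 2.0 / n
```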
Then $\Omega_t$ admits a representation $$\label{eqnmin} \Omega_t=\bigcap_{i=1}^{n}M_i$$ where $$M_i:=M_i(t)=\{x\in{\mathbb R}^d: \langle a_i,x\rangle\le m_i\} \quad \mbox{and} \quad m_i:=\min\{b_i, b_i+\langle a_i,t\rangle\}.$$ Let $\Omega=\bigcap_{i=1}^n H_i$ where $H_i=\{x:\langle a_i,x\rangle\leq b_i\}$. We have $$H_i+t=\{x+t:\langle a_i,x\rangle\leq b_i\}=\{x:\langle a_i,x-t\rangle\leq b_i\}=\{x:\langle a_i,x\rangle\leq b_i+\langle a_i,t\rangle\}.$$ Let $m_i$ be defined as above. Then it follows immediately that $$H_i\cap (H_i+t) =\{x:\langle a_i,x\rangle\leq m_i\} = M_i.$$ Since $$\Omega_t=\Omega\cap(\Omega+t) = \left(\bigcap_{i=1}^n H_i\right)\cap \left(\bigcap_{i=1}^n H_i+t\right) = \bigcap_{i=1}^n (H_i \cap (H_i+t))=\bigcap_{i=1}^n M_i,$$ this implies . The following lemma shows that each facet of $\Omega_t$ converges to the corresponding facet of $\Omega$ in the Hausdorff metric. \[continter\] Let $\Omega\in\mathcal{P}$, and let $F=(\partial H)\cap \Omega$ be a facet of $\Omega$. Write $H = \{x\in{\mathbb R}^d: \langle a,x\rangle\le b\}$ and let $M(t)$ be as defined in Lemma \[min\] for $H$. Then the facet $F(t)=(\partial M(t))\cap \Omega_t$ converges to $F$ as $t\to0$. By [@Schneider2013 Theorem 1.8.8], a sequence of compact convex sets $K_i$ converges to $K$ if and only if

1. every point $x\in K$ is the limit of some sequence of points $\{x_i\}, x_i\in K_i$, and

2. for any convergent sequence $(x_{i_j})$ with $x_{i_j}\in K_{i_j}$, the limit of $x_{i_j}$ belongs to $K$.

$(1)$ is clear since $x+t\to x$ as $|t|\to 0$ and $x+t\in F(t)$. For $(2)$, choose any convergent sequence $(x_{t_i})$ with $x_{t_i}\in F(t_i)$ and denote its limit by $x$. Then Lemma \[min\] implies that $$\partial M(t) = \{x: \langle a,x\rangle =\min\{b,b+\langle a,t\rangle\}\}.$$ Now, $x_{t_i}\in \partial M(t_i)$, so $\langle a,x_{t_i}\rangle = \min\{b,b+\langle a,t_i\rangle\}$. Since $t_i$ converges to $0$, the continuity of $\langle \cdot,\cdot\rangle$ and $\min$ gives $\langle a,x\rangle = b$.
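The representation in Lemma \[min\] translates directly into code: each bound $b_i$ is replaced by $m_i=\min\{b_i,b_i+\langle a_i,t\rangle\}$. A sketch for the unit square (a toy example of ours), verified by membership tests on a grid:

```python
def intersect_translate(halfspaces, t):
    """Half-space representation of Omega ∩ (Omega + t):
    replace each bound b_i by m_i = min(b_i, b_i + <a_i, t>)."""
    return [(a, min(b, b + sum(ai * ti for ai, ti in zip(a, t))))
            for a, b in halfspaces]

def member(halfspaces, x, tol=1e-12):
    return all(sum(ai * xi for ai, xi in zip(a, x)) <= b + tol
               for a, b in halfspaces)

# Omega = [0,1]^2 and t = (0.3, 0.2):
# Omega ∩ (Omega + t) should be the box [0.3, 1] x [0.2, 1].
square = [((1, 0), 1.0), ((-1, 0), 0.0), ((0, 1), 1.0), ((0, -1), 0.0)]
omega_t = intersect_translate(square, (0.3, 0.2))
grid = [(i / 20, j / 20) for i in range(21) for j in range(21)]
for x in grid:
    in_box = 0.3 <= x[0] <= 1.0 and 0.2 <= x[1] <= 1.0
    assert member(omega_t, x) == in_box
```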
In other words, $x\in \partial H$. On the other hand, $x\in \Omega_t = \Omega\cap (\Omega+t)\subset \Omega$, so $x\in F$. This completes the proof. \[epsi\] Suppose $\Omega$ is a non-symmetric polytope. Then there exists $\epsilon>0$ such that for all $|t|\leq \epsilon$, $\Omega_t$ is non-symmetric. More specifically, given a non-symmetric facet $F$ in $\Omega$, $F(t)$ is a non-symmetric facet of $\Omega_t$ for $|t|\leq \epsilon$. It suffices to show the second statement, since the first statement then follows from Minkowski’s Theorem. Let $F$ be a non-symmetric facet of $\Omega$, and choose a facet $F'$ parallel to $F$ with ${\operatorname{V}_{d-1}}(F)\neq {\operatorname{V}_{d-1}}(F')$. By Minkowski’s Theorem, such a facet is guaranteed to exist. Define $$V(t):=|{\operatorname{V}_{d-1}}(F(t))-{\operatorname{V}_{d-1}}(F'(t))|.$$ By Lemma \[continter\] and , ${\operatorname{V}_{d-1}}(F(t))$ and ${\operatorname{V}_{d-1}}(F'(t))$ are continuous at $0$, hence $V(t)$ is continuous at $0$. So $$V(0)=|{\operatorname{V}_{d-1}}(F(0))-{\operatorname{V}_{d-1}}(F'(0))|=|{\operatorname{V}_{d-1}}(F)-{\operatorname{V}_{d-1}}(F')|>0,$$ so we can choose some $\epsilon>0$ such that $V(t)>0$ for $|t|<\epsilon$; shrinking $\epsilon$ if necessary, this holds on the compact ball $|t|\leq \epsilon$. Thus $$|{\operatorname{V}_{d-1}}(F(t))-{\operatorname{V}_{d-1}}(F'(t))|=V(t)>0.$$ This completes the proof. We remark that the condition on $t$ cannot be removed. The following example shows that the intersection of a non-symmetric polytope and its translate may become symmetric for sufficiently large $t$. \[ex2.6\] Let $\Omega$ be the polytope with five edges and vertices given by $(0,0), (2,0), (2,2), (1,2),$ and $ (0,1)$. It is a square with the top left-hand corner removed, and it is clearly non-symmetric. Consider $t = (-1,-1)$. Then $\Omega\cap (\Omega+t)$ becomes a square with vertices $(0,0),(1,0),(0,1),(1,1)$, so it is symmetric.
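Example \[ex2.6\] can likewise be verified with the half-space recipe of Lemma \[min\]: applying $m_i=\min\{b_i,b_i+\langle a_i,t\rangle\}$ to the pentagon with $t=(-1,-1)$ leaves exactly the unit square. A sketch (the half-space encoding of the pentagon is ours):

```python
def omega_t_bounds(halfspaces, t):
    # m_i = min(b_i, b_i + <a_i, t>)  as in Lemma [min]
    return [(a, min(b, b + a[0] * t[0] + a[1] * t[1])) for a, b in halfspaces]

def member(hs, x, tol=1e-12):
    return all(a[0] * x[0] + a[1] * x[1] <= b + tol for a, b in hs)

# Pentagon of Example 2.6: the square [0,2]^2 with the top-left corner cut
# off by the edge from (0,1) to (1,2), i.e. the constraint -x + y <= 1.
pentagon = [((1, 0), 2.0), ((-1, 0), 0.0), ((0, 1), 2.0),
            ((0, -1), 0.0), ((-1, 1), 1.0)]
shifted = omega_t_bounds(pentagon, (-1.0, -1.0))

# The intersection is the unit square [0,1]^2 -- a symmetric polytope.
pts = [(i / 10, j / 10) for i in range(-5, 26) for j in range(-5, 26)]
assert all(member(shifted, p) == (0 <= p[0] <= 1 and 0 <= p[1] <= 1)
           for p in pts)
```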
Proof of the main theorem
=========================

Let $\Omega$ be a convex polytope in ${{\mathbb{R}^{d}}}$. We denote by $\sigma_F(x)$ the surface measure on the facet $F$ of $\Omega$. Let $n_{F}$ denote the outward unit normal to the facet $F$ on $\Omega$. From Lemma \[min\], the corresponding facet $F(t)$ of $\Omega_t = \Omega \cap (\Omega+t)$ shares the same normal vector. The following lemma is a variant of Greenfeld-Lev [@Greenfeld2017 Lemma 2.7] (the case $t=0$). We show that the lower order term can be bounded, independent of $t$. \[bound\] Let $A(t)$ be a facet of $\Omega_t$, and let $B(t)$ be the parallel facet to $A(t)$ of $\Omega_t$ with outward unit normals $e_1$ and $-e_1$. Then there exists $\omega:=\omega_\Omega>0$, independent of $t$, such that in the cone $$C(\omega):=\{\lambda\in {{\mathbb{R}^{d}}}:|\lambda_j|\leq \omega|\lambda_1|\mbox{ for all }2\leq j\leq d\},$$ we have $$\label{eqn3} -2\pi i\lambda_1\widehat{\chi}_{\Omega_t}(\lambda)=\widehat{\sigma}_{A(t)}(\lambda)-\widehat{\sigma}_{B(t)}(\lambda)+G_t(\lambda)$$ with $$|G_t(\lambda)| \le \frac{C}{|\lambda_1|}$$ for some constant $C>0$, independent of $t$. By the divergence theorem (see [@Greenfeld2017 Lemma 2.4]), $$-2\pi i\lambda_1\widehat{\chi}_{\Omega_t}(\lambda) = \widehat{\sigma}_{A(t)}(\lambda)-\widehat{\sigma}_{B(t)}(\lambda) +\sum \langle e_1, n_F\rangle\widehat{\sigma}_{F(t)}(\lambda)$$ where the sum is over all facets $F(t)$ of $\Omega_t$ except $A(t)$ and $B(t)$. Define $G_t(\lambda)$ to be the sum. By [@Greenfeld2017 Lemma 2.6], $$\label{eqsigma} |\widehat{\sigma}_{F(t)} (\lambda)|\leq \frac{{\operatorname{V}_{d-2}}(\partial F(t))}{2\pi}\cdot\frac{|\lambda|^{-1}}{|\sin \theta_{\lambda, n_F}|} \le \frac{{\operatorname{V}_{d-2}}(\partial F)}{2\pi}\cdot\frac{|\lambda|^{-1}}{|\sin \theta_{\lambda, n_F}|}$$ where $\theta_{\lambda,n_F}$ is the angle between $\lambda\in{{\mathbb{R}^{d}}}\backslash \{0\}$ and the normal $n_F$.
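The divergence-theorem identity underlying Lemma \[bound\] can be sanity-checked numerically. For the standard $2$-simplex $\{x,y\ge 0,\ x+y\le 1\}$, only the left facet and the hypotenuse contribute to $\sum\langle e_1,n_F\rangle\widehat{\sigma}_{F}$, and simple quadrature confirms the identity at a test frequency (a sketch; the quadrature routine and tolerance are ours):

```python
import cmath, math

def trap(f, a, b, n=20000):
    """Composite trapezoid rule for a complex-valued integrand."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

def e(z):
    return cmath.exp(-2j * math.pi * z)

l1, l2 = 3.0, 0.7   # a test frequency inside a cone around the lambda_1 axis

# chi-hat of the triangle {x,y >= 0, x + y <= 1}: inner y-integral in closed form.
def inner(x):
    # integral of e^{-2 pi i l2 y} over y in [0, 1-x]
    return (1 - e(l2 * (1 - x))) / (2j * math.pi * l2)
chi_hat = trap(lambda x: e(l1 * x) * inner(x), 0.0, 1.0)

# Facet surface transforms (line integrals):
sigma_left = trap(lambda y: e(l2 * y), 0.0, 1.0)          # facet x = 0, n = (-1, 0)
sigma_hyp = trap(lambda x: e(l1 * x + l2 * (1 - x)),
                 0.0, 1.0) * math.sqrt(2)                 # hypotenuse, n = (1,1)/sqrt(2)

# Divergence identity: -2 pi i l1 chi_hat = sum_F <e1, n_F> sigma_hat_F
lhs = -2j * math.pi * l1 * chi_hat
rhs = (1 / math.sqrt(2)) * sigma_hyp - sigma_left         # bottom facet has <e1, n> = 0
assert abs(lhs - rhs) < 1e-5
```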
The second inequality follows from the fact that the facet $F(t)$ is either empty, a subset of the facet $F$, or a subset of the facet $F+t$, so ${\operatorname{V}_{d-2}}(\partial F(t)) \le {\operatorname{V}_{d-2}}(\partial F)$. If $\omega$ is sufficiently small, then for $\lambda\in C(\omega)$, $\theta_{\lambda,n_F}$ is bounded away from 0 and $\pi$ for all $n_F$, so inside the cone $C(\omega)$, summing over all $F$ in (\[eqsigma\]) shows that $G_t(\lambda)$ is bounded by $C|\lambda|^{-1}\le C|\lambda_1|^{-1}$. As $n_F$ does not depend on $t$, $C$ does not depend on $t$. We now return to the main problem. Let $g \in L^2({\mathbb R}^d)$. The [*short time Fourier transform*]{} (STFT) is defined by $$V_gg (t,\lambda) := \int g(x)\overline{g(x-t)}e^{-2\pi i \langle\lambda, x\rangle} dx.$$ If $g = |\Omega|^{-1/2}\chi_{\Omega}$, we have $$\label{V_gg} V_gg (t,\lambda) = |\Omega|^{-1}\widehat{\chi}_{\Omega\cap (\Omega+t)} (\lambda) = |\Omega|^{-1}\widehat{\chi}_{\Omega_t} (\lambda).$$ We observe that a Gabor system ${\mathcal G}(g,\Lambda)$ forms an orthonormal basis if and only if the following hold:

1. (Mutual Orthogonality) $\Lambda - \Lambda \subset \{(t,\lambda): V_gg(t,\lambda) =0\}$, and

2. (Completeness) ${\mathcal G}(g,\Lambda)$ is complete in $L^2({\mathbb R}^d)$

(see [@Gabardo2015; @Agora2017] for a complete derivation). Furthermore, if ${\mathcal G}(g,\Lambda)$ forms an orthonormal basis, due to the continuity of $V_gg$ at the origin, $\Lambda$ must be [*uniformly discrete*]{}, i.e. there exists $\delta>0$ such that every ball of radius $\delta$ intersects $\Lambda$ in at most one point. On the other hand, $\Lambda$ is [*relatively dense*]{} in ${\mathbb R}^{2d}$ in the sense that there exists $R>0$ such that every ball of radius $R$ must intersect $\Lambda$, since the density of $\Lambda$ in ${\mathbb R}^{2d}$ must equal one (see [@Ramanathan1995]). Let $S(r)=\{te_1+w:t\in\mathbb{R}, w\in\mathbb{R}^{d}, |w|< r\}$ be the cylinder along the $x_1$-axis.
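In one dimension with $g=\chi_{[0,1]}$, the formula $V_gg(t,\lambda)=\widehat{\chi}_{\Omega_t}(\lambda)$ is fully explicit, since $\Omega_t=[\max(0,t),\min(1,1+t)]$; one can check directly that $V_gg$ vanishes on ${\mathbb Z}^2\setminus\{0\}$, which is exactly the mutual orthogonality condition for the classical Gabor orthonormal basis ${\mathcal G}(\chi_{[0,1]},{\mathbb Z}\times{\mathbb Z})$. A sketch:

```python
import cmath, math

def V_gg(t, lam):
    """STFT of g = chi_[0,1] with itself: V_gg(t, lam) = chi-hat of Omega_t,
    where Omega_t = [max(0,t), min(1,1+t)] (empty when |t| >= 1)."""
    lo, hi = max(0.0, t), min(1.0, 1.0 + t)
    if hi <= lo:
        return 0.0
    if lam == 0:
        return hi - lo
    # integral of e^{-2 pi i lam x} over [lo, hi], in closed form
    return (cmath.exp(-2j * math.pi * lam * lo)
            - cmath.exp(-2j * math.pi * lam * hi)) / (2j * math.pi * lam)

# Mutual orthogonality for Lambda = Z x Z: V_gg vanishes on (Lambda - Lambda) \ {0}.
pts = [(m, n) for m in range(-3, 4) for n in range(-3, 4) if (m, n) != (0, 0)]
assert all(abs(V_gg(m, n)) < 1e-12 for m, n in pts)
assert abs(V_gg(0, 0) - 1.0) < 1e-12
```

For integer time shifts $m\neq 0$ the overlap $\Omega_t$ is empty, and for $m=0$, $n\neq 0$ the integral of a full period of the exponential vanishes.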
\[nonzero\] Suppose $\Omega$ is a non-symmetric convex polytope on ${{\mathbb{R}^{d}}}$ and $g = |\Omega|^{-1/2}\chi_{\Omega}$. There exist $\epsilon>0$, $R>0$ and $\delta>0$, all independent of $t$, such that $$V_gg (t,\lambda)\neq 0,\quad \forall\lambda\in S(2\delta)\setminus B_{R}(0)$$ for all $|t|<\epsilon$.

Take $\epsilon>0$ from Theorem $\ref{epsi}$, and consider $|t|\leq \epsilon$. Let $A(t)$ be the non-symmetric facet of $\Omega_t$ and let $B(t)$ be its parallel facet. After an affine transformation, we may assume $A(t)$ and $B(t)$ lie on the hyperplanes $\{x_1=0\}$ and $\{x_1=1\}$ respectively, and let $\eta:=\min_{|t|\leq\epsilon} |{\operatorname{V}_{d-1}}(A(t))-{\operatorname{V}_{d-1}}(B(t))|$. By Theorem \[epsi\], $\eta>0$. We have $$\widehat{\sigma}_{A(t)}(\lambda) = \widehat{\chi}_{A(t)}(\lambda_2,...,\lambda_d) \quad\mbox{and}\quad\widehat{\sigma}_{B(t)}(\lambda) = e^{2\pi i\lambda_1}\widehat{\chi}_{B(t)}(\lambda_2,...,\lambda_d)$$ where $\chi_{B(t)}$ and $\chi_{A(t)}$ are the characteristic functions of the orthogonal projections of $B(t)$ and $A(t)$ onto $(x_2,...,x_d)$ respectively. Moreover, we can deduce $$\begin{aligned} \widehat{\chi}_{A(t)}(0)&={\operatorname{V}_{d-1}}(A(t))\\ \widehat{\chi}_{A(t)}-\widehat{\chi}_{A(0)}&=\widehat{\chi}_{A(t)\Delta A(0)} \end{aligned}$$ where $\Delta$ is the symmetric difference. This implies that $$|\widehat{\chi}_{A(t)}(\lambda')-\widehat{\chi}_{A(0)}(\lambda')|\leq {\operatorname{V}_{d-1}}(A(t)\Delta A(0)) \to 0\mbox{ as }t\to 0, \ \forall \lambda'\in{\mathbb R}^{d-1}$$ so $\widehat{\sigma}_{A(t)}$ converges uniformly to $\widehat{\sigma}_{A(0)}$ on ${\mathbb R}^{d-1}$. Similarly, $\widehat{\sigma}_{B(t)}$ converges uniformly to $\widehat{\sigma}_{B(0)}$. Thus, by uniformity, we can choose $\delta>0$, independent of $t$, such that $$|\widehat{\sigma}_{A(t)}(\lambda)-\widehat{\sigma}_{B(t)}(\lambda)|\geq \eta$$ in the cylinder $S(2\delta)$.
Using and Lemma \[bound\], we can choose $\omega>0$ and $C>0$, independent of $t$, such that $$2\pi |\Omega||\lambda_1||V_gg(t,\lambda)| \ge \eta - |G_t(\lambda)| \ge \eta - \frac{C}{|\lambda_1|}$$ in the cone intersection $C(\omega)\cap S(2\delta)$. Taking $R$ large so that $S(2\delta)\setminus B_{R}(0)\subseteq C(\omega)\setminus B_{R}(0)$ and $$\eta - \frac{C}{|\lambda_1|}>0 \ \mbox{on} \ S(2\delta)\setminus B_{R}(0),$$ we see that $$V_gg (t,\lambda)\neq 0,\quad \lambda\in S(2\delta)\setminus B_{R}(0)$$ for any $|t|<\epsilon$. Since the constants $C$ and $\omega$ are chosen independently of $t$, $R$ is also independent of $t$, so we are done.

We now give the proof of Theorem $\ref{main1}$. We argue by contradiction. Suppose ${\mathcal G}(g,\Lambda)$ forms a Gabor orthonormal basis, and let $\epsilon, \delta, R$ be as defined in the previous lemma.

[*Claim:*]{} For any $\tau,x \in{{\mathbb{R}^{d}}}$, $\operatorname{card}(\Lambda \cap [B_{\epsilon/2}(x)\times (S(\delta)+\tau)])<\infty$, where $\operatorname{card}(\cdot)$ denotes cardinality.

Suppose not. As $\Lambda$ is uniformly discrete, one can find $v=(t,\lambda)$ and $v'=(t',\lambda')$ in $\Lambda \cap [B_{\epsilon/2}(x)\times (S(\delta)+\tau)]$ with $|\lambda-\lambda'|>R$. But $|t-t'|<\epsilon$ and $\lambda-\lambda'\in S(2\delta)$, so $\lambda-\lambda'\in S(2\delta)\setminus B_R(0)$, and Lemma \[nonzero\] tells us that we must have $$V_gg (t-t',\lambda-\lambda')\neq 0.$$ This contradicts the mutual orthogonality of $\Lambda$ and establishes the claim. Now since $\Lambda$ is a relatively dense set, there is a radius $\delta^*>0$ such that every $2d$-ball of radius $\delta^*$ non-trivially intersects $\Lambda$.
Consider the set $B_{\delta^*}^d(0)\times S(\delta^*)$ (the superscript $d$ denotes a $d$-dimensional ball), covered by finitely many cylinders $B_{\epsilon/2}^d(\nu_i)\times (S(\delta)+\tau_j), 1\leq i,j\leq N$. Then $\operatorname{card}(\Lambda\cap [B_{\delta^*}^d(0)\times S(\delta^*)])<\infty$. However, this implies that $B_{\delta^{\ast}}^d(0)\times S(\delta^*)$ contains a $2d$-ball of radius $\delta^*$ that does not intersect $\Lambda$, contradicting relative density. It follows that no such $\Lambda$ exists, and our proof is complete.

\[remark\_end\] There is an approach to the Fuglede conjecture used in [@Kolountzakis2000] which considers the Fourier transform of the function $f = |\widehat{\chi_{\Omega}}|^2$. This transform is equal to $\chi_{\Omega}\ast\chi_{-\Omega}$, so $\widehat{f}$ has compact support, allowing the use of [@Kolountzakis2000 Theorem 2] to obtain a conclusion about the support of the Fourier transform of $\delta_{\Gamma}$ (as a tempered distribution). Considering instead $f = |V_gg|^2$ on ${\mathbb R}^{2d}$ with $g = \chi_{\Omega}$, where $$(V_gg)^{\widehat{}} (t,\xi) = V_gg (\xi,-t)$$ (see [@Grochenig2014 Equation (11) on page 873]), the Fourier transform is no longer compactly supported (since the time side is unbounded), so the method in [@Kolountzakis2000] cannot be realized without some non-trivial adjustment.

Acknowledgements
================

This work was an undergraduate research project in 2015-16 supported by the Office of Research and Sponsorship Programs (ORSP) at San Francisco State University (Grant No. ST 659). Both authors would like to thank ORSP for the support that made this research possible. The authors would also like to thank Professor Joseph Gubeladze for providing Example 2.6.
--- abstract: | The effect of environment on galaxy formation poses one of the best constraints on the interplay between mass assembly and star formation in galaxies. We present here a detailed study of the stellar populations of a volume-limited sample of early-type galaxies from the [*Sloan Digital Sky Survey*]{}, across a range of environments – defined as the mass of the host dark matter halo, according to the groups catalogue of Yang et al. The stellar populations are explored through the SDSS spectra, via projection onto a set of two spectral vectors determined from Principal Component Analysis. This method has been found to highlight differences not seen when using standard, model-dependent comparisons of photo-spectroscopic data. We find the velocity dispersion of the galaxy to be the main driver behind the different star formation histories of early-type galaxies. However, environmental effects are seen to play a role (although minor). Our Principal Components allow us to distinguish between the effects of environment as a change in average age (mapping the time lapse of assembly) or the presence of recent star formation (reflecting environment-related interactions). Galaxies populating the lowest mass halos have stellar populations on average $\sim$1 Gyr younger than the rest of the sample. The fraction of galaxies with small amounts of recent star formation is also seen to be truncated when occupying halos more massive than M$_{\rm H}{\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}}3\times 10^{13}$M$_\odot$. The sample is split into satellite and central galaxies for a further analysis of environment. Small but measurable differences are found between these two subsamples. For an unbiased comparison, we have to restrict this analysis to a range of halo masses over which a significant number of central and satellite galaxies can be found. Over this mass range, satellites are [*younger*]{} than central galaxies of the same stellar mass. 
The younger satellite galaxies in M$_{\rm H}\sim 6\times 10^{12}$M$_\odot$ halos have stellar populations consistent with the central galaxies found in the lowest mass halos of our sample (i.e. M$_{\rm H}\sim 10^{12}$M$_\odot$). This result is indicative of galaxies in lower mass halos being accreted into larger halos. author: - | Ben Rogers$^1$, Ignacio Ferreras$^2$[^1], Anna Pasquali$^3$, Mariangela Bernardi$^4$, Ofer Lahav$^5$, and Sugata Kaviraj$^{2,6,7}$\ $^1$ Department of Physics, King’s College London, Strand, London WC2R 6LS\ $^2$ Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT\ $^3$ Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany\ $^4$ Department of Physics and Astronomy, University of Pennsylvania, USA\ $^5$ Department of Physics and Astronomy, University College London, Gower St. London WC1E 6BT\ $^6$ Blackett Laboratory, Imperial College London, London SW 2AZ\ $^7$ Astronomy group, The Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH\ date: 'MNRAS in press – arXiv version' title: 'The Role of Environment on the Formation of Early-Type Galaxies' --- \[firstpage\] methods: statistical – galaxies: elliptical and lenticular, cD – galaxies: evolution – galaxies: formation – galaxies: halos. Introduction ============ The origin and evolution of early-type galaxies is a long debated topic, its solution involving a large array of cosmological and astrophysical processes. The current paradigm of galaxy formation is embedded in the $\Lambda$CDM cosmology, from which the structures in the Universe are built hierarchically. Under this framework, early-type galaxies are effectively a secondary stage in galaxy evolution. The first stage consists of the formation of rotationally-supported disk galaxies, built up through the accretion of gas and smaller systems. Mergers subsequently operate, creating early-type galaxies. 
The masses and ages of the stellar populations in early-type galaxies imply that these systems are the result of an intense starburst, followed by processes which quench star formation, after which the galaxy evolves passively. One of the proposed mechanisms to stop star formation requires the removal of gas from the galaxy, involving either fast removal of cold gas (ram-pressure stripping) or a slower removal of hot diffuse gas (strangulation). These mechanisms, however, do not result in a change in kinematics and produce only minor changes in morphology [@wein09], and so they may explain the increased fraction of S0 galaxies [e.g. @drez80]. The major formation process of red sequence galaxies is thought to operate through major mergers [e.g. @delucia06], which result in the required structural and dynamical changes. Such processes have long been known to produce spheroidal galaxies [@toomre72; @BarnHern96; @KB03], as well as to reproduce their more detailed photometric and dynamical properties [@naab03; @naab06]. More recent results also suggest that minor mergers may play an increasingly important role in the build-up and size evolution of massive ellipticals at relatively later times [e.g. @ks06; @bezan09; @bern09]. The subsequent quenching of star formation and evolution of the galaxy onto the red sequence requires the gas to be removed or heated to prevent the formation of new stars. Current models invoke feedback from active galactic nuclei (AGN), since this fits naturally into the merger scenario. Such interactions are expected to drive material to the centre of the galaxy through tidal torques, towards the central supermassive black hole (SMBH) [@dimat07]. The discovery of a correlation between the mass of the SMBH and galaxy mass [@geb00] put significant weight behind this idea and provided scope for more comprehensive theories [@hopkins06; @faber07; @somerville08].
This scenario naturally introduces an expectation of environmental dependence, since such a formation process involves interactions with neighbouring galaxies and structures. A correlation with environment can arise in two forms. Firstly, through the initial conditions, as these provide the impetus for the formation of the first galaxies, so that objects in dense environments will form earlier than in average or low density regions [@gottlober01; @berlind03]. Secondly, in higher density regions, interactions, mergers, gas stripping, etc., are more likely to take place over the lifetime of a galaxy, and so galaxies in these environments will be pushed onto the red sequence at earlier times. Certainly the fact that red early-type galaxies are preferentially found in higher density environments [e.g. @drez80; @blanton05; @wein06] suggests that environment plays an important role in their evolution. Therefore, looking at environmental differences in the stellar populations of early-type galaxies offers a method by which to constrain their formation. Differences in stellar populations have been studied through many different methods: the variations in the tight correlations followed by the early-type population, such as the colour-magnitude relation [see e.g. @gallazzi06] and the fundamental plane [see e.g. @bern03]; galaxy colours [see e.g. @blanton05]; absorption line indices [see e.g. @nelan05]; and the parameters of population synthesis modelling [see e.g. @bern06; @thomas05]. In all cases the effect of environment has been shown to be relatively weak, if observed at all. Thus we take a different approach, applying principal component analysis to spectral data to identify small differences between the stellar populations of early-type galaxies [@pca], in a similar style to [@igpca], over a range of environments.
The environment is in most cases quantified through the projected number density of galaxies, typically the distance to the n$^{th}$ nearest neighbour. However, it has been argued that more physically motivated scales of environment are the mass and the virial radius of the host dark matter halo [@kauff04; @yng05; @wein06; @blanton07]. Not only are environmental dependencies observed to act primarily over distances comparable to the virial radius of such halos [@goto03], but also the merger history of the dark matter halo is determined mainly by its present mass [@kauff04]. The mass of the host dark matter halo cannot be directly measured in most cases but can be estimated through galaxy group catalogues [e.g. @yng07]. Such catalogues also provide the halo-centric radius and can be used to easily separate the sample into central and satellite populations. The other advantage of estimating the environment through halo mass is that it allows a direct comparison between observations and theoretical models. @dekel06, @somerville08, @catt08 and @KO_08 all make model predictions of the evolution and properties of galaxies in terms of the dark matter halo mass. For example, [@catt08], following the work of @birn03 and @dekel06, suggested that the downsizing observed in elliptical galaxies can be modelled by considering a critical halo mass above which gas cannot be accreted efficiently, being shock-heated to the virial temperature, thus effectively shutting down star formation. This paper is structured as follows: we describe the sample of early-type galaxies used in this study as well as the details of the principal component analysis. We investigate the results of the PCA projections over a range of halo mass and velocity dispersion, we highlight the differences observed and investigate them using the stellar population models of [@BC03]. The sample is finally split into central and satellite galaxies, whose properties are compared.
The satellite population is then used to determine a possible dependence on halo-centric radius.

The sample
==========

This work is based on the large sample of early-type galaxies of @pca. This sample is selected from the @bern06 catalogue, compiled from the Sloan Digital Sky Survey [SDSS, @SDSStec], Data Release 4 [@dr4]. It is a volume-limited sample within z$\leq$0.1 and $M_r\leq -$21. A cut with respect to signal-to-noise ratio (S/N) was also imposed, rejecting those spectra with S/N$\leq$15. The final sample comprises 7,134 early-type galaxies. Here we extend [@pca] to investigate the effect of environment in more detail, including the information of the host dark matter halo for each galaxy, as explained below. ![Our sample of SDSS early-type galaxies is shown with the two main parameters that are used to characterise the intrinsic (velocity dispersion; $\sigma$) and environmental (host halo mass; M$_{\rm H}$) dependence of the underlying stellar populations. In the crowded regions of the figure, contour lines track the density of galaxies in the plot. The grid corresponds to the binning applied throughout the paper. The large black dots and error bars give the average and RMS within each bin in halo mass. \[fig:samp\] ](Enviro_f1.eps){width="3.3in"} We make use of the Galaxy Groups Catalogue of @yng07, which is an improved application of the @yng05 halo-based galaxy group finder to the New York University Value-Added Catalogue [@NYVAC], based on the Sloan Digital Sky Survey Data Release 4 [@SDSStec]. We cross-correlate this catalogue with the original sample to find halo masses for all but 175 galaxies, leaving a total of 6,959 galaxies in the new sample used here. We have also removed a small ($\sim 150$) set of galaxies with velocity dispersions below 125 km/s, as these galaxies only appear in the bin with the lowest halo masses, and have no counterparts in more massive halos.
Allowing them to be part of the lowest velocity dispersion bin would considerably reduce the average $\sigma$ in this bin, introducing a bias when compared to other halo masses. The galaxy group finder, described in @yng05 and @yng07, is an iterative process in which the membership of the groups and the relationships between the properties of the halo are refined at each step. Initially the group finder uses a friends-of-friends algorithm with small linking lengths to identify the centres of possible galaxy group candidates. In such groups the centre of the halo is given by the luminosity-weighted centre. The remaining isolated galaxies not associated with groups are also set as the potential centres of groups. The characteristic luminosity (L$_{19.5}$) of each candidate group is then estimated, defined as the sum of the luminosities of all group members with $^{0.1}$M$_r - 5\log h{\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}}-$19.5 [^2]. The L$_{19.5}$ values are corrected for survey completeness and the apparent magnitude limit of the survey at redshifts z$\geq$0.09. ![The projected radial distance from the luminosity-weighted centre of the group is shown with respect to velocity dispersion. The distance is scaled to the virial radius, as determined from the properties of the halo. The grid overlaying the sample shows the binning applied in the analysis. Black dots and error bars correspond to the average and RMS scatter of the sample within the bins in R$_{\rm proj}$.\[fig:sampr\] ](Enviro_f2.eps){width="3.3in"} Using the characteristic luminosity, the mass of the group halo is estimated from a group mass-to-light ratio. The ratio used is set at a constant value across all groups for the first iteration but is subsequently refined to be group-mass dependent. However, the results are not particularly sensitive to the exact value of the ratio, even if it is held fixed [@yng05].
The estimate of the group halo mass allows the derivation of other group halo properties such as the halo radius, within which the halo has an average density contrast of 180, and the virial radius, defined as the radius within which the average density is above a set value. Once the properties of the halo have been estimated, an NFW profile [@NFW97] is used for the dark matter to determine the three-dimensional density contrast of the halo in redshift space. Further galaxies are subsequently assigned to the galaxy group candidates if they are within a certain distance of the centre. This process – which allows for the merging of two groups if all members satisfy the above criteria individually – is repeated until the membership of each group remains constant. The final dark matter halo mass is then estimated from a linear relationship with respect to stellar mass, derived from semi-analytic models [@kang05]. The galaxy group catalogue differentiates between central and satellite galaxies. Those galaxies which are the most massive galaxy of the group are defined as centrals, whereas the remaining galaxies are labelled as satellites. Low mass halos can consist of a single, central galaxy. Shown in figure \[fig:samp\] is the entire sample as a function of host halo mass, M$_{\rm H}$, and central velocity dispersion, $\sigma$, of the individual galaxies. The black dots correspond to the average and RMS scatter of the sample, within the halo mass bins shown by the grid. ![image](Enviro_f3a.eps){width="3.5in"} ![image](Enviro_f3b.eps){width="3.5in"} We can see from the figure that within the lowest halo mass bin (M$_{\rm H}\sim 10^{12}h^{-1}$M$_\odot$) there is a limitation imposed by the size of the halo on the highest-$\sigma$ galaxy that it can contain. This is similar to that seen in the main catalogue in reference to the maximum stellar mass [see e.g. @yang08].
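The first pass of the group finder described above is a friends-of-friends linkage. A minimal, purely illustrative union-find version (not the Yang et al. implementation, and ignoring redshift-space anisotropy) might look like:

```python
import math

def friends_of_friends(points, b):
    """Group points into FoF groups: two points are 'friends' if their
    separation is below the linking length b; groups are the connected
    components of the friendship graph (union-find with path compression)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < b:
                parent[find(i)] = find(j)   # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values(), key=min)

# Two tight pairs and one isolated point (a "single central galaxy" group).
pts = [(0, 0), (0.4, 0), (5, 5), (5.3, 5), (10, 0)]
assert friends_of_friends(pts, 0.5) == [[0, 1], [2, 3], [4]]
```

This brute-force O(n^2) pairing is fine for a sketch; production group finders use spatial indexing and, as in the text, iterate the linking with refined halo properties.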
The difference here is the non-trivial mapping of velocity dispersion to dynamical or stellar mass, which is dependent on other factors such as structure. However, since the results presented in this paper are consistent across the range of velocity dispersions considered, the details of such a mapping are unlikely to affect them. Also, we are using velocity dispersion as a measure of the intrinsic properties of the galaxy, which is consistent with many stellar population studies in the literature [e.g. @bern03; @gallazzi05]. The figure also reveals a shift towards decreasing velocity dispersion at lower halo mass, whereby less massive halos on average contain smaller galaxies. This effect is illustrated by the solid circles and error bars in figure \[fig:samp\], which represent the average and root mean square deviation of the velocity dispersion within each bin of halo mass. This correlation between mass/luminosity and environment, seen here in terms of halo mass [@VDB08a] but also found with respect to galaxy density [@blanton05; @croton05], means that it is important to take into consideration how such intrinsic properties change as a function of environment. It is well known that the stellar populations of a galaxy are related to its velocity dispersion [@kauff04; @thomas05; @gallazzi05], and therefore such a correlation can generate spurious environmental effects. Hence, in order to carry out a robust analysis, it is important to separate out the effects of environment from those that are caused by this selection bias. To achieve this, we investigate the difference with respect to environment only within the same $\sigma$ range. Our sample is divided into sub-samples either with respect to halo mass, or velocity dispersion. This classification is shown as a grid in figure \[fig:samp\]. The choice of the grid size is motivated by the density of the underlying sample, selecting larger bins at high mass.
We also consider the variation in the properties of stellar populations with respect to the position of a (satellite) galaxy within its host halo. The dependence is measured as a function of the projected distance from the luminosity-weighted centre of the group, namely the projected halo-centric radius, $R_{\rm proj}$. This will allow us to assess the effect of galaxy accretion in groups on the star formation history. We expect galaxies on the outskirts of a group to be newer members. Hence, these galaxies will be increasingly subject to the various effects of the group environment, such as 'strangulation', 'ram-pressure stripping' and 'harassment', which act on the satellite galaxies of groups [@wein06; @VDB08a; @VDB08b]. Shown in figure \[fig:sampr\] is the satellite-only sample as a function of projected distance scaled by R$_{\rm vir}$. Similar studies of satellites have found a radial mass segregation within groups [@VDB08a], with the least massive galaxies at the outskirts of the group. We find no such trend within our sample with respect to velocity dispersion.

The stellar populations of different halo masses
================================================

Regarding the effect of environment on the stellar populations of elliptical galaxies, it is important to notice that the reported differences have generally been small. For example, studies of the colour-magnitude relation have only found limited and statistically weak evidence [@bern03; @gallazzi06], while even comprehensive analyses [@clemens06; @bern06] find differences of order $\sim 1$ Gyr. In this paper we optimise the extraction of differences from spectroscopic data via principal component analysis, which has been shown to succeed in detecting differences within highly homogeneous samples [e.g. @igpca; @pca].

PCA
---

Principal Component Analysis (PCA) is a multivariate technique that reduces the dimensionality of a data set. In most cases this is just used as a data compression algorithm.
However, previous work [@mad03; @igpca; @pca] has shown that vital information can be extracted from the projections onto the first few principal components. In this work the variables that describe the data set are the flux values at each wavelength, i.e. the spectra. The task of PCA is to generate a set of basis vectors (the principal components) from the data set, such that one can rank these vectors with respect to the variance they capture. Hence, when obtaining the “coordinates” of each galaxy by projecting their spectra onto the principal components, one can use just the very few coordinates that correspond to the principal components with the highest variance. In @pca we find that the first two components already hold valuable information regarding the average age of the stellar populations and the presence of recent star formation. We refer the reader to that paper for a detailed description of the methodology, although a brief summary is given below. The first principal component is found to be consistent with a typically old stellar population, showing a pronounced 4000Å break, significant metal absorption lines and limited Balmer line strengths. The second component is much bluer, with absorption lines dominated by the Balmer series. Although the components themselves are orthogonal by construction, the projections of the observed spectra onto them are correlated, and this relation represents the mass-metallicity-age correlation of early-type galaxies [e.g. @bern03; @thomas05]. The extended scatter in the direction of the second component, i.e. towards an excess of blue light, is created by recent star formation.
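The projection step can be sketched with a plain SVD-based PCA. This is a generic implementation, not necessarily the exact algorithm used in the papers cited above:

```python
import numpy as np

def pca_project(spectra, n_components=2):
    """Rank principal components of a set of spectra by explained
    variance and return the projections ("coordinates") of each
    spectrum onto the first few components. spectra: (n_gal, n_wave)."""
    X = np.asarray(spectra, dtype=float)
    X = X - X.mean(axis=0)               # centre each wavelength bin
    # SVD: rows of Vt are the principal components, ordered by variance
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]
    projections = X @ components.T       # (n_gal, n_components)
    explained_var = s[:n_components] ** 2 / (len(X) - 1)
    return projections, components, explained_var
```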
This interpretation is confirmed in two ways: 1) through the application of a two-component stellar population model, where we find galaxies with a higher projection onto the second principal component require a higher mass fraction in young stars [@pca; @cloprs], 2) by comparing the results of PCA with NUV photometry from GALEX: the NUV$-$r colour is highly sensitive to small amounts of recent star formation [@Scha06; @kav07]. Within our sample, 'NUV bright' galaxies (NUV$-$r$\leq$4.9) present higher projections onto the second principal component, compared to the sample of quiescent, 'NUV faint' galaxies (NUV$-$r$\geq$5.9). This trend between PCA and the presence of young stars is optimised by rotating the projections on the two-dimensional plane spanned by PC1 and PC2, to give two new components: $\eta$ and $\zeta$. These two components can be defined as the distance along the PC1-PC2 correlation ($\eta$) and the residual of the correlation ($\zeta$). Even though NUV is more sensitive to the presence of massive stars than optical spectra, this method is complementary to NUV studies. An advantage of using optical spectra over NUV photometry is that we can track the presence of recent star formation for a greater length of time: NUV light decays very rapidly as the most massive stars die out. In @pca we compare the PCA projections with a number of star formation histories combined with population synthesis models and conclude that $\eta$ is mainly related to the average age of the stellar populations, whereas $\zeta$ tracks the presence of recent star formation [see also @PhD]. This interpretation is carried over to this paper for the analysis of the effects of environment. ![The average of the $\eta$ and $\zeta$ components as a function of $\sigma$ within each group halo mass bin, as labelled (given in $\log($M/M$_\odot)$). The error bars correspond to the error on the mean. The small panels below the main plots show the standard deviation of $\eta$ and $\zeta$ in each bin.
The thin black line is for reference only and is a least-squares fit to the three most massive halos.\[fig:HaloSigma1\] ](Enviro_f4.eps){width="3.4in"}

Stellar Mass vs Group Halo Mass
-------------------------------

In this section we look into the dependence of the stellar properties of elliptical galaxies on both the stellar mass and the mass of the halo which the galaxy occupies. We utilise the results from PCA described above, focusing on how the average properties of the galaxy (through $\eta$) and the young stellar populations (through $\zeta$) depend on galaxy mass or halo mass. While it is true that environment plays a major role in the number density of early-type galaxies [i.e. the morphology-density relation, @drez80], here we pose a different question: “at a fixed $\sigma$, how different are the star formation histories of early-type galaxies with respect to environment?”. The left panel of figure \[fig:EZdist\] shows the distribution of the first and last bins in both velocity dispersion ([*bottom*]{}) and group halo mass ([*top*]{}). The comparison reveals that in terms of the mass of the galaxy (i.e. velocity dispersion), smaller galaxies have higher values of both $\eta$ and $\zeta$, indicating a younger age, most likely due to increased recent star formation, in agreement with the 'downsizing' scenario [@cowie96]. However, the distributions of the two extreme bins regarding environment ([*top*]{}) show a considerably smaller divergence in terms of their $\eta$ or $\zeta$ distributions. Note that this preliminary comparison shows that galaxies in the lowest mass halos have slightly higher values of $\eta$ and $\zeta$. ![The fraction of galaxies with a $\zeta$ value above 0.5, which is consistent with the presence of recent star formation. Each line corresponds to a range of host halo masses, as labelled (given in $\log($M/M$_\odot)$).
The error bars are given by Poisson errors.\[fig:HMRSF\] ](Enviro_f5.eps){width="3.5in"} A Kolmogorov-Smirnov (KS) test is used to assess whether the galaxies in these subsamples originate from the same distribution. This is quantified in terms of the D$_{\rm KS}$ statistic [a measure of the maximum difference of the cumulative distributions, see e.g. @nr]. The right panel of figure \[fig:EZdist\] shows the result of a Monte Carlo simulation, where we extract samples of 100 galaxies from the upper and lower mass bins, and perform the KS test in each case. This exercise is repeated 10,000 times, and the histogram of D$_{\rm KS}$ is shown when extracting galaxies from the upper and lower bins in velocity dispersion (short dashed line), in host halo mass (long dashed line) or from random sampling of the complete sample (solid line). Velocity dispersion clearly plays the dominant role; its histograms in the $\eta$ and $\zeta$ components are far from the distribution of random sampling. However, environment – or halo mass – plays a more subtle role, although a non-negligible one, especially with respect to the $\eta$ component, which does show less of an overlap with the distribution for random sampling. Whether this effect is driven by the mass bias mentioned above or whether it is a genuine effect of environment is studied in the remainder of this paper.

A detailed look at the \[$\eta$, $\zeta$\] distributions
--------------------------------------------------------

Many authors have studied the role of environment on the scaling relationships. This includes colours [e.g. @wein06; @kav07], absorption line indices [e.g. @kun01; @nelan05; @bern06] as well as derived quantities such as luminosity-weighted age, metallicity and $[\alpha/$Fe$]$ [@thomas05; @clemens06]. In a similar vein, we extend this analysis to include the $\eta$ and $\zeta$ components.
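The Monte Carlo KS comparison described above (repeated draws of 100 galaxies, histogram of D$_{\rm KS}$) can be sketched as follows, with the statistic computed directly from the empirical CDFs:

```python
import numpy as np

def ks_statistic(x, y):
    """Maximum vertical distance between the empirical CDFs of two samples."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(cdf_x - cdf_y).max()

def dks_distribution(a, b, n_draw=100, n_trials=10000, seed=0):
    """D_KS values from repeated draws of n_draw galaxies from each
    subsample (with replacement), to be histogrammed."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.array([
        ks_statistic(rng.choice(a, n_draw), rng.choice(b, n_draw))
        for _ in range(n_trials)
    ])
```

Two subsamples drawn from genuinely different distributions then show a D$_{\rm KS}$ histogram well separated from that of random sampling.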
Shown in figure \[fig:HaloSigma1\] are the average values of $\eta$ and $\zeta$ with respect to velocity dispersion, for a range of host halo masses. Points corresponding to the same host halo mass range are connected. The velocity dispersion value for each bin is given by the average. A reference line is also plotted in the bottom panel. It is a least-squares fit to the most massive halos and is shown in all subsequent figures of $\langle\eta\rangle$ for comparison. The error bars give the uncertainty on the mean values within each bin. The RMS deviation is shown separately at the base of each panel as $\sigma(\zeta)$ ([*top*]{}) and $\sigma(\eta)$. A clear dependence is found with respect to velocity dispersion, such that galaxies with lower values of $\sigma$ have higher values of both $\eta$ and $\zeta$, revealing the presence of younger stellar populations. Note that, as shown in @pca, the principal components $\eta$ and $\zeta$ do not depend on the velocity dispersion, and so no correction is needed (in contrast with analyses of equivalent widths). It is important to notice from figure \[fig:HaloSigma1\] that the effect of environment is very subtle. In terms of $\langle\zeta\rangle$ and in most cases $\langle\eta\rangle$, the trends of the different host halo mass ranges are indistinguishable from each other. However, galaxies in the least massive halos show a consistent and significantly higher value of $\langle\eta\rangle$ across all values of $\sigma$ (although the nature of the trend is the same for all halo mass bins). Given that the differences are small, the significance is determined through a KS test for the $\eta$ component, performed by comparing galaxies in the lowest and the highest host halo bin, across all values of $\sigma$. In the three central bins of velocity dispersion we find a probability higher than 99% that the samples are drawn from different distributions.
At the highest and lowest bins of $\sigma$ the small number of galaxies prevents us from stating a similar result. Furthermore, figure \[fig:HaloSigma1\] also indicates that $\langle\zeta\rangle$ does not vary with halo mass. This comes as a surprise, as one might have expected that recent star formation is responsible for the higher values of $\langle\eta\rangle$. To explore this issue in more detail, we use the conditional fraction of galaxies with a value of $\zeta$ above a threshold at which one needs to invoke recent star formation. We choose $\zeta\geq$0.5, since galaxies above this value both require significant fractions of young stars in a two-burst model and also have NUV luminosities consistent with recent star formation [@pca; @cloprs]. In figure \[fig:HMRSF\] we show this conditional fraction as a function of velocity dispersion for a range of host halo masses. Consistent with the simple analysis performed above, the main driving force behind the fraction of galaxies with recent star formation is galaxy mass (assuming $\sigma$ is representative of the mass of the galaxy). With respect to halo mass, the fraction of younger galaxies seems to split such that there is a consistently lower fraction for the most massive halo bins (black solid line) relative to the rest of the sample. The significance of this drop is not high in any one bin but the trend is notably consistent across all bins. This is a qualitatively similar result to that found in [@Scha06] using NUV data, although here we see that recent star formation persists in halos with M$_{\rm H}\leq 3\times 10^{13} h^{-1}$M$_\odot$ and is reduced in more massive halos. The possible mechanisms underlying this effect are discussed in the conclusions. ![Change in the $\eta$ component relative to the average relationship with $\sigma$ via SSP fitting (see text for details).
A comparison of the lowest mass halo (dashed line) to the most massive halos (solid line) is shown relative to deviations from this relationship due to an increase/decrease in the average SSP age across $\sigma$ (dotted lines).\[fig:HaloSigmaSSP\]](Enviro_f6.eps){width="3.5in"}

### Modelling the $\eta$ component

We now turn to quantifying the difference in $\langle\eta\rangle$ from figure \[fig:HaloSigma1\] through the application of stellar population synthesis models. Since the change of $\eta$ is not accompanied by a change in $\zeta$ or in the fraction of high-$\zeta$ galaxies, the shift is unlikely to be related to recent star formation. Furthermore, this shift is relatively small, which implies that composite models, such as an exponentially decaying star formation history, may well blur any discriminating effect. Therefore, while the modelling of early-types can be pushed beyond relatively simple formation histories [@bmc], it is more robust in this case to consider simple stellar populations. The observations are compared to synthetic spectra through the equivalent widths of absorption line indices. The model spectra are extracted from the simple stellar populations of @BC03, using a @chab03 initial mass function. A detailed grid of models is constructed over a range of ages $\{1\cdots 14$ Gyr$\}$ and metallicities $-1.0\leq \log(Z/Z_{\odot})\leq +0.35$, at solar abundance ratios. We constrain the population parameters with multiple age-sensitive absorption features: H$\beta$, H$\gamma$, H$\delta$, as well as the 4300Å absorption band (G4300) and the metallicity indicator $[$MgFe$]$ [@GonzPhd]. The absorption features are measured in each of the observed galaxy spectra, after being smoothed to the highest velocity dispersion within the respective bin. The equivalent widths (EWs) are estimated using the method outlined in @bmc, in which the pseudo-continuum is determined by a boosted median value of the surrounding spectrum.
The specific absorption lines are targeted using a 20 Å window. We use a standard maximum likelihood method to compare the observed and model absorption features. The model EWs are obtained in the same way, smoothing the spectra to the velocity dispersion considered. For each galaxy we use the probability-weighted age and metallicity, where the probability is defined as P$(t,Z)\propto\exp(-\Delta\chi^2/2)$, which gives a robust estimate of the SSP values [e.g. @gallazzi05]. The best fits give an average reduced value of $\langle\chi_r^2\rangle = 1.3$. The average age and metallicity from each of the halo mass and velocity dispersion bins is used to investigate the difference seen in the average $\eta$ value in an equivalent analysis to that shown in figure \[fig:HaloSigma1\]. However, we find no significant differences between the predicted SSP ages or metallicities with respect to host halo mass across all velocity dispersions. This illustrates the power of PCA to identify small differences in the spectra of elliptical galaxies, which might be difficult for model-based methods to identify, especially at low S/N. Therefore, in order to assess the (second order) effects of environment on $\eta$, we consider small perturbations with respect to the (first order) correlation with velocity dispersion. We define a fiducial relation between velocity dispersion and age/metallicity through the average parameters of each of the complete velocity dispersion bins, i.e. including all halo masses. The SSP parameters of this relation are then offset with respect to the average age for each bin in velocity dispersion. The effect on the $\eta$ component of this change (computed directly on the models) is estimated and compared to the observed values – defining for each galaxy a $\Delta\eta$ as the difference between the $\eta$ value of the galaxy and that of the model corresponding to the same velocity dispersion.
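The probability-weighted estimate can be sketched as follows; the equivalent-width grid is a toy stand-in for the full model grid, and the index values are illustrative:

```python
import numpy as np

def probability_weighted_params(obs_ew, obs_err, model_ews, ages, zs):
    """Probability-weighted age and metallicity over an (age, Z) grid.
    model_ews: (n_age, n_Z, n_index) grid of model equivalent widths;
    the weights follow P(t, Z) ~ exp(-Delta chi^2 / 2)."""
    chi2 = (((model_ews - obs_ew) / obs_err) ** 2).sum(axis=-1)
    dchi2 = chi2 - chi2.min()
    prob = np.exp(-0.5 * dchi2)
    prob /= prob.sum()
    age_mean = (prob.sum(axis=1) * ages).sum()   # marginalise over Z
    z_mean = (prob.sum(axis=0) * zs).sum()       # marginalise over age
    return age_mean, z_mean
```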
The model values of $\eta$ are derived in the same way as for the observed values, i.e. via projection onto the principal components followed by a rotation of the projected values. This rather simple analysis enables us to quantify the change in $\eta$. Notice that we assume only a perturbation in age, as this is by far the most dominant effect reported in the literature [@bern03; @thomas05; @nelan05]. While we are only interested in identifying the magnitude of the shift of the lowest mass halo, as a check we compare our relationship of metallicity and age with velocity dispersion to those found by other authors. We quantify this relationship through the slope of a linear fit to the fiducial relation, which was found to be $\Delta\log (Z/Z_{\odot})/\Delta\log (\sigma) =$ 0.68 and $\Delta\log ($Age $/$Gyr$)/\Delta\log(\sigma) =$ 0.38, respectively. These values are comparable to those reported in @thomas05 (0.55, 0.24), @clemens06 (0.76, $\sim$) and @nelan05 (0.53, 0.59). The $\sigma$ vs. $\Delta\eta$ relationship is shown in figure \[fig:HaloSigmaSSP\] for the fiducial model obtained from the best fit SSP ages and metallicities (labelled “Average”). For reference, we also show the relation when the age is shifted by 1 or 2 Gyr, as labelled. We include in the figure the observed values for the lowest halo mass (dashed line) and for the rest of the sample (thick solid line). The figure shows that early-type galaxies in the lowest density regions are on average about 1 Gyr younger than those in denser environments, a result that is consistent over a wide range of velocity dispersion. This result indicates that galaxies residing in all but the lowest mass halos have similar stellar populations, suggesting they formed in similar ways at similar redshifts. This may be due to a true invariance across average/high density environments, such that above a certain density the formation process becomes uniform.
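The fiducial slopes quoted above correspond to a least-squares linear fit in log-log space, e.g.:

```python
import numpy as np

def loglog_slope(sigma, quantity):
    """Slope of a least-squares linear fit of log10(quantity) against
    log10(sigma), e.g. Delta log(Age) / Delta log(sigma)."""
    x = np.log10(np.asarray(sigma, float))
    y = np.log10(np.asarray(quantity, float))
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```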
Centrals and Satellites
-----------------------

The star formation histories of galaxies sitting at the centre of the dark matter halos (centrals) and galaxies which orbit around the centre (satellites) are expected to depend differently on the mass of their host halo. Specifically, satellites are more likely to be affected by the transformation mechanisms operating in the halo environment [@VDB08a; @VDB08b]. Therefore, we analyse separately the effects of environment on centrals and satellites and compare the relative differences. ![Comparison of the distribution of the central and satellite galaxy populations at the same velocity dispersion, with respect to average age ($\langle\eta\rangle$; [*bottom*]{}) and to recent star formation (f$(\zeta \geq 0.5)$; [*top*]{}). The short dashed line is the $\langle\eta\rangle$ for the lowest mass halo in the whole sample. The thin solid line in the bottom panel is the reference line from fig.\[fig:HaloSigma1\]. \[fig:satcen\_V\] ](Enviro_f7.eps){width="3.5in"} The comparison of satellite and central early-type galaxies at the same velocity dispersion is shown in figure \[fig:satcen\_V\]. The figure reveals that central galaxies (solid lines) have higher values of $\langle\eta\rangle$ ([*bottom*]{}) and f$(\zeta>0.5)$ ([*top*]{}), indicative of younger average ages [@pasq09b] and significant recent star formation, respectively, when compared to satellites of the same mass (long dashed lines). This is consistent with @VDB08b and @wein09, who found that in terms of optical colours, centrals are bluer than satellite galaxies, indicating younger ages. Here we can see this extends to the fraction of elliptical galaxies which harbour small amounts of recent star formation as well. However, notice that this trend includes all halo masses. In §3.4.1 we show the effect of separating this central/satellite classification with respect to halo mass.
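The conditional fractions f$(\zeta\geq 0.5)$ plotted here, with their Poisson error bars, amount to a simple per-bin count ratio:

```python
import numpy as np

def rsf_fraction(zeta, threshold=0.5):
    """Fraction of galaxies in a bin with zeta above the recent-star-
    formation threshold, with a simple Poisson error on the count."""
    zeta = np.asarray(zeta, float)
    n_tot = len(zeta)
    n_rsf = int((zeta >= threshold).sum())
    frac = n_rsf / n_tot
    err = np.sqrt(n_rsf) / n_tot      # Poisson error on the numerator
    return frac, err
```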
We have also plotted in the bottom panel of figure \[fig:satcen\_V\] the average value of $\eta$ for galaxies in the lowest mass halos (short dashed line). At the lowest halo masses all galaxies are centrals. Hence, this comparison allows us to rank the importance of host halo mass against the central/satellite nature. The figure shows that of these two properties, the mass of the halo dominates, i.e. centrals in low mass halos are younger than centrals in general. We focus on this aspect in the next two sub-sections.

### Effect on average age: $\eta$

The average values of $\eta$ are plotted for the two subgroups in figure \[fig:satcen\_HMV\]. It is evident from the figure that the analysis is complicated by the segregation imposed naturally by the halo mass on the central/satellite nature of the galaxy. The sample lacks central galaxies in the most massive halos (a limit imposed by the survey volume). On the other hand, at a given velocity dispersion, satellites cannot be found at low halo masses (a trivial constraint imposed by the halo mass). This means a comparison of the two populations can only occur within certain ranges of halo mass. Therefore the middle panel of figure \[fig:satcen\_HMV\] shows the $\langle\eta\rangle$ values *only* over a range of halo masses in which a significant number of both central and satellite galaxies exist. The top and bottom panels show the central and satellite galaxies, respectively, over their full halo mass range. The central galaxies ([*top*]{}) show an increase in the average value of $\langle\eta\rangle$ at the lowest halo masses. This is identical to the trend shown in figure \[fig:HaloSigma1\] (notice centrals are the only type in the first halo mass bin). The result for more massive halos is consistent with the main sample. This suggests again that the effect of halo mass on the stellar populations is limited to masses below M$_{\rm H}\sim 10^{12}$M$_\odot$.
On the other hand, satellite galaxies ([*bottom*]{}) show an increase in $\langle\eta\rangle$ in the group halo mass range $3\times 10^{12} \leq$M$_{\rm H}\leq 10^{13}$M$_\odot$, which is the lowest halo mass bin at which significant numbers of satellite galaxies exist. The comparison in the middle panel reveals that in the overlapping halo mass range, i.e. M$_{\rm H}\sim (3\times 10^{12} \cdots 10^{13})$M$_\odot$, centrals have much lower values of $\langle\eta\rangle$. Thus the increase of $\langle\eta\rangle$ found in the population of satellite galaxies would have been hidden by the dominance of centrals in this mass bin. ![$\langle\eta\rangle$ values for the central and satellite galaxy populations within each bin in velocity dispersion *and* halo mass. The central panel compares central and satellite galaxies over a range of halo masses for which sufficient numbers of both types exist. The halo mass range is as follows – given in $\log($M$_{\rm H}/$M$_\odot)$: grey dashed (11.80,12.15); grey dotted (12.15,12.50); grey solid (12.50,13.00); black dashed (13.00,13.50); black dotted (13.50,14.00); black solid (14.00,15.00). The thin solid line shown in all panels is the reference line from fig.\[fig:HaloSigma1\].[]{data-label="fig:satcen_HMV"}](Enviro_f8.eps){width="3.5in"} Given that higher $\eta$ values are consistent with younger average ages, this suggests that satellites are in fact younger when compared to central galaxies of the same velocity dispersion and occupying halos of similar mass [@pasq09b]. A KS test confirms that for this halo mass range, the second and third bins in $\sigma$ of the centrals and satellites are drawn from different distributions at confidence levels of $\geq$98% and $\geq$99%, respectively. While this is seemingly in contrast with the previous results, it is important to realise what is being compared in figure \[fig:satcen\_HMV\].
A pure central/satellite split (as in figure \[fig:satcen\_V\]) will generally result in a comparison of central galaxies in low richness environments with satellites in higher density regions. This is the motivation of the split in many papers, under the assumption that the centrals of today are a reasonable approximation to the progenitors of the current satellite population (e.g. §4.1 of @VDB08b). Here we are comparing centrals in groups of significant size to satellites in similar groups. We find a trend in three out of the four bins considered, although it is only found to be statistically significant in two. The effect is not visible at the higher velocity dispersions. ![Comparison of the distribution of central and satellite populations with respect to the fraction of galaxies with a $\zeta$ value above 0.5, which is consistent with recent star formation. The fraction is conditional, i.e. calculated for each bin separately.\[fig:RSFsatcen\] The binning with respect to halo mass is the same as in figure \[fig:satcen\_HMV\]. ](Enviro_f9.eps){width="3.5in"} Hence, we find that a central galaxy 'evolves' faster than a similar galaxy that is accreted onto a group. This probably relates to the epochs over which environmental effects are efficient. Central galaxies within more massive halos will have interacted with other galaxies during the formation of the group and will also have existed inside a more massive halo earlier than a galaxy of the same mass which is accreted into a group halo. This result is consistent with the recent work of @bern09, who found that early-type brightest cluster galaxies were $\sim1$Gyr older than the surrounding satellite early-type galaxies.
Figure \[fig:satcen\_HMV\] reveals that the satellite galaxies in group halos of mass M$_{\rm H}\sim 3\times 10^{12} \cdots 10^{13}$M$_\odot$ have values of $\langle\eta\rangle$ consistent with those of the central galaxies from halos of mass M$_{\rm H}\sim 6\times 10^{11} \cdots 10^{12}$M$_\odot$. This trend is consistent with the hierarchical build-up of structures expected within the standard $\Lambda$CDM cosmology.

### Effect on recent star formation: $\zeta$

Figure \[fig:RSFsatcen\] shows the fraction of galaxies with values of $\zeta$ consistent with recent star formation, with respect to central/satellite nature, in a similar manner to figure \[fig:satcen\_HMV\]. The trends of the individual populations of centrals (*top*) and satellites (*bottom*) are consistent with those found in the sample as a whole (see figure \[fig:HMRSF\]). However, a comparison of the fraction of high $\zeta$ values between centrals and satellites ([*middle*]{}) shows a significant difference. The central galaxy data (triangles) are consistently positioned at higher fractions of recent star formation, even within the same halo mass range. This might not be particularly surprising: while satellites generally have their accretion of material stopped when they enter the halo, centrals do not, and so can still accrete gas. This mechanism is consistent with the scenario of @kav09 and @cloprs, whereby the recent star formation seen in elliptical galaxies is fuelled by the accretion of small clouds of gas or satellites.

Effects with Halo-Centric Radius
--------------------------------

Since there is evidence to suggest that the younger low halo mass galaxies are accreted as satellites, we might expect to see younger galaxies on the outskirts of groups. There is also the possibility that the halo-centric radius modulates the efficiency of environmental mechanisms.
We investigate this aspect by looking at the relationship of $\eta$ and $\zeta$ as a function of the projected distance from the luminosity-weighted centre of the group. ![image](Enviro_f10a.eps){width="3.4in"} ![image](Enviro_f10b.eps){width="3.4in"} Shown in the right hand panel of figure \[fig:RV\] are the satellite galaxies split into subsamples according to velocity dispersion and halo-centric radius (see fig. \[fig:sampr\]), together with the average values of $\eta$ for each bin. Since we are looking at galaxies across all group halo masses, the projected distance is scaled using the virial radius of the occupied halo. As found by previous work [e.g. @VDB08a; @wein09], there is no significant trend. There is some indication of a reduction in the scatter (bottom panel) of the $\eta$ values for decreasing radii, although the effect is far from robust. We note that this analysis is complicated both by the degenerate nature of the projected distance and by the fact that smaller halos tend to be more concentrated. These effects are likely to blur the subtle relationships associated with elliptical galaxies and environment. We also analyse in the left hand panel of figure \[fig:RV\] the conditional fractions of galaxies which have $\zeta \geq 0.5$ as a function of the scaled projected distance from the luminosity-weighted centre. An inspection of the figure indicates a general trend of increasing recent star formation fractions with projected radius, consistent with the previous comparison of centrals and satellites. The stripping of a satellite's surrounding gas may well explain both the trend seen here and that in figure \[fig:RSFsatcen\]. Note that the recent star formation fractions seen at the outskirts of the groups are higher than those for the general satellite sample at the same stellar mass.
This is consistent with the results seen in @igpca, @pca and @cloprs, where we show that galaxies in medium density environments show increased fractions of recent star formation, possibly due to interactions. The large error bars are a function of the small numbers which exist away from the group centres, yet the trend is consistent across all mass bins. To conclude, we find that the effects with halo-centric distance are seen mainly in terms of $\zeta$, i.e., the younger subpopulations. As a result of being accreted into the halo, an elliptical galaxy will have any residual star formation halted. On the other hand, we find that the average age of the bulk of the stellar component does not depend on the distance from the centre of the halo. This is to be expected if, as argued by [@VDB08a], a large fraction of early-type galaxies transition onto the red sequence as centrals. It may be that the main effects expected are better analysed within cluster samples [e.g. @nelan05].

Discussion and Conclusions
==========================

Using the large $\sim$7,000-strong sample of early-type galaxies described in @pca, we have investigated the effect of environment, measured through the mass of the dark matter halo that hosts each galaxy. We make use of the previously derived PCA-rotated projections, $\eta$ and $\zeta$, sensitive to the average properties and recent star formation, respectively, to identify small differences in their stellar populations. We find that the star formation histories are mainly a function of velocity dispersion. This is shown in figure \[fig:EZdist\], where a considerable difference is evident between the effects of velocity dispersion and group halo mass. When we separate out the sample to remove the mass-environment degeneracy, the stellar populations across most of the halos are indistinguishable.
Therefore, while the majority of this paper has focused on the small differences observed in the elliptical galaxy population, the main conclusion is that the effect of environment on the stellar populations of early-type galaxies is [*extremely limited*]{}. However, we do find that small but nonetheless interesting differences can be detected with respect to environment. First of all, galaxies within halos with the lowest masses are estimated to be $\sim$1 Gyr younger than those in the rest of the sample. This offset is consistent with the fact that galaxies in low mass halos are exclusively centrals. Thus, they represent the lower end of the environmental density scale, and so our result compares favourably with the general view that galaxies in low density regions are 1$-$2 Gyr younger [@bern98; @thomas05; @bern06; @sanchezblaz06]. We use in this paper a catalogue of groups to estimate the dark matter halo masses, allowing us to compare more naturally with theoretical modelling results. One of the more recent discoveries is that many observations [e.g. @binney04; @croton05; @dekel06; @catt08] can be well explained by assuming a critical halo mass above which the supply of cold gas is stopped by virial shock heating [@birn03; @keres05; @dekel06]. The critical halo mass is found to be M$_{\rm H}\sim 10^{12}$M$_\odot$. Modelling by @catt08 shows that galaxies above and below this value are consistent with the analysis of @thomas05 with respect to galaxies in high/low density regions (see Cattaneo et al. figure 6). It is possible that we see this divide more explicitly here, since the lowest halo bin of this sample is the only one to contain galaxies in halos below or close to the critical halo mass considered here. Our sample gives a consistent $\sim$1 Gyr age difference between galaxies in halos with masses above and below this critical mass. We do not find a gradual trend from the lowest mass halo upwards, a result that may be caused by a combination of effects.
The limited timescale over which evolutionary signatures remain visible in optical spectra [@harker06], the importance of the individual galaxy mass, the nature of the group build-up, and the low quality of the data may mean that the difference in SFH is only visible in the lowest halo mass bin, where the effect is strongest. The average age is not the only parameter that varies across the sample. The fraction of galaxies with a high value of $\zeta$ (consistent with recent star formation) is higher in the four lowest mass halos. The reason for the drop above M$_{\rm H}\sim 3\times 10^{13}h^{-1}$M$_\odot$ is not entirely obvious. However, we note that this is the same halo mass at which a decline is observed in the optical AGN population [@gil07; @pasq09], possibly giving way to radio-mode AGN. We also note that gas stripping processes are more effective in more massive halos. The modelling results of @simha09 indicate that satellites of medium and low mass, i.e., those containing the majority of the recent star formation, have their accretion stopped efficiently at halo masses M$_{\rm H} \sim 10^{14}$M$_\odot$. We also observe a decrease of recent star formation in the satellite population as a whole, as well as with decreasing halo-centric radius, suggesting that such a process operates across all halo masses, although with differing efficiency. We also investigated the differing effects of environment on the populations of central and satellite galaxies. These galaxies are dominant in low and high mass halos, respectively, which implies a limited overlap in the parameter space spanned by M$_{\rm H}$ and $\sigma$. In the range of halo mass over which a comparison is possible, the two populations were found to have similar stellar populations at high velocity dispersion ($\sigma\geq$200 km/s). However, at lower values ($150$ km/s $\leq\sigma\leq$ 200 km/s), $\langle\eta\rangle$ is higher for satellite galaxies, as expected for younger ages.
Hence, the stellar populations of central galaxies were formed earlier. Environmental effects preferentially act on satellites, whereas central galaxies mainly move onto the red sequence as the result of a merger or when their halo mass surpasses the critical mass of @dekel06. Central galaxies generally sit at the peaks of the dark matter density distribution [@berlind03]. In halos with mass M$_{\rm H} \sim 3\times10^{13}h^{-1}$M$_\odot$, centrals are within groups containing significant numbers of additional (satellite) galaxies. Therefore, they are likely to have been subject to a higher level of interactions at earlier times, and possibly had gas heated by infalling material and satellites [@KO_08]. Furthermore, if we assume that the central galaxy is the core member of the group, we would expect these galaxies to have crossed the threshold of @dekel06 earlier than a similar satellite, which would have been accreted later. This would imply that at least [*some*]{} satellites will have become ellipticals after being accreted onto the group, creating the lower mean ages. Therefore, we have the emerging picture that while central galaxies are quenched on average at earlier times, they retain or accrete small amounts of gas with which to form stars.

Acknowledgments {#acknowledgments .unnumbered}
===============

We would like to thank the referee, Sadegh Khochfar, for his useful comments and suggestions. BR gratefully acknowledges funding from the RAS. SK was supported by a Research Fellowship from the Royal Commission for the Exhibition of 1851. We acknowledge use of the Delos computer cluster at King’s College London-Physics. This work makes use of the Sloan Digital Sky Survey. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S.
Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions.

Adelman-McCarthy, J. K., et al. 2006, ApJS, 162, 38
Barnes, J.E., Hernquist, L., 1996, ApJ, 471, 115
Bell, E. F., et al. 2004, ApJ, 608, 752
Berlind, A. A., et al., 2003, ApJ, 593, 1
Bernardi, M., et al. 1998, ApJ, 508, 143
Bernardi, M., et al. 2003, AJ, 125, 1866
Bernardi, M., et al. 2006, AJ, 131, 1288
Bernardi, M., et al. 2009, MNRAS, 395, 1491
Bezanson, R., et al., 2009, ApJ, 697, 1290
Binney, J., 2004, MNRAS, 347, 1093
Birnboim, Y., Dekel, A., 2003, MNRAS, 345, 349
Blanton, M. R., Eisenstein, D., Hogg, D. W., Schlegel, D. J., & Brinkmann, J., 2005, ApJ, 629, 143
Blanton, M. R., Berlind, A. A., 2007, ApJ, 664, 791
Blanton, M. R., et al. 2005, AJ, 125, 2562
Bovy, J., Hogg, David W., Moustakas, J., 2008, ApJ, 688, 198
Bruzual, G., Charlot, S., 2003, MNRAS, 344, 1000
Cattaneo, A., Dekel, A., Faber, S.M., Guiderdoni, B., 2008, MNRAS, 389, 567
Chabrier, G., 2003, PASP, 115, 763
Clemens, M.S., et al. 2006, MNRAS, 370, 702
Cowie, L. L., Songaila, A., Hu, E. M. & Cohen, J. G., 1996, AJ, 112, 839
Croton, D., et al. 2005, MNRAS, 365, 11
Dekel, A., Birnboim, Y., 2006, MNRAS, 368, 2
De Lucia, G., Springel, V., White, S. D. M., Croton, D., Kauffmann, G., 2006, MNRAS, 366, 499
di Matteo, P., Combes, F., Melchior, A.-L., Semelin, B., 2007, A&A, 468, 61
Dressler, A., 1980, ApJ, 236, 351
Faber, S.M., et al. 2007, ApJ, 665, 265
Ferreras, I., Pasquali, A., de Carvalho, R. R., de la Rosa, I. G., Lahav, O., 2006, MNRAS, 370, 828
Fitzpatrick, E. L., 1999, PASP, 111, 63
Folkes, S.R., Lahav, O., Maddox, S. J., 1996, MNRAS, 283, 651
Fontanot, F., et al., 2009, MNRAS, 397, 1776
Gallazzi, A., Charlot, S., Brinchmann, J., White, S.D.M., Tremonti, C.
A., 2005, MNRAS, 362, 41
Gallazzi, A., et al. 2006, MNRAS, 370, 1106
Gebhardt, K., et al. 2000, ApJ, 539, 13
Gilmour, R., et al., 2007, MNRAS, 380, 1467
Gonzalez, J. J., 1993, Ph.D. thesis, Univ. California, Santa Cruz
Goto, T., et al. 2003, AAS, 203, 2602
Gottl[ö]{}ber, S., Klypin, A., Kravtsov, A. V., 2001, ApJ, 546, 223
Harker, J.J., Schiavon, R., Weiner, B., Faber, S., 2006, ApJ, 647, 103
Hopkins, P.F., et al., 2006, ApJS, 163, 50
Kaviraj, S., et al., 2007, ApJS, 173, 619
Kaviraj, S., et al., 2009, MNRAS, 394, 1713
Kang, X., Jing, Y. P., Mo, H. J., B[ö]{}rner, G., 2005, ApJ, 631, 21
Kauffmann, G., et al., 2004, MNRAS, 353, 713
Keres, D., Katz, N., Weinberg, D. H. & Davé, R., 2005, MNRAS, 363, 2
Khochfar, S., Burkert, A., 2003, ApJ, 597, 117
Khochfar, S., Silk, J., 2006, ApJ, 648, L21
Khochfar, S., Ostriker, J.P., 2008, ApJ, 680, 54
Kuntschner, H., et al., 2001, MNRAS, 323, 615
Madgwick, D., et al., 2003, MNRAS, 343, 871
Naab, T., Burkert, A., 2003, ApJ, 597, 893
Naab, T., Khochfar, S., Burkert, A., 2006, ApJ, 636, 81
Navarro, J. F., Frenk, C. S., White, S. D. M., 1997, ApJ, 490, 493
Nelan, J. E., et al., 2005, ApJ, 632, 137
Pasquali, A., van den Bosch, F.C., Mo, H.J., Yang, X., Somerville, R., 2009, MNRAS, 394, 38
Pasquali, A., Gallazzi, A., Fontanot, F., van den Bosch, F.C., De Lucia, G., Mo, H.J., Yang, X., 2009, submitted to MNRAS (arXiv:0912.1853)
Press, W. H., Teukolsky, S. A., Vetterling, W. T., Flannery, B. P., 1992, Numerical Recipes in C. Cambridge Univ. Press, Cambridge
Rogers, B., Ferreras, I., Lahav, O., Bernardi, M., Kaviraj, S., Yi, Sukyoung K., 2007, MNRAS, 382, 750
Rogers, B., Ferreras, I., Kaviraj, S., Pasquali, A., Sarzi, M., 2009, MNRAS, 399, 2172
Rogers, B., Ferreras, I., Peletier, R.F., Silk, J., 2009, MNRAS, in press, arXiv:0812.2029
Rogers, B., 2009, PhD Thesis, King’s College London
Ronen, S., Aragón-Salamanca, A., Lahav, O., 1999, MNRAS, 303, 284
S[á]{}nchez-Bl[á]{}zquez, P., Gorgas, J., Cardiel, N., Gonz[á]{}lez, J.
J., 2006, A&A, 457, 809
Schawinski, K., et al., 2006, ApJ,
Schlegel, D. J., Finkbeiner, D. P., Davis, M., 1998, ApJ, 500, 525
Slonim, N., Somerville, R., Tishby, N., Lahav, O., 2001, MNRAS, 323, 270
Simha, V., et al., 2009, MNRAS, tmp.1179
Somerville, R.S., 2008, MNRAS, 391, 481
Thomas, D., Maraston, C., Bender, R., Mendes de Oliveira, C., 2005, ApJ, 621, 673
Trager, S. C., Worthey, G., Faber, S. M., Burstein, D., Gonzalez, J. J., 1998, ApJS, 116, 1
Toomre, A., Toomre, J., 1972, ApJ, 178, 623
van den Bosch, F.C., Yang, X., Mo, H. J., 2003, MNRAS, 340, 771
van den Bosch, F.C., et al., 2008, arXiv, 0805.0002
van den Bosch, F.C., et al., 2008, MNRAS, 387, 79
Worthey, G., Ottaviani, D.L., 1997, ApJ, 111, 377
Weinmann, S.M., van den Bosch, F.C., Yang, X., Mo, H.J., 2006, MNRAS, 366, 2
Weinmann, S.M., et al., 2009, MNRAS, 1213, 1228
Yang, X., Mo, H. J., van den Bosch, F. C., Jing, Y.P., 2005, MNRAS, 356, 1293
Yang, X., Mo, H. J., van den Bosch, F. C., Pasquali, A., Li, C., Barden, M., 2007, ApJ, 671, 153
Yang, X., Mo, H. J., van den Bosch, F. C., 2008, ApJ, 676, 248
York, D.G., et al., 2000, AJ, 120, 1579

\[lastpage\]

[^1]: E-mail: [email protected]

[^2]: $^{0.1}M_r$ is the SDSS-r band magnitude for which K and E corrections are applied at redshift z$=$0.1
---
abstract: 'The influence of high magnetic fields on coherent transport is investigated. A monolayer graphene quantum ring is fabricated and the Aharonov-Bohm effect is observed. For increased magnitude of the magnetic field, higher harmonics appear. This phenomenon is attributed to an increase of the phase coherence length due to a reduction of spin-flip scattering.'
address: 'Institut für Festkörperphysik, Leibniz Universität Hannover, Appelstr. 2, 30167 Hannover, Germany'
author:
- '[D. Smirnov]{}'
- '[J. C. Rode]{}'
- '[R. J. Haug]{}'
title: '[Suppression of decoherence in a graphene monolayer ring]{}'
---

The Aharonov-Bohm (AB) effect [@AharonovBohmEffektFP1959[5]] is one of the most prominent effects for directly observing quantum interference. It has been studied in numerous publications over recent years in ring-shaped metals[@Ahronov-Bohm_firstExperiment] and semiconducting heterostructures[@qdring1; @qdring2; @qdring3]. Higher harmonics of the AB effect appear when the charge carriers interfere after passing the ring more than once. However, the $N^{\rm th}$ harmonic only occurs if the most decisive factor, the phase coherence length $l_{\phi}$, is of the order of $N+1$ times the system length, i.e. the ring circumference. Recently the AB effect gained interest in experimental[@AB-Exp-1; @AB-Exp-2; @MyAAB-Exp-3; @Markovic-AB-Expr-4; @AB-Mirrors-Expr-5] and theoretical[@AB-Theory1; @AB-Theory2; @TheorieKleintunneling[13]; @AB-Theory3] studies of monolayer graphene[@Erscheinungspaper2004[1]]. Although higher harmonics were already reported for most of the semiconducting heterostructures, higher harmonics have been mentioned in only two publications on graphene structures [@AB-Exp-1; @AB-Mirrors-Expr-5]. However, in both studies specific circumstances were required: observation of the first harmonic only at specific magnetic fields[@AB-Exp-1], or higher harmonics only with superconducting mirrors at the leads of the ring[@AB-Mirrors-Expr-5].
In order to observe higher AB harmonics, the phase coherence length has to increase. Therefore, ways to suppress some of the decoherence mechanisms have to be further investigated.

![\[Fig1\] [(a) AFM picture of the measured graphene ring sample. (b) Four-probe characterization measurement for an area near the ring versus backgate (BG) voltage. (c) Shubnikov-de Haas oscillations characterizing the sample as monolayer graphene. (d) Schematic picture of the graphene ring with a tilt angle $\mathrm{\alpha}$. (e) Aharonov-Bohm resistance measurements. (f) The filtered oscillation from the same measurement. (g) The Fourier spectrum after filtering (red) with a Lorentzian fit (black).]{}](fig1)

In this letter a monolayer graphene ring is investigated. Our experimental setup allowed us to vary both the perpendicular and the in-plane magnetic field. The AB effect is studied under these conditions, and the impact of the magnetic field and its components is analyzed. The sample was fabricated using standard procedures: graphene was placed on a $330\,\mathrm{nm}\ \mathrm{Si/SiO_{2}}$ substrate via the scotch-tape method. The flake was identified as a monolayer by optical microscopy using the contrast shift in the red channel[@MakingGrapheneVisible]. Afterwards, a ring with an average radius of $290\,\mathrm{nm}$ and a width of $200\,\mathrm{nm}$ was formed via electron beam lithography and oxygen plasma etching. Figure 1(a) shows an atomic force microscope (AFM) picture of the device. Before the evaporation of chromium/gold contacts, the sample was cleaned by AFM contact-mode scanning [@AFMCleaning] to increase the overall quality. The as-prepared device was then loaded into the mixing chamber of a $\mathrm{He^{3}/He^{4}}$ dilution refrigerator with a base temperature of $100\,\mathrm{mK}$ and a total magnetic field $B_{tot}$ of up to $14\,\mathrm{T}$. Furthermore, the experimental setup allowed tilting of the sample with respect to $B_{tot}$ (Fig.
1(d)), leading to a perpendicular magnetic field component $B_{p}$ and an in-plane component $B_{ip}$. The resistance was measured with a lock-in amplifier with a current of $5\,\mathrm{nA}$. The characterization of the sample was performed in a four-terminal setup using contacts near the ring. Figure 1(b) shows the resistance measurement versus the backgate voltage $U_{BG}$. The charge neutrality point (CNP) can be identified at $U_{BG}=32\,\mathrm{V}$. This doping is attributed to the fabrication process. From the four-terminal measurements, mobilities of $\mu = 18\,000\,\mathrm{cm^2/Vs}$ for electrons and $\mu = 20\,000\,\mathrm{cm^2/Vs}$ for holes were calculated. This can be attributed to the type of cleaning used, which increases the mobility but does not reduce the doping to the same extent. Figure 1(c) shows the longitudinal magnetoresistance versus the inverse magnetic field. Shubnikov-de Haas oscillations are visible with a Berry phase of $\pi$, which identifies the flake as a monolayer [@BerryPhase1; @BerryPhase2]. Figure 1(e) shows AB measurements of the ring for a fixed charge carrier concentration of $n_{e}=3.7\cdot 10^{16}\,\mathrm{m^{-2}}$. Due to a limited number of contacts on one side of the ring, a three-terminal setup was used. Oscillations with an average visibility of $1.5\,\%$ on top of the background signal are observed. These oscillations can be identified as the AB effect. The period of these oscillations has a mean value of $\Delta B=16.7\,\mathrm{mT}$ for the shown data. Overall, the average AB period for different measurements is $\Delta B=16.0\pm 1.0\,\mathrm{mT}$. This corresponds to a ring radius of $287.5\pm 7.5\,\mathrm{nm}$, which fits the geometry of the ring well. The background fluctuations can be identified as universal conductance fluctuations. To separate the AB oscillations from the background fluctuations, the fast Fourier transform of the measured signal is multiplied with a smooth high-pass filter.
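As a quick numerical cross-check (a sketch, not part of the original analysis), the quoted average AB period can be converted into an effective ring radius via flux quantization: one oscillation period corresponds to one flux quantum $h/e$ through the ring, $\pi r^2\,\Delta B = h/e$.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

delta_B = 16.0e-3    # average AB period from the measurements, T
# One AB period corresponds to one flux quantum h/e threading the ring:
r = math.sqrt(h / (e * math.pi * delta_B))
print(f"r = {r * 1e9:.1f} nm")  # ≈ 287 nm, consistent with the quoted 287.5 ± 7.5 nm
```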
The filtered AB oscillations are shown in Fig. 1(f). Figure 1(g) shows a Lorentzian fit to the Fourier spectrum. The original AB oscillations are visible; however, further contributions at higher frequencies are not observed. From the ring conductance one can estimate the diffusion constant using the Einstein relation $\sigma=e^2\nu D$ with the density of states $\nu=4\sqrt{n\pi}/(h v_F)$, where $v_F$ is the Fermi velocity, to $D\approx 0.01\,\mathrm{m^2/s}$. By estimating $\tau_{\phi}$ from[@Weak_localization_Estimation] ($\tau_\phi ^{-1}\approx k_BT\cdot \mathrm{ln}(g)/(\hbar g)$, $g=\sigma h/e^2$), we can calculate the phase coherence length $l_{\phi}=\sqrt{D\tau_{\phi}}$ to be $l_{\phi}\approx 1.6\,\mathrm{\mu m}$. This estimation is in agreement with our observation. The phase coherence length $l_{\phi}$ is in the range of the ring circumference $L_C=1.8\,\mathrm{\mu m}$, which explains the observation of the original AB oscillations and the absence of higher harmonics. The low visibility of the oscillations might result not only from the short phase coherence length, but also from the high number of modes in our system. However, it is comparable with AB oscillations shown in monolayer graphene in previous experiments[@AB-Exp-1; @AB-Exp-2; @MyAAB-Exp-3; @Markovic-AB-Expr-4; @AB-Mirrors-Expr-5], even for much lower charge carrier concentrations. Figure 2(a) shows the AB measurements for a fixed charge carrier concentration of $n_{e}=2.1\cdot 10^{16}\, \mathrm{m^{-2}}$. The sample was tilted by $25^\circ$ and the range of the total magnetic field was increased so that the influence of a high magnetic field and its components could be studied. Neither Shubnikov-de Haas oscillations nor the quantum Hall effect are observed in the magnetotransport measurement, yet the AB effect is clearly visible even for high magnetic fields (Fig. 2(b)-(d)).
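The coherence-length estimate above can be reproduced with a short script. The dimensionless conductance $g=\sigma h/e^2$ is not quoted in the text, so the value below is an assumption chosen for illustration; with $g$ of order a few, the estimate lands at the quoted $l_{\phi}\approx 1.6\,\mathrm{\mu m}$.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K

D = 0.01   # diffusion constant from the Einstein relation above, m^2/s
T = 0.1    # base temperature, K
g = 6.0    # dimensionless conductance g = sigma*h/e^2 (assumed value)

# Dephasing rate: 1/tau_phi ≈ k_B * T * ln(g) / (hbar * g)
tau_phi = hbar * g / (k_B * T * math.log(g))
l_phi = math.sqrt(D * tau_phi)  # phase coherence length, m
print(f"l_phi ≈ {l_phi * 1e6:.1f} µm")  # ≈ 1.6 µm for g of order a few
```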
The absence of Landau level quantization indicates that the ring does not achieve the same quality as the rest of the flake, presumably caused by strong edge disorder. To analyze the influence of high magnetic fields on the AB effect, the measurement was split up into different parts. The range of these regions (Fig. 2(a)) was chosen to be small enough to clearly observe the influence of $B_{tot}$ or its components, but large enough to resolve clear features in the Fourier spectrum.

![\[Fig2\] [(a) Magnetotransport measurements for a tilt angle $\alpha=25^\circ$. Three regions are marked representing $\overline{B}_{tot}=3.5\,\mathrm{T}$, $7.7\,\mathrm{T}$, and $11.8\,\mathrm{T}$. (b)-(d) Filtered oscillations (as described in the text). (e)-(g) Filtered Fourier spectra for the marked regions. The development of a second peak is clearly visible.]{}](fig2)

Figure 2(e)-(g) shows equally scaled Fourier spectra centred around the marked regions in Fig. 2(a). The Fourier spectrum for $\overline{B}_{tot}=3.5\,\mathrm{T}$ shows a clearly developed first peak corresponding to the original AB oscillations, with a shoulder next to it at higher frequencies. For higher magnetic fields, the shoulder develops into a clearly visible second peak (Fig. 2(f)). The spectra were fitted with a double Lorentzian fit. The average position of the second peak is $B_p^{-1}=115 \pm 10\,\mathrm{T^{-1}}$, which corresponds to a period of $\Delta B=8.75\pm 0.75\,\mathrm{mT}$. Increasing the magnetic field even further leads not only to a more visible second peak but also to a higher first peak (Fig. 2(g)). This corresponds to an amplitude gain for the original AB oscillations (Fig. 2(d)). The second peak can be explained as the first harmonic of the AB oscillations and fits the geometry of the ring, with an expected period of $\Delta B_{g}\sim 8\,\mathrm{mT}$, quite well. In addition to the observed first harmonic, a shoulder-like feature is visible at $\overline{B}_{tot}=11.8\,\mathrm{T}$.
It was analyzed with a further Lorentzian fit, and the resulting period is $\Delta B\approx 5.5\,\mathrm{mT}$. The expected period for the second AB harmonic is $\Delta B\approx 6\,\mathrm{mT}$; therefore, the observed shoulder is identified as the second AB harmonic. Due to the observation of the first and second AB harmonics in Fig. 2(g), one can estimate the coherence length to be between two and three times the ring circumference at $\overline{B}_{tot}=11.8\,\mathrm{T}$, i.e. $l_{\phi}=4.5\,\mathrm{\mu m}$. Thus, by increasing the magnitude of the magnetic field, the phase coherence length seems to increase as well. With a high enough magnetic field, the phase coherence length comes into the range of twice the ring circumference and the first harmonic becomes visible (Fig. 2(e),(f)). A longer phase coherence length also leads to a full development of the original AB oscillation, which is observed in the development of the main peak in Fig. 2(e)-(g). The phase coherence length can be influenced by a number of scattering processes. Due to the preparation methods used, strong disorder at the edges is expected, leading to charge traps or defects. Furthermore, evenly distributed defects in the $\mathrm{SiO_2}$ substrate and adatoms on top of the flake are certainly present, given the observed doping (Fig. 1(b)). However, the absence of Landau level quantization over the ring, in contrast to its clear observation next to it (Figs. 1, 2), is evidence for a stronger impact of edge disorder on the transport through the ring. Recently it was shown[@Kats1; @Kats2] that fluctuations of the magnetic moments can occur at the edges of graphene nanoribbons, resulting in spin-flip scattering. Such spin-flip scattering leads to decoherence and a suppression of the phase coherence. Applying a magnetic field prevents this decoherence mechanism by spin polarisation. As a result, the phase coherence length increases, leading to the observed phenomenon.
We can estimate the decoherence rate reduction from the observed increase of the phase coherence length from $l_{\phi,B_{tot}\approx 0\,\mathrm{T}}=1.6\,\mathrm{\mu m}$ to $l_{\phi,B_{tot}\approx 11.8\,\mathrm{T}}=4.5\,\mathrm{\mu m}$; the obtained value is $\tau_{diff}^{-1}\approx 3.6\,\mathrm{ns}^{-1}$. An increase of the phase coherence length with magnetic field was also shown in experiments studying universal conductance fluctuations in an in-plane magnetic field[@UCF_in-plane; @magnetic; @field] and was attributed to interaction with magnetic defects in the substrate. Although our sample geometry differs strongly from that of[@UCF_in-plane; @magnetic; @field], the resulting observation and decoherence rate reduction are comparable, which further supports the influence of spin polarisation.

![\[Fig3\] [Fourier spectra for AB measurements with different tilt angles $\alpha=25^\circ$, $35^\circ$, and $60^\circ$. The average of the total magnetic field is kept constant at $B_{tot}=6\,\mathrm{T}$.]{}](fig3)

In Fig. 2, the perpendicular and parallel components of the magnetic field are varied simultaneously, so the impact of the in-plane magnetic field alone is not clear. Figure 3 shows the Fourier spectra for AB measurements with different tilt angles. Though the mean magnitude of the total magnetic field is kept constant, the in-plane component increases with higher tilt angles. With rising tilt angles, a development of the first harmonic is observed. The peaks are analyzed as described before and show the same result for their positions. As a result, one can conclude that the in-plane component alone seems to increase the phase coherence length at constant total magnetic field. However, the main effect seems to stem from the total magnetic field. Although the in-plane magnetic fields for $\alpha=35^\circ$ and $\alpha=60^\circ$ (Fig. 3) are comparable to those for $\alpha=25^\circ$ (Fig. 2(f),(g)), higher harmonics are more strongly developed for higher total magnetic fields.
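A sketch of this estimate, using diffusive dephasing rates $\tau_\phi^{-1}=D/l_\phi^2$ with the diffusion constant from above; the small difference from the quoted $3.6\,\mathrm{ns}^{-1}$ presumably reflects rounding of the quoted coherence lengths.

```python
D = 0.01       # diffusion constant, m^2/s
l_lo = 1.6e-6  # phase coherence length at B_tot ≈ 0 T, m
l_hi = 4.5e-6  # phase coherence length at B_tot ≈ 11.8 T, m

# Diffusive dephasing rates 1/tau = D / l^2 and their difference:
rate_lo = D / l_lo**2
rate_hi = D / l_hi**2
reduction = rate_lo - rate_hi
print(f"decoherence rate reduction ≈ {reduction * 1e-9:.1f} ns^-1")  # ≈ 3.4 ns^-1
```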
The observed dependence on the in-plane component hints towards anisotropic spin scattering, which was recently predicted in a theoretical study[@Anisotropy]. In conclusion, we have reported the AB effect in monolayer graphene with an experimental setup that allowed us to tilt the device with respect to the magnetic field. The development of the first harmonic and the observation of the second harmonic were presented with an increase of the total magnetic field. This observation is explained by an increase of the phase coherence length through spin polarisation. Additionally, we show that increasing the in-plane component alone leads to a similar result, hinting towards a further anisotropic decoherence mechanism. We acknowledge discussions with V. I. Fal’ko, P. Recher and N. Ubbelohde. This work was supported by the DFG via SPP 1459, and the NTH School for Contacts in Nanosystems.

[10]{}
Y. Aharonov and D. Bohm, [*Phys. Rev.*]{} [**115**]{}, 485, (1959).
R. A. Webb, S. Washburn, C. P. Umbach, and R. B. Laibowitz, [*Phys. Rev. Lett.*]{} [**54**]{}, 2696, (1985).
R. Schuster, E. Buks, M. Heiblum, D. Mahalu, V. Umansky, and H. Shtrikman, [*Nature*]{} [**385**]{}, 417, (1997).
A. Fuhrer, S. Lüscher, T. Ihn, T. Heinzel, K. Ensslin, W. Wegscheider, and M. Bichler, [*Nature*]{} [**413**]{}, 822, (2001).
U. F. Keyser, C. Fühner, S. Borck, R. J. Haug, M. Bichler, G. Abstreiter, and W. Wegscheider, [*Phys. Rev. Lett.*]{} [**90**]{}, 196601, (2003).
S. Russo, J. B. Oostinga, D. Wehenkel, H. B. Heersche, S. S. Sobhani, L. M. K. Vandersypen, and A. F. Morpurgo, [*Phys. Rev. B*]{} [**77**]{}, 085413, (2008).
M. Huefner, F. Molitor, A. Jacobsen, A. Pioda, C. Stampfer, K. Ensslin, and T. Ihn, [*New Journal of Physics*]{} [**12**]{}, 043054, (2010).
D. Smirnov, H. Schmidt, and R. J. Haug, [*App. Phys. Lett.*]{} [**100**]{}, 203114, (2012).
A. Rahman, J. W. Guikema, S. H. Lee, and N. Markovic, [*Phys. Rev. B*]{} [**87**]{}, 081401(R), (2013).
Y. Nam, J. S. Yoo, Y. W. Park, N. Lindvall, T.
Bauch, and A. Yurgens, [*Carbon*]{} [**50**]{}, 5562, (2012).
P. Recher, B. Trauzettel, A. Rycerz, Ya. M. Blanter, C. W. J. Beenakker, and A. F. Morpurgo, [*Phys. Rev. B*]{} [**76**]{}, 235404, (2007).
R. Jackiw, A. I. Milstein, S. Y. Pi, and I. S. Terekhov, [*Phys. Rev. B*]{} [**80**]{}, 033413, (2009).
J. Schelter, D. Bohr, and B. Trauzettel, [*Phys. Rev. B*]{} [**81**]{}, 195441, (2010).
D. Faria, A. Latgé, S. E. Ulloa, and N. Sandler, [*Phys. Rev. B*]{} [**87**]{}, 241403, (2013).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, [*Science*]{} [**306**]{}, 666, (2004).
F. V. Tikhonenko, D. W. Horsell, R. V. Gorbachev, and A. K. Savchenko, [*Phys. Rev. Lett.*]{} [**100**]{}, 056802, (2008).
M. B. Lundeberg, R. Yang, J. Renard, and J. A. Folk, [*Phys. Rev. Lett.*]{} [**110**]{}, 156601, (2013).
P. Blake, E. W. Hill, A. H. Castro Neto, K. S. Novoselov, D. Jiang, R. Yang, T. J. Booth, and A. K. Geim, [*Appl. Phys. Lett.*]{} [**91**]{}, 063124, (2007).
A. M. Goossens, V. E. Calado, A. Barreiro, K. Watanabe, T. Taniguchi, and L. M. K. Vandersypen, [*App. Phys. Lett.*]{} [**100**]{}, 073110, (2012).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, [*Nature*]{} [**438**]{}, 197, (2005).
Y. Zhang, Y. Tan, H. L. Stormer, and P. Kim, [*Nature*]{} [**438**]{}, 201, (2005).
O. V. Yazyev and M. I. Katsnelson, [*Phys. Rev. Lett.*]{} [**100**]{}, 047209, (2008).
V. K. Dugaev and M. I. Katsnelson, [*Phys. Rev. B*]{} [**90**]{}, 035408, (2014).
S. Fratini, D. Gosálbez-Martínez, P. Merodio Cámara, and J. Fernández-Rossier, [*Phys. Rev. B*]{} [**88**]{}, 115426, (2013).
---
abstract: 'The orientation of small ice crystals in cold clouds determines the reflection of light from the sun. This orientation results from the torque exerted by the fluid on the settling crystals. Here, we compute the torque acting on a small spheroid in a uniform flow by solving numerically the Navier-Stokes equations, and we compare our results with recent predictions \[Dabade [*et al.*]{} (2015)\], derived for small particle Reynolds numbers ${\rm Re}$, ${\rm Re} \ll 1$. We find that the angular dependence of the torque predicted by the theory remains qualitatively correct even when the Reynolds number is as high as ${\rm Re} \sim 10$. The theory describes qualitatively very well how the magnitude of the torque depends on the aspect ratio of the spheroid, for oblate and prolate particles. At larger Reynolds numbers the flow past spheroids acquires a more complicated structure, resulting in systematic deviations from the theoretical predictions. Overall, our numerical results provide an important justification of recent theoretical approaches to predict the statistics of orientation of ice-crystals settling in a turbulent flow.'
address:
- 'SINTEF Ocean, 7052 Trondheim, Norway'
- 'Department of Engineering Mechanics, Tsinghua University, 100084 Beijing, China'
- 'Department of Energy and Process Engineering, NTNU, NO-7491 Trondheim, Norway'
- 'Department of Physics, Gothenburg University, 41296 Gothenburg, Sweden'
- 'Univ. Lyon, ENS de Lyon, Univ. Claude Bernard, CNRS, Laboratoire de Physique, F-69342, Lyon, France'
- 'Department of Physics, Gothenburg University, 41296 Gothenburg, Sweden'
author:
- 'F. Jiang'
- 'L. Zhao'
- 'H. Andersson'
- 'K. Gustavsson'
- 'A. Pumir'
- 'B. Mehlig'
title: Inertial torque on a small spheroid in a uniform flow
---

Introduction
============

How does a spheroidal particle settle in a quiescent fluid?
When the settling velocity is small enough, so that the fluid motion induced by the particle can be described by the Stokes approximation [@Bre83; @Kim:2005], the particle settles at a constant orientation, equal to its arbitrary initial orientation. But since the initial particle orientation is only marginally stable, any small perturbation must affect the particle orientation. For example, for very small particles, upon which thermal noise plays a significant role, Brownian torques induce random orientation. In addition, slight breaking of the fore-aft symmetry of the particle [@Kha89; @Can16; @roy2019inertial] gives rise to a torque causing the particle to settle at a steady angle determined by particle shape, independent of its initial orientation. These torques, induced either by thermal fluctuations or by specific fore-aft asymmetry of the particle, compete with the inertial torque arising from convective inertial corrections to the Stokes approximation. A heavy particle settling steadily in a fluid experiences an undisturbed uniform mean flow corresponding to the negative settling velocity. This mean flow exerts a convective inertial torque on the particle. Its effect depends upon the particle Reynolds number $$\label{eq:rep} {\rm Re} = Ua_{\rm max}/\nu\,.$$ Here $U$ is the settling speed of the particle, $\nu$ is the kinematic viscosity of the fluid, and $a_{\rm max}$ measures the maximal linear size of the particle – the half length of a rod or the radius of a disk. For small ${\rm Re}$, the convective inertial torque turns the spheroid so that it settles with its broad side first. Cox [@Cox65] calculated the torque by perturbation theory in ${\rm Re}$, for nearly spherical particles in a uniform flow. A technically important point is that the convective-inertia torque induced by the flow results from a singular perturbation of the Stokes equation, so that straightforward perturbation theory in ${\rm Re}$ fails even at very small values of ${\rm Re}$.
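As a concrete illustration of Eq. (\[eq:rep\]), a short sketch; the numerical values below are assumptions for illustration, not taken from the text:

```python
def particle_reynolds(U, a_max, nu):
    """Particle Reynolds number Re = U * a_max / nu, Eq. (eq:rep).

    U     -- settling speed of the particle (m/s)
    a_max -- maximal linear size: half length of a rod or radius of a disk (m)
    nu    -- kinematic viscosity of the fluid (m^2/s)
    """
    return U * a_max / nu

# Assumed illustrative values: a particle with a_max = 0.5 mm settling
# at 0.3 m/s in air with nu ~ 1.5e-5 m^2/s gives Re of order 10.
Re = particle_reynolds(0.3, 0.5e-3, 1.5e-5)
```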
Using asymptotic matching methods [@Ben78], Khayat & Cox [@Kha89] obtained the convective-inertia torque in the slender-body limit, complementing the earlier results for nearly spherical particles. More recently, Dabade [*et al.*]{} [@Dab15] used the reciprocal theorem to calculate this torque for spheroids of arbitrary aspect ratio – disks and rods – to linear order in ${\rm Re}$. Several earlier numerical studies have been devoted to a determination of the torque acting on spheroids in a uniform flow. Hölzer and Sommerfeld [@Holzer09] used a lattice-Boltzmann method (LBM) to compute the steady-flow torque on non-spherical particles of different shapes, amongst others for a prolate spheroid ($\lambda = 3/2$) at different angles of inclination to the flow. Ouchene [*et al.*]{} [@Ouchene15; @Ouchene16] used a commercial Navier-Stokes solver to resolve the flow field around prolate spheroids with aspect ratios $\lambda$ ranging from 5/4 to 32/1. Zastawny [*et al.*]{} [@Zastawny12] considered both prolate ($\lambda$ = 5/4 and 5/2) and oblate ($\lambda$ = 1/5) spheroids by means of an immersed boundary method and Sanjeevi [*et al.*]{} [@Sanjeevi18] used a LBM approach to compute the flow field around a prolate ($\lambda$ = 5/2) and an oblate ($\lambda$ = 2/5) spheroid at various angles of inclination. Zastawny [*et al.*]{} [@Zastawny12], Ouchene [*et al.*]{} [@Ouchene16] and Sanjeevi [*et al.*]{} [@Sanjeevi18] proposed semi-empirical correlation formulae for their torque data, in the form of explicit functions of inclination angle and Reynolds number. These earlier studies provide important insight for several spheroid shapes. However, they give the torque only for certain shapes, particle inclination to the flow, and particle Reynolds number. For slender fibres more is known. Shin [*et al.*]{} performed numerical simulations, and their Fig. 5 shows that the Khayat & Cox theory works well for slender fibers up to Reynolds numbers of the order of $\sim 10$. 
Our goal here is to validate the small-Re model [@Cox65; @Kha89; @Dab15] for spheroids of different aspect ratios in a steady homogeneous flow, and to determine how the torque changes as the Reynolds number increases. To answer this question, we numerically solved the Navier-Stokes equations for the flow past a spheroid at rest in a uniform and steady stream, as schematically illustrated in Fig. \[fig:schematic\](left), for several values of the Reynolds number ${\rm Re}$ and of the particle shape (the aspect ratio of the spheroid), for small platelets as well as for small columns. Method ====== Much of the literature on viscous and convective torques on small non-spherical particles in a flow uses spheroids as model shapes, because the resistance tensors that determine the motion of the particle in the fluid are known [@Kim:2005], and because fore-aft and rotational symmetry lead to a comparatively simple angular dynamics. In the following we consider spheroidal particles. Similarities and differences between the angular dynamics of spheroids and crystals with discrete rotation and reflection symmetry were discussed by Fries [*et al.*]{} [@Fri17]. We denote the symmetry axis of the spheroidal particle by ${\ensuremath{\mbox{\boldmath$n$}}}$. The length of the symmetry axis is $2a_\parallel$, and the diameter of the spheroid is $2a_\perp$. The aspect ratio of the spheroid is defined as $\lambda=a_\parallel/a_\perp$. Oblate particles (platelets) have $\lambda < 1$, while prolate particles (columns) have $\lambda >1$. The Reynolds number defined in Eq. (\[eq:rep\]) is based upon $a_{\rm max}=\mbox{max}\{a_\parallel,a_\perp\}$. We consider a small spheroidal particle at a fixed position in a steady homogeneous flow with velocity ${\ensuremath{\mbox{\boldmath$u$}}}$, the flow a small ice crystal experiences as it settles through quiescent air with settling velocity $-{\ensuremath{\mbox{\boldmath$u$}}}$. For a prolate spheroid the setup is shown in Fig.
\[fig:schematic\](a). The tilt angle $\varphi$ is defined as the angle between the particle-symmetry vector ${\ensuremath{\mbox{\boldmath$n$}}}$ and $-{\ensuremath{\mbox{\boldmath$u$}}}$, for prolate as well as for oblate spheroids. For fore-aft symmetric particles, it is sufficient to consider angles $\varphi$ in the interval $[0,\tfrac{\pi}{2}]$. We computed the torque upon the particle by numerical solution of the full three-dimensional Navier-Stokes equations for incompressible flow, using the flow solver MGLET [@manhart2001mglet]. This code was recently used to document the computational challenges of calculating forces and torques upon rods in uniform flows [@Helge]. The method is briefly described in appendix \[app:A\]. The simulations of Ref. [@Helge] give precise results for the inertial torque for a rod of aspect ratio $\lambda=6$ in a uniform flow, at different Reynolds numbers. In the following we show results for the convective-inertial torque for different aspect ratios, not only for rods but also for disks. To quantify the convective-inertial effect for particles of different sizes and shapes we keep the Reynolds number Re based on the maximal particle dimension, $a_{\rm max}$, constant as we vary the particle shape. The small-${\rm Re}$ theory [@Cox65; @Kha89; @Dab15] says that the inertial torque on a small spheroid in a uniform flow is of the form $$\begin{aligned} &{\ensuremath{\mbox{\boldmath$\tau$}}}^{({\rm Re})}= F(\lambda){\rho_{\rm f}} {U^2a_{\rm max}^3}\, ({\ensuremath{\mbox{\boldmath$n$}}}\cdot {\hat{{\ensuremath{\mbox{\boldmath$u$}}}}})({\ensuremath{\mbox{\boldmath$n$}}}\wedge {\hat{{\ensuremath{\mbox{\boldmath$u$}}}}})\, {\label{eq:torque_fluid_inertia}}\end{aligned}$$ to linear order in ${\rm Re} = U a_{\rm max}/\nu$. Here and in Eq.
(\[eq:torque\_fluid\_inertia\]), $U = |{\ensuremath{\mbox{\boldmath$u$}}}|$, and $\hat{{\ensuremath{\mbox{\boldmath$u$}}}} = {\ensuremath{\mbox{\boldmath$u$}}}/U$. The shape factor $F(\lambda)$ was computed in Ref. [@Dab15], and it is shown in Fig. \[fig:schematic\](b). Also shown is the slender-body limit derived by Khayat and Cox [@Kha89], $$\label{eq:slbl} F(\lambda) \sim -5\pi/[3(\log\lambda)^2]\,,$$ as well as the near-spherical expansion [@Dab15] $$\label{eq:nsph} F(\lambda) \sim \mp 811 \pi\varepsilon/560$$ for small eccentricity $\varepsilon$. Here the eccentricity parameter is defined by $\lambda = 1+\varepsilon$ for prolate particles, and $\lambda = (1-\varepsilon)^{-1}$ for oblate particles. Up to a relative error of order $10^{-3}$ in the numerical prefactor, Eq. (\[eq:nsph\]) agrees with the result of Cox [@Cox65] for nearly spherical particles, as mentioned by the authors of Ref. [@Dab15]. In the following we assume without loss of generality that gravity points along the $\hat {\bf e}_x$-axis, and that the symmetry vector ${\ensuremath{\mbox{\boldmath$n$}}}$ lies in the $\hat {\bf e}_x$-$\hat {\bf e}_y$-plane (Fig. \[fig:schematic\]). Then the torque points along the $\hat {\bf e}_z$-axis, ${\ensuremath{\mbox{\boldmath$\tau$}}} = \tau_z \hat {\bf e}_z$, where $\hat {\bf e}_z=\hat {\bf e}_x\wedge \hat {\bf e}_y$. In this case Eq. [(\[eq:torque\_fluid\_inertia\])]{} implies that the torque depends on the tilt angle $\varphi$ as $$\label{eq:theory} \tau_z^{({\rm Re})}=- \tfrac{1}{2}F(\lambda){\rho_{\rm f}} {U^2a_{\rm max}^3}\, \sin2\varphi\,.$$ The torque $\tau_z$ is zero both when $\varphi = 0$, corresponding to $\hat{\bf n}$ parallel to $\hat{{\ensuremath{\mbox{\boldmath$u$}}}}$, and when $\varphi = \pi/2$, when $\hat{\bf n} $ and $\hat{{\ensuremath{\mbox{\boldmath$u$}}}}$ are perpendicular to each other. This is a consequence of the symmetry of the problem. The $\sin( 2 \varphi )$-dependence in Eq. (\[eq:theory\])
is the simplest possible angular dependence that respects these constraints. This is exactly the result of the small-Re perturbation theory at order Re [@Dab15]. We see that the angular dependence is symmetric around the tilt angle $\varphi = \pi/4$. The sign of the torque is such that $\varphi = \tfrac{\pi}{2}$ is stable for prolate particles (rods), whereas $\varphi = 0$ is stable for oblate particles (disks). In the next Section we summarise our numerical results, which show how the torque varies as a function of particle shape, Reynolds number, and tilt angle. Results {#sec:results} ======= We de-dimensionalise the torque as $$\label{eq:dimlesstau} \tau_z' = \frac{\tau_z}{\rho_{\rm f} U^2 a_{\rm max}^3}\,.$$ Our initial configuration is symmetric w.r.t. reflection in the $x$-$y$ plane \[Fig. \[fig:schematic\](left)\]. We have checked that the flow remains symmetric and steady for all simulations described in this article, for Reynolds numbers up to ${\rm Re}=30$. Fig. \[fig:torque\] shows our simulation results for the dimensionless torque for prolate and oblate spheroids (appendix \[app:B\]), compared with the small-Re theory (\[eq:theory\]), which reads in dimensionless form: $$\label{eq:theory2} \tau_z' =- \tfrac{1}{2} F(\lambda) \sin2\varphi\,.$$ This theory is shown as a thick solid line. Panel (a) contains the results for a prolate spheroid with $\lambda =6$ as a function of tilt angle, for different particle Reynolds numbers (symbols). Filled symbols correspond to data from Table 5 in Ref. [@Helge]. Thin solid lines are fits to the theoretically predicted angular dependence, proportional to $\sin 2\varphi$. We see that the numerical results for the smallest Reynolds number, ${\rm Re}=0.3$, agree quite well with the theory; the deviation is about 20%. For larger Reynolds numbers the deviations are larger, but the angular dependence is still accurately predicted by the theory; only the amplitude becomes smaller.
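The torque law ${\ensuremath{\mbox{\boldmath$\tau$}}}^{({\rm Re})} = F(\lambda)\rho_{\rm f} U^2 a_{\rm max}^3 ({\ensuremath{\mbox{\boldmath$n$}}}\cdot\hat{{\ensuremath{\mbox{\boldmath$u$}}}})({\ensuremath{\mbox{\boldmath$n$}}}\wedge\hat{{\ensuremath{\mbox{\boldmath$u$}}}})$ and its $\sin 2\varphi$ form can be checked in a few lines; the numerical inputs below are illustrative, not simulation values:

```python
import numpy as np

def inertial_torque(n, u, F_lambda, rho_f, a_max):
    """Convective-inertial torque on a spheroid in a uniform flow,
    tau = F(lambda) * rho_f * U^2 * a_max^3 * (n . uhat)(n x uhat),
    valid to linear order in Re."""
    U = np.linalg.norm(u)
    uhat = u / U
    return (F_lambda * rho_f * U**2 * a_max**3
            * np.dot(n, uhat) * np.cross(n, uhat))

# Geometry of the text: gravity along +x, so the flow seen by the
# settling particle is u = -U e_x; the symmetry axis n is tilted by
# phi in the x-y plane.
phi = np.pi / 6
n = np.array([np.cos(phi), np.sin(phi), 0.0])
u = np.array([-1.0, 0.0, 0.0])     # U = 1 in arbitrary units
F, rho, a = -1.0, 1.0, 1.0         # F < 0, illustrative magnitudes

tau = inertial_torque(n, u, F, rho, a)
# tau points along e_z with tau_z = -0.5 * F * sin(2*phi),
# reproducing the sin(2*phi) form of the dimensionless torque.
```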
Panel (b) shows results for $\lambda =\tfrac{1}{6}$. The results are qualitatively similar to those obtained for $\lambda=6$, but there are two important differences. First, we have no data points for ${\rm Re}=0.3$. The smallest Reynolds-number simulations are very costly because one must use a large domain size together with a small spatial mesh size [@Helge]. This is particularly challenging for disks because a finer mesh is needed to resolve the flow in the vicinity of the strongly curved periphery of flat disks. Second, for disks the $\varphi$-dependence develops an asymmetry around $\varphi=45^\circ$ at larger values of Re. Panel (c) shows the torque at $\varphi=45^\circ$ as a function of particle aspect ratio, in comparison with Eq. (\[eq:theory2\]). We infer that the theory describes the shape dependence of the inertial torque well, quantitatively at ${\rm Re}=0.3$, and qualitatively at larger Re. Panel (d) shows the relative difference between the numerical results and Eq. (\[eq:theory2\]) as a function of Reynolds number, for $\lambda = 1/6$ and $6$. The deviations decrease quite slowly as Re decreases, at the Reynolds numbers for which we have numerical data. Since we lack data at small values of Re, it is difficult to infer the order of the next term in the expansion in Re. The next order could be ${\rm Re}^{3/2}$, giving rise to a ${\rm Re}^{1/2}$-correction to $\tau_z^{({\rm Re})}$ (dashed line), but there could also be logarithmic corrections, as in the small-Re theory for the drag coefficient [@veysey2007renormalization]. Fig. \[fig:torque\_disk\_large\_Re\] quantifies the asymmetry of the $\varphi$-dependence of the torque around $\varphi=45^\circ$ that develops for disks at larger Reynolds numbers. What is the origin of this asymmetry? To understand the mechanism, we visualise the fluid-velocity field around a disk with aspect ratio $\lambda=1/3$ at $\varphi =30^\circ$ and $60^\circ$ in Fig. \[fig:disturbance\].
We observe that the streamlines closely follow the surface of the spheroid in panels (a) and (b). This reflects that the flow remains attached to the surface of this oblate spheroid at small Reynolds numbers. At ${\rm Re}=30$, by contrast, the flow separates as the oblate spheroid meets the flow with its broad side, resulting in quite different flow patterns for $\varphi =30^\circ$ and $60^\circ$. This certainly contributes to the asymmetry of the torque. Discussion and conclusions ========================== We performed numerical simulations determining the torque on oblate and prolate spheroids that settle steadily in a quiescent fluid. Our results show that the small-Re theory [@Cox65; @Kha89; @Dab15] works reasonably well for small Reynolds numbers. The shape dependence remains qualitatively correct for the largest Reynolds number we have considered, Re$=30$. But in general the torque is smaller than the small-Re theory [(\[eq:torque\_fluid\_inertia\])]{} predicts. For example, Fig. \[fig:torque\](b) shows that the maximal ${\rm Re}\!=\!30$-torque on a disk is smaller than the small-Re prediction by about a factor of two. What does this imply for the angular dynamics of ice platelets settling in turbulent clouds? When particle inertia is negligible, theory suggests [@Men17; @Kramel; @Gus19] that the variance of the tilt angle is inversely proportional to the square of the maximal torque. This means that at ${\rm Re}\!=\!30$ the standard deviation of the tilt angle is larger than predicted by the theory, by a factor of two. The small-Re theory for the torque exhibits a symmetry around $\varphi = 45^\circ $. For prolate particles our numerical simulations exhibit this symmetry quite accurately even at the largest Re we simulated. For disks, by contrast, this symmetry is clearly broken at Re$=30$ (and this may imply that the maximum of the torque is not precisely at $\varphi=45^\circ$).
We computed the disturbance flow and saw that the flow detaches from the particle when it faces the flow with its broadest side first, causing the torque asymmetry. It is likely that this asymmetry in the $\varphi$-dependence is a precursor of a bifurcation, as the Reynolds number increases. Indeed, experiments show that there is a transition for a disk. It settles with its broad side down at small Re, but exhibits other kinds of periodic or chaotic lateral and angular dynamics at larger Re, due to interactions between the disk and the induced vortex street. A bifurcation to periodic angular dynamics occurs at ${\rm Re}\sim 100$ [@willmarth1964steady; @field1997chaotic]. Unsteady dynamics and symmetry breaking occur also for rods, as the simulations summarised in Ref. [@Jiang14] show. In the present work, and in particular when solving numerically the Navier-Stokes equations, we assumed that the settling particle experiences a homogeneous flow. If by contrast the fluid is in motion, then fluid-velocity gradients give rise to additional torques. In the Stokes approximation these torques were first calculated by Jeffery [@Jef22], and they compete with the torque due to fluid inertia. As shown in Refs. [@subramanian2005; @einarsson2015a], the magnitude of the fluid-inertia torque in a linear flow depends upon the shear Reynolds number ${\rm Re}_s = a_{\rm max}^2s/\nu$ where the shear rate $s$ is an estimate of the magnitude of the fluid-velocity gradients. When ${\rm Re} \gg \sqrt{{\rm Re}_s}$ then the shear-induced inertial torque is negligible compared to inertial corrections due to the slip velocity, at least for steady flows [@Candelier2018]. Lopez & Guazzelli [@Lop17] measured the angular dynamics of rods settling in a steady vortex flow. Their model takes into account the torque induced by fluid inertia for fibers in the slender body limit [@Kha89], but neglects shear-induced torques, and it qualitatively explains their experimental results. 
How the alignment of spheroids settling through a fluid is affected by unsteady fluid motion is not known in general. Yet the extent to which turbulence destroys the alignment of settling particles has important consequences in the atmospheric sciences, where the reflection of polarised light reveals small orientation fluctuations of small ice crystals [@Pru78; @chen1994theoretical] settling in turbulent clouds [@Bre04]. A model for this effect [@Kle95; @Kramel; @Men17; @Lop17; @Gus19; @Sha19] assumes that the fluid torque on a settling crystal can be approximated by the superposition of the Jeffery torque due to the turbulent fluid-velocity gradients, and the small-Re expression for the convective inertial torque, Eq. (\[eq:theory\]). The model predicts that the settling particles tend to orient so as to maximize their drag [@Kramel; @Men17; @Gus19; @Sha19]. Typical ice-crystal sizes in clouds range from 150 $\mu$m to 1 mm. Given the settling speed of such crystals in air (see Table 10.3a in Ref. [@Pru78]), this corresponds to Reynolds numbers of the order of 2 to 15. The model analysed in Refs. [@Kle95; @Kramel; @Men17; @Lop17; @Gus19; @Sha19] nevertheless uses the small-Re expression for the torque, Eq. (\[eq:theory\]), for two reasons. Firstly, higher-Re corrections are not known except in the slender-body limit [@Kha89]. Secondly, the empirical correlations for the torque have been verified only for certain Reynolds numbers, particle shapes, and inclinations of the particle to the flow, as mentioned in the Introduction. The results reported here support one of the assumptions that enter the theory [@Kle95; @Kramel; @Men17; @Lop17; @Gus19; @Sha19] for the angular dynamics of ice platelets settling in turbulent clouds \[the shape and angular dependence of the steady inertial torque, Eq. (\[eq:theory\])\].
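The quoted Reynolds-number range can be checked with a back-of-the-envelope evaluation of Eq. (\[eq:rep\]). The crystal sizes follow the text; the settling speeds and air viscosity below are assumed typical values, not taken from Table 10.3a of Ref. [@Pru78]:

```python
# Order-of-magnitude check of Re for ice crystals settling in air.
# Settling speeds and viscosity are assumed illustrative values.
nu_air = 1.5e-5            # kinematic viscosity of air, m^2/s (assumed)

crystals = [
    (75e-6, 0.40),         # a_max = 75 um (150 um crystal), U ~ 0.40 m/s
    (0.5e-3, 0.45),        # a_max = 0.5 mm (1 mm crystal),  U ~ 0.45 m/s
]
reynolds = [U * a_max / nu_air for a_max, U in crystals]
# spans roughly 2 to 15, the range quoted in the text
```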
Experiments at large Re (${\rm Re}\sim 1000$) compare the trajectories and velocities of platelets settling in a quiescent fluid, with those settling in a turbulent background flow [@esteban2020disks]. The authors find that the background turbulence has a significant effect upon the settling dynamics. This is expected because the fluid-velocity gradients give rise to Jeffery torques, as mentioned above. In addition, fluctuations in the translational slip velocity appear to have a profound effect on the orientation of the settling particle [@Gus19]. But fluid-velocity gradients must also affect the form of the convective inertial torque, discussed here for a quiescent fluid. This effect is not taken into account in the model used in Refs. [@Kle95; @Kramel; @Men17; @Lop17; @Gus19; @Sha19]. To validate the model, it would be of interest to conduct experiments at smaller Reynolds numbers, so that one can compare and contrast with the predictions of Refs. [@Gus19] for example. We intend to run fully resolved simulations of particles settling in turbulence in order to justify and refine the model. But this remains a challenge for the future. [10]{} J. Happel and H. Brenner. . Martinus Nijhoff Publishers, Hague, Netherlands, 1983. 553p. S. Kim and S. J. Karrila. . Butterworth-Heinemann, Boston, 1991. R.E. Khayat and R.G. Cox. Inertia effects on the motion of long slender bodies. , 209:435–462, 1989. F. Candelier and B. Mehlig. Settling of an asymmetric dumbbell in a quiescent fluid. , 802:174–185, 2016. A. Roy, R. J. Hamati, L. Tierney, D. L. Koch, and G. A. Voth. Inertial torques and a symmetry breaking orientational transition in the sedimentation of slender fibres. , 875:576, 2019. R.G. Cox. The steady motion of a particle of arbitrary shape at small [R]{}eynolds numbers. , 23:625–643, 1965. C. M. Bender and S. A. Orszag. . McGraw-Hill, New York, USA, 1978. V. Dabade, N. K. Marath, and G. Subramanian. 
Effects of inertia and viscoelasticity on sedimenting anisotropic particles. , 778:133–188, 2015. A. Hölzer and M. Sommerfeld. Lattice [Boltzmann]{} simulations to determine drag, lift and torque acting on non-spherical particles. , 38:572–589, 2009. R. Ouchene, M. Khalij, A. Taniére, and B. Arcen. Drag, lift and torque coefficients for ellipsoidal particles: from low to moderate particle [R]{}eynolds numbers. , 113:53–64, 2015. R. Ouchene, M. Khalij, A. Taniére, and B. Arcen. A new set of correlations of drag, lift and torque coefficients for non-spherical particles at large [R]{}eynolds numbers. , 303:33–43, 2016. M. Zastawny, G. Mallouppas, F. Zhao, and B. van Wachem. Derivation of drag and lift forces and torque coefficients for non-spherical particles in flow. , 39:227–239, 2012. S.K.P. Sanjeevi, J.A.M. Kuipers, and J.T. Padding. Drag, lift and torque correlations for non-spherical particles from [Stokes]{} limit to high [R]{}eynolds numbers. , 106:325–337, 2018. J. Fries, J. Einarsson, and B. Mehlig. Angular dynamics of small crystals in viscous flow. , 2:014302, 2017. M. Manhart, F. Tremblay, and R. Friedrich. a parallel code for efficient [DNS]{} and [LES]{} of complex geometries. 2001. H. I. Andersson and F. Jiang. Forces and torques on a prolate spheroid: low-[R]{}eynolds number and attack angle effects. , 230:431, 2019. John Veysey and Nigel Goldenfeld. Simple viscous flows: From boundary layers to the renormalization group. , 79:883–927, 2007. U. Menon, A. Roy, S. Kramel, G. Voth, and D. Koch. Theoretical predictions of the orientation distribution of high-aspect-ratio, inertial particles settling in isotropic turbulence. , 2017. S. Kramel. . PhD thesis, Wesleyan University, 2017. K. Gustavsson, M. Z. Sheikh, D. Lopez, A. Naso, A. Pumir, and B. Mehlig. Effect of fluid inertia on the orientation of a small prolate spheroid settling in turbulence. , 21:083008, 2019. W. W. Willmarth, N. E. Hawk, and R. L. Harvey. 
Steady and unsteady motions and wakes of freely falling disks. , 7:197–208, 1964. S. B. Field, M. Klaus, M. G. Moore, and F. Nori. Chaotic dynamics of falling disks. , 388:252–254, 1997. F. Jiang, J. P. Gallardo, and H. I. Andersson. The laminar wake behind a 6:1 prolate spheroid at 45$^\circ$ incidence angle. , 26:113602, 2014. G. B. Jeffery. The motion of ellipsoidal particles immersed in a viscous fluid. , 102:161, 1922. G. Subramanian and D. L. Koch. Inertial effects on fibre motion in simple shear flow. , 535:383–414, 2005. J. Einarsson, F. Candelier, F. Lundell, J.R. Angilella, and B. Mehlig. Rotation of a spheroid in a simple shear at small [R]{}eynolds number. , 27, 2015. 063301. F. Candelier, B. Mehlig, and J. Magnaudet. Time-dependent lift and drag on a rigid body in a viscous steady linear flow. , 864:554–595, 2019. D. Lopez and E. Guazzelli. Inertial effects on fibers settling in a vortical flow. , 2:024306, 2017. H. R. Pruppacher and J. D. Klett. . Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997. 954p. J.-P. Chen and D. Lamb. The theoretical basis for the parameterization of ice crystal habits: Growth by vapor deposition. , 51:1206–1222, 1994. F.-M. Breon and B. Dubrulle. Horizontally oriented plates in clouds. , 61:2888–2898, 2004. J. D. Klett. Orientation model for particles in turbulence. , 52:2276–2285, 1995. M. Z. Sheikh, K. Gustavsson, D. Lopez, E. Leveque, B. Mehlig, A. Pumir, and A. Naso. Importance of fluid inertia for the orientation of spheroids settling in turbulent flow. , 886:A9, 2020. L. B. Esteban, J. S. Shrimpton, and B. Ganapathisubramani. Disks settling in turbulence. , 883:A58, 2020. J. H. Williamson. Low-storage Runge-Kutta schemes. , 56:48–56, 1980. H. L. Stone. Iterative solution of implicit approximations of multidimensional partial differential equations. , 5:530–558, 1968. H. I. Andersson, F. Jiang, and V. L. Okulov. Chapter 9: Instabilities in the wake of an inclined prolate spheroid. , 50:311–352, 2019.
Description of simulations {#app:A} ========================== MGLET is a finite-volume code that directly solves the full time-dependent three-dimensional Navier-Stokes equations for incompressible fluids. The computational domain is discretised on a multi-level staggered Cartesian mesh with cubic grid cells. A third-order explicit low-storage Runge-Kutta scheme [@Williamson80] is used for the time dependence. Stone’s strongly implicit procedure [@Stone68] is applied for the pressure correction in each time step. To represent the curved particle surface in the Cartesian mesh, MGLET uses a direct-forcing immersed boundary method, enforcing the no-slip and impermeability boundary conditions at the particle surface. The code has been extensively validated for various flows in a wide Reynolds-number range, among which Refs. [@Jiang14; @Andersson19; @Helge] are in the low-Re regime and most relevant to the present study. All simulations in the present study used the largest practically possible computational domain ($205 a_{\rm min} \times 205 a_{\rm min} \times 205 a_{\rm min}$) with $a_{\rm min} = \mbox{min}\{a_\parallel,a_\perp\}$, see Ref. [@Helge]. The minimum grid-cell size is $0.02a_{\rm min}$. The relatively fine mesh and large computational domain lead to large meshes (40–50 million grid cells), and the explicit time-evolution scheme leads to a very small time-step size when Re is very low. These challenges are discussed in Ref. [@Helge]. We define the Reynolds number using $a_{\rm max} = \mbox{max}\{a_\parallel,a_\perp\}$, Eq. (\[eq:rep\]). The authors of Ref. [@Helge] define the Reynolds number (${\rm Re}_D$ in their notation) in terms of the short-axis length $D$, equal to $2a_{\rm min}$. For the aspect ratio $\lambda =6$ studied in Ref. [@Helge] we have ${\rm Re} = \tfrac{\lambda}{2} {\rm Re}_D = 3\,{\rm Re}_D$. To determine the effect of particle shape, we varied the aspect ratio keeping $a_{\rm min}$ constant.
It follows that the relation between ${\rm Re}_D$ and ${\rm Re}$ defined in Eq. (\[eq:rep\]) is $$\label{eq:comparison} {\rm Re} = \tfrac{1}{2} {\rm Re}_D\left\{ \begin{array}{ll} \lambda \quad& \mbox{for $\lambda > 1$,}\\ \lambda^{-1} \quad &\mbox{for $\lambda < 1$.} \end{array}\right .$$ The authors of Ref. [@Helge] also define a second Reynolds number, ${\rm Re}_p$ in their notation, in terms of the sphere-equivalent diameter $d_0 = 2a_0$. Since the volume of the spheroid is $\tfrac{4\pi}{3} a_\parallel a_\perp^2$, we have that $a_0 = (a_\parallel a_\perp^2)^{1/3} = \lambda^{1/3} a_\perp = \lambda^{-2/3} a_\parallel$. The authors of Ref. [@Helge] de-dimensionalise the torque by dividing by $\tfrac{1}{2} \rho_{\rm f} U^2 \tfrac{\pi}{8} d_0^3$. To compare with their results for $\lambda = 6$ we use $$d_0 = 2a_0 = 2 a_{\rm max} \left\{ \begin{array}{ll} \lambda^{-2/3} \quad& \mbox{for $\lambda > 1$,}\\ \lambda^{1/3} \quad &\mbox{for $\lambda < 1$.} \end{array}\right .$$ The ratio of normalisation factors is $$\frac{\tfrac{1}{2}\rho_{\rm f}U^2 \tfrac{\pi}{8} d_0^3}{\rho_{\rm f}U^2a_{\rm max}^3} =\frac{\pi}{2} \left\{ \begin{array}{ll} \lambda^{-2} \quad& \mbox{for $\lambda > 1$,}\\ \lambda \quad &\mbox{for $\lambda < 1$.} \end{array}\right .$$ Summary of simulation results {#app:B}
=============================

${\rm Re}=0.3$

   $\lambda$          15       30       45       60       75
  ----------------- -------- -------- -------- -------- --------
   6                -0.112   -0.196   -0.226   -0.199   -0.114
   3                                  -0.340
   2                                  -0.393
   $\tfrac{1}{2}$                     -0.707
   $\tfrac{1}{3}$                     -0.853
   $\tfrac{1}{6}$
  ----------------- -------- -------- -------- -------- --------

  : \[tab:torque\] Numerical results (MGLET) for the torque $\tau_z' = \tau_z/( \rho_{\rm f} U^2 a_{\rm max}^3)$ upon a spheroid in a uniform flow, as a function of tilt angle $\varphi$ (in degrees), Reynolds number Re, and particle aspect ratio $\lambda$. Blank entries were not simulated.
${\rm Re}=3$

   $\lambda$          15       30       45       60       75
  ----------------- -------- -------- -------- -------- --------
   6                -0.076   -0.133   -0.159   -0.135   -0.078
   3                -0.120   -0.211   -0.244   -0.213   -0.122
   2                -0.145   -0.251   -0.291   -0.255   -0.145
   $\tfrac{1}{2}$   -0.283   -0.487   -0.558   -0.487   -0.275
   $\tfrac{1}{3}$   -0.340   -0.586   -0.681   -0.586   -0.335
   $\tfrac{1}{6}$   -0.340   -0.628   -0.746   -0.649   -0.369
  ----------------- -------- -------- -------- -------- --------

${\rm Re}=30$

   $\lambda$          15       30       45       60       75
  ----------------- -------- -------- -------- -------- --------
   6                -0.033   -0.057   -0.065   -0.057   -0.033
   3                -0.068   -0.117   -0.136   -0.119   -0.068
   2                -0.094   -0.161   -0.185   -0.161   -0.090
   $\tfrac{1}{2}$   -0.220   -0.369   -0.416   -0.353   -0.196
   $\tfrac{1}{3}$   -0.267   -0.450   -0.497   -0.408   -0.225
   $\tfrac{1}{6}$   -0.293   -0.503   -0.547   -0.432   -0.233
  ----------------- -------- -------- -------- -------- --------

  : \[tab:torque\] Numerical results (MGLET) for the torque $\tau_z' = \tau_z/( \rho_{\rm f} U^2 a_{\rm max}^3)$ upon a spheroid in a uniform flow, as a function of tilt angle $\varphi$ (in degrees), Reynolds number Re, and particle aspect ratio $\lambda$.
--- abstract: 'The status of the measurements and the theoretical developments concerning the hadronic structure of the photon are briefly summarised.' --- [**MPI-PhE/2002-14**]{} [**Photon and electron structure\ from interactions[^1]**]{} Richard Nisius, Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),\ Föhringer Ring 6, D-80805 München, Germany, [E-mail:[email protected]]{}. Introduction {#intro} ============ For more than 20 years, measurements of photon structure functions have given deep insight into the rich structure of a fundamental gauge boson, the photon. A recent review on this subject can be found in [@NIS-9904]. The main idea is that by measuring the differential cross-section $$\begin{aligned} \frac{d^2\sigma_{{\rm e}\gamma\rightarrow {\rm e} X}}{dx\,dQ^2} &=& \frac{2\pi\alpha^2}{x\,Q^{4}} \left[\left( 1+(1-y)^2\right) F_2^\gamma(x,Q^2) - y^{2} F_{\rm L}^\gamma(x,Q^2)\right] \end{aligned}$$ one obtains the photon structure function $F_2^\gamma$, see Figure \[fig01\] for an illustration. Here $Q^2$ and $P^2$ are the absolute values of the four-momenta squared of the virtual and quasi-real photons, with $P^2 \ll Q^2$. The symbols $x$ and $y$ denote the usual dimensionless variables of deep-inelastic scattering, and $\alpha$ is the fine-structure constant. The flux of the incoming quasi-real photons, with $z$ the fraction of the electron energy carried by the photon, is usually taken from the equivalent photon approximation, EPA. In leading order, the structure function $F_2^\gamma$ is proportional to the parton content of the photon and therefore reveals the structure of the photon. [![[*Deep-inelastic electron photon scattering.*]{} []{data-label="fig01"}](fig01.eps "fig:"){width="0.5\linewidth"}]{} In the region of small $y$ studied ($y\ll 1$), the contribution of the term proportional to $F_{\rm L}^\gamma$ is small and is usually neglected.
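The extraction of the structure function from the measured cross-section can be sketched in a few lines, dropping the longitudinal term as is done for $y \ll 1$; all numerical inputs below are illustrative, not measured values:

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def d2sigma(x, y, Q2, F2, FL=0.0):
    """Differential deep-inelastic e-gamma cross-section
    d^2 sigma / (dx dQ^2) of the formula above (natural units)."""
    return 2 * math.pi * ALPHA**2 / (x * Q2**2) * (
        (1 + (1 - y)**2) * F2 - y**2 * FL)

def extract_F2(sigma, x, y, Q2):
    """Invert the cross-section formula for F2, neglecting the
    longitudinal term (a good approximation for y << 1)."""
    return sigma * x * Q2**2 / (2 * math.pi * ALPHA**2 * (1 + (1 - y)**2))

# Round trip at small y:
s = d2sigma(x=0.3, y=0.05, Q2=20.0, F2=0.4)
F2_rec = extract_F2(s, x=0.3, y=0.05, Q2=20.0)   # recovers 0.4
```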
Because the energy of the quasi-real photon is not known, $x$ has to be derived by measuring the invariant mass of the hadronic final state $X$, which is a source of significant uncertainties, and makes measurements of $F_2^\gamma$ mainly limited by the systematic error, except for large values of $Q^2$. At this conference new measurements of the hadronic structure function $F_2^\gamma$ and its charm component $F_{2,{\rm c}}^\gamma$ have been presented. They are discussed, together with the most recent fits to the data and the prospects for measurements of $F_2^\gamma$ at a future Linear Collider. In addition, an attempt by DELPHI is presented to investigate the photon structure by measuring the electron structure function $F_2^{\rm e}$. Photon structure function {#f2had} ========================= The improvement in the measurement of $F_2^\gamma$ since the first result by PLUTO in 1981 is quite impressive, see Figure \[fig02\]. The analysis of the LEP data has extended the kinematic coverage by about two orders of magnitude, both to larger $Q^2$ and to smaller $x$. In addition, due to continuous improvements of the analyses and a LEP combined effort to obtain a better description of the data by the Monte Carlo models, the precision of the measurements has been improved considerably. [![[*The improvements in $F_2^\gamma$ at LEP.*]{} []{data-label="fig02"}](fig02.eps "fig:"){width="0.7\linewidth"}]{} For this conference the final OPAL result for the measurement of the hadronic structure function $F_2^\gamma$ at high $Q^2$ has been available. This measurement is based on the complete LEP2 data and extends the measurement of $F_2^\gamma$ to $\langle Q^2 \rangle = 780~{\rm GeV}^2$, the largest scale ever probed. As can be seen from Figure \[fig03\] the measured $F_2^\gamma$ is rather flat, and, within errors, the parameterisations from GRSc [@GLU-9902], SaS1D [@SCH-9501] and WHIT [@HAG-9501] are in agreement with the data. [![[*The measurement of $F_2^\gamma$ at high $Q^2$.*]{} []{data-label="fig03"}](fig03.eps "fig:"){width="0.7\linewidth"}]{} The already available preliminary result from DELPHI [@TIA-0101] at a slightly lower $Q^2$ basically shows the same trend.
Since the measurement at high $Q^2$ is mainly limited by the statistical error, it is very desirable to combine the OPAL result with the measurements to come from the other LEP experiments. To facilitate the combination, the analyses should be performed for the same bins in $x$ and $Q^2$. Also the investigation of the evolution of $F_2^\gamma$ with $Q^2$ in ranges of $x$ has been continued using the LEP2 data. [![[*The evolution of $F_2^\gamma$ from OPAL.*]{} []{data-label="fig04"}](fig04.eps "fig:"){width="0.7\linewidth"}]{} For medium values of $x$, the precision of the OPAL results based on LEP1 data has been improved considerably by using the large luminosity available at LEP2 energies. With the present level of precision the data start to challenge the existing parameterisations of $F_2^\gamma$. Given this, several theoretical as well as experimental issues have to be addressed in more detail when interpreting the data. Examples are the suppression of $F_2^\gamma$ with $P^2$ and radiative corrections to the deep-inelastic scattering process. A summary of the present status of all measurements of the evolution of $F_2^\gamma$ is shown in Figure \[fig05\]. [![[*The world data on $F_2^\gamma$ as a function of $Q^2$ in bins of $x$.*]{} []{data-label="fig05"}](fig05.eps "fig:"){width="0.7\linewidth"}]{} Fits to data {#fit} ============ There have been recent fits [@ALB-0201] to $F_2^\gamma$ based on all available data, except for the TPC/2$\gamma$ results. To facilitate the analysis, some simplifications are made in the treatment of the experimental results. In the analysis the correlation matrix of the various points is used if it is provided by the experiment. However, the systematic errors are treated as uncorrelated, the $P^2$ effect and the radiative corrections are neglected, and finally, in case of asymmetric errors, the data points are moved to the central value and symmetric errors are assumed. Given the precision of the data mentioned above, this procedure should most likely be improved for future analyses.
The fits are done in a fixed flavour scheme with $uds$ as active flavours and charm treated as a Bethe-Heitler contribution with $m_{\rm c}=1.5\pm 0.1$ GeV. Two different types of fits are performed in leading and next-to-leading order, NLO, using the DIS$_\gamma$ and $\scriptstyle\overline{\rm MS}$ schemes. In the first fit the hadron-like part of the photon structure is neglected, and only the point-like part, which is evolved from a starting scale of $Q_0^2=\Lambda^2$, is taken into account. Consequently the only free parameter of the fit is $\Lambda$. Since the hadron-like part dominates at low $x$ and $Q^2$, only the data in the region $x>0.45$ and $Q^2>59~{\rm GeV}^2$ are used in this fit. The second fit uses all data, takes into account both components and fits for $(N,\,\alpha,\,\beta,\,\Lambda,\,Q_0^2)$ using the assumptions $uds(Q_0^2)=N x^\alpha(1-x)^\beta$ and $g(Q_0^2)=0$. Both types of fits give a reasonable description of the data. Within the assumptions made, the quoted theoretical precision on $\alpha_{\rm s}$ is about 3$\%$ and predicted to shrink to less than 2$\%$ for $Q^2$ values larger than 300 ${\rm GeV}^2$. Prospects for measurements {#prosper} ========================== The prospects of future investigations of the photon structure in the context of the planned Linear Collider programme are very promising. The Linear Collider will extend the available phase space, as shown in Figure \[fig06\], for the measurement of the evolution of $F_2^\gamma$ at medium $x$, see [@NIS-9904]. [![[*The expected measurement of $F_2^\gamma$ at a future Linear Collider.*]{} []{data-label="fig06"}](fig06.eps "fig:"){width="0.67\linewidth"}]{} The higher beam energy and luminosity compared to LEP also allows for the investigations of novel features like the measurement of the flavour decomposition of $F_2^\gamma$ by exploring the exchange of $W$ and $Z$ bosons [@GEH-9901]. The charm contribution to $F_2^\gamma$ {#f2ch} ========================== The final OPAL result [@OPALPR354] has been presented of the measurement of the charm component $F_{2,{\rm c}}^\gamma$ using $D^{*}$ mesons to identify charm quarks.
Compared to the first OPAL result on $F_{2,{\rm c}}^\gamma$, this analysis is based on improved Monte Carlo models and higher statistics, leading to a more precise measurement presented in Figure \[fig07\]. [![[*The measurement of $F_{2,{\rm c}}^\gamma$ from OPAL.*]{} []{data-label="fig07"}](fig07.eps "fig:"){width="0.7\linewidth"}]{} In a similar way to the structure function for light quarks, $F_{2,{\rm c}}^\gamma$ receives contributions from the point-like and the hadron-like components of the photon structure. These two contributions are predicted [@LAENEN] to have different dependences on $x$, with the hadron-like component dominating at very low values of $x$ and the point-like part accounting for most of $F_{2,{\rm c}}^\gamma$ at $x>0.1$. For $x>0.1$ the OPAL measurement is described by perturbative QCD at next-to-leading order. For $x<0.1$ the measurement is poorly described by the NLO prediction using the point-like component alone, and therefore the measurement suggests a non-zero hadron-like component of $F_{2,{\rm c}}^\gamma$. Increased statistics and a better understanding of the dynamics for $x<0.1$ are needed to get a more precise result in this region. Also here, to increase the statistics, it would be advantageous to combine the data from all four LEP experiments taken in the same phase space. Electron structure function {#elec} =========================== For this conference there has been an attempt by DELPHI to access the photon structure by measuring the electron structure function. In this analysis, instead of measuring the electron-photon scattering cross-section by utilising the EPA to account for the flux of the quasi-real photons, the cross-section of electron-electron scattering is studied as a function of $Q^2$ and $x_{\rm e}$, the fractional momentum of the parton with respect to the electron. This quantity is related to $x$ by $x_{\rm e}=zx$. The main advantage of this approach is experimental, i.e. the incoming particle probed by the virtual photon is the electron and not the photon.
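The relation $x_{\rm e}=zx$ ties the electron structure function to the photon one through a convolution with the photon flux. A crude numerical sketch of such a convolution, with a toy photon structure function and a simplified equivalent-photon flux (the constant logarithmic factor stands in for the true kinematic bounds; all inputs here are invented):

```python
import math

ALPHA = 1.0 / 137.035999

def epa_flux(z, log_factor=math.log(1.0e4)):
    # Simplified equivalent-photon (Weizsaecker-Williams) flux; the constant
    # log factor is a toy choice, not the full kinematic expression.
    return ALPHA / (2 * math.pi) * (1 + (1 - z)**2) / z * log_factor

def f2_gamma(x):
    return 0.2 + 0.3 * x  # toy photon structure function

def f2_electron(xe, steps=2000):
    # F_2^e(x_e) = int_{x_e}^1 dz f_{gamma/e}(z) F_2^gamma(x_e / z),  x_e = z x
    dz = (1.0 - xe) / steps
    return sum(epa_flux(xe + (i + 0.5) * dz) * f2_gamma(xe / (xe + (i + 0.5) * dz)) * dz
               for i in range(steps))

print(f2_electron(0.1), f2_electron(0.5))
```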
This means that its energy is known, and therefore $x_{\rm e}$ can be obtained without measuring $W$. However, there is also a disadvantage in this measurement. The photon structure is obscured because, e.g. the region of low values of $x_{\rm e}$ receives contributions from both the region of large momentum fraction $x$ and low scaled photon energy $z$, and the region of small $x$ and large $z$. The measurement of $F_2^{\rm e}$ is performed with a precision of about 3-20$\%$ both for the statistical and the systematic error. So far, no radiative corrections and no bin-centre corrections are applied. The preliminary DELPHI result is consistent with several existing parameterisations of $F_2^{\rm e}$, obtained from the respective parameterisations of $F_2^\gamma$ by convolution with the photon flux. The usual exception is the LAC1 parameterisation, which is disfavoured. This investigation serves as a valuable cross-check of the $F_2^\gamma$ measurements, but does not give more insight into the photon structure. Conclusion {#concl} ========== Given the large statistics available at LEP2 energies, the region of phase space covered in the investigations of the structure of the photon is constantly increasing. Despite these large luminosities, for some of the measurements the results are still limited by the statistical error and a combination of the results from several experiments is desirable. This is particularly true for the measurement of $F_2^\gamma$ at large $Q^2$ and the determination of $F_{2,{\rm c}}^\gamma$. For the first time, the high precision data from LEP have been used in NLO fits to $F_2^\gamma$ results. With an improved treatment of the experimental data, and the inclusion of the jet data from HERA to better constrain the gluon distribution in the photon, there are good prospects to achieve the first parametrisation of the parton distributions of the photon based on LEP and HERA data in the near future. [10]{} R. Nisius, Phys. Rep. [**332**]{}, 165–317 (2000), updated figures available at:\ [*http://www.mppmu.mpg.de/~nisius/welcomeaux/struc.html*]{}. M. Glück, E. Reya, and I. Schienbein, Phys. Rev.
[**D60**]{}, 054019 (1999). G.A. Schuler and T. Sjöstrand, Z. Phys. [**C68**]{}, 607–624 (1995). K. Hagiwara et al., Phys. Rev. [**D51**]{}, 3197–3219 (1995). I. Tyapkin, DELPHI Collab., in proceedings of PHOTON 2001, Ascona. S. Albino, M. Klasen, and S. Söldner-Rembold, Phys. Rev. Lett. [**89**]{}, 122004 1–4 (2002). A. Gehrmann-De Ridder, H. Spiesberger, and P.M. Zerwas, Phys. Lett. [**B469**]{}, 259–262 (1999). G. Abbiendi et al., OPAL Collab., Phys. Lett. [**B539**]{}, 13–24 (2002). E. Laenen et al., Phys. Rev. [**D49**]{}, 5753–5768 (1994); E. Laenen and S. Riemersma, Phys. Lett. [**B376**]{}, 169–176 (1996). [^1]: Invited talk presented at the ICHEP02 conference, Amsterdam, 25 July 2002.
--- abstract: 'We show that, for $\beta \ge 1$, the semigroups of $\beta$-Laguerre and $\beta$-Jacobi processes of different dimensions are intertwined in analogy to a similar result for $\beta$-Dyson Brownian motion recently obtained in [@RamananShkolnikov]. These intertwining relations generalize to arbitrary $\beta \ge 1$ the ones obtained for $\beta=2$ in [@InterlacingDiffusions] between $h$-transformed Karlin-McGregor semigroups. Moreover, they form the key step towards constructing a multilevel process in a Gelfand-Tsetlin pattern leaving certain Gibbs measures invariant. Finally, as a by-product, we obtain a relation between general $\beta$-Jacobi ensembles of different dimensions.' author: - THEODOROS ASSIOTIS title: '**INTERTWININGS FOR GENERAL $\beta$-LAGUERRE AND $\beta$-JACOBI PROCESSES**' --- Introduction ============ The aim of this short note is to establish intertwining relations between the semigroups of general $\beta$-Laguerre and $\beta$-Jacobi processes, in analogy to the ones obtained for general $\beta$-Dyson Brownian motion in [@RamananShkolnikov] (see also [@GorinShkolnikov]). These also generalize the relations obtained for $\beta=2$ in [@InterlacingDiffusions], where the transition kernels for these semigroups are given explicitly in terms of $h$-transforms of Karlin-McGregor determinants. We begin by introducing the stochastic processes we will be dealing with. Consider the unique strong solution to the following system of $SDEs$, $i=1,\cdots,n$, with values in $[0,\infty)^n$, $$\begin{aligned} \label{BESQsde} dX_i^{(n)}(t)=2\sqrt{X_i^{(n)}(t)}dB_i^{(n)}(t)+\beta\left(\frac{d}{2}+\sum_{1\le j \le n, j \ne i}^{}\frac{2X_i^{(n)}(t)}{X_i^{(n)}(t)-X_j^{(n)}(t)}\right)dt,\end{aligned}$$ where the $B_i^{(n)}$, $i=1,\cdots, n,$ are independent standard Brownian motions.
This process was introduced and studied by Demni in [@DemniBESQ] in relation to Dunkl processes (see for example [@DunklProcesses]), where it is referred to as the $\beta$-Laguerre process, since its distribution at time $1$, if started from the origin, is given by the $\beta$-Laguerre ensemble (see Section 5 of [@DemniBESQ]). We could equally well have called this the $\beta$-squared Bessel process, since for $\beta=2$ it exactly consists of $n$ $BESQ(d)$ diffusion processes conditioned to never collide, as first proven in [@O; @Connell], but we stick to the terminology of [@DemniBESQ]. Similarly, consider the unique strong solution to the following system of $SDEs$ in $[0,1]^n$, $$\begin{aligned} \label{Jacobisde} dX_i^{(n)}(t)=2\sqrt{X_i^{(n)}(t)(1-X_i^{(n)}(t))}dB_i^{(n)}(t)+\beta\left(a-(a+b)X_i^{(n)}(t)+\sum_{1\le j \le n, j \ne i}^{}\frac{2X_i^{(n)}(t)(1-X_i^{(n)}(t))}{X_i^{(n)}(t)-X_j^{(n)}(t)}\right)dt,\end{aligned}$$ where, again, the $B_i^{(n)}$, $i=1,\cdots, n,$ are independent standard Brownian motions. We call this solution the $\beta$-Jacobi process. It was first introduced and studied in [@DemniJacobi] as a generalization of the eigenvalue evolutions of matrix Jacobi processes; its stationary distribution is given by the $\beta$-Jacobi ensemble (see Section 4 of [@DemniJacobi]): $$\begin{aligned} \mathcal{M}^{Jac,n}_{a,b,\beta}(dx)=C_{n,a,b,\beta}^{-1}\prod_{i=1}^{n}x^{\frac{\beta}{2}a-1}_{i}(1-x_i)^{\frac{\beta}{2}b-1}\prod_{1\le i < j \le n}^{}|x_j-x_i|^{\beta}dx,\end{aligned}$$ for some normalization constant $C_{n,a,b,\beta}$. We now give sufficient conditions that guarantee the well-posedness of the $SDEs$ above. For $\beta \ge 1$, $d\ge 0$ and $a,b\ge 0$, (\[BESQsde\]) and (\[Jacobisde\]) have a unique strong solution with no collisions and no explosions and with instant diffraction if started from a degenerate (i.e. when some of the coordinates coincide) point (see Corollary 6.5 and 6.7 respectively of [@Graczyk]).
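A crude Euler-Maruyama discretisation of (\[BESQsde\]) illustrates the dynamics. This is only a sketch under ad-hoc assumptions: the singular interaction drift and the hard wall at $0$ are handled by a small step size and brute-force clipping, which a faithful scheme would treat more carefully.

```python
import math, random

def laguerre_step(x, beta, d, dt, rng):
    # One Euler-Maruyama step for the beta-Laguerre SDE (illustration only).
    n = len(x)
    out = []
    for i in range(n):
        interaction = sum(2.0 * x[i] / (x[i] - x[j]) for j in range(n) if j != i)
        drift = beta * (d / 2.0 + interaction)
        noise = 2.0 * math.sqrt(max(x[i], 0.0)) * rng.gauss(0.0, math.sqrt(dt))
        out.append(max(x[i] + drift * dt + noise, 0.0))  # clip at the wall x = 0
    return out

rng = random.Random(1)
x = [1.0, 2.0, 3.0]  # ordered, well-separated initial condition
for _ in range(1000):
    x = laguerre_step(x, beta=2.0, d=3.0, dt=1.0e-4, rng=rng)
print(x)
```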
In particular, the coordinates of $X^{(n)}$ stay ordered. Thus if, $$\begin{aligned} X^{(n)}_1(0) \le \cdots \le X^{(n)}_n(0),\end{aligned}$$ then with probability one, $$\begin{aligned} X^{(n)}_1(t) < \cdots < X^{(n)}_n(t), \ \forall \ t>0.\end{aligned}$$ From now on, we restrict to those parameter values. It will be convenient to define $\theta=\frac{\beta}{2}$. We write $P^{(n)}_{d,\theta}(t)$ for the Markov semigroup associated to the solution of (\[BESQsde\]). Similarly, write $Q^{(n)}_{a,b,\theta}(t)$ for the Markov semigroup associated to the solution of (\[Jacobisde\]). Furthermore, denote by $\mathcal{L}^{(n)}_{d,\theta}$ and $\mathcal{A}^{(n)}_{a,b,\theta}$ the formal infinitesimal generators for (\[BESQsde\]) and (\[Jacobisde\]) respectively, given by, $$\begin{aligned} \mathcal{L}^{(n)}_{d,\theta}&=\sum_{i=1}^{n}2z_i\frac{\partial^2}{\partial z^2_i}+2 \theta \sum_{i=1}^{n}\left(\frac{d}{2}+\sum_{1\le j \le n, j \ne i}^{}\frac{2z_i}{z_i-z_j}\right)\frac{\partial}{\partial z_i},\\ \mathcal{A}^{(n)}_{a,b,\theta}&=\sum_{i=1}^{n}2z_i(1-z_i)\frac{\partial^2}{\partial z^2_i}+2 \theta \sum_{i=1}^{n}\left(a-(a+b)z_i+\sum_{1\le j \le n, j \ne i}^{}\frac{2z_i(1-z_i)}{z_i-z_j}\right)\frac{\partial}{\partial z_i}.\end{aligned}$$ With $I$ denoting either $[0,\infty)$ or $[0,1]$, define the chamber, $$\begin{aligned} W^n(I)=\{x=(x_1,\cdots,x_n)\in I^n:x_1\le \cdots \le x_n\}.\end{aligned}$$ Moreover, for $x\in W^{n+1}$ define the set of $y \in W^{n}$ that *interlace* with $x$ by, $$\begin{aligned} W^{n,n+1}(x)=\{y=(y_1,\cdots,y_n)\in I^n: x_1\le y_1 \le x_2 \le \cdots \le y_n \le x_{n+1}\}.\end{aligned}$$ For $x\in W^{n+1}$ and $y\in W^{n,n+1}(x)$, define the *Dixon-Anderson* conditional *probability* density on $W^{n,n+1}(x)$ (originally introduced by Dixon at the beginning of the last century in [@Dixon] and independently rediscovered by Anderson in his study of the Selberg integral in [@Anderson]) by, $$\begin{aligned} \lambda^{\theta}_{n,n+1}(x,y)=\frac{\Gamma (\theta
(n+1))}{\Gamma(\theta)^{n+1}}\prod_{1\le i <j \le n+1}^{}(x_j-x_i)^{1-2\theta}\prod_{1\le i <j \le n}^{}(y_j-y_i)\prod_{i=1}^{n}\prod_{j=1}^{n+1}|y_i-x_j|^{\theta-1}.\end{aligned}$$ Denote by $\Lambda^{\theta}_{n,n+1}$ the integral operator with kernel $\lambda^{\theta}_{n,n+1}$ i.e., $$\begin{aligned} (\Lambda^{\theta}_{n,n+1}f)(x)=\int_{y\in W^{n,n+1}(x)}^{}\lambda^{\theta}_{n,n+1}(x,y)f(y)dy.\end{aligned}$$ Then, our goal is to prove the following theorem, which should be considered a generalization, to the other two classical $\beta$-ensembles, the $Laguerre$ and $Jacobi$, of the result of [@RamananShkolnikov] for the $Gaussian$ ensemble. \[MainTheorem\] Let $\beta \ge 1$, $d\ge 2$ and $a,b \ge 1$. Then, with $\theta=\frac{\beta}{2}$, we have the following equalities of Markov kernels, $\forall t \ge 0$, $$\begin{aligned} P^{(n+1)}_{d-2,\theta}(t)\Lambda^{\theta}_{n,n+1}&=\Lambda^{\theta}_{n,n+1}P^{(n)}_{d,\theta}(t)\label{BESQintertwining},\\ Q^{(n+1)}_{a-1,b-1,\theta}(t)\Lambda^{\theta}_{n,n+1}&=\Lambda^{\theta}_{n,n+1}Q^{(n)}_{a,b,\theta}(t) \label{Jacobiintertwining}.\end{aligned}$$ For $\beta=2$, this result was already obtained in [@InterlacingDiffusions], see in particular subsections 3.7 and 3.8 therein respectively. The general theory of intertwining diffusions (see [@PalShkolnikov]), suggests that there should be a way to realize these intertwining relations by coupling these $n$ and $n+1$ particle processes, so that they interlace.
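As a sanity check that $\lambda^{\theta}_{n,n+1}$ really is a probability density, for $n=1$ it reduces to a Beta-type density on $[x_1,x_2]$, whose total mass can be computed by simple quadrature (midpoint rule; $\theta=1.5$ keeps the integrand bounded at the endpoints):

```python
import math

def dixon_anderson(x1, x2, y, theta):
    # lambda^theta_{1,2}((x1, x2), y) for x1 < y < x2
    const = math.gamma(2.0 * theta) / math.gamma(theta)**2
    return (const * (x2 - x1)**(1.0 - 2.0 * theta)
            * (y - x1)**(theta - 1.0) * (x2 - y)**(theta - 1.0))

def total_mass(x1, x2, theta, steps=100000):
    # Midpoint-rule integral of the density over (x1, x2).
    h = (x2 - x1) / steps
    return sum(dixon_anderson(x1, x2, x1 + (i + 0.5) * h, theta) * h
               for i in range(steps))

mass = total_mass(0.0, 3.0, theta=1.5)
print(mass)  # close to 1
```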
In the Laguerre case (the Jacobi case is analogous), the resulting process $Z=(X,Y)$, with $Y$ evolving according to $P^{(n)}_{d,\theta}(t)$ and $X$ in its own filtration according to $P^{(n+1)}_{d-2,\theta}(t)$, should (conjecturally) have generator given by, $$\begin{aligned} \mathcal{L}^{n,n+1}_{\beta,d}=\sum_{j=1}^{n}2y_j\partial^2_{y_j}+\beta\sum_{j=1}^{n}\left(\frac{d}{2}+\sum_{i\ne j}^{}\frac{2y_j}{y_j-y_i}\right)\partial_{y_j}+\sum_{j=1}^{n+1}2x_j\partial^2_{x_j}+\beta\sum_{j=1}^{n+1}\left(\frac{d-2}{2}+\sum_{i\ne j}^{}\frac{2x_j}{x_j-x_i}\right)\partial_{x_j}\\ +(1-\beta)\sum_{j=1}^{n+1}\sum_{i \ne j}^{}\frac{4x_j}{x_i-x_j}\partial_{x_j}+\left(\frac{\beta}{2}-1\right)\sum_{j=1}^{n+1}\sum_{i=1}^{n}\frac{4x_j}{x_j-y_i}\partial_{x_j},\end{aligned}$$ with reflecting boundary conditions of the $X$ components on the $Y$ particles (in case they do collide). For a rigorous construction of the analogous coupled process in the case of Dyson Brownian motions with $\beta>2$, see Section 4 of [@GorinShkolnikov]. In fact, for certain values of the parameters, the construction of the process with the generator above can be reduced to the results of [@GorinShkolnikov], and a more detailed account will appear as part of the author’s PhD thesis [@PhDThesis]. As just mentioned, such a coupling was constructed for Dyson Brownian motion with $\beta > 2$ in [@GorinShkolnikov]; and in [@InterlacingDiffusions] (see also [@Sun]) for copies of general one-dimensional diffusion processes, which in particular includes the squared Bessel (this corresponds to the $Laguerre$ process of this note) and Jacobi cases for $\beta=2$, when the interaction between the two levels entirely consists of local hard reflection and the transition kernels are explicit.
Given such 2-level couplings, one can then iterate to construct a multilevel process in a Gelfand-Tsetlin pattern, as in [@Warren], which initiated this program (see also [@GorinShkolnikov],[@PalShkolnikov],[@InterlacingDiffusions]). For a different type of coupling, for $\beta=2$ Dyson Brownian motion, that preceded [@O; @Connell] and is related to the Robinson-Schensted correspondence, see [@O; @ConnellTams], [@O; @ConnellYor] and the related work [@BougerolJeulin]. Using Theorem \[MainTheorem\] and that $\mathcal{M}^{Jac,n}_{a,b,\beta}$ is the *unique* stationary measure of (\[Jacobisde\]), which follows from smoothness and positivity of the transition density $p^{n,\beta,a,b}_t(x,y)$, with respect to Lebesgue measure, of $Q^{(n)}_{a,b,\theta}(t)$ (see Proposition 4.1 of [@DemniJacobi]; for this to apply we further need to restrict to $a,b > \frac{1}{\beta}$) and the fact that two distinct ergodic measures must be mutually singular (see [@Walters]), we immediately get the following. For $\beta\ge 1$ and $a,b > 1$ and with $\theta=\frac{\beta}{2}$, $$\begin{aligned} \mathcal{M}^{Jac,n+1}_{a-1,b-1,\beta}\Lambda^{\theta}_{n,n+1}=\mathcal{M}^{Jac,n}_{a,b,\beta}.\end{aligned}$$ From (\[Jacobiintertwining\]) we obtain that $\mathcal{M}^{Jac,n+1}_{a-1,b-1,\beta}\Lambda^{\theta}_{n,n+1}$ is the unique stationary measure of $Q^{(n)}_{a,b,\theta}(t)$, and hence coincides with $\mathcal{M}^{Jac,n}_{a,b,\beta}$. Before closing this introduction we remark that, in order to establish Theorem \[MainTheorem\], we will follow the strategy given in [@RamananShkolnikov], namely we rely on the explicit action of the generators and integral kernel on the class of Jack polynomials which, along with an exponential moment estimate, will allow us to apply the moment method. We note that, although the $\beta$-Laguerre and $\beta$-Jacobi diffusions look more complicated than $\beta$-Dyson’s Brownian motion, the main computation, performed in Step 1 of the proof below, is actually simpler than the one in [@RamananShkolnikov].
#### Acknowledgements I would like to thank Jon Warren for several useful comments on an earlier draft of this note and also Neil O’Connell and Nizar Demni for some historical remarks. Financial support through the MASDOC DTC grant number EP/HO23364/1 is gratefully acknowledged. Preliminaries on Jack polynomials ================================= We collect some facts on the Jack polynomials $J_{\lambda}(z;\theta)$ which, as already mentioned, will play a key role in obtaining these intertwining relations. We mainly follow [@RamananShkolnikov] which in turn follows [@BakerForrester] (note that there is a misprint in [@RamananShkolnikov]; there is a factor of $\frac{1}{2}$ missing from equation (2.7) therein c.f. equation (2.13d) in [@BakerForrester]). The $J_{\lambda}(z;\theta)$ are defined to be the (unique up to normalization) symmetric polynomial eigenfunctions in $n$ variables of the differential operator $\mathcal{D}^{(n),\theta}$, $$\begin{aligned} \mathcal{D}^{(n),\theta}=\sum_{i=1}^{n}z^2_i\frac{\partial^2}{\partial z^2_i}+2 \theta \sum_{i=1}^{n}\sum_{1\le j \le n, j \ne i}^{}\frac{z^2_i}{z_i-z_j}\frac{\partial}{\partial z_i},\end{aligned}$$ indexed by partitions $\lambda=(\lambda_1 \ge \lambda_2\ge \cdots)$ of length $l$ with eigenvalue $eval(\lambda,n,\theta)=2B(\lambda')-2\theta B(\lambda)+2\theta(n-1)|\lambda|$ where $B(\lambda)=\sum(i-1)\lambda_i=\sum \binom{\lambda'_i}{2}$ and $\lambda'$ is the conjugate partition.
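The combinatorics just introduced are easy to check in code; a small sketch computing the conjugate partition, $B(\lambda)$, the identity $B(\lambda)=\sum_i \binom{\lambda'_i}{2}$, and the eigenvalue $eval(\lambda,n,\theta)$:

```python
from math import comb

def conjugate(lam):
    # Conjugate (transposed) partition of a non-increasing list of parts.
    return [sum(1 for part in lam if part > i) for i in range(lam[0])] if lam else []

def B(lam):
    # B(lambda) = sum_i (i - 1) lambda_i  (0-indexed enumerate supplies i - 1).
    return sum(i * part for i, part in enumerate(lam))

def eval_jack(lam, n, theta):
    # Eigenvalue of D^{(n),theta} acting on J_lambda.
    return 2 * B(conjugate(lam)) - 2 * theta * B(lam) + 2 * theta * (n - 1) * sum(lam)

lam = [4, 2, 2, 1]
lam_c = conjugate(lam)  # -> [4, 3, 1, 1]
# The identity B(lambda) = sum_i binom(lambda'_i, 2) quoted above:
assert B(lam) == sum(comb(k, 2) for k in lam_c)
print(lam_c, B(lam), eval_jack(lam, n=5, theta=1.0))
```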
With $1_n$ denoting a row vector of $n$ $1$s, we have the normalization, $$\begin{aligned} J_{\lambda}(1_n;\theta)=\theta^{-|\lambda|}\prod_{i=1}^{l}\frac{\Gamma\left(\left(n+1-i\right)\theta+\lambda_i\right)}{\Gamma\left(\left(n+1-i\right)\theta\right)}.\end{aligned}$$ Define the following differential operators, $$\begin{aligned} \mathcal{B}_1^{(n)}&= \sum_{i=1}^{n}\frac{\partial}{\partial z_i},\\ \mathcal{B}_2^{(n),\theta}&=\sum_{i=1}^{n}z_i\frac{\partial^2}{\partial z^2_i}+2 \theta \sum_{i=1}^{n}\sum_{1\le j \le n, j \ne i}^{}\frac{z_i}{z_i-z_j}\frac{\partial}{\partial z_i},\\ \mathcal{B}_3^{(n)}&= \sum_{i=1}^{n}z_i\frac{\partial}{\partial z_i}.\end{aligned}$$ Then the action of these operators on the $J_{\lambda}(z;\theta)$’s is given explicitly by (see [@BakerForrester] equations $(2.13a)$, $(2.13d)$ and $(2.13b)$ respectively), $$\begin{aligned} \mathcal{B}_1^{(n)}J_{\lambda}(z;\theta)&=J_{\lambda}(1_n;\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}\frac{J_{\lambda_{(i)}}(z;\theta)}{J_{\lambda_{(i)}}(1_n;\theta)},\\ \mathcal{B}_2^{(n),\theta}J_{\lambda}(z;\theta)&=J_{\lambda}(1_n;\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}(\lambda_i-1+(n-i)\theta)\frac{J_{\lambda_{(i)}}(z;\theta)}{J_{\lambda_{(i)}}(1_n;\theta)},\\ \mathcal{B}_3^{(n)}J_{\lambda}(z;\theta)&=|\lambda|J_{\lambda}(z;\theta),\end{aligned}$$ where $\lambda_{(i)}$ is the sequence given by $\lambda_{(i)}=(\lambda_1,\cdots,\lambda_{i-1},\lambda_i-1,\lambda_{i+1},\cdots)$ (in case $i=l$ and $\lambda_i=1$ we drop $\lambda_l$ from $\lambda$) and the combinatorial coefficients $\binom{\lambda}{\rho}_{\theta}$ are defined by the following expansion (we set $\binom{\lambda}{\lambda_{(i)}}_{\theta}=0$ in case $\lambda_{(i)}$ is no longer a non-increasing positive sequence), $$\begin{aligned} \frac{J_{\lambda}(1_n+z;\theta)}{J_{\lambda}(1_n;\theta)}=\sum_{m=0}^{|\lambda|}\sum_{|\rho|=m}^{}\binom{\lambda}{\rho}_{\theta}\frac{J_{\rho}(z;\theta)}{J_{\rho}(1_n;\theta)},\end{aligned}$$
but whose exact values will not be required in what follows. Finally, we need the following about the action of $\Lambda^{\theta}_{n,n+1}$ on $J_{\lambda}(\cdot;\theta)$ (see [@OkounkovOlshanski] Section 6), $$\begin{aligned} \label{kernelonjack} \int_{W^{n,n+1}(x)}^{}\lambda^{\theta}_{n,n+1}(x,y)J_{\lambda}(y;\theta)dy=J_{\lambda}(x;\theta)c(\lambda,n,\theta) ,\end{aligned}$$ where, $$\begin{aligned} c(\lambda,n,\theta)=\frac{\Gamma((n+1)\theta)}{\Gamma(\theta)}\prod_{i=1}^{n}\frac{\Gamma\left(\left(n+1-i\right)\theta+\lambda_i\right)}{\Gamma\left(\left(n+2-i\right)\theta+\lambda_i\right)}.\end{aligned}$$ Proof ===== We split the proof into four steps, following the strategy laid out in [@RamananShkolnikov]. First, note that we can write the operators $\mathcal{L}^{(n)}_{d,\theta}$ and $\mathcal{A}^{(n)}_{a,b,\theta}$ as follows, $$\begin{aligned} \mathcal{L}^{(n)}_{d,\theta}&=2\mathcal{B}_2^{(n),\theta}+\theta d \mathcal{B}_1^{(n)},\\ \mathcal{A}^{(n)}_{a,b,\theta}&=2\mathcal{B}_2^{(n),\theta}-2\mathcal{D}^{(n),\theta}+2\theta a \mathcal{B}_1^{(n)}-2\theta(a+b)\mathcal{B}_3^{(n)}.\end{aligned}$$ #### Step 1 The aim of this step is to show the intertwining relation at the level of the infinitesimal generators acting on the Jack polynomials. Namely that, $$\begin{aligned} \mathcal{L}^{(n+1)}_{d-2,\theta}\Lambda^{\theta}_{n,n+1}J_{\lambda}(\cdot;\theta)&=\Lambda^{\theta}_{n,n+1}\mathcal{L}^{(n)}_{d,\theta}J_{\lambda}(\cdot;\theta)\label{BESQgeneratorintertwining}, \\ \mathcal{A}^{(n+1)}_{a-1,b-1,\theta}\Lambda^{\theta}_{n,n+1}J_{\lambda}(\cdot;\theta)&=\Lambda^{\theta}_{n,n+1}\mathcal{A}^{(n)}_{a,b,\theta}J_{\lambda}(\cdot;\theta)\label{Jacgeneratorintertwining}.\end{aligned}$$ We will show relation (\[Jacgeneratorintertwining\]) for the Jacobi case and at the end of Step 1 indicate how to obtain (\[BESQgeneratorintertwining\]).
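The constant $c(\lambda,n,\theta)$ is a finite product of gamma functions and can be evaluated directly. As a consistency check, for the empty partition (where $J_\emptyset \equiv 1$) the product telescopes to $1$, so (\[kernelonjack\]) reduces to the statement that $\lambda^{\theta}_{n,n+1}$ has total mass one:

```python
import math

def c(lam, n, theta):
    # c(lambda, n, theta) = Gamma((n+1) theta) / Gamma(theta)
    #   * prod_{i=1}^{n} Gamma((n+1-i) theta + lambda_i) / Gamma((n+2-i) theta + lambda_i)
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition to length n
    out = math.gamma((n + 1) * theta) / math.gamma(theta)
    for i in range(1, n + 1):
        out *= (math.gamma((n + 1 - i) * theta + lam[i - 1])
                / math.gamma((n + 2 - i) * theta + lam[i - 1]))
    return out

print(c([], 4, 1.5))    # telescopes to 1 for the empty partition
print(c([3, 1], 2, 0.5))
```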
**(LHS)**= $$\begin{aligned} &\mathcal{A}^{(n+1)}_{a-1,b-1,\theta}J_{\lambda}(x;\theta)c(\lambda,n,\theta)=c(\lambda,n,\theta)\left(2\mathcal{B}_2^{(n+1),\theta}-2\mathcal{D}^{(n+1),\theta}+2\theta (a-1) \mathcal{B}_1^{(n+1)}-2\theta(a+b-2)\mathcal{B}_3^{(n+1)}\right)J_{\lambda}(x;\theta)\\ &=c(\lambda,n,\theta)\bigg[2J_{\lambda}(1_{n+1};\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}(\lambda_i-1+(n+1-i)\theta)\frac{J_{\lambda_{(i)}}(x;\theta)}{J_{\lambda_{(i)}}(1_{n+1};\theta)}-2eval(\lambda,n+1,\theta)J_{\lambda}(x;\theta)\\ &+2\theta(a-1)J_{\lambda}(1_{n+1};\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}\frac{J_{\lambda_{(i)}}(x;\theta)}{J_{\lambda_{(i)}}(1_{n+1};\theta)}-2\theta(a+b-2)|\lambda|J_{\lambda}(x;\theta)\bigg].\end{aligned}$$ **(RHS)**: We start by computing $\mathcal{A}^{(n)}_{a,b,\theta}J_{\lambda}(y;\theta)$. $$\begin{aligned} &\mathcal{A}^{(n)}_{a,b,\theta}J_{\lambda}(y;\theta)=\left(2\mathcal{B}_2^{(n),\theta}-2\mathcal{D}^{(n),\theta}+2\theta a \mathcal{B}_1^{(n)}-2\theta(a+b)\mathcal{B}_3^{(n)}\right)J_{\lambda}(y;\theta) \nonumber\\ &=\bigg[2J_{\lambda}(1_{n};\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}(\lambda_i-1+(n-i)\theta)\frac{J_{\lambda_{(i)}}(y;\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}-2eval(\lambda,n,\theta)J_{\lambda}(y;\theta)\label{linearcombination}\\ &+2\theta a J_{\lambda}(1_{n};\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}\frac{J_{\lambda_{(i)}}(y;\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}-2\theta(a+b)|\lambda|J_{\lambda}(y;\theta)\bigg] \nonumber.\end{aligned}$$ Now, apply $\Lambda^{\theta}_{n,n+1}$ to obtain that, $$\begin{aligned} \textbf{(RHS)}&=2J_{\lambda}(1_{n};\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}(\lambda_i-1+(n-i)\theta)c(\lambda_{(i)},n,\theta)\frac{J_{\lambda_{(i)}}(x;\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}-2c(\lambda,n,\theta)eval(\lambda,n,\theta)J_{\lambda}(x;\theta)\\ & +2\theta a
J_{\lambda}(1_{n};\theta)\sum_{i=1}^{l}\binom{\lambda}{\lambda_{(i)}}_{\theta}c(\lambda_{(i)},n,\theta)\frac{J_{\lambda_{(i)}}(x;\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}-2\theta(a+b)|\lambda|c(\lambda,n,\theta)J_{\lambda}(x;\theta).\end{aligned}$$ Now, in order to check **(LHS)**=**(RHS)** we check that the coefficients of $J_{\lambda}$ and $J_{\lambda_{(i)}}$ $\forall i$ coincide on both sides.\ $\bullet$ First, the coefficients of $J_{\lambda}(x;\theta)$: **(LHS)**: $-2c(\lambda,n,\theta)eval(\lambda,n+1,\theta)-c(\lambda,n,\theta)|\lambda|2 \theta (a+b-2)$. **(RHS)**: $-2c(\lambda,n,\theta)eval(\lambda,n,\theta)-c(\lambda,n,\theta)|\lambda|2 \theta (a+b)$. These are equal iff: $$\begin{aligned} \frac{-2eval(\lambda,n,\theta)+2eval(\lambda,n+1,\theta)}{4\theta |\lambda|}=1,\end{aligned}$$ which is easily checked from the explicit expression of $eval(\lambda,n,\theta)$.\ $\bullet$ Now, for the coefficients of $J_{\lambda_{(i)}}(x;\theta)$: **(LHS)**: $$\begin{aligned} & 2J_{\lambda}(1_{n+1};\theta)\binom{\lambda}{\lambda_{(i)}}_{\theta}(\lambda_i-1+(n+1-i)\theta)\frac{c(\lambda,n,\theta)}{J_{\lambda_{(i)}}(1_{n+1};\theta)}+2\theta(a-1)J_{\lambda}(1_{n+1};\theta)\binom{\lambda}{\lambda_{(i)}}_{\theta}\frac{c(\lambda,n,\theta)}{J_{\lambda_{(i)}}(1_{n+1};\theta)}.\end{aligned}$$ **(RHS)**: $$\begin{aligned} & 2J_{\lambda}(1_{n};\theta)\binom{\lambda}{\lambda_{(i)}}_{\theta}(\lambda_i-1+(n-i)\theta)\frac{c(\lambda_{(i)},n,\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}+2\theta aJ_{\lambda}(1_{n};\theta)\binom{\lambda}{\lambda_{(i)}}_{\theta}\frac{c(\lambda_{(i)},n,\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}.\end{aligned}$$ These are equal iff: $$\begin{aligned}
a-1=\frac{J_{\lambda}(1_{n};\theta)c(\lambda_{(i)},n,\theta)J_{\lambda_{(i)}}(1_{n+1};\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)c(\lambda,n,\theta)J_{\lambda}(1_{n+1};\theta)}a+\frac{1}{\theta}\frac{J_{\lambda}(1_{n};\theta)c(\lambda_{(i)},n,\theta)J_{\lambda_{(i)}}(1_{n+1};\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)c(\lambda,n,\theta)J_{\lambda}(1_{n+1};\theta)}(\lambda_i-1+(n-i)\theta)\\ -\frac{1}{\theta}(\lambda_i-1+(n+1-i)\theta).\end{aligned}$$ We first claim that, $$\begin{aligned} \frac{J_{\lambda}(1_{n};\theta)c(\lambda_{(i)},n,\theta)J_{\lambda_{(i)}}(1_{n+1};\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)c(\lambda,n,\theta)J_{\lambda}(1_{n+1};\theta)}=1.\end{aligned}$$ This immediately follows from, $$\begin{aligned} \frac{J_{\lambda}(1_{n};\theta)}{J_{\lambda_{(i)}}(1_{n};\theta)}&=\theta^{-1}\frac{\Gamma\left(\left(n+1-i\right)\theta+\lambda_i\right)}{\Gamma\left(\left(n+1-i\right)\theta+\lambda_i-1\right)},\\ \frac{J_{\lambda_{(i)}}(1_{n+1};\theta)}{J_{\lambda}(1_{n+1};\theta)}&=\theta\frac{\Gamma\left(\left(n+2-i\right)\theta+\lambda_i-1\right)}{\Gamma\left(\left(n+2-i\right)\theta+\lambda_i\right)},\\ \frac{c(\lambda_{(i)},n,\theta)}{c(\lambda,n,\theta)}&=\frac{\Gamma\left(\left(n+1-i\right)\theta+\lambda_i-1\right)\Gamma\left(\left(n+2-i\right)\theta+\lambda_i\right)}{\Gamma\left(\left(n+1-i\right)\theta+\lambda_i\right)\Gamma\left(\left(n+2-i\right)\theta+\lambda_i-1\right)}.\end{aligned}$$ Hence, we need to check that the following is true, $$\begin{aligned} a-1=a+\frac{1}{\theta}(\lambda_i-1+(n-i)\theta)-\frac{1}{\theta}(\lambda_i-1+(n-i+1)\theta),\end{aligned}$$ which is obvious. Now, in order to obtain (\[BESQgeneratorintertwining\]) we only need to consider coefficients in $J_{\lambda_{(i)}}$’s (since the operators $\mathcal{D}^{(n),\theta}$ and $\mathcal{B}_3^{(n)}$ that produce $J_{\lambda}$’s are missing) and replace $a$ by $\frac{d}{2}$. 
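The two scalar identities used in the coefficient comparison above are easy to verify numerically: $eval(\lambda,n+1,\theta)-eval(\lambda,n,\theta)=2\theta|\lambda|$, and the product of the three gamma-function ratios equals $1$. A small sketch (the partition and parameter values are arbitrary test inputs):

```python
import math

def B(lam):
    return sum(i * part for i, part in enumerate(lam))  # sum (i-1) lambda_i

def conjugate(lam):
    return [sum(1 for p in lam if p > i) for i in range(lam[0])] if lam else []

def ev(lam, n, theta):
    # eval(lambda, n, theta) = 2 B(lambda') - 2 theta B(lambda) + 2 theta (n-1) |lambda|
    return 2 * B(conjugate(lam)) - 2 * theta * B(lam) + 2 * theta * (n - 1) * sum(lam)

def ratio_product(lam, i, n, theta):
    # Product of the three gamma-function ratios quoted above, which should be 1.
    li = lam[i - 1]
    r1 = (1 / theta) * math.gamma((n + 1 - i) * theta + li) / math.gamma((n + 1 - i) * theta + li - 1)
    r2 = theta * math.gamma((n + 2 - i) * theta + li - 1) / math.gamma((n + 2 - i) * theta + li)
    r3 = (math.gamma((n + 1 - i) * theta + li - 1) * math.gamma((n + 2 - i) * theta + li)
          / (math.gamma((n + 1 - i) * theta + li) * math.gamma((n + 2 - i) * theta + li - 1)))
    return r1 * r2 * r3

lam, n, theta = [4, 2, 1], 5, 0.75
print(ev(lam, n + 1, theta) - ev(lam, n, theta), 2 * theta * sum(lam))  # equal
print(ratio_product(lam, 2, n, theta))  # equals 1
```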
To prove the analogous result for $\beta$-Dyson Brownian motions, one needs to observe, as done in [@RamananShkolnikov], that the generator of the $n$-particle $\beta$-Dyson Brownian motion $L^{(n)}_{\theta}$ can be written as a commutator, namely $L^{(n)}_{\theta}=[\mathcal{B}_1^{(n)},\mathcal{B}_2^{(n),\theta}]=\mathcal{B}_1^{(n)}\mathcal{B}_2^{(n),\theta}-\mathcal{B}_2^{(n),\theta}\mathcal{B}_1^{(n)}$. #### Step 2 We obtain an exponential moment estimate, namely regarding $\mathbb{E}_{x}\left[e^{\epsilon \|X^{(n)}(t)\|}\right]$. This is obviously finite by compactness of $[0,1]^n$ in the Jacobi case. In the Laguerre case, we proceed as follows. Writing $X^{(n)}$ for the solution to (\[BESQsde\]), letting $\|\cdot\|$ denote the $l_1$ norm and recalling that all entries of $X^{(n)}$ are non-negative we obtain, $$\begin{aligned} d\|X^{(n)}(t)\|=\sum_{i=1}^{n}2\sqrt{X_i^{(n)}(t)}dB_i^{(n)}(t)+\beta\left(\frac{d}{2}n+\sum_{i=1}^{n}\sum_{1 \le j \le n, j\ne i}^{}\frac{2X_i^{(n)}(t)}{X_i^{(n)}(t)-X_j^{(n)}(t)}\right)dt.\end{aligned}$$ Note that, $$\begin{aligned} \sum_{i=1}^{n}\sum_{1 \le j \le n, j\ne i}^{}\frac{2X_i^{(n)}(t)}{X_i^{(n)}(t)-X_j^{(n)}(t)}=2\binom{n}{2},\end{aligned}$$ and that by Lévy’s characterization the local martingale $(M(t),t\ge 0)$ defined by, $$\begin{aligned} dM(t)=\frac{1}{\sqrt{\|X^{(n)}(t)\|}}\sum_{i=1}^{n}\sqrt{X^{(n)}_i(t)}dB^{(n)}_i(t),\end{aligned}$$ is equal to a standard Brownian motion $(W(t),t\ge 0)$ and so we obtain, $$\begin{aligned} d\|X^{(n)}(t)\|=2\sqrt{\|X^{(n)}(t)\|}dW(t)+\beta\left(\frac{d}{2}n+2\binom{n}{2}\right)dt.\end{aligned}$$ Thus, $\|X^{(n)}(t)\|$ is a squared Bessel process of dimension $dim_{\beta,n,d}=\beta\left(\frac{d}{2}n+2\binom{n}{2}\right)$.
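The algebraic identity behind the constant drift term is easy to verify directly, since each unordered pair $\{i,j\}$ contributes $2x_i/(x_i-x_j)+2x_j/(x_j-x_i)=2$. A minimal numerical check (toy coordinates of our own choosing):

```python
def interaction_sum(x):
    """sum_{i} sum_{j != i} 2*x_i / (x_i - x_j) for pairwise-distinct x_i."""
    n = len(x)
    return sum(2.0 * x[i] / (x[i] - x[j])
               for i in range(n) for j in range(n) if j != i)

# Each unordered pair {i, j} contributes
#   2*x_i/(x_i - x_j) + 2*x_j/(x_j - x_i) = 2,
# so the double sum equals 2*binom(n, 2) = n*(n - 1), independently of the
# configuration -- exactly the constant appearing in the drift above.
```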
Hence, from standard estimates (see [@RevuzYor] Chapter IX.1 or Proposition 2.1 of [@LDPBessel]; in the case that $dim_{\beta,n,d}$ is an integer the result is an immediate consequence of Fernique’s theorem ([@Fernique]) since $\|X^{(n)}(t)\|$ is the square of a Gaussian process) it follows that, for $\epsilon>0$ small enough, $\mathbb{E}_{x}\left[e^{\epsilon \|X^{(n)}(t)\|}\right]<\infty$. #### Step 3 We now lift the intertwining relation to the semigroups acting on the Jack polynomials, namely, $$\begin{aligned} P^{(n+1)}_{d-2,\theta}(t)\Lambda^{\theta}_{n,n+1}J_{\lambda}(\cdot;\theta)&=\Lambda^{\theta}_{n,n+1}P^{(n)}_{d,\theta}(t)J_{\lambda}(\cdot;\theta),\\ Q^{(n+1)}_{a-1,b-1,\theta}(t)\Lambda^{\theta}_{n,n+1}J_{\lambda}(\cdot;\theta)&=\Lambda^{\theta}_{n,n+1}Q^{(n)}_{a,b,\theta}(t)J_{\lambda}(\cdot;\theta).\end{aligned}$$ The proof follows almost word for word the elegant argument given in [@RamananShkolnikov]. We reproduce it here, elaborating a bit on some parts, for the convenience of the reader, and considering only the Laguerre case for concreteness. Applying Ito’s formula to $J_{\lambda}(X^{(n)}(t);\theta)$ and taking expectations (note that the stochastic integral term is a true martingale since its expected quadratic variation is finite, which follows from the exponential estimate of Step 2), we obtain, $$\begin{aligned} \label{integral} P^{(n)}_{d,\theta}(t)J_{\lambda}(\cdot;\theta)=J_{\lambda}(\cdot;\theta)+\int_{0}^{t}P^{(n)}_{d,\theta}(s)\mathcal{L}^{(n)}_{d,\theta}J_{\lambda}(\cdot;\theta)ds.\end{aligned}$$ Now, note that by (\[linearcombination\]), $\mathcal{L}^{(n)}_{d,\theta}J_{\lambda}(\cdot;\theta)$ is given by a linear combination of Jack polynomials $J_{\kappa}(\cdot;\theta)$ for some partitions $\kappa$ with $\kappa_i\le \lambda_i$ $\forall i \le l$ and we will write $\kappa\le \lambda$ if this holds.
Each $J_{\kappa}(\cdot;\theta)$ obeys (\[integral\]) and we can evaluate $\mathcal{L}^{(n)}_{d,\theta}J_{\kappa}(\cdot;\theta)$ again therein as a linear combination of Jack polynomials indexed now by partitions $\nu$ with $\nu \le \kappa$. By iteration we obtain a finite system of linear integral equations whose unique solution is given by a matrix exponential. Thus, the action of $\mathcal{L}^{(n)}_{d,\theta}$ on the *finite* dimensional vector space spanned by the Jack polynomials in consideration, namely the ones indexed by partitions $\kappa$ with $\kappa \le \lambda$, can be represented by a matrix whose exponential, applied to the initial value vector $\left(J_{\kappa}(\cdot;\theta),\kappa \le \lambda\right)$, gives the solution to this system of equations. More explicitly, if we define $f_{\kappa}(t)=P^{(n)}_{d,\theta}(t)J_{\kappa}(\cdot;\theta)$ and denote the action of $\mathcal{L}^{(n)}_{d,\theta}$ by a matrix $M_2$ indexed by partitions $\kappa \le \lambda$ then the system of these integral equations is given by, for $\kappa \le \lambda$, $$\begin{aligned} f_{\kappa}(t)=f_{\kappa}(0)+\sum_{\nu \le \lambda}^{}M_2(\kappa,\nu)\int_{0}^{t}f_{\nu}(s)ds,\end{aligned}$$ and its solution by, $$\begin{aligned} f_{\kappa}(t)=\sum_{\nu \le \lambda}^{}e^{tM_2}(\kappa,\nu)f_{\nu}(0).\end{aligned}$$ Now, the same considerations with $n$ replaced by $n+1$, along with the following elementary fact about finite dimensional square matrices, $$\begin{aligned} M_3M_1=M_1M_2 \implies e^{tM_3}M_1=M_1e^{tM_2} \ \textnormal{for} \ t \ge 0,\end{aligned}$$ (by (\[kernelonjack\]) $\Lambda^{\theta}_{n,n+1}$ acts on the aforementioned finite dimensional vector space of Jack polynomials as a matrix, which we denote by $M_1$; similarly the action of $\mathcal{L}^{(n)}_{d,\theta}$ and $\mathcal{L}^{(n+1)}_{d-2,\theta}$ is denoted by the matrices $M_2$ and $M_3$ respectively) give that, $$\begin{aligned}
P^{(n+1)}_{d-2,\theta}(t)\Lambda^{\theta}_{n,n+1}J_{\lambda}(\cdot;\theta)&=\Lambda^{\theta}_{n,n+1}P^{(n)}_{d,\theta}(t)J_{\lambda}(\cdot;\theta).\end{aligned}$$ #### Step 4 We again follow [@RamananShkolnikov]. Recall (see [@RamananShkolnikov] and the references therein) that we can write any *symmetric* polynomial $p$ in $n$ variables as a finite linear combination of Jack polynomials in $n$ variables. Hence, for any such $p$, $$\begin{aligned} P^{(n+1)}_{d-2,\theta}(t)\Lambda^{\theta}_{n,n+1}p(\cdot)&=\Lambda^{\theta}_{n,n+1}P^{(n)}_{d,\theta}(t)p(\cdot)\label{symmpolyintertwining1},\\ Q^{(n+1)}_{a-1,b-1,\theta}(t)\Lambda^{\theta}_{n,n+1}p(\cdot)&=\Lambda^{\theta}_{n,n+1}Q^{(n)}_{a,b,\theta}(t)p(\cdot)\label{symmpolyintertwining2}.\end{aligned}$$ Now, any probability measure $\mu$ on $W^n(I)$ will give rise to a symmetrized probability measure $\mu^{symm}$ on $I^n$ as follows, $$\begin{aligned} \mu^{symm}(dz_1,\cdots,dz_n)=\frac{1}{n!}\mu(dz_{(1)},\cdots,dz_{(n)}),\end{aligned}$$ where $z_{(1)}\le z_{(2)}\le \cdots \le z_{(n)}$ are the order statistics of $(z_1,z_2,\cdots,z_n)$. Moreover, for every (not necessarily symmetric) polynomial $q$ in $n$ variables, with $S_n$ denoting the symmetric group on $n$ symbols, we have, $$\begin{aligned} \int_{I^n}^{}q(z)d\mu^{symm}(z)=\int_{I^n}^{}\frac{1}{n!}\sum_{\sigma \in S_n}^{}q(z_{\sigma(1)},\cdots,z_{\sigma(n)})d\mu^{symm}(z)=\int_{W^{n}(I)}^{}\frac{1}{n!}\sum_{\sigma \in S_n}^{}q(z_{\sigma(1)},\cdots,z_{\sigma(n)})d\mu(z).\end{aligned}$$ Note that now $p(z)=\frac{1}{n!}\sum_{\sigma \in S_n}^{}q(z_{\sigma(1)},\cdots,z_{\sigma(n)})$ is a symmetric polynomial (in $n$ variables). Thus, from (\[symmpolyintertwining1\]) and (\[symmpolyintertwining2\]) all moments of the symmetrized versions of both sides of (\[BESQintertwining\]) and (\[Jacobiintertwining\]) coincide.
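The elementary matrix fact used in Step 3, that $M_3M_1=M_1M_2$ propagates to the matrix exponentials, is easily confirmed numerically. A sketch with arbitrary stand-in matrices (these are random placeholders, not the actual Jack-polynomial matrices):

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential via truncated Taylor series; adequate for small norms."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
M2 = 0.2 * rng.standard_normal((4, 4))               # stand-in for the action of L^(n)
M1 = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # stand-in for the link Lambda
M3 = M1 @ M2 @ np.linalg.inv(M1)                     # enforces M3 M1 = M1 M2

lhs = expm_taylor(M3) @ M1    # e^{t M3} M1 at t = 1
rhs = M1 @ expm_taylor(M2)    # M1 e^{t M2} at t = 1
# lhs and rhs agree to machine precision
```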
Hence, by Theorem 1.3 of [@DeJeu] (and the discussion following it) along with the fact that $(\Lambda^{\theta}_{n,n+1}f)(z)\le e^{\epsilon \|z\|_1}$ where $f(y)=e^{\epsilon \|y\|_1}$ (since all coordinates are positive) and our exponential moment estimate from Step 2 we obtain that the symmetrized versions of both sides of (\[BESQintertwining\]) and (\[Jacobiintertwining\]) coincide; here, for each $x\in W^{n+1}$ and $t\ge 0$, we view $P^{(n+1)}_{d-2,\theta}(t)\Lambda^{\theta}_{n,n+1}$ and $\Lambda^{\theta}_{n,n+1}P^{(n)}_{d,\theta}(t)$ as probability measures on $W^n$. In fact, by the discussion after Theorem 1.3 of [@DeJeu], since we work in $[0,\infty)^n$ and not the full space $\mathbb{R}^n$, we need not require that the symmetrized versions of these measures have exponential moments; they only need to integrate $e^{\epsilon \sqrt{\|z\|}}$. The theorem is now proven.

- *A short proof of Selberg’s generalized beta formula*, Forum Mathematicum, Vol. 3, 415-417, (1991).
- *PhD thesis at University of Warwick*, in preparation, (2017+).
- *Interlacing Diffusions*, available from http://arxiv.org/abs/1607.07182, (2016).
- *The Calogero-Sutherland model and generalized classical polynomials*, Communications in Mathematical Physics, Vol. 188, 175-216, (1997).
- *Paths in Weyl chambers and random matrices*, Probability Theory and Related Fields, Vol. 124, Issue 4, 517-543, (2002).
- *$\beta$-Jacobi processes*, Advances in Pure and Applied Mathematics, Vol. 1, 325-344, (2010).
- *Radial Dunkl Processes: Existence and uniqueness, Hitting time, Beta Processes and Random Matrices*, available from http://arxiv.org/abs/0707.0367, (2007).
- *Generalizations of Legendre’s formula $KE'-(K-E)K'=\frac{1}{2}\pi$*, Proceedings of the London Mathematical Society, Vol. 3, 206-224, (1905).
- *Large deviations for squares of Bessel and Ornstein-Uhlenbeck processes*, Probability Theory and Related Fields, Vol. 129, 261-289, (2004).
- *Integrabilite des vecteurs gaussiens*, Comptes Rendus de l’Academie des Sciences Paris A-B, A1698-A1699, (1970).
- *Log-gases and random matrices*, Princeton University Press, (2010).
- *Multilevel Dyson Brownian motions via Jack polynomials*, Probability Theory and Related Fields, Vol. 163, 413-463, (2015).
- *Strong solutions of non-colliding particle systems*, Electronic Journal of Probability, Vol. 19, 1-21, (2014).
- *Determinate multidimensional measures, the extended Carleman theorem and quasi-analytic weights*, Annals of Probability, Vol. 31, No. 3, 1205-1227, (2003).
- *Eigenvalues of the Laguerre Process as Non-Colliding Squared Bessel Processes*, Electronic Communications in Probability, Vol. 6, 107-114, (2001).
- *A path-transformation for random walks and the Robinson-Schensted correspondence*, Transactions of the American Mathematical Society, Vol. 355, (2003).
- *A Representation for Non-Colliding Random Walks*, Electronic Communications in Probability, Vol. 7, 1-12, (2002).
- *Shifted Jack polynomials, binomial formula, and applications*, Mathematical Research Letters, Vol. 4, 69-78, (1997).
- *Intertwining diffusions and wave equations*, available from https://arxiv.org/abs/1306.0857, (2015).
- *Intertwinings of $\beta$-Dyson Brownian motions of different dimensions*, available from https://arxiv.org/abs/1608.01597, (2016).
- *Continuous Martingales and Brownian Motion*, Third Edition, A Series of Comprehensive Studies in Mathematics, Vol. 293, Springer-Verlag, (1999).
- *Markov processes related with Dunkl operators*, Advances in Applied Mathematics, Vol. 21, 575-643, (1998).
- *Laguerre and Jacobi analogues of the Warren process*, available from https://arxiv.org/abs/1610.01635, (2016).
- *An Introduction to Ergodic Theory*, Graduate Texts in Mathematics, Vol. 79, Springer-Verlag, (1982).
- *Dyson’s Brownian motions, intertwining and interlacing*, Electronic Journal of Probability, Vol. 12, 573-590, (2007).
[Mathematics Institute, University of Warwick, Coventry CV4 7AL, U.K.]{}<[email protected]>
--- abstract: 'We study the superfluid weight $D^s$ and Berezinskii-Kosterlitz-Thouless (BKT) transition temperatures $T_{BKT}$ in the case of exotic Fulde-Ferrell (FF) superfluid states in lattice systems. We consider spin-imbalanced systems with and without spin-orbit coupling (SOC) accompanied by an in-plane Zeeman field. By applying mean-field theory, we derive general equations for $D^s$ and $T_{BKT}$ in the presence of SOC and the Zeeman fields for 2D Fermi-Hubbard lattice models, and apply our results to a 2D square lattice. We show that conventional spin-imbalanced FF states without SOC can be observed at finite temperatures and that FF phases are further stabilized against thermal fluctuations by introducing SOC. We also propose how topologically non-trivial SOC-induced FF phases could be identified experimentally by studying the total density profiles. Furthermore, the relative behavior of transverse and longitudinal superfluid weight components and the role of the geometric superfluid contribution are discussed.' author: - Aleksi Julku - Long Liang - Päivi Törmä bibliography: - 'bib\_soc.bib' title: 'Superfluid weight and Berezinskii-Kosterlitz-Thouless temperature of spin-imbalanced and spin-orbit-coupled Fulde-Ferrell phases in lattice systems' --- Introduction ============ Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superfluid states, identified by finite center-of-mass Cooper pairing momenta [@ff:1964; @larkin:1964], have gained widespread interest since their existence was predicted in the 1960s [@casalbuoni:2004]. Traditionally, FFLO states are considered in the context of spin-imbalanced degenerate Fermi gases where finite momenta of condensed Cooper pairs originate from the mismatch between the Fermi surfaces of two pairing fermion species [@radzihovsky:2010; @kinnunen:2018].
In such spin-polarized systems magnetism and superfluidity, usually thought to be incompatible with each other, co-exist and the superfluid order parameter is spatially varying, in contrast to the conventional Bardeen-Cooper-Schrieffer (BCS) pairing states characterized by the uniform order parameter and the absence of magnetism. Realizing such spin-polarized FFLO states is challenging due to the requirement for large imbalance which in turn yields small superconducting order parameters and low critical temperatures. In recent years, a very different physical mechanism for realizing FFLO phases, namely the introduction of spin-orbit coupling (SOC) and Zeeman fields, has been investigated in many theoretical studies [@zheng:2013; @wu:2013; @liu:2013; @michaeli:2012; @xu:2014; @chen:2013; @cao:2014; @qu:2013; @zhang:2013; @huang:2017; @dong:2013; @hu:2013; @zhou:2013; @iskin:2013; @iskin:2013b; @wu:2013b; @seo:2013; @liu:2012; @qu:2014; @zheng:2014; @guo:2018; @guo:2017], for a review see [@kinnunen:2018]. The advantage of these SOC-induced FFLO states is the absence of large spin polarizations as now finite Cooper pairing momenta originate from the deformation of the single-particle band dispersions and not from the mismatch of Fermi surfaces. As large polarizations are not needed, SOC-induced FFLO states might have higher critical temperatures than conventional imbalance-induced FFLO phases. Despite many theoretical studies supporting the existence of FFLO phases, direct observation of such exotic superfluid states has been lacking [@casalbuoni:2004; @beyer:2013]. For studying the FFLO state experimentally, ultracold Fermi gas systems are promising as they provide exact control of system parameters such as the spatial dimensionality, interaction strengths between the particles, and the system geometry [@esslinger:2010; @bloch:2008; @bloch:2012; @torma:2014]. 
Ultracold gas experiments performed with quasi-one-dimensional population-imbalanced atomic gases have been shown to be consistent with the existence of the FFLO state [@liao:2010] but unambiguous proof is still missing. In addition to conventional spin-imbalanced quantum gas experiments, recently also synthetic spin-orbit coupling and Zeeman fields have been realized in ultracold gas experiments [@lin:2011; @wang:2012; @cheuk:2012; @zhang:2012; @qu:2013_2], which makes it possible to investigate SOC-induced FFLO states as well. As SOC-induced FFLO states have been predicted to be stable in a larger parameter regime than conventional spin-imbalanced FFLO phases [@xu:2014], synthetic SOC could provide a way to realize FFLO experimentally in ultracold gas systems [@huang:2017]. Low dimensionality has been predicted to favor FFLO pairing [@parish:2007; @koponen:2008]. However, in two- and lower-dimensional systems thermal phase fluctuations of the Cooper pair wave functions prevent the formation of true superfluid long-range order as stated by the Mermin-Wagner theorem [@mermin:1966]. Instead, only quasi-long range order is possible. In two dimensions, the phase transition from a normal Fermi gas to a superfluid state of quasi-long range order is determined by the Berezinskii-Kosterlitz-Thouless (BKT) transition temperature $T_{BKT}$ [@kosterlitz:1973]. Below $T_{BKT}$ the system is a superfluid and above $T_{BKT}$ superfluidity is lost. In recent years, SOC-induced FFLO phases in two-dimensional systems have gained considerable attention [@qu:2013; @zhang:2013; @xu:2014; @wu:2013; @zheng:2014; @iskin:2013b; @wu:2013b]. In these systems it has been argued that SOC accompanied by an in-plane Zeeman field would yield FFLO states. Furthermore, in [@qu:2013; @zhang:2013] it was predicted that in the presence of the out-of-plane Zeeman field, i.e. spin imbalance, SOC-induced FFLO states could be topologically non-trivial and support Majorana fermions.
Such topological FFLO states are conceptually new and exotic superconductive phases of matter. However, these studies were performed by applying mean-field theories which do not consider the stability of FFLO states against thermal phase fluctuations in terms of the BKT transition. Superfluidity and BKT transition temperatures of BCS phases in spin-orbit-coupled Fermi gases have been theoretically investigated previously in [@lianyi:2012; @gong:2012; @devreese:2014; @rosenberg:2017] but BKT transitions of FFLO states have remained largely unstudied. As an exception, $T_{BKT}$ for FFLO states in the case of a 2D *continuum* system was explicitly computed in [@yin:2014; @cao:2014; @cao:2015; @xu:2015] where it was shown that SOC is required in order to have a non-zero $T_{BKT}$ for FFLO states. However, in the case of spin-orbit coupled *lattice* systems, $T_{BKT}$ of FFLO phases has not been studied before. Lattice systems are interesting since, due to Fermi surface nesting effects, the FFLO states are expected to be more stable and accessible than in the continuum [@kinnunen:2018; @koponen:2007; @koponen:2008]. FFLO pairing states can be classified into two main categories: Fulde-Ferrell (FF) and Larkin-Ovchinnikov (LO) phases. In the case of FF, the Cooper pair wave function $\Delta(\textbf{r})$ is a plane wave associated with a single pairing momentum so that it has a uniform amplitude but a spatially oscillating complex phase. The LO wave function, on the contrary, consists of two plane waves of opposite momenta and therefore has spatially varying amplitude. In spin-imbalanced systems without SOC, it has been shown, at the mean-field level, that in a square lattice the LO states should be slightly more energetically favorable than FF states [@baarsma:2016], whereas in the presence of SOC both FF and LO states can exist as was shown in [@xu:2014].
Moreover, in [@guo:2018; @guo:2017; @iskin:2013b] the existence of topologically non-trivial FFLO phases in square and triangular lattices was predicted. However, studies presented in [@xu:2014; @guo:2018; @guo:2017; @iskin:2013b] did not consider the stability of FFLO phases against thermal phase fluctuations. In this work we investigate the stability of FF phases in lattice systems with and without SOC by calculating the BKT transition temperature $T_{BKT}$. For a superconducting system the BKT temperature depends on the superfluid weight $D^s$, which is responsible for the dissipationless electric current and the Meissner effect, two fundamental properties of superconductors [@scalapino:1992; @scalapino:1993]. In our study we develop a general theory for obtaining $D^s$ in any kind of lattice geometry in the presence of SOC and Zeeman fields, and apply the theory to a square lattice. We show that FF states in a square lattice indeed have a finite $T_{BKT}$ with and without SOC, which is of fundamental importance as well as a prerequisite for their experimental observation. Topological FF states created by the interplay of SOC and Zeeman fields are identified with the Chern numbers $C = \{\pm 1,-2\}$, and we explain how different topological FF phases can be distinguished by investigating the momentum density profiles, which are experimentally accessible quantities. Additionally, we compare the superfluid weight components in orthogonal spatial directions. We also compute the so-called geometric superfluid weight component, which is a recently discovered superfluid contribution that depends on the geometric properties of the single-particle Bloch functions [@peotta:2015; @liang:2017]. In our study we discard the existence of LO phases as the LO ansatzes break the translational invariance, which is required for deriving the superfluid weight in a simple form.
Ignoring LO states, however, is not an issue because we are interested in the stability and BKT transition temperatures of exotic superfluid states: if there exist LO states more stable than the FF states we find, then the BKT transition temperatures of those LO states are higher than the temperatures we obtain for FF states. Therefore, our results can be considered conservative estimates. Furthermore, in [@xu:2014; @guo:2018] LO states were argued to exist when the superfluid pairing occurs within both helicity branches of a spin-orbit coupled square lattice. Thus, by studying the pairing amplitude profiles, we can deduce in which parts of our parameter space LO states would be more stable than the FF states we study. The rest of the article is structured as follows. In the next section we provide expressions for the superfluid weight and thus for $T_{BKT}$ in the presence of SOC in the case of an arbitrary lattice geometry. In section \[section\_three\] we apply our equations to a spin-orbit-coupled square lattice and show $T_{BKT}$ for various system parameters. We also discuss the topological properties of the system, and the different components of the superfluid weight. Lastly, in section \[section\_four\] we present concluding remarks and an outlook for future research. Derivation of the superfluid weight in the presence of SOC for an arbitrary lattice geometry {#section_two} ============================================================================================ In this section we derive the expressions for the superfluid weight in the framework of BCS mean-field theory by applying linear response theory in a very similar way as was done in [@liang:2017].
We consider the following two dimensional Fermi-Hubbard Hamiltonian $$\begin{aligned} H = &\sum_{i,j,\alpha,\beta,\sigma,\sigma'}t_{i\alpha\sigma,j\beta\sigma'}c^\dag_{i\alpha\sigma}c_{j\beta\sigma'} - \sum_{i\alpha\sigma}\mu_{\sigma}c^\dag_{i\alpha\sigma}c_{i\alpha\sigma} +U\sum_{i\alpha} c^\dag_{i\alpha\uparrow}c_{i\alpha\uparrow}c^\dag_{i\alpha\downarrow}c_{i\alpha\downarrow}\end{aligned}$$ where $c^\dag_{i\alpha\sigma}$ creates a fermion in the $\alpha$-orbital of the $i$th unit cell with spin $\sigma \in \{\uparrow, \downarrow \}$. The first term describes the hopping processes which in addition to usual kinetic hopping terms ($\sigma = \sigma'$) can now also include spin-flipping terms ($\sigma \neq \sigma'$) required to take into account the spin-orbit coupling contribution. In the second term $\mu_\sigma$ is the spin-dependent chemical potential and the last term is the attractive on-site Hubbard interaction characterized by the coupling strength $U <0$. The above Hamiltonian describes any two-dimensional lattice geometry with arbitrary hopping and spin-flip terms, including the Rashba spin-orbit coupled two-component Fermi gases considered in this work. We treat the interaction term by performing the standard mean-field approximation $Uc^\dag_{i\alpha\uparrow}c_{i\alpha\uparrow}c^\dag_{i\alpha\downarrow}c_{i\alpha\downarrow} \approx \Delta_{i\alpha}c_{i\alpha\downarrow}c_{i\alpha\uparrow} + \Delta_{i\alpha}^\dag c^\dag_{i\alpha\uparrow}c^\dag_{i\alpha\downarrow}$ where $\Delta_{i\alpha} = U\langle c_{i\alpha\downarrow}c_{i\alpha\uparrow} \rangle$ is the superfluid order parameter or in other words the wavefunction of the condensed Cooper pairs. 
To investigate the properties of the usual BCS and exotic inhomogeneous Fulde-Ferrell superfluid phases, we let the order parameter take the form $\Delta_{i\alpha} = \Delta_{\alpha}\exp[i \tilde{\textbf{q}} \cdot \textbf{r}_i]$, where $\tilde{\textbf{q}}$ is the Cooper-pair momentum and $\textbf{r}_i$ is the spatial coordinate of the $i$th unit cell. The momentum of the Cooper pairs in an FF phase is finite, in contrast to a normal BCS phase where the Cooper pairs do not carry momentum. By performing a Fourier transform to momentum space $c_{i\alpha\sigma} = (1/\sqrt{N}) \sum_{\textbf{k}}e^{i\textbf{k}\cdot \textbf{r}_i}c_{\sigma\textbf{k}\alpha}$, where $N$ is the number of unit cells, one can rewrite the Hamiltonian in the form (discarding the constant terms) $$\begin{aligned} \label{mfham} H = \sum_{\textbf{k}}\Big ( & \begin{bmatrix} c_{\uparrow {\textbf{k}}}^\dag & c_{\downarrow {\textbf{k}}}^\dag \end{bmatrix} \begin{bmatrix} \mathcal{H}_\uparrow({\textbf{k}}) - \mu_\uparrow & \Lambda({\textbf{k}}) \\ \Lambda^\dag({\textbf{k}}) & \mathcal{H}_\downarrow ({\textbf{k}}) - \mu_\downarrow \end{bmatrix} \begin{bmatrix} c_{\uparrow {\textbf{k}}} \\ c_{\downarrow {\textbf{k}}} \end{bmatrix}\nonumber \\ &+ c_{\uparrow {\textbf{k}}}^\dag \Delta c_{\downarrow {\tilde{\bf{q}}}-{\textbf{k}}}^\dag + c_{\downarrow {\tilde{\bf{q}}}-{\textbf{k}}} \Delta^\dag c_{\uparrow {\textbf{k}}} \Big ),\end{aligned}$$ where $c_{\sigma{\textbf{k}}}^\dag = [c_{\sigma\textbf{k}1}, c_{\sigma\textbf{k}2},...,c_{\sigma\textbf{k}M}]$ and $\Delta = \textrm{diag}(\Delta_1,\Delta_2,...,\Delta_M)$, $M$ being the number of orbitals within a unit cell. Furthermore, $\mathcal{H}_\sigma({\textbf{k}})$ and $\Lambda({\textbf{k}})$ are the Fourier transforms of the kinetic hopping and the spin-flip terms, respectively.
To write our Hamiltonian in a more compact form, let us introduce a four-component spinor $\Psi_{\textbf{k}}$ and rewrite the Hamiltonian as follows: $$\label{ham1} H = \frac{1}{2}\sum_{\textbf{k}}\Psi^\dag_{\textbf{k}}\mathcal{H}_{\textbf{k}}\Psi_{\textbf{k}},$$ where $$\begin{aligned} \label{psispinor} &\Psi_{\textbf{k}}= \begin{bmatrix} c_{\uparrow {\textbf{k}}} \\ c_{\downarrow {\textbf{k}}} \\ c_{\downarrow {\tilde{\bf{q}}}- {\textbf{k}}}^\dag \\ -c_{\uparrow {\tilde{\bf{q}}}- {\textbf{k}}}^\dag \end{bmatrix} \equiv \begin{bmatrix} \psi_{\textbf{k}}\\ i\tau^y (\psi^\dag_{{\tilde{\bf{q}}}- {\textbf{k}}})^T \end{bmatrix} \equiv \begin{bmatrix} \psi_{\textbf{k}}\\ \psi_{2,{\textbf{k}}} \end{bmatrix}, \\ \label{hamp} &\mathcal{H}_{\textbf{k}}= \begin{bmatrix} \mathcal{H}_p({\textbf{k}}) -\tilde{\mu} & \tilde{\Delta} \\ \tilde{\Delta}^\dag & - \mathcal{H}_h ({\textbf{k}}- {\tilde{\bf{q}}}) + \tau^y \tilde{\mu} \tau^y \end{bmatrix}, \\ &\mathcal{H}_p({\textbf{k}}) = \begin{bmatrix} \mathcal{H}_\uparrow({\textbf{k}}) & \Lambda({\textbf{k}}) \\ \Lambda^\dag({\textbf{k}}) & \mathcal{H}_\downarrow ({\textbf{k}}) \end{bmatrix}, \\ &\mathcal{H}_h({\textbf{k}}) = -i \tau^y \mathcal{H}_p^*(-{\textbf{k}})i \tau^y, \\ &\tilde{\Delta} = \begin{bmatrix} \Delta & 0 \\ 0 & \Delta \end{bmatrix}, \\ &\tilde{\mu} = \begin{bmatrix} \mu_\uparrow I_M & 0 \\ 0 & \mu_\downarrow I_M \end{bmatrix}.\end{aligned}$$ Here $\tau^y = \hat{\sigma}_y \otimes I_M$, where $I_M$ is an $M\times M$ identity matrix and $\hat{\sigma} = [\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z]$ are the Pauli matrices. One should note that the single-particle Hamiltonian is no longer simply $\mathcal{H}_\sigma$ but $\mathcal{H}_p$, in which the two spin components are coupled via $\Lambda ({\textbf{k}})$.
In two dimensions the total superfluid weight $D^s$ is a $2\times2$ tensor which reads $$\begin{aligned} D^s = \begin{bmatrix} D^s_{xx} & D^s_{xy} \\ D^s_{yx} & D^s_{yy} \end{bmatrix},\end{aligned}$$ where $x$ and $y$ are the spatial dimensions. To compute the superfluid weight tensor elements $D^s_{\mu\nu}$, we exploit the fact that at the mean-field level $D^s_{\mu\nu}$ is the long-wavelength, zero-frequency limit of the current-current response function $K_{\mu\nu}$  [@scalapino:1993], that is $$\begin{aligned} \label{Ds_cc_response} D^s_{\mu\nu} &= \lim_{{\textbf{q}}\rightarrow 0} \lim_{\omega \rightarrow 0} K_{\mu\nu}({\textbf{q}},\omega) \nonumber \\ &= \lim_{{\textbf{q}}\rightarrow 0} \lim_{\omega \rightarrow 0} \Big[ \langle T_{\mu\nu} \rangle - i\int_0^\infty dt e^{i\omega t} \langle [ j^p_\mu({\textbf{q}},t),j^p_\nu(-{\textbf{q}},0)] \rangle \Big],\end{aligned}$$ where $j^p({\textbf{q}})$ and $T$ are the paramagnetic and diamagnetic current operators, respectively. The current operators can be derived by applying the Peierls substitution to the single-particle Hamiltonian $\mathcal{H}_p$ such that the hopping elements, both kinetic and spin-flipping terms, are modified by a phase factor of $\exp[-i \textbf{A}\cdot (\textbf{r}_j - \textbf{r}_i)]$ where $\textbf{A}$ is the vector potential. By assuming the phase factor to be spatially slowly varying, we can expand the Hamiltonian up to second order in $A$ to obtain $H = j^p_\mu A_\mu + T_{\mu\nu} A_\mu A_\nu/2$. 
In our case the $\mu$-component of the paramagnetic and diamagnetic current operators can be cast in the form $$\begin{aligned} \label{paramagnetic} j_\mu^p({\textbf{q}}) &= \sum_{\textbf{k}}\psi_{{\textbf{k}}+ {\textbf{q}}}^\dag \partial_\mu \mathcal{H}_p({\textbf{k}}+ {\textbf{q}}/2) \psi_{\textbf{k}}\nonumber \\ &= \sum_{\textbf{k}}\Psi_{{\textbf{k}}+ {\textbf{q}}}^\dag \partial_\mu \mathcal{H}({\textbf{k}}+ {\textbf{q}}/2)P_+ \Psi_{\textbf{k}}\end{aligned}$$ and $$\begin{aligned} \label{diamagnetic} T_{\mu\nu}({\textbf{q}}) &= \sum_{\textbf{k}}\psi_{\textbf{k}}^\dag \partial_\mu \partial_\nu \mathcal{H}_p({\textbf{k}}) \psi_{\textbf{k}}\nonumber \\ &= \sum_{\textbf{k}}\Psi_{\textbf{k}}^\dag \partial_\mu \partial_\nu \mathcal{H}({\textbf{k}})P_+ \Psi_{\textbf{k}}, \end{aligned}$$ where $P_+ = (I_{4M}+ \hat{\sigma}^z \otimes I_{2M})/2$ and more generally $P_\pm = (I_{4M}\pm \hat{\sigma}^z \otimes I_{2M})/2$. We are interested in computing the current-current response function $K_{\mu\nu}({\textbf{q}},\omega)$ which in the limit ${\textbf{q}}\rightarrow 0$, $\omega=0$ yields the superfluid weight $D^s_{\mu \nu}$. To this end, we first define a Green’s function $G(\tau,{\textbf{k}}) = - \langle T \Psi_{\textbf{k}}(\tau) \Psi^\dag_{\textbf{k}}(0) \rangle$. In the Matsubara frequency space this reads $G(i\omega_n,{\textbf{k}}) = 1/(i\omega_n - \mathcal{H}({\textbf{k}}))$, which follows from the quadratic form of the Hamiltonian (\[ham1\]). Now, the current operators (\[paramagnetic\])-(\[diamagnetic\]), the Green’s function and the Hamiltonian all have the same structure as those for conventional BCS theory developed in [@liang:2017]. Thus one can compute, by applying the Matsubara formalism and analytic continuation, the current-current response function in a similar fashion as done in [@liang:2017].
One starts from (\[Ds\_cc\_response\]), inserts the expressions (\[paramagnetic\])-(\[diamagnetic\]) for the current operators, deploys the Matsubara formalism, applies the diagrammatic expansion up to first order diagrams and obtains $$\begin{aligned} \label{ds_int} K_{\mu\nu}({\textbf{q}},i\omega_n) =& \frac{1}{\beta} \sum_{\textbf{k}}\sum_{\Omega_m} \textrm{Tr} \Big[\partial_\mu \partial_\nu \mathcal{H}({\textbf{k}}) P_+ G(i\Omega_m,{\textbf{k}}) \nonumber \\ &+\partial_\mu \mathcal{H} ({\textbf{k}}+ {\textbf{q}}/2) P_+ G(i\omega_n+i\Omega_m,{\textbf{k}}+{\textbf{q}}) \nonumber \\&\times \partial_\nu \mathcal{H}({\textbf{k}}+ {\textbf{q}}/2) \hat{\gamma}_z G(i\Omega_m,{\textbf{k}}) \Big], \end{aligned}$$ where $\beta = 1/k_B T$, $\hat{\gamma}_z = \hat{\sigma}_z \otimes I_{2M}$, and $\omega_n$ ($\Omega_m$) are bosonic (fermionic) Matsubara frequencies. From (\[ds\_int\]) one eventually obtains (see appendix \[app:derivation\]): $$\begin{aligned} \label{sfw} D^s_{\mu\nu} =& K_{\mu\nu}({\textbf{q}}\rightarrow 0,0) \nonumber \\=& 2\sum_{{\textbf{k}},i,j} \frac{n(E_{j,{\textbf{k}}}) - n(E_{i,{\textbf{k}}})}{E_{i,{\textbf{k}}} - E_{j,{\textbf{k}}}} \Big( \langle \phi_i({\textbf{k}})| \partial_\mu \mathcal{H}({\textbf{k}}) P_+ | \phi_j({\textbf{k}}) \rangle \nonumber \\ &\times\langle \phi_j({\textbf{k}}) | P_- \partial_\nu \mathcal{H}({\textbf{k}}) | \phi_i({\textbf{k}}) \rangle \Big), \end{aligned}$$ where $n(E)$ is the Fermi-Dirac distribution and $|\phi_i({\textbf{k}}) \rangle$ are the eigenvectors of $\mathcal{H}({\textbf{k}})$ with the eigenvalues $E_{i,{\textbf{k}}}$. For $i=j$, the prefactor should be understood as $-\partial_{E_i}n(E_i)$, which vanishes at zero temperature if the quasi-particle spectrum is gapped. For gapless excitations, $-\partial_{E_i}n(E_i)$ gives a finite contribution even at zero temperature. We have benchmarked our superfluid weight relation against earlier studies as discussed in appendix \[app:benchmark\].
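In a numerical implementation of the superfluid weight formula, the degenerate $i=j$ case of the prefactor can be handled explicitly through the limit $-\partial_E n(E)=\beta\, n(E)(1-n(E))$. A minimal sketch (function names are ours; units with $k_B=1$):

```python
import math

def fermi(E, beta):
    """Fermi-Dirac distribution n(E) at inverse temperature beta."""
    return 1.0 / (math.exp(beta * E) + 1.0)

def weight_prefactor(Ei, Ej, beta, tol=1e-9):
    """Prefactor (n(E_j) - n(E_i)) / (E_i - E_j) of the superfluid weight sum,
    with the degenerate i = j case replaced by its limit
    -dn/dE = beta * n(E) * (1 - n(E))."""
    if abs(Ei - Ej) < tol:
        n = fermi(Ei, beta)
        return beta * n * (1.0 - n)
    return (fermi(Ej, beta) - fermi(Ei, beta)) / (Ei - Ej)
```

Since $n(E)$ is decreasing, the off-diagonal prefactor is non-negative, and the function is continuous across the degenerate point.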
The BKT transition temperature $T_{BKT}$ can be obtained from the superfluid weight tensor by using the generalized KT-Nelson criterion [@nelson:1977] for an anisotropic superfluid [@cao:2014; @xu:2015]: $$\begin{aligned} T_{BKT} = \frac{\pi}{8}\sqrt{\det[D^s(T_{BKT})]}.\end{aligned}$$ In the computations presented in this work $D^s$ is nearly constant at low temperatures, and therefore we can safely use the approximation $$\begin{aligned} \label{tbkt} T_{BKT} \approx \frac{\pi}{8}\sqrt{\det[D^s(T=0)]}.\end{aligned}$$ In [@peotta:2015; @liang:2017] it was shown that in the case of conventional BCS states the superfluid weight can be divided into two parts: the so-called conventional and geometric contributions, $D^s_{\mu \nu} = D^s_{\textrm{conv},\mu \nu} + D^s_{\textrm{geom},\mu \nu}$. The conventional term $D^s_{\textrm{conv},\mu \nu}$ depends only on the single-particle energy dispersion relations, whereas the geometric part $D^s_{\textrm{geom},\mu \nu}$ comprises the geometric properties of the Bloch functions. In a similar fashion as in [@liang:2017], also in our case the superfluid weight can be split into conventional and geometric parts, such that $D^s_{\textrm{conv},\mu \nu}$ is a function of the single-particle dispersions of $\mathcal{H}_p$ and $\mathcal{H}_h$, and correspondingly $D^s_{\textrm{geom},\mu \nu}$ depends on the Bloch functions of $\mathcal{H}_p$ and $\mathcal{H}_h$. The separation of $D^s$ into $D^s_{\textrm{geom}}$ and $D^s_{\textrm{conv}}$ is shown in appendix \[app:a\]. Rashba-spin-orbit-coupled fermions in a square lattice {#section_three} ====================================================== The above expression for the superfluid weight holds for an arbitrary multiband lattice system. Here we focus on the simplest possible case, namely the square lattice geometry, where the so-called Rashba spin-orbit coupling is applied to induce Fulde-Ferrell phases.
By computing the superfluid weight and thus the BKT transition temperature, one can investigate the stability of SOC-induced FF phases versus conventional FF phases induced by spin imbalance. We start by writing the Hamiltonian in the form $$\begin{aligned} H =& -t\sum_{\langle i,j \rangle,\sigma} c_{i\sigma}^\dag c_{j\sigma} - \mu \sum_{i\sigma} c_{i\sigma}^\dag c_{i\sigma} + U \sum_i c_{i\uparrow}^\dag c_{i\uparrow} c_{i\downarrow}^\dag c_{i\downarrow} \nonumber \\ &+ H_{z,in} + H_{z,out} + H_{SOC},\end{aligned}$$ where the first term is the usual nearest-neighbour hopping term (we discard the orbital indices, as a square lattice has only one lattice site per unit cell). The last three terms are the in-plane Zeeman field, the out-of-plane Zeeman field and the Rashba coupling, respectively. They are $$\begin{aligned} &H_{z,in} = h_x \sum_i c_i^\dag \hat{\sigma}_x c_i \\ &H_{z,out} = h_z \sum_i c_i^\dag \hat{\sigma}_z c_i \\ &H_{SOC} = i\lambda \sum_{\langle i,j \rangle} c_i^\dag(\textbf{d}_{ij} \times \hat{\sigma})_z c_j.\end{aligned}$$ Here $\textbf{d}_{ij}$ is the unit vector connecting the nearest-neighbour sites $i$ and $j$, $\hat{\sigma} = [\hat{\sigma}_x, \hat{\sigma}_y, \hat{\sigma}_z]^{T}$ is the vector of Pauli matrices and $c_i = [c_{i\uparrow},c_{i\downarrow}]^T$. The out-of-plane Zeeman field can be absorbed into the spin-dependent chemical potentials by writing $\mu_\uparrow = \mu + h_z$ and $\mu_\downarrow = \mu - h_z$. Furthermore, due to the in-plane Zeeman field and the Rashba spin-flipping terms, $\Lambda({\textbf{k}})$ has the form $\Lambda({\textbf{k}}) = h_x - 2\lambda (\sin k_y + i \sin k_x)$.
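For concreteness, the resulting $2\times 2$ Bloch Hamiltonian can be assembled explicitly as $\xi_{\textbf{k}} I + h_z \hat{\sigma}_z$ plus the spin-flip part encoded in $\Lambda({\textbf{k}})$. A minimal sketch (Python; the default parameter values are the illustrative ones used elsewhere in the text):

```python
import numpy as np

def bloch_hamiltonian(kx, ky, t=1.0, mu=0.95, hx=0.658, hz=0.8, lam=0.75):
    """Single-particle Bloch Hamiltonian of the square lattice:
    hopping + chemical potential + Zeeman fields + Rashba SOC.
    Spin-flip element: Lambda(k) = hx - 2*lam*(sin ky + 1j*sin kx)."""
    xi = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
    lam_k = hx - 2.0 * lam * (np.sin(ky) + 1j * np.sin(kx))
    return np.array([[xi + hz, lam_k],
                     [np.conj(lam_k), xi - hz]])

# The two helicity branches across the first Brillouin zone:
L = 64
ks = 2.0 * np.pi * np.arange(L) / L - np.pi
bands = np.array([[np.linalg.eigvalsh(bloch_hamiltonian(kx, ky))
                   for ky in ks] for kx in ks])
```

Diagonalizing over the Brillouin zone gives the two helicity branches; with $h_x \neq 0$ they are deformed asymmetrically along $k_y$, which is the origin of the finite Cooper pair momentum $\tilde{q}_y$.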
We determine the order parameter amplitude $\Delta$ and the Cooper pair momentum $\tilde{{\textbf{q}}}$ self-consistently by minimizing the grand canonical thermodynamic potential $\Omega(\Delta,\tilde{{\textbf{q}}}) = -k_B T \log[\textrm{Tr}(e^{-\beta H})]$, which in the mean-field framework at $T=0$ reads $$\begin{aligned} \label{therm_omega} \Omega_{\textrm{M.F.}} = - \frac{\Delta^2}{U} + \frac{1}{2}\sum_{{\textbf{k}},\nu,\eta} E^\eta_{{\textbf{k}},\nu}\Theta(-E^\eta_{{\textbf{k}},\nu}),\end{aligned}$$ where $\Theta(x)$ is the Heaviside step function and $E^\eta_{{\textbf{k}},\nu}$ are the eigenvalues of $\mathcal{H}_{\textbf{k}}$. Here $\eta = \{+,- \}$ labels the quasi-particle and quasi-hole branches, respectively, and $\nu = \{1,2\}$ the helicity branches split by the spin-orbit coupling. The quasi-particle branches are taken to be the two highest eigenvalues of $\mathcal{H}_{\textbf{k}}$. In the thermodynamic potential above we have discarded the constant term $\sum_{\textbf{k}}\textrm{Tr}[\mathcal{H}_h({\textbf{k}}- \tilde{{\textbf{q}}}) - \tau^y\tilde{\mu} \tau^y]$, which is not needed when one minimizes $\Omega_{\textrm{M.F.}}$. Consistent with previous lattice studies [@xu:2014; @guo:2018; @guo:2017], the Cooper pair momentum is in the $y$-direction, i.e. $\tilde{{\textbf{q}}} = \tilde{q}_y \hat{\textbf{e}}_y$, as the in-plane Zeeman field in the $x$-direction deforms the single-particle dispersions in the $y$-direction. We have numerically checked that solutions with the Cooper pair momentum in the $y$-direction minimize the thermodynamic potential, as discussed in appendix \[app:qbm\]. Once the correct values of $\Delta$ and $\tilde{q}_y$ are found, the superfluid weight can be computed from the expression derived in the previous section.
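The minimization can be carried out with a simple grid search over $(\Delta, \tilde{q}_y)$. The sketch below is our own: it assumes a standard FF BdG form in the basis $(c_{{\textbf{k}}\uparrow}, c_{{\textbf{k}}\downarrow}, c^\dag_{\tilde{{\textbf{q}}}-{\textbf{k}}\uparrow}, c^\dag_{\tilde{{\textbf{q}}}-{\textbf{k}}\downarrow})$ with singlet pairing $\Delta\, i\hat{\sigma}_y$ (the exact matrix $\mathcal{H}_{\textbf{k}}$ is defined earlier in the text and may differ in conventions), normalizes $\Omega_{\textrm{M.F.}}$ per lattice site, and is run in the plain BCS limit $h_x = h_z = \lambda = 0$, where the minimum must sit at $\tilde{q}_y = 0$.

```python
import numpy as np
from itertools import product

t, U = 1.0, -4.0

def h_p(k, mu, hx, hz, lam):
    """Single-particle Bloch Hamiltonian (hopping + Zeeman + Rashba)."""
    xi = -2.0 * t * (np.cos(k[0]) + np.cos(k[1])) - mu
    lam_k = hx - 2.0 * lam * (np.sin(k[1]) + 1j * np.sin(k[0]))
    return np.array([[xi + hz, lam_k], [np.conj(lam_k), xi - hz]])

def bdg(k, q, delta, mu, hx, hz, lam):
    """Assumed FF BdG matrix; pairing block Delta * i*sigma_y."""
    pair = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])
    qk = (q[0] - k[0], q[1] - k[1])
    top = np.hstack([h_p(k, mu, hx, hz, lam), pair])
    bot = np.hstack([pair.conj().T, -h_p(qk, mu, hx, hz, lam).T])
    return np.vstack([top, bot])

def omega_mf(delta, qy, kgrid, mu=0.0, hx=0.0, hz=0.0, lam=0.0):
    """Mean-field grand potential per site at T=0:
    Delta^2/|U| + (1/2N) * sum of negative BdG eigenvalues."""
    acc = 0.0
    for k in kgrid:
        E = np.linalg.eigvalsh(bdg(k, (0.0, qy), delta, mu, hx, hz, lam))
        acc += 0.5 * E[E < 0.0].sum()
    return delta**2 / abs(U) + acc / len(kgrid)

# grid search over the order parameter and allowed pair momenta
L = 12
ks = 2.0 * np.pi * np.arange(L) / L
kgrid = [(kx, ky) for kx in ks for ky in ks]
deltas = np.linspace(0.0, 2.5, 21)
qys = 2.0 * np.pi * np.arange(-2, 3) / L
scan = [(d, qy, omega_mf(d, qy, kgrid)) for d, qy in product(deltas, qys)]
d_opt, qy_opt, om_min = min(scan, key=lambda s: s[2])
```

Switching on $h_x$ and $\lambda$ (with a finer $\tilde{q}_y$ grid) should shift the minimum to finite $\tilde{q}_y$, as discussed in the text.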
We investigate the topological properties by computing the Chern number $C$ of our interacting system, integrating the Berry curvature $\Gamma^\eta_\nu({\textbf{k}})$ associated with the quasi-hole branches $\eta = -$ over the first Brillouin zone: $$\begin{aligned} C &= \frac{1}{2\pi}\sum_{\nu=1}^2 \int_{-\pi}^\pi \int_{-\pi}^\pi dk_x dk_y \Gamma^{-}_\nu({\textbf{k}}).\end{aligned}$$ The Berry curvature can be expressed explicitly in terms of the eigenvalues $E^\eta_{{\textbf{k}},\nu}$ of $\mathcal{H}_{\textbf{k}}$ and the corresponding eigenvectors $| n({\textbf{k}}) \rangle$, where $n = (\eta,\nu)$, as $$\begin{aligned} \Gamma^\eta_\nu({\textbf{k}}) = i \sum_{n \neq n'}\frac{\langle n | \partial_{k_x} \mathcal{H}_{\textbf{k}}| n' \rangle \langle n' | \partial_{k_y} \mathcal{H}_{\textbf{k}}| n \rangle - (k_x \leftrightarrow k_y)}{\big( E^\eta_{{\textbf{k}},\nu} - E^{\eta'}_{{\textbf{k}},\nu'} \big)^2}.\end{aligned}$$ Results ======= Phase diagrams and the BKT temperature -------------------------------------- By deploying our mean-field formalism we determine the phase diagrams and $T_{BKT}$ as functions of the Zeeman fields and the average chemical potential $\mu = (\mu_\uparrow + \mu_\downarrow)/2$. We fix the temperature to $T=0$ since, as argued above, the zero-temperature superfluid weight gives a good estimate for $T_{BKT}$. In all the computations we choose $t = 1$ and $U= -4$. Furthermore, we let $\tilde{q}_y$ take only discrete values in the first Brillouin zone, $\tilde{q}_y \in \{\frac{\pi n}{L}, n=1,2,\ldots,L\}$, where $L$ is the length of the lattice in one direction, i.e. the total number of lattice sites is $N = L \times L$. In all of our computations we choose $L=104$ and deploy periodic boundary conditions.
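Numerically, the Berry-curvature integral is most robustly evaluated with the gauge-invariant plaquette (Fukui-Hatsugai-Suzuki) discretization, which sums the Berry phases of elementary momentum-space plaquettes and yields an integer by construction. The sketch below (Python) is generic; for a self-contained check we apply it to a standard two-band Chern-insulator model rather than to $\mathcal{H}_{\textbf{k}}$ itself.

```python
import numpy as np

def chern_number(hk, bands, L=40):
    """Plaquette-discretized Chern number of the selected bands.
    hk(kx, ky) returns a Bloch (or BdG) matrix; bands is a list of
    band indices (ascending-eigenvalue order) to sum over."""
    ks = 2.0 * np.pi * np.arange(L) / L
    u = [[np.linalg.eigh(hk(kx, ky))[1][:, bands] for ky in ks] for kx in ks]
    c = 0.0
    for i in range(L):
        for j in range(L):
            u00, u10 = u[i][j], u[(i + 1) % L][j]
            u11, u01 = u[(i + 1) % L][(j + 1) % L], u[i][(j + 1) % L]
            # Berry phase of one plaquette from the four link variables
            Uplq = (np.linalg.det(u00.conj().T @ u10)
                    * np.linalg.det(u10.conj().T @ u11)
                    * np.linalg.det(u11.conj().T @ u01)
                    * np.linalg.det(u01.conj().T @ u00))
            c += np.angle(Uplq)
    return c / (2.0 * np.pi)

def qwz(kx, ky, m):
    """Two-band check model: sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz."""
    hz = m + np.cos(kx) + np.cos(ky)
    return np.array([[hz, np.sin(kx) - 1j * np.sin(ky)],
                     [np.sin(kx) + 1j * np.sin(ky), -hz]])

c_topo = chern_number(lambda kx, ky: qwz(kx, ky, -1.0), [0])  # |C| = 1 phase
c_triv = chern_number(lambda kx, ky: qwz(kx, ky, -3.0), [0])  # trivial phase
```

For the interacting problem one would pass $\mathcal{H}_{\textbf{k}}$ and the indices of the two quasi-hole branches as `bands`; the plaquette sum then replaces the continuum integral over $\Gamma^-_\nu({\textbf{k}})$.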
![\[fig:1\](a)-(d) Cooper pair momentum $\tilde{q}_y$ and the corresponding BKT temperature $T_{BKT}$ as a function of the Zeeman fields $h_x$ and $h_z$ for the spin-orbit couplings $\lambda =0$ \[(a) and (c)\] and $\lambda =0.75$ \[(b) and (d)\] at $\mu = 0.95$. In (a)-(b) the colors depict the magnitude of $\tilde{q}_y$ and in (c)-(d) the BKT temperature. For $\lambda =0$ all the phases are topologically trivial, whereas for finite SOC there exist topologically non-trivial BCS and FF phases. Labels tFF$_{-1}$, tFF$_{-2}$ and tBCS$_{-2}$ correspond to topologically non-trivial FF and BCS phases with Chern numbers $-1$ and $-2$. In the case of $\lambda = 0.75$ there exist two different FF regions, one with small Cooper pair momentum but large $T_{BKT}$ and one with larger $\tilde{q}_y$ but small $T_{BKT}$. (e) $T_{BKT}$ and $\tilde{q}_y$ as a function of $h_x$ at $h_z = 0$ for $\lambda = 0$ (purple lines) and $\lambda = 0.75$ (blue lines). The three red squares correspond to the cases considered in figure \[fig:2\].](fig1_2704.png){width="0.75\columnwidth"} In figures \[fig:1\](a)-(b) the superfluid phase diagrams in terms of the magnitude of $\tilde{q}_y$ are presented as a function of $h_x$ and $h_z$ at $\mu = 0.95$ for $\lambda = 0$ and $\lambda = 0.75$, respectively, and the corresponding BKT transition temperatures $T_{BKT}$ are shown in figures \[fig:1\](c)-(d). From figure \[fig:1\](a) we see that in the absence of SOC the phase diagram is symmetric with respect to the Zeeman field orientation. This is due to an SO(2) symmetry: under the rotation $\mathcal{U} [c_{i\uparrow},c_{i\downarrow}]^T \mathcal{U}^{-1} = \frac{1}{\sqrt{2}}[c_{i\uparrow} + c_{i\downarrow},c_{i\uparrow} - c_{i\downarrow}]^T \equiv [d_{i\uparrow}, d_{i\downarrow}]^T$ the Hamiltonian remains invariant except that $h_x$ and $h_z$ are interchanged. For small Zeeman fields the BCS phase is the ground state; it becomes unstable against the FF phase only at larger Zeeman field strengths.
One can see from figure \[fig:1\](c) that the BKT temperature is $T_{BKT} \approx 0.25t$ for the BCS phase and roughly $T_{BKT} \approx 0.1t$ for the FF phase. This implies that conventional imbalance-induced FF phases without SOC could be observed in lattice systems, in contrast to continuum systems, where it has been shown that $T_{BKT} = 0$ [@yin:2014]. This is the first time the stability of spin-imbalanced FF states against thermal phase fluctuations has been confirmed in a lattice system. Unlike the case without SOC, the phase diagram shown in figure \[fig:1\](b) for $\lambda = 0.75$ depends on the direction of the total Zeeman field, as SOC together with the in-plane Zeeman field breaks the $SO(2)$ symmetry. The interplay of the SOC and the Zeeman fields stabilizes inhomogeneous superfluidity in larger parameter regions than in the case of conventional spin-imbalanced FF states. Furthermore, by introducing SOC one is able to realize topologically distinct BCS and FF phases. As with $\lambda =0$, at small Zeeman fields there exist topologically trivial BCS states. When $h_x$ is increased, the system enters a non-topological FF phase and eventually, for large enough $h_x$, topological FF states with $C = -1$ (tFF$_{-1}$) and $C= -2$ (tFF$_{-2}$). By applying a large $h_z$ one is able to reach topological BCS and FF phases, tBCS$_{-2}$ and tFF$_{-2}$, characterized by $C = -2$. For large enough Zeeman fields the superfluidity is lost and the system enters the normal (N) state. From figure \[fig:1\](b) we see that in addition to the topological classification, FF phases can be further distinguished by the magnitude of the Cooper pair momentum $\tilde{q}_y$: for intermediate Zeeman field strengths the FF state is characterized by rather small $\tilde{q}_y$, in contrast to the region of large Zeeman fields, where the pairing momenta are comparable to those of the FF states at $\lambda = 0$. The same behavior can be seen in $T_{BKT}$ presented in figure \[fig:1\](d).
We see that in the small-$\tilde{q}_y$ region $T_{BKT}$ is around $0.3t$, whereas in the large-$\tilde{q}_y$ region it is at most roughly $T_{BKT} \approx 0.17t$. Therefore, by deploying SOC one is able to stabilize FF phases considerably against thermal phase fluctuations and increase $T_{BKT}$. This is similar to continuum studies [@cao:2014; @cao:2015; @xu:2015], where it was proposed that FF states could be observed with the aid of SOC. The difference between $\lambda = 0$ and $\lambda = 0.75$ is further demonstrated in figure \[fig:1\](e), where $T_{BKT}$ and $\tilde{q}_y$ for both cases are plotted as a function of $h_x$ at $h_z = 0$. We see that the phase diagram becomes richer and $T_{BKT}$ is increased when SOC is deployed. ![\[fig:extra\]Schematics of the single-particle dispersions in the cases $\lambda =0$, $h_x = 0$ \[(a)-(b)\], $\lambda \neq 0$, $h_x = 0$ \[(c)-(d)\] and $\lambda \neq 0$, $h_x \neq 0$ \[(e)-(f)\]. The upper panels show the dispersions across the first Brillouin zone and the lower ones at $k_x = 0$. Finite SOC splits the degenerate spin-up and spin-down dispersions into two branches, and finite $h_x$ deforms the dispersions non-symmetrically with respect to $k_y=0$. In the lower panels the solid blue and dash-dotted red lines depict the dispersions, the black and red arrows the intraband pairing momenta and the blue dotted lines the Fermi surfaces. Here only the pairing within one band is depicted but in general, depending on the Fermi level and the Zeeman fields, pairing within both bands can occur.
In the presence of interband pairing, the Cooper pair momentum can in general deviate from the $y$-direction.](fig_extra_2404.png){width="1.0\columnwidth"} ![\[fig:2\] Inter- and intraband pairing functions $|\langle c_{{\textbf{k}},n} c_{\tilde{{\textbf{q}}} - {\textbf{k}}, n'} \rangle|$ for $h_x = 0$ \[(a)-(c)\], $h_x = 0.8$ \[(d)-(f)\] and $h_x = 0.9$ \[(g)-(i)\] in the case $\lambda = 0.75$, $\mu = 0.95$ and $h_z = 0$. These three cases correspond to the three red squares in figure \[fig:1\](e). The non-interacting Fermi surfaces are depicted as red (blue) contours for the upper (lower) dispersion band.](fig_2c-_2404.png){width="1.0\columnwidth"} To understand why in the presence of SOC there exist distinct FF regions with considerably different BKT temperatures, we investigate the inter- and intraband pairing functions $\langle c_{{\textbf{k}},n} c_{\tilde{{\textbf{q}}} - {\textbf{k}}, n'} \rangle$, where $ c_{{\textbf{k}},n}$ is the annihilation operator for the $n$th Bloch function of the single-particle Hamiltonian $\mathcal{H}_p({\textbf{k}})$. In the case of a square lattice, $\mathcal{H}_p({\textbf{k}})$ is a $2\times2$ matrix, so we have two energy bands, also called helicity branches. As an example, in figure \[fig:extra\] the single-particle energy dispersion bands are plotted at $h_z = 0$ for $\lambda =0$, $h_x = 0$ \[figures \[fig:extra\](a)-(b)\], $\lambda \neq 0$, $h_x = 0$ \[figures \[fig:extra\](c)-(d)\] and $\lambda \neq 0$, $h_x \neq 0$ \[figures \[fig:extra\](e)-(f)\]. Without SOC, the single-particle dispersions for the spin-up and spin-down components are degenerate \[figures \[fig:extra\](a)-(b)\]. Turning on the spin-orbit coupling lifts this degeneracy \[figures \[fig:extra\](c)-(d)\], and when $h_x$ is also applied, the dispersion becomes deformed non-symmetrically with respect to $k_y = 0$ \[figures \[fig:extra\](e)-(f)\].
This deformation of the dispersions results in intraband pairing with finite momentum in the $y$-direction when $h_x$ is large enough, as there exists a momentum mismatch of $\tilde{q}_y\hat{\textbf{e}}_y$ between the pairing fermions. If interband pairing also occurs, the momentum mismatch can exist in the $x$-direction as well, and consequently the Cooper pair momentum is not necessarily in the $y$-direction. However, in the computations presented in this work $\tilde{{\textbf{q}}}$ has been numerically checked to be always in the $y$-direction. With figures \[fig:extra\](e)-(f) one can also understand the fundamental differences between conventional spin-imbalance-induced and SOC-induced FF states in terms of spontaneously broken symmetries. Both cases break the time-reversal symmetry (TRS) spontaneously, and in the case of spin-imbalanced FF the rotational symmetry within the lattice plane is spontaneously broken as well. In other words, for imbalance-induced FF states it is energetically equally favorable for the Cooper pair momentum to be in the $x$- or $y$-direction. However, SOC and the in-plane Zeeman field break the rotational symmetry explicitly, and therefore the Cooper pair wavevector is forced to be perpendicular to the in-plane Zeeman field, as the dispersions are deformed in that direction \[figures \[fig:extra\](e)-(f)\]. Even though the in-plane Zeeman field causes the single-particle dispersion to be non-centrosymmetric, this is still not a sufficient condition for reaching the FF state, as can be seen in figure \[fig:1\](b), where the ground state is BCS for small enough values of $h_x$. Homogeneous BCS states can still be more favorable than FF states if, for example, the chemical potential is such that the shapes and the densities of states of the Fermi surfaces favor Cooper pairing with zero momentum. However, when the in-plane Zeeman field becomes strong enough, the deformation of the dispersion results in FF pairing.
In figures \[fig:2\](a)-(i) we present $|\langle c_{{\textbf{k}},1} c_{\tilde{{\textbf{q}}} -{\textbf{k}},1} \rangle|$, $|\langle c_{{\textbf{k}},1} c_{\tilde{{\textbf{q}}} -{\textbf{k}},2} \rangle|$ and $|\langle c_{{\textbf{k}},2} c_{\tilde{{\textbf{q}}} -{\textbf{k}},2} \rangle|$ for $h_x = 0$ \[(a)-(c)\], $h_x = 0.8$ \[(d)-(f)\] and $h_x = 0.9$ \[(g)-(i)\] in the case $\lambda = 0.75$, $\mu = 0.95$ and $h_z = 0$. These three cases correspond to the three red squares in figure \[fig:1\](e). For clarity, the non-interacting Fermi surfaces are also depicted as red (blue) contours for the upper (lower) branch. The case $h_x = 0$ shown in figures \[fig:2\](a)-(c) corresponds to the conventional BCS phase, for which intraband pairing takes place within both bands and interband pairing is vanishingly small. When $h_x$ is finite, the system first enters the small-$\tilde{q}_y$ region \[figures \[fig:2\] (d)-(f)\], where both intraband pairing contributions are still prominent and the interband pairing is finite but small. Due to the contribution of both bands, $T_{BKT}$ is more or less the same as for $h_x = 0$, see figure \[fig:1\](e). The only qualitative difference is the asymmetric pairing profile at $h_x = 0.8$, which makes the finite-momentum pairing more stable than the zero-momentum BCS pairing. The situation is drastically different when the system enters the large-$\tilde{q}_y$ region at $h_x = 0.9$ \[figures \[fig:2\] (g)-(i)\]. In contrast to the cases with smaller $h_x$, the prominent intraband pairing contribution now comes from the upper band alone. As the pairing occurs in only one of the bands instead of both, $T_{BKT}$ is significantly lower in the large-$\tilde{q}_y$ region than in the small-$\tilde{q}_y$ phase, as seen in figure \[fig:1\](e). It should be kept in mind that we consider FF states only and ignore LO states.
In recent real-space mean-field studies [@xu:2014; @guo:2018] it was pointed out that LO states are associated with finite pairing amplitudes occurring within both bands, and correspondingly FF phases are a consequence of pairing occurring within a single helicity band only. This is easy to understand, as the in-plane Zeeman field shifts one helicity band in the $+k_y$ direction and the other in the $-k_y$ direction. Therefore, when the pairing occurs within both bands, some pairing occurs with Cooper pair momentum $+\tilde{q}_y$ and some with $-\tilde{q}_y$, which results in an LO phase. Thus, the small-$\tilde{q}_y$ region we find is likely the one where LO states are more stable than FF states, and hence $T_{BKT}$ is considerably higher for LO states than for FF states. Unfortunately, accessing LO states directly is not possible in our momentum-space study, as LO phases break the translational invariance which is utilized in the derivation of the superfluid weight in section \[section\_two\]. To compute the superfluid weight also for LO ansatzes, one should derive the expressions for the superfluid weight using real-space quantities only. ![\[fig:3\] Cooper pair momentum $\tilde{q}_y$ and the BKT temperature $T_{BKT}$ as a function of $\mu$ and $h_z$ \[(a)-(b)\] and as a function of $\mu$ and $h_x$ \[(c)-(d)\] for $\lambda = 0.75$. In (a)-(b) $h_x = 0.658$ and in (c)-(d) $h_z = 0.8$. Labels tFF$_{\pm 1}$, tFF$_{-2}$ and tBCS$_{-2}$ correspond to topologically non-trivial FF and BCS phases with Chern numbers $\pm 1$ and $-2$. The most stable FF phases are once again those identified by small Cooper pair momenta. As in figure \[fig:1\], also here we see various topological BCS and FF phases distinguished by different Chern numbers.
The red dash-dotted lines in (a)-(b) depict two of the Van Hove singularities of the square lattice system with spin-orbit-coupled fermions.](fig3_2704_van_hove.png){width="1.0\columnwidth"} For completeness, in figure \[fig:3\] we provide the phase diagrams for $\tilde{q}_y$ and $T_{BKT}$ as functions of $\mu$ and $h_z$ \[figures \[fig:3\](a)-(b)\] and of $\mu$ and $h_x$ \[figures \[fig:3\](c)-(d)\] at $\lambda = 0.75$. In the case of the $(\mu,h_z)$-phase diagram the in-plane Zeeman field is fixed to $h_x = 0.658$, and in the case of the $(\mu,h_x)$-diagram the out-of-plane Zeeman field is $h_z = 0.8$. As in figure \[fig:1\] with the $(h_x,h_z)$-diagram, also here we find various topologically non-trivial FF and BCS phases identified by the Chern numbers $C = -1$ and $C=-2$ near half-filling. However, at higher chemical potential values we also find topological FF and BCS phases characterized by $C = 1$. Furthermore, we can once again identify FF phases with high $T_{BKT}$ but notably small Cooper pair momenta near half-filling at moderately low Zeeman field values. From figures \[fig:3\](b) and (d) we see that for a non-topological FF phase $T_{BKT}$ is $0.1$-$0.3t$ over a relatively large parameter regime. For topological FF states $T_{BKT}$ is somewhat lower, the maximum transition temperature being $T_{BKT} \sim 0.15t$. In previous FFLO studies [@kinnunen:2018; @koponen:2008; @baarsma:2016] it has been shown that Van Hove singularities, associated with the divergent behavior of the density of states near the Fermi surface, can enlarge the parameter regime of FFLO states. In our spin-orbit-coupled square lattice system there are six different Van Hove singularities for fixed $\mu$. In figures \[fig:3\](a)-(b) two of these singularities are depicted with red dash-dotted lines, the other four occurring near the depicted two.
One can see that in the vicinity of the Van Hove singularities the FF phases can exist at higher values of $h_z$ than away from the singularities. However, in the $(\mu,h_x)$-diagrams depicted in figures \[fig:3\](c)-(d) the Van Hove singularities do not play a role and are therefore not shown. ![image](fig4_1607.png){width="92.00000%"} Topological phase transitions ----------------------------- The topological phase diagrams presented here and in [@guo:2018] for a square lattice are relatively rich compared to those of the Rashba-coupled 2D continuum, which are characterized by $C=1$ only. This can be explained by considering the possible topological phase transitions, which occur when the bulk energy gap $E_g$ between the quasi-particle eigenvalues $E^+_{{\textbf{k}},\nu}$ and the quasi-hole eigenvalues $E^-_{{\textbf{k}},\nu}$ closes and reopens. Because of the intrinsic particle-hole symmetry present in our system, topological phase transitions can occur when the gap closes and reopens at particle-hole symmetric points [@ghosh:2010]. In the continuum there exists only one particle-hole symmetric point, i.e. $\textbf{k} = (k_x,k_y) = (0,\tilde{q}_y/2)$. In a square lattice, however, there are four different particle-hole symmetric points, namely ${\textbf{k}}_1 = (0,\tilde{q}_y/2)$, ${\textbf{k}}_2 = (0,-\pi + \tilde{q}_y/2)$, ${\textbf{k}}_3 = (\pi,\tilde{q}_y/2)$ and ${\textbf{k}}_4 = (\pi,-\pi + \tilde{q}_y/2)$, which yields four different gap-closing equations instead of only one. Therefore, it is reasonable to find more distinct topological phases in a lattice system than in the continuum. For similar reasons, the topological phase diagrams studied in [@guo:2017] in the case of triangular lattices possessed many distinct topological states characterized by different Chern numbers. Analytical gap-closing equations for the square lattice geometry are provided in appendix \[app:gaps\].
In figures \[fig:4\](a)-(c) we plot the minimum energy gap $E_g$ for the $(h_x,h_z)$, $(\mu,h_z)$ and $(\mu,h_x)$-phase diagrams shown previously in figures \[fig:1\](b), \[fig:3\](a) and (c). One can see that $E_g$ goes to zero at the topological phase boundaries, as expected. In figures \[fig:4\] (a)-(c) we also depict the fulfilled analytical gap-closing conditions, which match the numerically computed topological boundaries. The analytical gap-closing conditions can thus be used to identify distinct topological transitions in terms of the gap-closing locations in momentum space. From figures \[fig:4\](a)-(c) we see that the Chern invariant changes by one when the gap closes at one of the particle-hole symmetric momenta. However, when the system enters from the trivial $C=0$ phase to the $C=-2$ phase, the gap closes simultaneously at two different momenta. This is consistent with the theory presented in [@ghosh:2010] concerning the connection between the Chern number and gap closings at particle-hole symmetric points: if the Chern number changes by an even (odd) number at a topological phase transition, then the number of gap-closing particle-hole symmetric momenta is even (odd). We further investigate the topological phase transitions in figures \[fig:4\](d)-(l), where we present the momentum density distributions $n_\textbf{k} = n_{\uparrow\textbf{k}} + n_{\downarrow\textbf{k}} = \langle c^\dag_{\uparrow {\textbf{k}}} c_{\uparrow {\textbf{k}}} \rangle + \langle c^\dag_{\downarrow {\textbf{k}}} c_{\downarrow {\textbf{k}}} \rangle$ for six different values of $\mu$, corresponding to the six yellow dots depicted in figure \[fig:4\](c). The topological transition corresponding to the gap closing at ${\textbf{k}}_3$ is studied in figures \[fig:4\](d)-(e), and correspondingly the closings at ${\textbf{k}}_2$ and ${\textbf{k}}_4$ are investigated in figures \[fig:4\](g)-(i) and figures \[fig:4\](j)-(l), respectively.
By comparing the momentum distributions in figures \[fig:4\](d)-(e), shown for $\mu = 0.792$ and $\mu = 0.912$, we observe that once the system goes through the topological transition identified by the gap closing and reopening at ${\textbf{k}}_3$ \[white line in figure \[fig:4\](c)\], the momentum distribution changes qualitatively in the vicinity of ${\textbf{k}}_3$. This is further shown in figure \[fig:4\](f), where $n_{\textbf{k}}$ for both cases is plotted at $k_y = 0$ along the blue dash-dotted line depicted in figures \[fig:4\](d)-(e). In a similar fashion, one sees from figures \[fig:4\](g)-(i) that the topological transition corresponding to the gap closing at ${\textbf{k}}_2$ \[red line in figure \[fig:4\](c)\] is identified by the emergence of a prominent density peak around ${\textbf{k}}_2$, as clearly illustrated in figure \[fig:4\](i). A similar, though less pronounced, peak can also be observed for the topological transition corresponding to ${\textbf{k}}_4$, as shown in figures \[fig:4\](j)-(l). The drastic qualitative changes in the momentum distributions at the topological phase boundaries imply that one could experimentally measure and distinguish different topological phases and phase transitions in ultracold gas systems by investigating the total density distributions with time-of-flight measurements. A similar idea for measuring topological phase transitions was proposed in [@zhang:2013] in the case of a simpler continuum system. Our findings show that density measurements could be applied also in lattice systems to resolve different topological phases. ![\[fig:5\](a)-(c) The difference of the perpendicular superfluid weight components $D^s_{\textrm{diff}} = D^s_{yy} - D^s_{xx}$ for the $(h_x,h_z)$, $(\mu,h_z)$ and $(\mu,h_x)$-phase diagrams, respectively. The white solid lines depict the boundaries between the gapped and gapless superfluid states. The red dash-dotted lines correspond to the phase boundaries shown in figures \[fig:1\] and \[fig:3\].
(d)-(f) The geometric contribution $D^s_{\textrm{geom}}$ for the $(h_x,h_z)$, $(\mu,h_z)$ and $(\mu,h_x)$-phase diagrams. The inset in (f) shows the total superfluid weight $D^s$ (red line) and $D^s_{\textrm{geom}}$ (blue line) for $h_x =0$. In all three cases the geometric contribution is smaller than the total superfluid weight and more or less vanishes when the system enters the large-$\tilde{q}_y$ FF regime.](fig5_2704.png){width="0.9\columnwidth"} Components of the superfluid weight ----------------------------------- As the single-particle energy dispersions are deformed in the $y$-direction but not in the $x$-direction, the rotational symmetry of the lattice is broken. This manifests itself as different superfluid weight components in the $x$- and $y$-directions, i.e. $D^s_{xx} \neq D^s_{yy}$. As the Cooper pair momentum is in the $y$-direction, we call $D^s_{yy}$ the longitudinal and $D^s_{xx}$ the transverse component. Because $D^s_{xx} \neq D^s_{yy}$, the system has a different current response in these directions when exposed to an external magnetic field. Therefore, it is meaningful to investigate the difference of the longitudinal and transverse components, $D^s_{\textrm{diff}} \equiv D^s_{yy} - D^s_{xx}$, to see how it behaves as a function of the system parameters. We focus only on the diagonal elements of $D^s$, as the off-diagonal elements are in our case always zero, i.e. $D^s_{xy} = D^s_{yx} = 0$. In figures \[fig:5\](a)-(c) we present $D^s_{\textrm{diff}}$ for the $(h_x,h_z)$, $(\mu,h_z)$ and $(\mu,h_x)$-phase diagrams, respectively, shown above in figures \[fig:1\](b), \[fig:3\](a) and \[fig:3\](c). In all three cases, $D^s_{\textrm{diff}}$ more or less vanishes in large parts of the phase diagrams. However, especially when entering the large-$\tilde{q}_y$ FF region from the small-$\tilde{q}_y$ region, $D^s_{\textrm{diff}}$ reaches local minima and becomes negative.
On the other hand, from figures \[fig:5\](b)-(c) we see that there also exists a parameter region where $D^s_{\textrm{diff}}$ is positive, and that the tFF$_{-2}$-phase in figure \[fig:5\](c) near half-filling is clearly distinguishable from the neighboring phases. Therefore, by measuring $D^s_{\textrm{diff}}$ one could in principle distinguish some of the phase transitions existing in the system. It is interesting to note that, in the presence of SOC, the transverse component can be larger than the longitudinal component, in contrast to the 2D continuum, where the absence of SOC results in a vanishing transverse component and thus a vanishing BKT temperature, $T_{BKT} =0$ [@yin:2014]. In addition to $D^s_{\textrm{diff}}$, in figures \[fig:5\](a)-(c) we also plot with solid white lines the boundaries between gapped and gapless superfluid phases. Consistent with previous literature [@zhang:2013; @qu:2013; @cao:2014; @cao:2015], we call the system gapless (or nodal) if one or more of the Bogoliubov quasi-hole branches reach zero energy in some part of momentum space, i.e. the quasi-particle excitation energy vanishes for some momenta. Note that this does not necessarily mean that the topological energy gap $E_g$ closes: $E_g$ is the difference between the lowest quasi-particle and the highest quasi-hole eigenvalue of $\mathcal{H}_{\textbf{k}}$ at the same momentum ${\textbf{k}}$, whereas the global quasi-hole maximum and quasi-particle minimum are not necessarily located at the same momentum. From figures \[fig:5\](a) and (c) we see that the system stays gapped at low in-plane Zeeman field strengths, which is consistent with continuum results [@cao:2014; @cao:2015]. For larger $h_x$ the system eventually becomes gapless and one can observe topologically trivial and non-trivial nodal FF phases.
By comparing figures \[fig:1\](b), \[fig:3\](a) and \[fig:3\](c) to figures \[fig:5\](a)-(c) we remark that FF states with small momenta $\tilde{q}_y$ are gapped. Furthermore, we observe from figures \[fig:5\](a)-(c) that the transitions between the gapped and gapless states at moderate Zeeman fields and chemical potentials coincide with the prominent minima of $D^s_{\textrm{diff}}$. This is consistent with the findings of [@cao:2015], where it was shown that the longitudinal component exhibits a clear minimum when the system becomes gapless. However, in figures \[fig:5\](b)-(c) we see the system reaching a gapped region again at large enough $\mu$, without as drastic a change of $D^s_{\textrm{diff}}$ as at smaller values of $\mu$. In addition to the different spatial components, one can also investigate the role of the geometric superfluid weight contribution $D^s_{\mathrm{geom}}$, which is presented for the $(h_x,h_z)$, $(\mu,h_z)$ and $(\mu,h_x)$-phase diagrams in figures \[fig:5\](d)-(f). We see that for BCS states and gapped FF states with small Cooper pair momenta the geometric contribution is notable, but it is otherwise vanishingly small. In all the cases the geometric contribution is relatively small compared to the total superfluid weight $D^s$, as illustrated in the inset of figure \[fig:5\](f), where $D^s_{\mathrm{geom}}$ and $D^s$ are both plotted for $h_x = 0$. At most, the geometric contribution accounts for up to $18$ percent of the total superfluid weight, which is fairly similar to what was reported in [@iskin:2017], where the geometric part was found to contribute up to a quarter of the total superfluid weight in the case of a spin-orbit-coupled 2D BCS continuum model.
In more complicated multiband lattices, such as the honeycomb lattice or the Lieb lattice (which also possesses a flat band), the geometric contribution in the presence of SOC might be more important than in our simple square lattice example, as the geometric contribution is intrinsically a multiband effect [@peotta:2015]. Conclusions and outlook {#section_four} ======================= In this work we have investigated the stability of exotic FF superfluid states in a lattice system by computing the superfluid weight and BKT transition temperatures systematically for various system parameters. The derivation of the superfluid weight is based on linear response theory and is an extension of the previous studies [@liang:2017; @peotta:2015], where only BCS ansatzes without spin-flipping terms were considered. Our method applies to BCS and FF states in the presence of arbitrary spin-flipping processes and lattice geometries. We find that, as previously in the case of conventional BCS theory without spin-flipping contributions, also for FF phases with spin-flipping terms the total superfluid weight can be divided into conventional and geometric contributions. We have focused on a square lattice geometry in the presence of the Rashba coupling. One of the main findings of this article is that conventional spin-imbalance-induced FF states, in the absence of SOC, indeed have finite BKT transition temperatures in a lattice geometry. For our parameters they could be observed at $T\sim 0.1t$. Earlier theoretical studies have predicted that FF states could exist in two-dimensional lattice systems [@kinnunen:2018; @baarsma:2016; @koponen:2007; @gukelberger:2016], but their stability in terms of the BKT transition has not previously been investigated in lattice systems. By computing $T_{BKT}$ we show that two-dimensional FFLO superfluids should be realizable at finite temperatures.
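The $T_{BKT}$ values quoted here follow from the superfluid weight through the universal Kosterlitz-Thouless-Nelson condition. As a minimal illustration of how such a self-consistent equation is solved, the sketch below uses $T_{BKT} = (\pi/8)\,D^s(T_{BKT})$ with a toy temperature dependence for $D^s$; the function `Ds` and its parameters `Ds0`, `Tc` are our placeholders, not the superfluid-weight equations of the paper.

```python
import math

def Ds(T, Ds0=1.0, Tc=0.5):
    # Toy, monotonically decreasing model for the superfluid weight D^s(T);
    # in the actual calculation D^s comes from the mean-field equations.
    return max(Ds0 * (1.0 - (T / Tc) ** 2), 0.0)

# Solve T = (pi/8) * D^s(T) by bisection: f(T) = T - (pi/8) Ds(T) is
# negative at T = 0 and positive once D^s has dropped to zero.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mid - (math.pi / 8.0) * Ds(mid) < 0.0:
        lo = mid
    else:
        hi = mid
T_bkt = 0.5 * (lo + hi)
```

Any monotone root-finder works here; bisection is chosen only because it is robust when $D^s(T)$ is known solely on a numerical grid.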
By applying SOC, we show that FF states in a lattice can be further stabilized, and for our parameter regime BKT temperatures as high as $T \sim 0.17-0.3t$ can be reached. Spin-orbit coupling also enables the existence of topological nodal and gapped FF states, for which we show the BKT transitions to occur at most at around $T_{BKT} \sim 0.15t$. For comparison with the literature, we estimated that $T_{BKT} \approx 0.25t$ at $U=-4t$ for the usual spin-balanced BCS state at half-filling without SOC, see figure \[fig:1\](c), whereas in [@paiva:2004] the corresponding estimate obtained by Monte Carlo simulations was $T_{BKT} \sim 0.10-0.13t$. Thus, our mean-field approach probably overestimates $T_{BKT}$ in the case of a simple square lattice. However, in [@julku:2016; @liang:2017] the superfluid weights of BCS states, derived in the framework of mean-field theory, were shown to agree reasonably well with more sophisticated theoretical methods for multiband systems. Thus, it is expected that our mean-field superfluid equations are in better agreement with beyond-mean-field methods when considering multiband lattice models. We have also shown that different topological FF phases and phase transitions could be observed by investigating the total momentum density profiles. When the system goes through a topological phase transition, the momentum distribution develops peaks or dips in the vicinity of the momenta at which the energy gap closes and re-opens. In addition to density distributions, the relative behavior of the longitudinal and transverse superfluid weight components also yields implications about the phase transitions, especially near the boundaries of the gapless and gapped superfluid phases. Therefore, our work paves the way for stabilizing and identifying exotic topological FF phases in lattice systems. In future studies it would be interesting to see how stable FF states are in multiband models.
This could be investigated straightforwardly with our superfluid weight equations, as they hold for an arbitrary multiband system. Especially intriguing could be systems which possess both dispersive and flat bands, such as kagome or Lieb lattices. In these systems the conventional spin-imbalanced FF states were recently shown to exhibit exotic deformation of Fermi surfaces due to the presence of a flat band [@huhtinen:2018]. In multiband systems one could also expect the geometric superfluid contribution to play a role, in contrast to our square lattice system, where the geometric contribution was only non-zero for BCS and gapped FF phases. Furthermore, in flat band systems mean-field theory has been shown to be in good agreement with more advanced beyond-mean-field approaches [@liang:2017; @julku:2016; @tovmasyan:2016]. Flat band systems are tempting also because their superfluid transition temperatures in the weak-coupling region are expected to be higher than in dispersive systems [@kopnin:2011; @heikkila:2011; @peotta:2015; @liang:2017; @julku:2016], and thus they could provide a way to realize exotic FFLO phases at high temperatures. Details on deriving the superfluid weight {#app:derivation} ========================================= Here we briefly go through how one obtains the final form for the superfluid weight $D^s$ shown in from the intermediate result . As one can see from , there exist two terms in $K_{\mu\nu}$: the diamagnetic contribution $K_{\mu\nu,\textrm{dia}}$ and the paramagnetic contribution $K_{\mu\nu,\textrm{para}}$. We focus on the diamagnetic term and after that just give the result for the paramagnetic term, as the derivation for both terms is essentially the same.
In the diamagnetic term there exists a double derivative $\partial_\mu \partial_\nu \mathcal{H}({\textbf{k}})$ which can be transformed to a single derivative via integrating by parts: $$\begin{aligned} \label{app1} K_{\mu\nu,\textrm{dia}} &=\frac{1}{\beta} \sum_{{\textbf{k}},\Omega_m} \textrm{Tr} \Big[\partial_\mu \partial_\nu \mathcal{H}({\textbf{k}}) P_+ G(i\Omega_m,{\textbf{k}})\Big] \nonumber \\ &= - \frac{1}{\beta} \sum_{{\textbf{k}},\Omega_m} \textrm{Tr} \Big[ \partial_\mu \mathcal{H}({\textbf{k}}) P_+ \partial_\nu G(i\Omega_m,{\textbf{k}}) \Big].\end{aligned}$$ Because $G(i\Omega_m,{\textbf{k}}) = 1/(i\Omega_m - \mathcal{H}({\textbf{k}}))$, we have $\partial_\nu G^{-1} = -\partial_\nu \mathcal{H}$ and because $\partial_\nu(GG^{-1}) = 0$ we also have $\partial_\nu G = -G \partial_\nu G^{-1} G$ so that can be written as $$\begin{aligned} K_{\mu\nu,\textrm{dia}} &= - \frac{1}{\beta} \sum_{{\textbf{k}},\Omega_m} \textrm{Tr} \Big[ \partial_\mu \mathcal{H}({\textbf{k}}) P_+ G(i\Omega_m,{\textbf{k}}) \partial_\nu \mathcal{H}({\textbf{k}}) G(i\Omega_m,{\textbf{k}}) \Big] \nonumber \\ &= - \frac{1}{\beta} \sum_{{\textbf{k}},\Omega_m}\sum_i^{4M}\langle \phi_i({\textbf{k}}) | \partial_\mu \mathcal{H}({\textbf{k}}) P_+ G(i\Omega_m,{\textbf{k}}) \partial_\nu \mathcal{H}({\textbf{k}}) G(i\Omega_m,{\textbf{k}}) |\phi_i({\textbf{k}}) \rangle,\end{aligned}$$ where $|\phi_i({\textbf{k}}) \rangle$ are the eigenvectors of $\mathcal{H}_{\textbf{k}}$. 
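Both manipulations used in this appendix, the Green's function identity $\partial_\nu G = -G\,(\partial_\nu G^{-1})\,G$ above and the fermionic Matsubara sum carried out below, are easy to verify numerically. The following sketch checks them on toy inputs; the matrices `H0`, `H1` and the sample energies are our arbitrary choices, not quantities from the paper.

```python
import numpy as np

# --- Check 1: dG/dx = -G (dG^{-1}/dx) G for G(w, x) = (i*w - H(x))^{-1} ---
# H0, H1 define a toy one-parameter Hermitian family H(x) = H0 + x*H1.
H0 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.5, 1.0],
               [0.0, 1.0, -0.5]])
H1 = np.diag([1.0, -2.0, 0.5])

def G(w, x):
    return np.linalg.inv(1j * w * np.eye(3) - (H0 + x * H1))

w, x, h = 0.7, 0.2, 1e-6
dG_fd = (G(w, x + h) - G(w, x - h)) / (2.0 * h)   # central finite difference
dG_id = G(w, x) @ H1 @ G(w, x)                    # -G (dG^{-1}) G, with dG^{-1}/dx = -H1
check1 = np.max(np.abs(dG_fd - dG_id))

# --- Check 2: the fermionic Matsubara sum ---
# (1/beta) sum_m [(i*w_m - Ei)(i*w_m - Ej)]^{-1} = [n(Ei) - n(Ej)] / (Ei - Ej),
# with w_m = (2m+1)*pi/beta; the truncated sum converges as 1/w_m^2, and the
# overall minus sign in K_dia then produces the quoted (n(Ej)-n(Ei))/(Ei-Ej).
def fermi(E, beta):
    return 1.0 / (np.exp(beta * E) + 1.0)

beta, Ei, Ej, n_cut = 2.0, 0.7, -0.4, 200000
m = np.arange(-n_cut, n_cut)
wm = (2 * m + 1) * np.pi / beta
numeric = np.sum(1.0 / ((1j * wm - Ei) * (1j * wm - Ej))).real / beta
analytic = (fermi(Ei, beta) - fermi(Ej, beta)) / (Ei - Ej)
```

Pairing the $\pm\Omega_m$ terms shows why no convergence factor is needed: the summand decays as $1/\Omega_m^2$, so the truncated sum is absolutely convergent.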
By using the completeness relation $\sum_j |\phi_j({\textbf{k}}) \rangle \langle \phi_j({\textbf{k}}) | = 1$ and the alternative form for $G(i\Omega_m,{\textbf{k}})$ $$\begin{aligned} G(i\Omega_m,{\textbf{k}}) = \sum_{l=1}^{4M}\frac{|\phi_l({\textbf{k}}) \rangle \langle \phi_l({\textbf{k}}) |}{i\Omega_m - E_{l,{\textbf{k}}}}\end{aligned}$$ we obtain $$\begin{aligned} K_{\mu\nu,\textrm{dia}} =& - \frac{1}{\beta} \sum_{{\textbf{k}},\Omega_m}\sum_{i,j}^{4M}\langle \phi_i({\textbf{k}}) | \partial_\mu \mathcal{H}({\textbf{k}}) P_+ | \phi_j({\textbf{k}}) \rangle \nonumber \\ &\times \langle \phi_j({\textbf{k}}) | \partial_\nu \mathcal{H}({\textbf{k}})| \phi_i({\textbf{k}}) \rangle \frac{1}{(i\Omega_m - E_{j,{\textbf{k}}})(i\Omega_m - E_{i,{\textbf{k}}})}.\end{aligned}$$ The summation over the Matsubara frequencies $\Omega_m$ can be carried out analytically yielding $$\begin{aligned} K_{\mu\nu,\textrm{dia}} = &\sum_{{\textbf{k}},ij} \langle \phi_i({\textbf{k}}) | \partial_\mu \mathcal{H}({\textbf{k}}) P_+ | \phi_j({\textbf{k}}) \rangle \nonumber \\ &\times \langle \phi_j({\textbf{k}}) | \partial_\nu \mathcal{H}({\textbf{k}})| \phi_i({\textbf{k}}) \rangle \frac{n(E_{j,{\textbf{k}}})-n(E_{i,{\textbf{k}}})}{E_{i,{\textbf{k}}} - E_{j,{\textbf{k}}}}.\end{aligned}$$ In a similar fashion one derives the following result for the paramagnetic term: $$\begin{aligned} K_{\mu\nu,\textrm{para}}({\textbf{q}}\rightarrow 0, 0) = &-\sum_{{\textbf{k}},ij} \langle \phi_i({\textbf{k}}) | \partial_\mu \mathcal{H}({\textbf{k}})P_+ | \phi_j({\textbf{k}}) \rangle \nonumber \\ &\times \langle \phi_j({\textbf{k}}) | \partial_\nu \mathcal{H}({\textbf{k}}) \hat{\gamma}_z| \phi_i({\textbf{k}}) \rangle \frac{n(E_{j,{\textbf{k}}})-n(E_{i,{\textbf{k}}})}{E_{i,{\textbf{k}}} - E_{j,{\textbf{k}}}}.\end{aligned}$$ As $D^s_{\mu\nu} = K_{\mu\nu}({\textbf{q}}\rightarrow 0,0) = K_{\mu\nu,\textrm{dia}} + K_{\mu\nu,\textrm{para}}({\textbf{q}}\rightarrow 0, 0)$ and $P_- = (I_{4M} - \hat{\gamma}_z)/2$, one readily 
obtains the final result presented in . Geometric contribution of the superfluid weight {#app:a} =============================================== In this appendix we show how the total superfluid weight $D^s$  presented in can be split to the so-called conventional and geometric contributions, $D^s_{\textrm{conv}}$ and $D^s_{\textrm{geom}}$. We start by expressing the eigenvectors $| \phi_i({\textbf{k}}) \rangle$ of $\mathcal{H}({\textbf{k}})$ in terms of the eigenvectors of $\mathcal{H}_p({\textbf{k}})$ and $\mathcal{H}_h({\textbf{k}})$ as follows $$\begin{aligned} &|\phi_i({\textbf{k}}) \rangle = \sum_{m=1}^{2M}\Big( w_{p,im} |+\rangle \otimes |m\rangle^p + w_{h,im} |-\rangle \otimes |m \rangle^h \Big),\end{aligned}$$ where $|m\rangle^p$ ( $|m\rangle^h$) are the eigenvectors of $\mathcal{H}_p$ ($\mathcal{H}_h$) and $|\pm \rangle$ are the eigenvectors of $\hat{\sigma_z} \otimes I_{2M}$ with the eigenvalues $\pm 1$. By noting that $$\begin{aligned} \partial_\mu \mathcal{H}({\textbf{k}}) = \begin{bmatrix} \partial_\mu \mathcal{H}_p({\textbf{k}}) & 0 \\ 0 & - \partial_\mu \mathcal{H}_h ({\textbf{k}}- {\tilde{\bf{q}}}) \end{bmatrix}\end{aligned}$$ we can rewrite as $$\begin{aligned} \label{sfw2} D^s_{\mu\nu} = &\sum_{{\textbf{k}},ij} \frac{n(E_j) - n(E_i)}{E_i - E_j} \nonumber \\ & \times\sum_{m_1,m_2}^{2M}\Big[ w^*_{p,im_1} w_{p,jm_2} {}^p \langle m_1 | \partial_\mu \mathcal{H}_p({\textbf{k}}) | m_2 \rangle^p \Big] \nonumber \\ & \times \sum_{m_3,m_4}^{2M} \Big[ w^*_{h,jm_3} w_{h,im_4} {}^h \langle m_3 | -\partial_\nu \mathcal{H}_h({\textbf{k}}- \tilde{{\textbf{q}}}) | m_4 \rangle^h \Big] \nonumber \\ =& \sum_{\substack{{\textbf{k}}\\m_1,m_2,\\m_3,m_4}} W_{m_1m_2}^{m_3m_4} \big({}^p \langle m_1 | \partial_\mu \mathcal{H}_p | m_2 \rangle^p {}^h \langle m_3 | -\partial_\nu\mathcal{H}_h | m_4 \rangle^h \big),\end{aligned}$$ where $$\begin{aligned} W_{m_1m_2}^{m_3m_4} = \sum_{ij} \frac{n(E_j) - n(E_i)}{E_i - E_j} 
w^*_{p,im_1}w_{p,jm_2}w^*_{h,jm_3}w_{h,im_4}.\end{aligned}$$ and $$\begin{aligned} \label{sfw3} {}^p \langle m_1 | \partial_\mu \mathcal{H}_p | m_2 \rangle^p =& \delta_{m_1,m_2}\, \partial_\mu \epsilon_{m_1} + (\epsilon_{m_1} - \epsilon_{m_2}) ~ {}^p \langle \partial_\mu m_1 | m_2 \rangle^p .\end{aligned}$$ Here $\epsilon_{m_i}$ are the eigenvalues of $\mathcal{H}_p$. A similar expression also holds for the ${}^h \langle m_3 | -\partial_\nu\mathcal{H}_h | m_4 \rangle^h$ elements. From - we note that there exist two superfluid weight components. The component which is called the conventional contribution $D^s_{\textrm{conv}}$ consists of the matrix elements with $m_1 = m_2$ and $m_3 = m_4$. As can be seen from , the conventional contribution depends only on the derivatives of the single-particle dispersions $\epsilon_{m_i}$. The remaining part is the geometric contribution $D^s_{\textrm{geom}}$, which depends on the geometric properties of the Bloch functions $| m_i \rangle^p$ and $| m_i \rangle^h$. Comparison of the superfluid weight and the BKT temperature to previous literature {#app:benchmark} ================================================================================== As our equations for the superfluid weight hold for arbitrary geometries in the presence and absence of SOC, we can make direct comparisons to previous studies. As the first benchmark, we reproduced the superfluid weight results of [@julku:2016], where BCS states in the Lieb lattice geometry without SOC are studied by applying mean-field theory and exact diagonalization (ED) methods. One should emphasize that the mean-field equations used in [@julku:2016] to compute the superfluid weight are derived not by using linear response theory, as in our study, but via an alternative approach based on the definition given in [@peotta:2015]. Our method yields exactly the same results as the alternative mean-field and ED approaches of [@julku:2016].
Furthermore, we have checked that in the continuum limit our expression for the superfluid weight reduces to the expressions presented in [@iskin:2017], where BCS states in a spin-orbit-coupled 2D continuum were considered. We also benchmarked our equations by computing $T_{BKT}$ in the case of BCS phases for a 2D square lattice geometry with the same parameters that were used in [@yajie:2016], where topological BCS states in the presence of SOC were studied. With our equations we find the same functional behavior for $T_{BKT}$ as a function of $U$, but our results are exactly a factor of two larger than those presented in [@yajie:2016]. The reason for this difference is that in [@yajie:2016] the phase fluctuations of the order parameter are rescaled by a factor of $1/\sqrt{2}$ \[see equation (33) in [@yajie:2016]\]. With this rescaling, the periodicity of the $\phi$ field in (38) becomes $2\sqrt{2}\pi$ and therefore the expression for the BKT transition temperature \[equation (39)\] should be multiplied by a factor of 2. Analytic equations for the gap closing and reopening conditions {#app:gaps} =============================================================== In this appendix we show the analytical equations that were used to depict the topological phase transitions in figure \[fig:4\]. The energy gap $E_g$ between the quasi-particle eigenvalues $E^+_{{\textbf{k}},\nu}$ and the quasi-hole eigenvalues $E^-_{{\textbf{k}},\nu}$ can only close and reopen at the particle-hole-symmetric points, which in our case are ${\textbf{k}}_1 = (0,\tilde{q}_y/2)$, ${\textbf{k}}_2 = (0,-\pi + \tilde{q}_y/2)$, ${\textbf{k}}_3 = (\pi,\tilde{q}_y/2)$ and ${\textbf{k}}_4 = (\pi,-\pi + \tilde{q}_y/2)$. The single-particle Hamiltonian $\mathcal{H}_p$ at these four points can be diagonalized analytically, which yields four eigenvalues, namely $E^-_{{\textbf{k}},1} \leq E^-_{{\textbf{k}},2} \leq E^+_{{\textbf{k}},2} \leq E^+_{{\textbf{k}},1}$.
By demanding $E^-_{{\textbf{k}},2} = E^+_{{\textbf{k}},2}$ at each of the four particle-hole symmetric momenta, one obtains the four gap closing equations which read $$\begin{aligned} h_z^2 =& 6 + \Delta^2 + 4\mu + \mu^2 + 4(2+\mu)\cos(\tilde{q}_y/2) + 2\cos(\tilde{q}_y) - h_x^2 +2\lambda^2[\cos(\tilde{q}_y) -1] \nonumber \\ & +4h_x\lambda\sin(\tilde{q}_y/2) \\ h_z^2 =& 6 + \Delta^2 + 4\mu + \mu^2 - 4(2+\mu)\cos(\tilde{q}_y/2) + 2\cos(\tilde{q}_y) - h_x^2 +2\lambda^2[\cos(\tilde{q}_y) -1] \nonumber \\ & -4h_x\lambda\sin(\tilde{q}_y/2) \\ h_z^2 =& 6 + \Delta^2 - 4\mu + \mu^2 + 4(2+\mu)\cos(\tilde{q}_y/2) + 2\cos(\tilde{q}_y) - h_x^2 +2\lambda^2[\cos(\tilde{q}_y) -1] \nonumber \\ & +4h_x\lambda\sin(\tilde{q}_y/2) \\ h_z^2 =& 6 + \Delta^2 - 4\mu + \mu^2 - 4(2+\mu)\cos(\tilde{q}_y/2) + 2\cos(\tilde{q}_y) - h_x^2 +2\lambda^2[\cos(\tilde{q}_y) -1] \nonumber \\ & -4h_x\lambda\sin(\tilde{q}_y/2).\end{aligned}$$ By solving these equations for different values of $h_x$, $h_z$ and $\mu$, one obtains the topological boundaries shown in figures \[fig:4\](a)-(c). Direction of the Cooper pair momentum {#app:qbm} ===================================== In our computations the Cooper pair momentum $\tilde{{\textbf{q}}}$ is in the $y$-direction, i.e. $\tilde{{\textbf{q}}}\parallel \hat{\textbf{e}}_y$, consistent with earlier studies concerning lattice systems [@xu:2014; @guo:2018; @guo:2017]. We have extensively tested numerically that indeed the wavevector in the $y$-direction minimizes the thermodynamic potential with and without SOC for all the used input parameters. As an example, we have demonstrated this in figure \[fig:appendix\]. 
In figures \[fig:appendix\](a)-(c) we plot the $(\mu,h_x)$-phase diagram for three different cases: in (a) the thermodynamic potential $\Omega$ is minimized with $\tilde{\textbf{q}}$ taken to be in the $y$-direction, in (b) $\tilde{\textbf{q}}$ is along the diagonal direction ($\tilde{q}_x = \tilde{q}_y$) and in (c) $\tilde{\textbf{q}}$ is in the $x$-direction. The out-of-plane Zeeman field is chosen to be $h_z = 0.8$, the spin-orbit coupling is $\lambda = 0.75$ and the interaction strength is $U=-4$, so the phase diagram in figure \[fig:appendix\](a) is the same as in figure \[fig:3\](c) in the main text. We see how the FF region gradually shrinks when the wavevector is forced to deviate from the $y$-direction. In figures \[fig:appendix\](d)-(e) we compare the thermodynamic potentials $\Omega$ of these three cases. In figure \[fig:appendix\](d) the thermodynamic potential difference between the cases $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_x + \hat{\textbf{e}}_y$ and $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_y$ is plotted, and correspondingly in figure \[fig:appendix\](e) the thermodynamic potential difference between the cases $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_x$ and $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_y$ is depicted. White lines show the phase boundaries between the BCS, FF and normal phases in the case of $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_y$. We see that within the BCS phase the thermodynamic potential is the same regardless of the direction of the wavevector, since in the BCS phase the Cooper pair momentum is zero. When entering the FF phase, it is clear that the phase diagrams shown in figures \[fig:appendix\](b)-(c) do not depict the true ground states, as their thermodynamic potentials are higher than in the case of $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_y$.
Thus the states shown in figure \[fig:appendix\](a) with $\tilde{\textbf{q}}\parallel\hat{\textbf{e}}_y$ are energetically more stable than the states with the Cooper pair momentum in the diagonal or $x$-direction. ![\[fig:appendix\](a)-(c) Computed phase diagrams as functions of $\mu$ and $h_x$ by assuming $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_y$ (a), $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_x+\hat{\textbf{e}}_y$ (b) and $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_x$ (c). Black solid lines depict the phase boundaries between BCS, FF and normal states. (d)-(e) Grand canonical thermodynamic potential differences between the cases $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_x+\hat{\textbf{e}}_y$ and $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_y$ (d), and between $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_x$ and $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_y$ (e). White lines are the phase boundaries in case of $\tilde{\textbf{q}}\parallel \hat{\textbf{e}}_y$.](appendix_fig){width="1.0\columnwidth"} In figure \[fig:appendix\] we have only presented three different options for the direction of $\tilde{\textbf{q}}$ and only $(\mu,h_x)$-phase diagram. However, they represent the general trend of all the computations of our work: the thermodynamic potential reaches its minimum when $\tilde{\textbf{q}}$ is in the $y$-direction. We have confirmed this by choosing $20$ other directions between the $x$ and $y$-axes. Alternatively, we also minimized the thermodynamic potential by letting $q_x$ and $q_y$ be independent parameters. As the thermodynamic potential can have many local minima as a function of $q_x$ and $q_y$, this procedure is not the most trustworthy for finding the global minimum. However, we did not find a single local minimum lying outside the $y$-axis that would have lower energy than the solutions we find by assuming $\tilde{\textbf{q}}$ $||$ $\hat{\textbf{e}}_y$. 
Therefore we are confident that our statements and results are correct within the mean-field theory framework.
--- abstract: 'Paul Anglin criticised our analysis of the neoclassical theory of the firm, but made a number of incorrect assertions about our assumptions. We correct these misunderstandings, but acknowledge that one criticism he made is correct. We correct this flaw with a new argument that supersedes the flawed strategic reaction argument we presented in our previous paper.' author: - | Russell K. Standish\ Mathematics and Statistics, University of New South Wales - | Stephen L. Keen\ Economics and Finance, University of Western Sydney title: 'Comment on “On the proper behavior of atoms” by Paul Anglin' --- The profit formula ================== We take as our starting point the usual profit formula for a single-product market with $n$ firms: $$\label{profit} \pi_i = q_iP(Q)-\int_0^{q_i} MC(q)\, dq,$$ where $\pi_i$ is the profit obtained by firm $i$, as a function of its production $q_i$, and the total market production $Q=\sum_i q_i$. The function $P(Q)$ is the demand curve, namely the price the good achieves when $Q$ items of the good are available on the market. The function $MC(q_i)$ is the marginal cost of producing an extra item of the good, given that a firm is producing $q_i$ items. The trouble with derivatives ============================ In [@Anglin08], Paul Anglin critiques our paper [@Keen-Standish06]. We note a number of problems with this critique. Anglin’s initial proposition is that our results depend on the size of the increment to output for each firm: > I suggest that a flawed premise is being used since it is also true that the effect on P would be about 100 times larger if the change in output by a single firm increased from $dq_i$ = 1 to 100. So, before analyzing the effect of a change by a mass of firm, a more relevant question is: is $dq_i$ = 1 or 100 (or $-1$ or $-100$)?[@Anglin08 p. 278] However, our argument was based not on discrete changes to output but on derivatives.
The $dq_{i}$ he mentions is infinitesimal: it cannot be equal to 1 or 100. In any case, we do not use infinitesimals, which are mathematically problematic, but regular derivatives, which in the case of a multivariate function $y(q_{1},\ldots ,q_{n})$ can be either partial $\partial y/\partial q_{i}$ or total $dy/dx$. In footnote 1, Anglin conjectures that the relation $dq_{i}/dQ=\sum_{j}\partial q_{i}/\partial q_{j}$ is a consequence of the fact that $Q=\sum_{j}q_{j}$ [@Anglin08 p. 278]. From comments he made on a previous version of this paper, it would appear that this is the crux of his disagreement with our analysis. In [@Keen-Standish06], we effectively assumed that $$\frac{dq_i}{dQ}=\sum_j\frac{\partial q_i}{\partial q_j}=1$$ in going from equation (4) to (6) of that paper. On reflection, we realise this criticism is correct: there is no justification for assuming $dq_i/dQ$ has any particular value. Nevertheless, the Keen result (eq 6 of [@Keen-Standish06]) can still be derived as the system equilibrium under the much weaker additional condition that $dq_i/dQ = dq_j/dQ, \forall i,j$ holds at equilibrium. Symmetry of firms ================= In footnote 2 of [@Anglin08], Anglin asserts that we made a symmetry assumption $Q=nq_{i}$, from which he derives an inconsistency. We did not make this assumption at any point in our paper. From the referee’s comments he made on an earlier version of this paper, it would appear that this is a derived consequence of our assumption that $dq_i/dQ=1$. Coupled with the boundary condition $Q=0 \Rightarrow q_i=0$ and integrating, this would imply $q_i=Q/n$. However, since the Keen equilibrium only requires that $dq_i/dQ=dq_j/dQ, \forall i,j$ at equilibrium, there is no specific requirement for the market to be evenly shared amongst the firms, except in the case of constant marginal cost, as detailed in section \[inhomog cv\].
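For concreteness, the profit formula (\[profit\]) is straightforward to evaluate numerically; in the sketch below the linear demand curve and marginal cost function are our illustrative choices, not taken from either paper.

```python
import numpy as np

# Illustrative demand curve and marginal cost (our choices, not the papers')
P = lambda Q: 10.0 - Q            # price when total industry output is Q
MC = lambda q: 2.0 + 0.5 * q      # marginal cost at firm output q

def profit(q_i, Q, npts=1001):
    """Profit of firm i (eq. (profit)): revenue q_i * P(Q) minus the
    integral of MC from 0 to q_i, done here by the trapezoidal rule."""
    grid = np.linspace(0.0, q_i, npts)
    mc = MC(grid)
    cost = np.sum(0.5 * (mc[1:] + mc[:-1]) * np.diff(grid))
    return q_i * P(Q) - cost
```

With two identical firms each producing $2$ units ($Q=4$), each earns $2\,P(4) - \int_0^2 MC(q)\,dq = 12 - 5 = 7$.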
We do assume that each firm has an identical marginal cost function $MC(q_i)$, which is also assumed in the traditional presentation of the Cournot profit maximum. This is for pedagogical convenience, however: the argument presented in section \[inhomog cv\] does not depend on this assumption, and can easily be generalised. Total derivative with respect to industry output rather than single firm’s output ================================================================================= The traditional analysis of the Marshallian and Cournot models is to hypothesize behavior by the individual firm such that it sets the partial derivative $\partial\pi_i/\partial q_i = 0$ (see e.g. [@Keen-Standish06 eq (2)]); in the Marshallian model this is described as atomistic profit-maximizing behavior, while in the Cournot model it is described as a constrained profit level in response to the strategic responses of other firms. The Marshallian proposition is strictly false, since the profit of a single firm $\pi_i$ is a function of all $n$ firms’ outputs $q_i$, not a single-variable function, whether or not the individual firm can in fact affect the behavior of other firms. The extrema of an $n$-variable function are found at the zeros of the gradient, i.e. when all partial derivatives $\partial\pi_i/\partial q_j = 0$. However $$\partial\pi_i/\partial q_j = \delta_{ij}(P-MC) + q_iP'$$ which can never be satisfied where $q_i>0$ and $P'<0$. The condition $\partial\pi_i/\partial q_i = 0$ describes an unstable equilibrium: it is vulnerable to firms pulling in the same direction, which can happen even in the absence of explicit collusion [@Standish-Keen04]. Instead we propose the condition that all firms’ profits are maximised with respect to total industry output, $d\pi_i/dQ=0$. This constrains the dynamics of firms’ outputs to an $(n-1)$-dimensional polyhedron, but otherwise does not specify what the individual firms should do.
As an equilibrium condition, it is vulnerable to a single firm “stealing” market share. However, no firm acts in isolation. The other firms will react, negating the benefit obtained by the first firm and causing the system to settle back to the $d\pi_i/dQ=0$ manifold. Conjectural Variation ===================== In our paper, we introduce the idea of firms reacting to the production decisions of their competitors by introducing a dependence between our previously independent variables $q_{i}$. We thank Anglin for reminding us of the considerable previous history of doing this under the name of “conjectural variation”; however, this was a literature of which we were already aware, and whose use of the concept differs from our purpose in introducing it here.[^1] Our intent was to make a mathematical argument that shows what happens in the Cournot analysis when one relaxes the assumption of atomism. We have not attempted to model any form of conjectural variation or reaction by the firms in the agent model, and in any case the agent model does not have the atomistic constraint imposed upon it. We appreciate the reference [@Kamien-Schwartz83] that Anglin provided, but note that as they started from the incorrect differential condition ($\partial\pi_i/\partial q_i=0$), their results are not applicable. In the next section, we present a strategic response argument that does not make use of the conjectural variation idea at all. Evolution of $dq_i/dQ$ {#inhomog cv} ====================== In our paper [@Keen-Standish06], we introduced a homogeneous conjectural variation parameter $\partial q_i/\partial q_j = \theta$. As pointed out by Anglin, this analysis makes use of the faulty assumption $dq_i/dQ=\sum_j \partial q_i/\partial q_j$.
To circumvent this problem, and generalise the argument, we take the point that $dq_i/dQ$ are unconstrained endogenous variables, and so we introduce the variables $$\frac{dq_i}{dQ} = \theta_i.$$ This extends phase space from the $n$-dimensional space of firm production $q_i, i=1\ldots n$ to a $2n-1$-dimensional phase space, with the constraint $$\sum_i\frac{dq_i}{dQ} = \frac{dQ}{dQ} =1.$$ The $\theta_i$ might be thought of as a firm’s response function to changing industry output. With the usual profit formula (\[profit\]) the maximum profit for a single firm obtains at the zero of $$\begin{aligned} \label{max-profit} \frac{d\pi_i}{dQ} &=& P \frac{dq_i}{dQ} + q_i \frac{dP}{dQ} - MC(q_i)\frac{dq_i}{dQ} \nonumber \\ &=& P\theta_i + q_iP^{\prime} - MC(q_i)\theta_i\end{aligned}$$ We may sum equation (\[max-profit\]) over $i$ to obtain $$\begin{aligned} \label{max-industry-profit} P + QP^{\prime} - \sum_i MC(q_i)\theta_i = 0.\end{aligned}$$ Given a fixed market partition $\{s_i=q_i/Q\}$, the maximum profit obtains at the zero of the derivative of the total industry profit $$\label{max-profit-given-partition} \frac{d}{dQ}\left(QP-\sum_i\int^{s_iQ}MC(q)dq\right) = P + QP' - \sum_i s_iMC(q_i) = 0.$$ Comparing equations (\[max-industry-profit\]) and (\[max-profit-given-partition\]), we see that the individual firm profit is submaximal unless $$\label{profit-cond} \sum_i MC(q_i)(s_i-\theta_i) = 0.$$ The vector $(m_i=MC(q_i))$ lies in the positive cone ${{\mathbb R}}^{n+}$ (ie $m_i>0,\, \forall i$). The vector $(t_i=s_i-\theta_i)$ lies on a hyperplane passing through the origin, and perpendicular to the unit vector $(1,1,...1)$, since $\sum_i t_i=0$. Condition (\[profit-cond\]) can be thought of as a dot product $\mathbf{m}\cdot {\bf t}=0$. This condition can only be satisfied if $\mathbf{m}$ is proportional to the unit vector (ie marginal cost is constant) or ${\bf t}=0$, which implies $\theta_i=s_i,\,\forall i$. 
Given a particular partition of the market, the profit of all firms will always be increased by moving the $\theta_i$ variables closer to the market shares $s_i$. Substituting this condition for variable marginal cost into (\[max-profit\]) gives: $$\label{s_i} s_iP+s_iQP'-s_iMC(s_iQ)=0$$ which can only be simultaneously satisfied for all $i$ if the market is equipartitioned ($s_i=1/n$). The Keen equilibrium obtains on the manifold where $\theta_i=1/n$. Substituting this into equation (\[max-profit\]), one obtains $$\label{Keen} P-MC(q_i) + nq_iP' = 0$$ which can be rearranged to yield $$MR_i-MC = P(Q)+q_iP'(Q) -MC(q_{i}) =\frac{n-1}{n}( P-MC( q_{i})) \label{MR_MCGapRule}$$ where $MR_i$ is the marginal revenue of the firm. When marginal cost is constant, equation (\[max-industry-profit\]) implies that the industry operates at monopoly pricing at equilibrium: $$P+QP' -MC = 0$$ and from (\[max-profit\]) we see $$q_i=\theta_iQ.$$ Only when $\theta_i=1/n$ does this coincide with the Keen equilibrium. We may rearrange equation (\[Keen\]) to give $$\label{q_i} q_i=\frac{MC(q_i)-P}{nP'}$$ If the right-hand side of this equation were a monotonically decreasing function of $q_i,\, \forall q_j, j\ne i$, then a unique solution would exist for $q_i$, the market would be equipartitioned between firms and the Keen equilibrium would coincide with monopoly pricing. However, if multiple solutions to (\[q\_i\]) exist,[^2] then the market need not be equipartitioned, and in general the Keen equilibrium differs from monopoly pricing. However, in the limit $n\rightarrow\infty$, assuming finite total industry output, $q_i$ is $O(1/n)$, so $P-MC(q_i)$ tends to some positive value, differing from competitive pricing. In the simple case of a linear demand curve, multiple solutions for $q_i$ can only exist for falling marginal cost. Such markets are dominated by a scramble for market share, as there is a distinct “economy of scale” advantage to being the market leader.
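The constant-marginal-cost case can be verified directly. With a linear demand curve $P(Q)=a-bQ$ and constant marginal cost $c$ (the parameter values below are our illustrative choices), the Keen condition (\[Keen\]) at equipartition reproduces exactly the monopoly condition $P+QP'-MC=0$:

```python
# Illustrative parameters (ours): linear demand P(Q) = a - b*Q, constant MC c.
a, b, c, n = 10.0, 1.0, 2.0, 5

# Keen condition at equipartition q_i = Q/n:
#   (a - b*Q) - c + n*(Q/n)*(-b) = 0  =>  Q = (a - c) / (2*b),
# which is exactly the monopoly output solving P + Q*P' - MC = 0.
Q = (a - c) / (2 * b)
q_i = Q / n
price = a - b * Q
keen_residual = (price - c) + n * q_i * (-b)      # eq. (Keen), P' = -b
monopoly_residual = price + Q * (-b) - c          # monopoly condition
```

Here both residuals vanish identically, illustrating that for constant marginal cost the Keen equilibrium with $\theta_i=1/n$ yields monopoly pricing.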
The analysis presented here does not help determine what the equilibrium state will be. If the marginal cost functions differ between firms, the result $\theta_i=s_i$ still holds. The main difference is that the corresponding equation (\[s\_i\]) is now firm-dependent, $$\label{diffMC_i} P+QP'-MC_i(s_iQ)=0,$$ and the market is no longer equipartitioned at equilibrium. The equivalent of (\[Keen\]) is $$MR_i - MC_i = \frac{Q-q_i}{Q}(P-MC_i(q_i)).$$ Agent simulation ================ What evidence is there that the parameters $\theta_i$ introduced in the previous section will undergo evolution so as to optimise the profit levels of the firms? In [@Standish-Keen04], we introduced a simple agent-based model which exhibited an interesting emergent phenomenon where agents would lock into the same strategy of decreasing production to improve profits. At the start of the simulation, agents randomly increase or decrease their production levels without affecting total industry production much. In terms of $\theta_i$, this implies $|\theta_i|\gg 1/n$, and total industry production from equation (\[max-profit\]) is close to competitive levels. As the emergent lock-in effect takes place, the firms change their production levels in the same way, so $\theta_i=1/n$, and the system converges to the Keen equilibrium. Qualitatively, the results of the two models do differ, with the agent model exhibiting a range of convergent behaviour not seen in the differential case. In the agent model, we were also able to reproduce the neoclassical result of convergence by the firms to output levels at which each firm’s marginal cost equaled its marginal revenue in two ways. However, neither of these accords with the standard Marshallian or Cournot explanations. Convergence to the Cournot output level occurred: 1. When a fraction of firms behaved irrationally, by continuing with a strategy (for example, increasing output) when that strategy *reduced* profit in the previous iteration.
Convergence to the neoclassical expectation was monotonic as the proportion of irrational firms was raised from zero to 25 percent; from 25 to 50 percent, the neoclassical case applied; while above 50 percent irrational behaviour, the firms and the system followed a random walk. This result was independent of the number of firms in the industry; and 2. As the standard deviation $\sigma $ of the parameter $\delta q_{i}$ rose, as shown in our paper. This result was also independent of the number of firms in the industry. Conclusion ========== Our conclusions about the strict falsity of the Marshallian model, and the lack of content of the Cournot model—in that while it is strictly true, actual profit-maximizers would not play the Cournot-Nash game—still stand. We therefore continue to assert that economics does not have a model of price setting. Blinder et al. [@Blinder-etal98] provide a good empirical survey of price setting practices in the real world, and as with our model, this survey strongly contradicts accepted neoclassical beliefs. We suggest that a good research goal for economists would be to devise a model of competition that replicates the results of this study. Paul Anglin. On the proper behavior of atoms: A comment on a critique, 387:277–280, 2008. Alan S. Blinder, Elie R. Canetti, David E. Lebow, and Jeremy B. Rudd. Russell Sage Foundation, New York, 1998. Morton I. Kamien and Nancy L. Schwartz. Conjectural variations, 16:191–211, 1983. Steve Keen and Russell Standish. Profit maximisation, industry structure, and competition: a critique of neoclassical theory, 370:81–85, 2006. Russell Standish and Steve Keen. Emergent effective collusion in an economy of perfectly rational competitors. In [*Proceedings 7th Asia-Pacific Conference on Complex Systems*]{}, page 228, Cairns, December 2004. nlin.AO/0411006.
[^1]: This can be interpreted as firms anticipating what their competitors might do, although we tend to regard it as describing reactions to competitors in a “time-free” model, so the variation is not conjectural but reactionary. [^2]: For example with $P(Q)=10-Q$ and $MC(q)=1/q$, $(P-MC)/P'$ exhibits a peak at an intermediate value of $q$, so is not monotonic. We thank Paul Anglin for providing this example.
--- abstract: 'Quantum multiparameter estimation involves estimating multiple parameters simultaneously and can be more precise than estimating them individually. Our interest here is to determine fundamental quantum limits to the achievable multiparameter estimation precision in the presence of noise. We present a lower bound to the estimation error covariance for a noisy initial probe state evolving via a noiseless quantum channel. We then present a lower bound to the estimation error covariance in the most general form for a noisy initial probe state evolving via a noisy quantum channel. We show conditions and accordingly measurements to attain these estimation precision limits for noisy systems. We see that the Heisenberg precision scaling of $1/N$ can be achieved with a probe comprising $N$ particles even in the presence of noise. In fact, some noise in the initial probe state or the quantum channel can serve as a feature rather than a bug, since in some situations the estimation precision scaling achievable in the presence of noise in the initial state or the channel is impossible in their absence. However, a lot of noise harms the quantum advantage achievable with $N$ parallel resources, limiting the best precision scaling to $1/\sqrt{N}$. Moreover, the Heisenberg precision limit can be beaten with noise in the channel, and we present a super-Heisenberg precision limit with scaling of $1/N^2$ for an optimal amount of noise in the channel, characterized by one-particle evolution operators. Further, using $\gamma$-particle evolution operators for the noisy channel, where $\gamma>1$, the best precision scaling attainable is $1/N^{2\gamma}$, which is otherwise known to be only possible using $2\gamma$-particle evolution operators for a noiseless channel.'
author: - Shibdas Roy bibliography: - 'noisy\_qcrb\_biblio.bib' title: Fundamental noisy multiparameter quantum bounds --- Introduction {#sec:intro} ============ Studying quantum multiparameter estimation has recently been of significant interest [@GPANKK; @VDGJKKDBW; @CDBW; @HBDW; @BD; @SBD; @PCSHDWBSS; @GBD; @NLKA; @LY; @RGMSSGB; @CSVPSS; @PKD; @GPS]. While quantum resources allow for surpassing measurement limits set by classical physics [@GLM1; @GLM2; @FOSSDBD], it is important to consider fundamental measurement limits set by quantum mechanics. Although quantum estimation of a single parameter captures many scenarios [@GLM3], the practically more relevant problem of estimating multiple parameters simultaneously has started drawing more attention, mainly because, unlike in the quantum single-parameter estimation case, the quantum measurements required to attain multiparameter bounds do not necessarily commute [@CWH; @MGAP; @BD; @LC]. Multiparameter estimation using a pure (i.e. noiseless) probe state under unitary (i.e. noiseless) evolution has been studied, e.g. in Ref. [@BD]. This work, like most in the literature, used symmetric logarithmic derivatives (SLDs), as introduced by Helstrom [@CWH], to define the quantum Fisher information matrix (QFIM) [@YZF]. Then, the estimation error covariance (the multiparameter counterpart to the mean-squared estimation error in single-parameter estimation) is lower-bounded by the inverse of the QFIM and the bound is called a quantum Cramér-Rao bound (QCRB) [@WM]. Such a QFIM for a probe with multiple particles under unitary evolution via one-particle Hamiltonians [@BD; @WM; @NC] was shown to depend only on the one- and two-particle reduced density operators [@NC] of the probe state. However, when the initial probe state is mixed (i.e.
noisy) but the quantum channel is unitary, even for single parameter estimation, only an upper bound to such an SLD-based QFIM (and therefore, a lower bound to the corresponding QCRB) can be explicitly established in general [@BCM; @EFD]. Although noiseless quantum parameter estimation has been studied extensively and is well understood, it is important to study and better understand fundamental quantum estimation limits in more practical noisy situations [@VDGJKKDBW; @SBD1; @DRFG; @KDD; @EFD; @YZF; @DDKG; @DDM; @DDCS; @NBCA; @YGXWS; @RCSGRMGRB; @JCH; @SSD; @SSKD; @FAAMLOGO; @HMM; @ZZPJ; @CASZZHTXKLG; @DDJK; @BAR; @BdC; @AMR; @YF; @MT; @ARTG; @HSKDDH]. In this article, we present a multiparameter QCRB for a noisy initial state evolving unitarily, based on anti-symmetric logarithmic derivatives (ALDs) [@TWC; @FN], that lend a convenient way to study noisy quantum metrology. Moreover, we use a similar ALD-approach to present an upper bound to the QFIM (like in Refs. [@EFD; @YZF]) for the case of impure initial states under arbitrary evolution. That is, we consider a noisy quantum channel and a mixed initial probe state and define a quantum lower bound for the estimation error covariance in this most general case. Such bounds for an $N$-particle probe state depend on the one- and two-particle reduced density operators only, similar to the case of a pure state evolving unitarily in Ref. [@BD]. We also provide conditions and accordingly measurements that allow these bounds to be attained. Our results here are fundamentally important for several reasons. Firstly, the tight bounds presented here are explicitly computable (e.g. in terms of the Kraus operators of a noisy channel), without any knowledge of the eigenvalues and the eigenvectors of the evolved probe state [@MGAP; @SBD1] and are not known to be possible for these most general noisy cases using the conventional SLD-approach.
A similar bound with SLDs was obtained for single parameter estimation earlier [@FI; @EFD; @DDM], but it was not considered tight, being an upper bound to the SLD QFI; accordingly, a tighter bound, linear in the number $N$ of resources, was considered instead. Secondly, our bounds are such that the quantum enhancement to the estimation precision is provided by the two-particle reduced density matrices of the probe state and the attainability of the quantum enhancement is determined solely by the one-particle reduced density matrices of the probe state, when the channel is characterized by one-particle evolution operators, even in the presence of noise, similar to the noiseless case from Ref. [@BD]. Thirdly, the results here suggest that the Heisenberg scaling of $1/N$ in the estimation precision, with $N$ number of resources, is achievable even in the presence of noise. Moreover, some noise in the quantum channel or the initial probe state can act as a feature rather than a bug, since we see that there are situations when it is not possible to attain the Heisenberg limit in the absence of noise in the channel or the initial state, but it is possible in the presence of noise in the channel or the initial state. However, too much noise in the initial state or the channel harms the quantum advantage achievable with $N$ parallel resources. Furthermore, we show that the Heisenberg precision limit can be beaten with noise in the quantum channel. The best achievable precision limit for a non-unitary channel is then determined by the two-particle reduced density operators of the evolved probe state being maximally entangled and the one-particle reduced density operators being maximally mixed, and corresponds to a precision scaling of $1/N^2$, attained with one-particle evolution operators for the channel.
Further, using $\gamma$-particle (instead of one-particle) evolution operators for a noisy channel, where $\gamma>1$, the best precision scaling achievable is $1/N^{2\gamma}$, which is otherwise known to be achievable only with $2\gamma$-particle evolution operators of a noiseless channel. Before we proceed, it is important to explicitly point out why the non-standard ALD-approach is adopted in this paper instead of the standard SLD-approach. The way we choose the ALDs in this article, it turns out that the ALD-based QFIM is an upper bound to the standard SLD-based QFIM for the noiseless channel case. As already pointed out, such an upper bound to the SLD QFIM for single parameter estimation has been obtained earlier, but it was not considered a tight bound, since beating the SLD QFIM would mean that the Heisenberg limit can be beaten. However, we show here that such an upper bound to the SLD QFIM can be tight too, but the use of the ALD-approach indicates that the Heisenberg limit is not beaten for the noiseless channel case. Thus, the QFIM obtained here for the noiseless channel case cannot be obtained using the SLD-approach and the corresponding equivalent bound obtained using SLDs would seem to beat the Heisenberg limit. Moreover, for the multiparameter noisy channel case considered here, the upper bound to the ALD QFIM we obtained cannot be obtained using the SLD-approach, since it would be an upper bound to the aforementioned upper bound to the SLD QFIM. We show that such an upper bound to the ALD QFIM can also be tight, implying that the Heisenberg limit can be beaten. It is unlikely that there exists some other logarithmic derivative for which the QFIM would be an upper bound to the ALD QFIM, suggesting that the Heisenberg limit is still not beaten. Multiparameter Quantum Cramér-Rao Bound {#sec:qcrb} ======================================= An experiment for estimation of some unknown parameters corresponding to a quantum process involves three stages.
First, a probe is prepared in an initial state, comprising $N$ resources, and evolves under the action of the quantum process. The second stage involves choosing a suitable measurement, applied to the evolved probe state. The final step involves associating, through an estimator, each experimental result with an estimate of the parameters [@EFD]. The Heisenberg limit to the estimation precision is then the precision scaling of $1/N$. Consider that a probe state $\hat{\rho}$ acquires $q$ parameters $\boldsymbol{\theta} = \left[\begin{array}{cccc} \theta_1 & \theta_2 & \hdots & \theta_q \end{array}\right]^T$ via a unitary transformation $\hat{U}(\boldsymbol{\theta})$, and we seek the best quantum strategy to estimate the parameters from the evolved probe state, $\hat{\rho}(\boldsymbol{\theta})=\hat{U}(\boldsymbol{\theta})\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta})$. Let a measurement performed on the evolved state $\hat{\rho}(\boldsymbol{\theta})$ be given by some positive operator valued measure (POVM) $\{\hat{P}_m\}$. The conditional probability to obtain the outcome $m$ given the parameters have the value $\boldsymbol{\theta}$ is $p(m|\boldsymbol{\theta}) = \mathrm{Tr}\left(\hat{P}_m \hat{\rho}(\boldsymbol{\theta})\right)$.
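To make the measurement stage concrete, the following minimal sketch (our illustration, not from the paper) prepares a single-qubit probe in $|+\rangle$, imprints one parameter via $\hat{U}(\theta)=e^{-i\theta\hat{\sigma}_z/2}$, and evaluates $p(m|\boldsymbol{\theta})=\mathrm{Tr}(\hat{P}_m\hat{\rho}(\boldsymbol{\theta}))$ for a projective POVM along $X$:

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]).astype(complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho0 = np.outer(plus, plus.conj())                 # probe state |+><+|

theta = 0.7
U = expm(-1j * theta * sz / 2)                     # parameter-imprinting unitary
rho = U @ rho0 @ U.conj().T                        # evolved probe state

# Projective POVM along X: P_± = |±><±|
minus = np.array([1.0, -1.0], dtype=complex) / np.sqrt(2)
povm = [np.outer(plus, plus.conj()), np.outer(minus, minus.conj())]
probs = [np.trace(P @ rho).real for P in povm]     # p(m|theta) = Tr(P_m rho(theta))
print(probs)                                       # [(1+cos theta)/2, (1-cos theta)/2]
```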
The estimates $\boldsymbol{\tilde{\theta}}(m) = \left[\begin{array}{cccc} \tilde{\theta}_1(m) & \tilde{\theta}_2(m) & \hdots & \tilde{\theta}_q(m) \end{array}\right]^T$ are unbiased if $$\label{eq:unbiased_est} \sum_m p(m|\boldsymbol{\theta})\tilde{\theta}_j(m) = \theta_j \qquad \forall j.$$Then, the estimation error covariance is $$\label{eq:est_err_cov} V\left[\boldsymbol{\tilde{\theta}}(m)\right] = \sum_m p(m|\boldsymbol{\theta}) \left(\boldsymbol{\tilde{\theta}}(m)-\boldsymbol{\theta}\right)\left(\boldsymbol{\tilde{\theta}}(m)-\boldsymbol{\theta}\right)^T.$$Then, for unbiased estimators, the above covariance satisfies the Cramér-Rao inequality: $$\nu V\left[\boldsymbol{\tilde{\theta}}(m)\right] \geq \left[J_C(\boldsymbol{\theta})\right]^{-1},$$where $\nu$ is the number of times the overall experiment is repeated and $J_C(\boldsymbol{\theta})$ is the classical Fisher Information Matrix (FIM), given by $$\label{eq:fim1} J_C^{jk} = \sum_m \frac{1}{p(m|\boldsymbol{\theta})}\frac{\partial}{\partial\theta_j}p(m|\boldsymbol{\theta})\frac{\partial}{\partial\theta_k}p(m|\boldsymbol{\theta}).$$Further, the maximisation of the FIM over all possible POVMs yields the quantum Fisher Information Matrix (QFIM), $J_Q(\boldsymbol{\theta})$, which is determined from [@TWC; @FN]: $$\label{eq:ald_diffeqn1} \frac{1}{2}\left(\hat{L}_k\hat{\rho}(\boldsymbol{\theta})+\hat{\rho}(\boldsymbol{\theta})\hat{L}_k^\dagger\right)=\frac{\partial}{\partial\theta_k}\hat{\rho}(\boldsymbol{\theta}),$$where $\hat{L}_k$ is an operator.
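The FIM (\[eq:fim1\]) is straightforward to evaluate numerically from the outcome probabilities. The sketch below is illustrative: it assumes the single-qubit phase model with $p(\pm|\theta)=(1\pm\cos\theta)/2$ (a $|+\rangle$ probe read out along $X$, not an example from the paper) and uses central finite differences, recovering the analytic value $J_C=1$ independent of $\theta$:

```python
import numpy as np

def probs(theta):
    # Outcome probabilities for a |+> qubit probe read out along X
    # after the phase imprint U = exp(-i * theta * sigma_z / 2).
    return np.array([(1 + np.cos(theta)) / 2, (1 - np.cos(theta)) / 2])

def classical_fim(theta, eps=1e-6):
    # J_C = sum_m (dp/dtheta)^2 / p, via central finite differences.
    dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)
    return np.sum(dp ** 2 / probs(theta))

print(classical_fim(0.7))   # analytically equal to 1 for this model
```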
The QFIM $J_Q(\boldsymbol{\theta})$ is then [@TWC]: $$\label{eq:qfim1} J_Q^{jk} = \frac{1}{2}{\rm Tr}\left[\left(\hat{L}_j^\dagger \hat{L}_k+\hat{L}_k^\dagger \hat{L}_j\right)\hat{\rho}(\boldsymbol{\theta})\right].$$Then, we have $$\label{eq:ald_qcrb} \nu V\left[\boldsymbol{\tilde{\theta}}(m)\right] \geq \left[J_C(\boldsymbol{\theta})\right]^{-1} \geq \left[J_Q(\boldsymbol{\theta})\right]^{-1},$$where $\hat{L}_k$ was taken to be Hermitian by Helstrom [@CWH], in which case it is called the symmetric logarithmic derivative (SLD). In general, $\hat{L}_k$ need not be Hermitian. We assume that $\hat{L}_k$ is anti-Hermitian, such that $\hat{L}_k^\dagger=-\hat{L}_k$ [@TWC; @FN], in which case it is called the anti-symmetric logarithmic derivative (ALD). Thus, (\[eq:ald\_diffeqn1\]) defines a certain family of logarithmic derivatives, satisfying ${\rm Tr}\left[\hat{\rho}(\boldsymbol{\theta})\hat{L}_k\right]=0$, such that a Hermitian $\hat{L}_k$ is an SLD and an anti-Hermitian $\hat{L}_k$ is an ALD [@FN]. Although Ref. [@TWC] considered a different (Bayesian waveform-) estimation problem, (\[eq:ald\_qcrb\]) can be similarly proven here. See Appendix \[sec:app1\]. Although the classical Cramér-Rao bound (i.e. the first inequality in (\[eq:ald\_qcrb\])) can always be saturated, e.g. by a maximum likelihood estimator [@SLB], the QCRB (i.e. the second inequality in (\[eq:ald\_qcrb\])) for SLDs is not known to be attainable in general. We claim that an ALD-based QCRB of the form (\[eq:ald\_qcrb\]) can be saturated (i.e. attained), when the QFIM is not rank deficient and the expectation of the commutator of every pair of the ALDs vanishes, similar to the case of the SLD-based QCRB [@KM; @RJD; @BD]: $$\label{eq:qcrb_saturate} {\rm Tr}\left[\left(\hat{L}_j^\dagger \hat{L}_k-\hat{L}_k^\dagger \hat{L}_j\right)\hat{\rho}(\boldsymbol{\theta})\right]={\rm Tr}\left(\left[\hat{L}_j,\hat{L}_k\right]\hat{\rho}(\boldsymbol{\theta})\right)= 0.$$See Appendix \[sec:app2\].
The above condition is trivially true for single parameter estimation. Then, the POVM of cardinality $q+2$, comprising the following $q+1$ elements, $$\label{eq:qcrb_measure1} \begin{split} \hat{P}_0&=\hat{\rho}(\boldsymbol{\theta})=\hat{U}(\boldsymbol{\theta})\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta}),\\ \hat{P}_m&=\frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m}=\frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_m}\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta})+\hat{U}(\boldsymbol{\theta})\hat{\rho}\frac{\partial\hat{U}^\dagger(\boldsymbol{\theta})}{\partial\theta_m} \, \forall m=1,\ldots,q, \end{split}$$along with one normalising element, saturates the QCRB (see Appendix \[sec:app7\]). For pure states $|\psi\rangle$, the $q+1$ projectors $$\begin{split} \hat{P}_0&=\hat{\rho}(\boldsymbol{\theta})=\hat{U}(\boldsymbol{\theta})|\psi\rangle\langle\psi|\hat{U}^\dagger(\boldsymbol{\theta}),\\ \hat{P}_m&=\frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_m}|\psi\rangle\langle\psi|\frac{\partial\hat{U}^\dagger(\boldsymbol{\theta})}{\partial\theta_m} \quad \forall m=1,\ldots,q, \end{split}$$along with one normalising element, saturate the QCRB. This follows from Refs. [@HBDW; @BD] (see Appendix \[sec:app6\]). The QFIM for One-Particle Hamiltonians {#sec:qfim} ====================================== Let us now consider that the unitary evolution is $\hat{U}(\boldsymbol{\theta}) = e^{-i\hat{H}(\boldsymbol{\theta})}$ and that the probe state $\hat{\rho}$ comprises $N$ particles evolving under the one-particle Hamiltonian $\hat{h}^{[n]} = \sum_{k=1}^q \theta_k\hat{h}^{[n]}_k$ for $n=1,\hdots,N$, such that [@BD]: $$\hat{H}(\boldsymbol{\theta}) = \sum_{n=1}^N\hat{h}^{[n]} = \sum_{k=1}^q\theta_k\sum_{n=1}^N\hat{h}_k^{[n]} \equiv \sum_{k=1}^q\theta_k\hat{H}_k.$$The generators $\hat{H}_k$ are assumed to not depend on $\boldsymbol{\theta}$ and do not generally commute with each other. Then, as employed by Ref.
[@BD], we have [@RMW]: $$\label{eq:noncommuting_generators} \frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_k}=-i\int_0^1 d\alpha e^{-i(1-\alpha)\hat{H}(\boldsymbol{\theta})}\frac{\partial\hat{H}(\boldsymbol{\theta})}{\partial\theta_k}e^{-i\alpha\hat{H}(\boldsymbol{\theta})}.$$Then, we have $$\label{eq:unitary_delrho} \frac{\partial}{\partial\theta_k}\hat{\rho}(\boldsymbol{\theta})=\frac{\partial}{\partial\theta_k}\left(\hat{U}(\boldsymbol{\theta})\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta})\right)=-i\left[\hat{M}_k(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta})\right],$$ where $$\label{eq:unitary_mk} \hat{M}_k(\boldsymbol{\theta}) = i\frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_k}\hat{U}^\dagger(\boldsymbol{\theta}) = \hat{U}(\boldsymbol{\theta})\hat{A}_k(\boldsymbol{\theta})\hat{U}^\dagger(\boldsymbol{\theta}),$$with $\hat{A}_k(\boldsymbol{\theta}) = \int_0^1 d\alpha e^{i\alpha\hat{H}(\boldsymbol{\theta})}\hat{H}_ke^{-i\alpha\hat{H}(\boldsymbol{\theta})}$. We choose the operator $\hat{L}_k$ as the anti-Hermitian, $\hat{L}_k = -2i\Delta \hat{M}_k$, where $$\label{eq:unitary_delmk} \Delta \hat{M}_k \equiv \hat{M}_k(\boldsymbol{\theta}) - {\rm Tr}\left(\hat{M}_k(\boldsymbol{\theta})\hat{\rho}(\boldsymbol{\theta})\right).$$The QFIM from (\[eq:qfim1\]) then takes the form: $$\label{eq:qfim3} J_Q^{jk} = 2{\rm Tr}\left[\left(\Delta\hat{A}_j\Delta\hat{A}_k+\Delta\hat{A}_k\Delta\hat{A}_j\right)\hat{\rho}\right],$$where $$\begin{split} \Delta\hat{A}_k &= \hat{A}_k(\boldsymbol{\theta}) - {\rm Tr}\left(\hat{A}_k(\boldsymbol{\theta})\hat{\rho}\right)\\ &= \sum_n \left(\hat{b}_k^{[n]} - {\rm Tr}\left(\hat{b}_k^{[n]}\hat{\rho}^{[n]}\right) \right) \equiv \sum_n \hat{c}_k^{[n]}, \end{split}$$with $\hat{b}_k^{[n]} = \int_0^1 d\alpha e^{i\alpha\hat{h}^{[n]}}\hat{h}_k^{[n]}e^{-i\alpha\hat{h}^{[n]}}$. 
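The integral representation (\[eq:noncommuting\_generators\]) is easy to sanity-check numerically. The sketch below is illustrative: it uses an arbitrarily chosen two-parameter qubit Hamiltonian $\hat{H}=\theta_1\hat{\sigma}_x+\theta_2\hat{\sigma}_z$ (our test case, not from the paper) and compares a Gauss-Legendre quadrature of the integral with a central finite difference of $\hat{U}(\boldsymbol{\theta})=e^{-i\hat{H}(\boldsymbol{\theta})}$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def H(t1, t2):
    return t1 * sx + t2 * sz   # non-commuting generators

t1, t2 = 0.4, 0.9

# Integral formula: dU/dt1 = -i * int_0^1 e^{-i(1-a)H} sx e^{-iaH} da
nodes, weights = np.polynomial.legendre.leggauss(40)
a = 0.5 * (nodes + 1)          # map nodes from [-1, 1] to [0, 1]
w = 0.5 * weights
dU_int = sum(
    wi * (-1j) * expm(-1j * (1 - ai) * H(t1, t2)) @ sx @ expm(-1j * ai * H(t1, t2))
    for ai, wi in zip(a, w)
)

# Central finite difference in theta_1 for comparison
eps = 1e-6
dU_fd = (expm(-1j * H(t1 + eps, t2)) - expm(-1j * H(t1 - eps, t2))) / (2 * eps)

print(np.max(np.abs(dU_int - dU_fd)))   # agreement at finite-difference accuracy
```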
Thus, (\[eq:qfim3\]) becomes: $$\begin{split} J_Q^{jk}&=2\sum_n{\rm Tr}\left[ \left(\hat{c}_j^{[n]}\hat{c}_k^{[n]}+\hat{c}_k^{[n]}\hat{c}_j^{[n]}\right)\hat{\rho}^{[n]} \right]\\ &+2\sum_{n\neq m}{\rm Tr}\left[ \left(\hat{c}_j^{[n]}\otimes\hat{c}_k^{[m]}+\hat{c}_k^{[n]}\otimes\hat{c}_j^{[m]}\right)\hat{\rho}^{[n,m]} \right]\\ &=4\sum_n{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[n]}\hat{b}_j^{[n]}\hat{b}_k^{[n]}\right]-{\rm Tr}\left[\hat{\rho}^{[n]}\hat{b}_j^{[n]}\right]{\rm Tr}\left[\hat{\rho}^{[n]}\hat{b}_k^{[n]}\right]\right]\\ &+4\sum_{n\neq m}{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[n,m]}\left(\hat{b}_j^{[n]}\otimes\hat{b}_k^{[m]}\right)\right]\right.\\ &\left.-{\rm Tr}\left[\hat{\rho}^{[n]}\hat{b}_j^{[n]}\right]{\rm Tr}\left[\hat{\rho}^{[m]}\hat{b}_k^{[m]}\right]\right]\\ &=\sum_n J_Q^{jk,[1]}\left(\hat{\rho}^{[n]}\right) + \sum_{n\neq m} J_Q^{jk,[2]}\left(\hat{\rho}^{[n,m]}\right), \end{split}$$ where $J_Q^{jk,[1]}$ depends only on one-particle reduced density matrix on subsystem $n$ and $J_Q^{jk,[2]}$ depends on two-particle reduced density matrix on subsystems $n$, $m$. We now restrict to permutationally invariant states [@BD], i.e. states that are invariant under any permutation of its constituents: $\hat{\rho}=\hat{O}_\pi\hat{\rho}\hat{O}_\pi^\dagger$ for all possible $\pi$, where $\hat{O}_\pi$ is the unitary operator for the permutation $\pi$. 
Then, $$\label{eq:qfim_1p2p} J_Q^{jk} = NJ_Q^{jk,[1]}\left(\hat{\rho}^{[1]}\right)+N(N-1)J_Q^{jk,[2]}\left(\hat{\rho}^{[1]},\hat{\rho}^{[2]}\right),$$where $$J_Q^{jk,[1]}=4{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[1]}\hat{b}_j\hat{b}_k\right]-{\rm Tr}\left[\hat{\rho}^{[1]}\hat{b}_j\right]{\rm Tr}\left[\hat{\rho}^{[1]}\hat{b}_k\right]\right]$$only depends on the first order reduced density matrix, $$J_Q^{jk,[2]}=4{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[2]}\left(\hat{b}_j\otimes\hat{b}_k\right)\right]-{\rm Tr}\left[\hat{\rho}^{[1]}\hat{b}_j\right]{\rm Tr}\left[\hat{\rho}^{[1]}\hat{b}_k\right]\right]$$also depends on the second order reduced density matrix. Then, similar observations can be made as were made in Ref. [@BD] for a pure state. For example, if the probe state is a product state, i.e. $\hat{\rho}=\bigotimes_{n=1}^N\hat{\rho}^{[n]}$, and permutationally invariant, then $\hat{\rho}^{[2]}=\hat{\rho}^{[1]}\otimes\hat{\rho}^{[1]}$, such that $J_Q^{jk,[2]}=0$, and so $J_Q^{jk}=NJ_Q^{jk,[1]}$. This implies that quantum correlations are necessary for achieving the Heisenberg scaling $1/N$, which is evidently attainable even when the initial probe state is mixed. However, if both $\hat{\rho}^{[1]}$ and $\hat{\rho}^{[2]}$ are maximally mixed, the Heisenberg scaling is lost, i.e. too much quantum correlation harms the quantum advantage with $N$ parallel resources [@BD; @HGS]. Thus, any quantum enhancement to the estimation precision is provided by the two-particle reduced density matrices of the probe state.
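The vanishing of $J_Q^{jk,[2]}$ for product states follows from the identity $\mathrm{Tr}[(\hat{A}\otimes\hat{B})(\hat{C}\otimes\hat{D})]=\mathrm{Tr}[\hat{A}\hat{C}]\,\mathrm{Tr}[\hat{B}\hat{D}]$ and is simple to confirm numerically. The sketch below is illustrative, using randomly generated Hermitian stand-ins for $\hat{b}_j$, $\hat{b}_k$ and a random single-particle density matrix (none taken from the paper) to check that the two-particle term vanishes when $\hat{\rho}^{[2]}=\hat{\rho}^{[1]}\otimes\hat{\rho}^{[1]}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d=2):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def random_density(d=2):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

b_j, b_k = random_hermitian(), random_hermitian()
rho1 = random_density()
rho2 = np.kron(rho1, rho1)   # product (uncorrelated) two-particle marginal

# J^{jk,[2]} is proportional to Re(Tr[rho2 (b_j x b_k)] - Tr[rho1 b_j] Tr[rho1 b_k])
term = np.trace(rho2 @ np.kron(b_j, b_k)) - np.trace(rho1 @ b_j) * np.trace(rho1 @ b_k)
print(abs(term))   # vanishes for a product state
```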
Moreover, from (\[eq:qcrb\_measure1\]), the set of POVMs, comprising $$\begin{split} \hat{P}_0 &= \hat{\rho}(\boldsymbol{\theta}) = \hat{U}(\boldsymbol{\theta})\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta}),\\ \hat{P}_m &=\frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m} = -i\left[\hat{M}_m(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta})\right] \quad \forall m=1,\ldots,q, \end{split}$$along with one element accounting for normalisation, saturates the QCRB for (\[eq:qfim3\]), provided we have (\[eq:qcrb\_saturate\]), i.e. here $$\begin{split} &2{\rm Tr}\left[\left(\Delta\hat{A}_j\Delta\hat{A}_k-\Delta\hat{A}_k\Delta\hat{A}_j\right)\hat{\rho}\right]=0 \quad \forall j,k\\ \Rightarrow &4\sum_n{\rm Im}{\rm Tr}\left[\hat{\rho}^{[n]}\hat{b}_j^{[n]}\hat{b}_k^{[n]}\right]\\ +&4\sum_{n\neq m}{\rm Im}{\rm Tr}\left[\hat{\rho}^{[n,m]}\left(\hat{b}_j^{[n]}\otimes\hat{b}_k^{[m]}\right)\right]=0\\ \Rightarrow &4\sum_n{\rm Im}{\rm Tr}\left[\hat{\rho}^{[n]}\hat{b}_j^{[n]}\hat{b}_k^{[n]}\right]=0, \end{split}$$since ${\rm Tr}\left[\hat{\rho}^{[n,m]}\left(\hat{b}_j^{[n]}\otimes\hat{b}_k^{[m]}\right)\right]\in\mathbb{R}$. Hence, the attainability of the quantum enhancement to the estimation precision is determined solely by the one-particle reduced density matrices of the probe state. Estimating a Magnetic Field in Three Dimensions {#sec:mag_fld} =============================================== Now consider the task of estimating the components of a magnetic field in three dimensions simultaneously using two-level systems. The Hamilton operator for this system is given by $\hat{h}=\boldsymbol{\hat{\mu}}\cdot\mathbf{B}=\sum_{k=1}^3\hat{\mu}_kB_k=\sum_{k=1}^3(\mu/2)B_k\hat{\sigma}_k:=\sum_{k=1}^3\theta_k\hat{\sigma}_k$, where the magnetic moment $\hat{\mu}_k=\mu\hat{\sigma}_k/2$ is proportional to the spin, $\{\hat{\sigma}_k\}$ are the unnormalized Pauli operators, and $\theta_k=\mu B_k/2$ [@BD]. 
Start with a Greenberger-Horne-Zeilinger (GHZ) type pure state $|\Phi_k\rangle = \left(|\phi_k^{+}\rangle^{\otimes N}+|\phi_k^{-}\rangle^{\otimes N}\right)/\sqrt{2}$, where $|\phi_k^{\pm}\rangle$ is the eigenvector of the Pauli operator $\hat{\sigma}_k$ corresponding to the eigenvalue $\pm 1$ ($k=1$, $2$, $3$ corresponding to the $X$, $Y$, and $Z$ directions). These states are permutationally invariant with first and second order reduced density matrices $\hat{\rho}_k^{[1]}=\mathbb{1}_2/2$ and $\hat{\rho}_k^{[2]}=(|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|+|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|)/2=(\mathbb{1}_2\otimes\mathbb{1}_2+\hat{\sigma}_k\otimes\hat{\sigma}_k)/4$, respectively [@BD]. Now, Ref. [@BD] used the pure state $|\psi\rangle = \mathcal{N}\left(e^{i\delta_1}|\Phi_1\rangle+e^{i\delta_2}|\Phi_2\rangle+e^{i\delta_3}|\Phi_3\rangle\right)$, where $\mathcal{N}$ is the normalization constant and $\{\delta_k\}$ are adjustable local phases. We here intend to estimate the three components of the magnetic field using a mixed state $\hat{\rho}^N$, obtained from the above pure state in the presence of local dephasing, described using two single-particle Kraus operators [@NC], $$\hat{E}_0 = \left[\begin{array}{cc} 1 & 0\\ 0 & e^{-\lambda} \end{array}\right], \qquad \hat{E}_1 = \left[\begin{array}{cc} 0 & 0\\ 0 & \sqrt{1-e^{-2\lambda}} \end{array}\right],$$where $\lambda$ is some constant causing the phase damping, such that the off-diagonal elements of the density matrix decay exponentially to zero with time. 
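As a quick numerical sanity check on these dephasing Kraus operators (a sketch, not part of the original analysis), one can verify the completeness relation $\hat{E}_0^\dagger\hat{E}_0+\hat{E}_1^\dagger\hat{E}_1=\mathbb{1}$ and watch the off-diagonal (coherence) element of a $|+\rangle$ state decay by the factor $e^{-\lambda}$ after one application of the channel:

```python
import numpy as np

lam = 0.3
E0 = np.diag([1.0, np.exp(-lam)])
E1 = np.diag([0.0, np.sqrt(1 - np.exp(-2 * lam))])

# Completeness: E0†E0 + E1†E1 = I
print(E0.conj().T @ E0 + E1.conj().T @ E1)

# One application of the channel to |+><+|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)
rho_out = E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T
print(rho_out[0, 1] / rho[0, 1])   # coherence damped by exp(-lam)
```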
Considering that all particles dephase uniformly, the $N$-particle density matrix of the desired mixed state is then [@JD]: $$\hat{\rho}^N = \sum_{g=0}^N\sum_{\pi_g^N}\pi_g^N\left[\hat{E}_1^{\otimes g}\otimes \hat{E}_0^{\otimes N-g}\right]|\psi\rangle\langle\psi|\pi_g^N\left[\hat{E}_1^{\dagger\otimes g}\otimes \hat{E}_0^{\dagger\otimes N-g}\right],$$where $\pi_g^N$ represents different permutations of $g$ and $N-g$ copies of the $\hat{E}_1$ and $\hat{E}_0$ operators, respectively. Note that the operators $\hat{E}_0$ and $\hat{E}_1$ are Hermitian, so the $\dagger$s can be dropped. Clearly, these states $\hat{\rho}^N$ are permutationally invariant as well, with now first and second order reduced density matrices $\hat{\rho}^{[1]}=\mathbb{1}_2/2$ and $\hat{\rho}_k^{[2]}=[\mathbb{1}_2\otimes\mathbb{1}_2+(\sum_{r=0}^1\hat{E}_r\hat{\sigma}_k\hat{E}_r)\otimes(\sum_{s=0}^1\hat{E}_s\hat{\sigma}_k\hat{E}_s)]/4$, respectively. This is shown in Appendix \[sec:app3\]. For $N=8n$, $n \in \mathbb{N}$ (and $\delta_k=0$ for all $k$), the two-body reduced density matrix of $\hat{\rho}^N$ is an equal mixture of those in all directions (as in the pure state case in Ref. [@BD]), given by $\hat{\rho}^{[2]}=\frac{1}{3}\sum_{k=1}^3\hat{\rho}_k^{[2]}=\frac{1}{4}\mathbb{1}_2\otimes\mathbb{1}_2+\frac{1}{12}\sum_{k=1}^3\left[\left(\sum_{r=0}^1\hat{E}_r\hat{\sigma}_k\hat{E}_r\right)\otimes\left(\sum_{s=0}^1\hat{E}_s\hat{\sigma}_k\hat{E}_s\right)\right]$. For any other $N$, the difference from the form of $\hat{\rho}^{[2]}$ is exponentially small in $N$. This directly follows from the way it was shown in Ref. [@BD] for the pure state case. Hence, we consider the probe state to have marginals $\hat{\rho}^{[1]}=\mathbb{1}_2/2$ and $\hat{\rho}^{[2]}$ as above and calculate the QFIM. 
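The building blocks $\sum_r\hat{E}_r\hat{\sigma}_k\hat{E}_r$ entering $\hat{\rho}_k^{[2]}$ have a simple closed form for this dephasing channel: one finds $e^{-\lambda}\hat{\sigma}_x$, $e^{-\lambda}\hat{\sigma}_y$ and $\hat{\sigma}_z$ (coherences are damped while populations are untouched). This is not spelled out in the text, so the sketch below checks it numerically from the Kraus operators given above:

```python
import numpy as np

lam = 0.3
E = [np.diag([1.0, np.exp(-lam)]),
     np.diag([0.0, np.sqrt(1 - np.exp(-2 * lam))])]
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.diag([1.0, -1.0]).astype(complex)]

# sum_r E_r sigma_k E_r  (the E_r are Hermitian, so the daggers are dropped)
maps = [sum(Er @ s @ Er for Er in E) for s in sig]

for k, (m, s) in enumerate(zip(maps, sig)):
    factor = np.exp(-lam) if k < 2 else 1.0   # e^{-lam} for x, y; 1 for z
    print(np.allclose(m, factor * s))
```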
We get $$\label{eq:qfim_1p} J_Q^{jk,[1]}=2{\rm Tr}\left[\hat{b}_j\hat{b}_k\right],$$ and $$\label{eq:qfim_2p} \begin{split} J_Q^{jk,[2]}&=\frac{1}{3}\sum_{t=1}^3{\rm Tr}\left[\left(\sum_{r=0}^1\hat{E}_r\hat{\sigma}_t\hat{E}_r\otimes\sum_{s=0}^1\hat{E}_s\hat{\sigma}_t\hat{E}_s\right)\left(\hat{b}_j\otimes\hat{b}_k\right)\right]\\ &=\frac{1}{3}\sum_{t=1}^3{\rm Tr}\left[\sum_{r=0}^1\hat{E}_r\hat{\sigma}_t\hat{E}_r\hat{b}_j\right]{\rm Tr}\left[\sum_{s=0}^1\hat{E}_s\hat{\sigma}_t\hat{E}_s\hat{b}_k\right]\\ &=\frac{1}{3}\sum_{t=1}^3{\rm Tr}\left[\hat{\sigma}_t\sum_{r=0}^1\hat{E}_r\hat{b}_j\hat{E}_r\right]{\rm Tr}\left[\hat{\sigma}_t\sum_{s=0}^1\hat{E}_s\hat{b}_k\hat{E}_s\right]\\ &=\frac{2}{3}{\rm Tr}\left[\sum_{t=1}^3{\rm Tr}\left[\hat{\sigma}_t\left(\sum_{r=0}^1\hat{E}_r\hat{b}_j\hat{E}_r\right)\right]\hat{\sigma}_t\left(\sum_{s=0}^1\hat{E}_s\hat{b}_k\hat{E}_s\right)\right]\\ &=\frac{2}{3}{\rm Tr}\left[\left(\sum_{r=0}^1\hat{E}_r\hat{b}_j\hat{E}_r\right)\left(\sum_{s=0}^1\hat{E}_s\hat{b}_k\hat{E}_s\right)\right]. \end{split}$$ Define $\hat{f}_j=\sum_{r=0}^1\hat{E}_r\hat{b}_j\hat{E}_r$ and $\hat{f}_k=\sum_{s=0}^1\hat{E}_s\hat{b}_k\hat{E}_s$. Also, let $\xi=\sqrt{\theta_1^2+\theta_2^2+\theta_3^2}, \eta_k=\frac{\theta_k}{\sqrt{\theta_1^2+\theta_2^2+\theta_3^2}}$ for all $k=1$, $2$, $3$ (corresponding to the $X$, $Y$ and $Z$ directions). Here, (\[eq:qfim\_1p\]) is found to be (following Ref. [@BD]): $$\label{eq:qfim_1p_x} J_Q^{jk,[1]}=4\left[\left(1-{\rm sinc}^2[\xi]\right)\eta_j\eta_k+\delta_{jk}{\rm sinc}^2[\xi]\right],$$ where ${\rm sinc}[\xi]={\rm sin}[\xi]/\xi$. From (\[eq:qfim\_1p2p\]), (\[eq:qfim\_1p\_x\]), (\[eq:qfim\_2p\]), we get: $$\label{eq:qfim_mgfld} \begin{split} J_Q^{jk}&=4N\left[\left(1-{\rm sinc}^2[\xi]\right)\eta_j\eta_k+\delta_{jk}{\rm sinc}^2[\xi]\right]\\ &+\frac{2N(N-1)}{3}{\rm Tr}\left[\hat{f}_j\hat{f}_k\right], \end{split}$$ where the terms ${\rm Tr}\left[\hat{f}_j\hat{f}_k\right]$ can be explicitly calculated. 
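The closed form (\[eq:qfim\_1p\_x\]) can be reproduced numerically from the definition $\hat{b}_k=\int_0^1 d\alpha\, e^{i\alpha\hat{h}}\hat{\sigma}_k e^{-i\alpha\hat{h}}$ with $\hat{h}=\sum_k\theta_k\hat{\sigma}_k$. The sketch below is an independent check with arbitrarily chosen field parameters: it computes $J_Q^{jk,[1]}=2\,\mathrm{Tr}[\hat{b}_j\hat{b}_k]$ by Gauss-Legendre quadrature and compares it with $4[(1-\mathrm{sinc}^2[\xi])\eta_j\eta_k+\delta_{jk}\mathrm{sinc}^2[\xi]]$:

```python
import numpy as np
from scipy.linalg import expm

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.diag([1.0, -1.0]).astype(complex)]

theta = np.array([0.3, -0.5, 0.4])          # arbitrary test point
h = sum(t * s for t, s in zip(theta, sig))
xi = np.linalg.norm(theta)
eta = theta / xi
sinc2 = (np.sin(xi) / xi) ** 2

# b_k = int_0^1 e^{i a h} sigma_k e^{-i a h} da, by Gauss-Legendre quadrature
nodes, weights = np.polynomial.legendre.leggauss(40)
a, w = 0.5 * (nodes + 1), 0.5 * weights
b = [sum(wi * expm(1j * ai * h) @ s @ expm(-1j * ai * h) for ai, wi in zip(a, w))
     for s in sig]

J1_num = np.array([[2 * np.trace(bj @ bk).real for bk in b] for bj in b])
J1_ana = 4 * ((1 - sinc2) * np.outer(eta, eta) + sinc2 * np.eye(3))
print(np.max(np.abs(J1_num - J1_ana)))      # quadrature-level agreement
```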
Since some or all of the terms ${\rm Tr}\left[\hat{b}_j\hat{b}_k\right]$ are non-zero, we can have the terms ${\rm Tr}\left[\hat{f}_j\hat{f}_k\right]$ as non-zero, such that the scaling $1/N$ can be achieved, as the parallel scheme bound without ancillas from Ref. [@DDM] can be tight even for $\beta\neq 0$. Even when ${\rm Tr}\left[\hat{b}_j\hat{b}_k\right]=0$, the terms ${\rm Tr}\left[\hat{f}_j\hat{f}_k\right]$ in general (i.e. when $\hat{E}_0$ and $\hat{E}_1$ need not be local dephasing operators) can be non-zero. This implies that it is possible to achieve the Heisenberg scaling in the presence of noise in the initial probe state, even when such a scaling cannot be achieved in the absence of noise in the initial state. This is because mixed separable states can be as nonclassical as entangled pure states [@PGACHW]. Thus, noise in the initial probe state can act as a feature rather than a bug in attaining the Heisenberg limit. Note though that it is unlikely for all the terms ${\rm Tr}\left[\hat{b}_j\hat{b}_k\right]$ to be zero, since that would mean that the QFIM $J_Q$ is zero for the pure state case from Ref. [@BD]. However, even when some or all of the terms ${\rm Tr}\left[\hat{b}_j\hat{b}_k\right]$ are non-zero, it may be possible for the terms ${\rm Tr}\left[\hat{f}_j\hat{f}_k\right]$ to be such that the QFIM $J_Q$ for the mixed state case considered here is larger than that for the pure state case from Ref. [@BD]. This is because mixed entangled states can be more nonclassical than pure entangled states [@PGACHW]. Thus, noise in the initial probe state can allow for better estimation precision than the case of no noise in the initial state. Although noise is known to reduce quantum correlations in a system in most cases [@HHHH; @NC], noise can also introduce or increase quantum correlations in a system [@DB; @BFP; @SKB; @OCMBRM].
For example, the local dephasing considered in this section is a local unital noise [@SKB], which mostly decreases quantum correlations. If, instead, a local non-unital noise is used to obtain the initial mixed probe state from a classically correlated separable state, such as local dissipation [@OCMBRM] represented by the following single-particle Kraus operators [@NC], the mixed state so obtained can have quantum correlations, which may be activated into entanglement, allowing for better estimation precision [@BIWH; @SC; @HWD; @MCWV]: $$\hat{E}_0 = \left[\begin{array}{cc} 1 & 0\\ 0 & \sqrt{1-e^{-2\kappa}} \end{array}\right], \qquad \hat{E}_1 = \left[\begin{array}{cc} 0 & e^{-\kappa}\\ 0 & 0 \end{array}\right],$$ where $\kappa$ is a constant setting the strength of the amplitude damping. This is why the ancilla-assisted schemes of Ref. [@DDM] yielded a better scaling for amplitude damping than the schemes without ancillas. Nonetheless, if $\hat{\rho}^{[2]}=\hat{\rho}^{[1]}\otimes\hat{\rho}^{[1]}=\mathbb{1}_4/4$, i.e. both $\hat{\rho}^{[1]}$ and $\hat{\rho}^{[2]}$ are maximally mixed, then ${\rm Tr}\left[\hat{f}_j\hat{f}_k\right]=0$, since $\sum_{t=1}^3\left(\sum_{r=0}^1\hat{E}_r\hat{\sigma}_t\hat{E}_r\otimes\sum_{s=0}^1\hat{E}_s\hat{\sigma}_t\hat{E}_s\right)$ would be zero in (\[eq:qfim\_2p\]), such that the best scaling achievable is $1/\sqrt{N}$. Thus, contrary to the conventional wisdom that any amount of noise is harmful, some amount of noise in the initial probe state can be useful and provide a quantum advantage through its quantum correlations, whereas a lot of noise is harmful because it introduces too much quantum correlation into the state.

Noisy Quantum Channel {#sec:noisy}
=====================

We consider a general noisy quantum channel that allows the state $\hat{\rho}$ to evolve not necessarily unitarily. Let $\hat{\Pi}_l(\boldsymbol{\theta})$ be the Kraus operators that describe the dynamical evolution of the system.
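The unital versus non-unital distinction invoked above can be checked directly. In this sketch, the local dephasing pair is taken in the standard form $\hat{E}_0=\sqrt{1-p}\,\mathbb{1}$, $\hat{E}_1=\sqrt{p}\,\hat{\sigma}_3$ (an assumed parametrisation, since only the dissipation pair is written out explicitly above), while the dissipation pair is exactly the one displayed:

```python
import numpy as np

I2 = np.eye(2)

def is_channel(kraus):   # trace preservation: sum_l E_l† E_l = 1
    return np.allclose(sum(E.conj().T @ E for E in kraus), I2)

def is_unital(kraus):    # unitality: sum_l E_l E_l† = 1 (identity is a fixed point)
    return np.allclose(sum(E @ E.conj().T for E in kraus), I2)

p, kappa = 0.3, 0.7      # illustrative noise strengths

# assumed standard form of the local dephasing pair
deph = [np.sqrt(1 - p) * I2, np.sqrt(p) * np.diag([1.0, -1.0])]

# local dissipation (amplitude damping) as displayed in the text
diss = [np.array([[1, 0], [0, np.sqrt(1 - np.exp(-2 * kappa))]]),
        np.array([[0, np.exp(-kappa)], [0, 0]])]

assert is_channel(deph) and is_channel(diss)    # both are valid channels
assert is_unital(deph) and not is_unital(diss)  # only dephasing is unital
```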
The state of the system after the evolution is [@EFD; @YZF] $$\label{eq:kraus_evolution} \hat{\rho}(\boldsymbol{\theta}) = \sum_l \hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta}),$$where $\sum_l\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta}) = \mathbb{1}$. Even when the transformation (\[eq:kraus\_evolution\]) is non-unitary, it may be described by a unitary evolution $\hat{U}_{SB}(\boldsymbol{\theta})$ in a bigger space, comprising the system $S$ and some vacuum state ancillary bath $B$. The evolved state in $S+B$ space is given by $$\begin{split} \hat{\rho}_{SB}(\boldsymbol{\theta})&=\hat{U}_{SB}(\boldsymbol{\theta})\left(\hat{\rho}\otimes|0\rangle\langle 0|\right)\hat{U}_{SB}^\dagger(\boldsymbol{\theta})\\ &=\sum_{l,v}\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\hat{\Pi}_v^\dagger(\boldsymbol{\theta})\otimes|l\rangle\langle v| \end{split}$$Then, following from (\[eq:unitary\_delrho\]), (\[eq:unitary\_mk\]), (\[eq:unitary\_delmk\]) for the noiseless $S+B$ space, we get $$\frac{\partial}{\partial\theta_k}\hat{\rho}_{SB}(\boldsymbol{\theta})=-i\left[\hat{M}_k(\boldsymbol{\theta}),\hat{\rho}_{SB}(\boldsymbol{\theta})\right] =\frac{1}{2}\left(\hat{L}_k\hat{\rho}_{SB}(\boldsymbol{\theta})+\hat{\rho}_{SB}(\boldsymbol{\theta})\hat{L}_k^\dagger\right),$$where $\hat{M}_k(\boldsymbol{\theta}) \equiv i\frac{\partial\hat{U}_{SB}(\boldsymbol{\theta})}{\partial\theta_k}\hat{U}_{SB}^\dagger(\boldsymbol{\theta})$, and $\hat{L}_k = -2i\Delta\hat{M}_k$ is anti-Hermitian, $\Delta\hat{M}_k \equiv \hat{M}_k(\boldsymbol{\theta}) - {\rm Tr}\left(\hat{M}_k(\boldsymbol{\theta})\hat{\rho}_{SB}(\boldsymbol{\theta})\right)$. 
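The dilation can be verified numerically: stacking the Kraus operators into the Stinespring isometry $\hat{V}=\sum_l\hat{\Pi}_l\otimes|l\rangle$ (the restriction of $\hat{U}_{SB}$ to the sector $\hat{\rho}\otimes|0\rangle\langle 0|$) and tracing out $B$ must recover (\[eq:kraus\_evolution\]). The amplitude-damping pair and the probe state below are illustrative choices:

```python
import numpy as np

g = 0.5                                            # illustrative damping strength
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]], complex),
     np.array([[0, np.sqrt(g)], [0, 0]], complex)]
rho = np.array([[0.25, 0.25 - 0.1j], [0.25 + 0.1j, 0.75]])  # some mixed probe state

# Kraus form (eq:kraus_evolution): rho(theta) = sum_l Pi_l rho Pi_l†
rho_kraus = sum(E @ rho @ E.conj().T for E in K)

# isometry V = sum_l Pi_l (x) |l>, so V rho V† = sum_{l,v} Pi_l rho Pi_v† (x) |l><v|
d, L = 2, len(K)
V = np.zeros((d * L, d), complex)
for l, E in enumerate(K):
    V += np.kron(E, np.eye(L)[:, [l]])
rho_SB = V @ rho @ V.conj().T

# tracing out the bath B recovers the Kraus evolution
rho_traced = np.einsum('albl->ab', rho_SB.reshape(d, L, d, L))
assert np.allclose(rho_traced, rho_kraus)
```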
Then, the QFIM from (\[eq:qfim1\]) for $\hat{\rho}_{SB}(\boldsymbol{\theta})$ takes the form: $$\label{eq:noisy_qfim1} \begin{split} J_Q^{jk}&=4{\rm Re}\left[{\rm Tr}\left(\hat{H}^{jk}_1(\boldsymbol{\theta})\left(\hat{\rho}\otimes|0\rangle\langle 0|\right)\right)\right.\\ &\left.-{\rm Tr}\left(\hat{H}^j_2(\boldsymbol{\theta})\left(\hat{\rho}\otimes|0\rangle\langle 0|\right)\right){\rm Tr}\left(\hat{H}^k_2(\boldsymbol{\theta})\left(\hat{\rho}\otimes|0\rangle\langle 0|\right)\right)\right], \end{split}$$ where $$%\begin{split} \hat{H}^{jk}_1(\boldsymbol{\theta})=\frac{\partial\hat{U}_{SB}^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{U}_{SB}(\boldsymbol{\theta})}{\partial\theta_k}, \, \hat{H}^k_2(\boldsymbol{\theta})=i\frac{\partial\hat{U}_{SB}^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{U}_{SB}(\boldsymbol{\theta}). %\end{split}$$ However, when only the system $S$ is monitored and the bath $B$ is not, we recover (\[eq:kraus\_evolution\]) by taking a partial trace with respect to $B$: ${\rm Tr}_B\left(\hat{\rho}_{SB}(\boldsymbol{\theta})\right) = \hat{\rho}(\boldsymbol{\theta})$. Then, if we trace out the bath $B$ before taking the traces in (\[eq:noisy\_qfim1\]), we obtain an upper bound (like those obtained in Refs.
[@EFD; @YZF]) to the QFIM in (\[eq:qfim1\]) for $\hat{\rho}(\boldsymbol{\theta})$: $$\label{eq:noisy_qfim2} C_Q^{jk}=4{\rm Re}\left[{\rm Tr}\left(\hat{K}^{jk}_1(\boldsymbol{\theta})\hat{\rho}\right)-{\rm Tr}\left(\hat{K}^j_2(\boldsymbol{\theta})\hat{\rho}\right){\rm Tr}\left(\hat{K}^k_2(\boldsymbol{\theta})\hat{\rho}\right)\right],$$where $$%\begin{split} \hat{K}^{jk}_1(\boldsymbol{\theta})=\sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}, \, \hat{K}^k_2(\boldsymbol{\theta})=i\sum_p\frac{\partial\hat{\Pi}_p^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_p(\boldsymbol{\theta}), %\end{split}$$such that $$\begin{split} \hat{K}^{jk}_1(\boldsymbol{\theta})\hat{\rho}&={\rm Tr}_B\left[\hat{H}^{jk}_1(\boldsymbol{\theta})\left(\hat{\rho}\otimes|0\rangle\langle 0|\right)\right],\\ \hat{K}^k_2(\boldsymbol{\theta})\hat{\rho}&={\rm Tr}_B\left[\hat{H}^k_2(\boldsymbol{\theta})\left(\hat{\rho}\otimes|0\rangle\langle 0|\right)\right]. \end{split}$$ We prove in Appendix \[sec:app5\] that $C_Q$ from (\[eq:noisy\_qfim2\]) is an upper bound to the QFIM $J_Q$ from (\[eq:qfim1\]) for $\hat{\rho}(\boldsymbol{\theta})$. One may compare these results with those in Ref. [@YZF], where initially pure states in different modes were assumed to evolve independently. We make no such assumption, and our initial state is mixed, so our results are more general. Also, we consider the estimation of multiple parameters, as opposed to the single parameter estimation studied in Ref. [@EFD]. Our upper bound to the QFIM is relevant, since there is an infinitude of Kraus representations $\hat{\Pi}_l(\boldsymbol{\theta})$ of the channel that make the bound equal to the QFIM [@EFD].
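As a sanity check that $C_Q$ upper-bounds the QFIM, the following single-parameter ($j=k$) sketch uses an illustrative channel, not one from the text: a phase rotation $e^{-i\theta\hat{\sigma}_3/2}$ followed by fixed amplitude damping. The exact QFI of $\hat{\rho}(\theta)$ is computed from its eigendecomposition and compared with $C_Q$ built from $\hat{K}_1$ and $\hat{K}_2$:

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]).astype(complex)
g = 0.4                                           # damping strength (illustrative)
E = [np.array([[1, 0], [0, np.sqrt(1 - g)]], complex),
     np.array([[0, np.sqrt(g)], [0, 0]], complex)]
rho0 = 0.5 * np.array([[1, 1], [1, 1]], complex)  # |+><+| probe state

theta = 0.9
u = expm(-0.5j * theta * sz)
P = [Ei @ u for Ei in E]                          # Pi_l(theta) = E_l e^{-i theta sz/2}
dP = [Ei @ (-0.5j * sz) @ u for Ei in E]          # their theta-derivatives

rho = sum(p @ rho0 @ p.conj().T for p in P)       # evolved state rho(theta)
drho = sum(dp @ rho0 @ p.conj().T + p @ rho0 @ dp.conj().T for p, dp in zip(P, dP))

# exact QFI of rho(theta) from its eigendecomposition
w, v = np.linalg.eigh(rho)
J = sum(2 * abs(v[:, a].conj() @ drho @ v[:, b]) ** 2 / (w[a] + w[b])
        for a in range(2) for b in range(2) if w[a] + w[b] > 1e-12)

# upper bound (eq:noisy_qfim2) for this Kraus representation (j = k case)
K1 = sum(dp.conj().T @ dp for dp in dP)
K2 = 1j * sum(dp.conj().T @ p for p, dp in zip(P, dP))
CQ = 4 * (np.trace(K1 @ rho0) - np.trace(K2 @ rho0) ** 2).real

assert J <= CQ + 1e-12     # C_Q upper-bounds the QFI of rho(theta)
```

For these parameter values $C_Q=1$ (the noiseless value) while the damped state's QFI is $0.6$, so the bound holds but is not tight for this particular Kraus representation.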
Now, we claim that (\[eq:noisy\_qfim2\]) is saturated when the following condition is satisfied: $$\label{eq:qfim_bound_saturate} {\rm Im}\left[\sum_l{\rm Tr}\left\lbrace\left(\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}\right\rbrace\right]=0 \quad \forall j,k,$$which is obtained from (\[eq:qcrb\_saturate\]) for the $S+B$ space by tracing out $B$ (see Appendix \[sec:app8\]). That is, the bound (\[eq:noisy\_qfim2\]) is saturated when the expectation, with respect to the initial probe state, of every pairwise product of derivatives of the channel Kraus operators and their adjoints is real. Clearly, when the above condition is satisfied, it is possible to attain the elusive Heisenberg limit even in the most general noisy estimation scenario. The above condition is trivially true for single parameter estimation, since $\sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}$ is Hermitian, so that its expectation is real. Then, the set of POVMs of cardinality $q+2$, comprising the following $q+1$ elements, $$\label{eq:qfim_bound_measure1} \begin{split} \hat{P}_0&=\hat{\rho}(\boldsymbol{\theta})=\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta}),\\ \hat{P}_m&=\frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m}=\sum_l\left[\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_m}\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\right.\\ &\left.+\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_m}\right] \qquad \forall m=1,\ldots,q, \end{split}$$along with one element accounting for normalisation, saturates (\[eq:noisy\_qfim2\]) (see Appendices \[sec:app9\] and \[sec:app10\]).

Upper Bound to the QFIM for $N$ Particles Evolving via Noisy Channel {#sec:qfim_bound}
====================================================================

Consider now a probe comprising $N$ particles that evolves not necessarily unitarily.
Then, the QFIM (\[eq:qfim3\]) is for unitary evolution of a probe comprising more than $N$ particles in $S+B$ space. The evolution of the probe comprising $N$ particles in $S$ space alone is described here by some unital Kraus operators $\hat{\Pi}_l(\boldsymbol{\theta})=\frac{1}{\sqrt{L}}e^{-i\hat{G}_l(\boldsymbol{\theta})}$, where $l=1,\ldots,L$, $\sum_l\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta})=\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})=\mathbb{1}$, and $$\hat{G}_l(\boldsymbol{\theta}) = \sum_{n=1}^N\hat{\pi}_{l_n}^{[n]} = \sum_{k=1}^q\theta_k\sum_{n=1}^N\hat{\pi}_{l_nk}^{[n]} \equiv \sum_{k=1}^q\theta_k\hat{G}_{lk}.$$ The generators $\hat{G}_{lk}$ do not depend on $\boldsymbol{\theta}$ and do not generally commute with each other. Then, as in Section \[sec:qfim\], $$\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}=\frac{-i}{L\sqrt{L}}\int_0^1 d\alpha e^{-i(1-\alpha)\hat{G}_l(\boldsymbol{\theta})}\frac{\partial\hat{G}_l(\boldsymbol{\theta})}{\partial\theta_k}e^{-i\alpha\hat{G}_l(\boldsymbol{\theta})}.$$So, ${\small{\rm Tr}_B\left[\hat{M}_k(\boldsymbol{\theta})\right]=i\sum_l\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_l^\dagger(\boldsymbol{\theta})=\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{B}_{lk}(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}$, where $\hat{M}_k(\boldsymbol{\theta})$ is from (\[eq:unitary\_mk\]) for $S+B$ space, $\sum_l\hat{B}_{lk}(\boldsymbol{\theta})={\rm Tr}_B\left[\hat{A}_k(\boldsymbol{\theta})\right]=\frac{1}{L}\sum_l\int_0^1 d\alpha e^{i\alpha\hat{G}_l(\boldsymbol{\theta})}\hat{G}_{lk}e^{-i\alpha\hat{G}_l(\boldsymbol{\theta})}$, since we have $\frac{\partial\hat{G}_l(\boldsymbol{\theta})}{\partial\theta_k}=\hat{G}_{lk}$. 
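A minimal instance of such unital Kraus operators, for $N=2$ qubits with two noise branches per particle (an illustrative choice of generators $\hat{\pi}_{l_nk}$, not one fixed by the text), confirms both completeness relations:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
theta = [0.4, -0.2]              # q = 2 parameters (illustrative values)

# per-particle generators pi_{l_n k}: branch 0 couples (x, z), branch 1 couples (z, x)
branches = [[sx, sz], [sz, sx]]  # branches[l_n][k] = pi_{l_n k}

def G(l1, l2):
    """G_l = sum_n sum_k theta_k pi_{l_n k}^{[n]} for N = 2 particles."""
    h = lambda l: sum(t * p for t, p in zip(theta, branches[l]))
    return np.kron(h(l1), np.eye(2)) + np.kron(np.eye(2), h(l2))

L = len(branches) ** 2           # total number of Kraus operators
Pi = [expm(-1j * G(l1, l2)) / np.sqrt(L)
      for l1 in range(2) for l2 in range(2)]

# each Pi_l is (1/sqrt(L)) times a unitary, so the channel is unital:
assert np.allclose(sum(P.conj().T @ P for P in Pi), np.eye(4))  # trace preserving
assert np.allclose(sum(P @ P.conj().T for P in Pi), np.eye(4))  # unital
```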
Then, we have in $S+B$ space $$\frac{\partial\hat{U}_{SB}^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{U}_{SB}(\boldsymbol{\theta})}{\partial\theta_k}=\hat{A}_j(\boldsymbol{\theta})\hat{U}_{SB}^\dagger(\boldsymbol{\theta})\hat{U}_{SB}(\boldsymbol{\theta})\hat{A}_k(\boldsymbol{\theta}).$$Tracing out the bath $B$, we get (see Appendix \[sec:app11\] to understand why an extra $1/L$ does not arise below): $$\begin{split} \sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}&=\sum_l\hat{B}_{lj}(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta})\hat{B}_{lk}(\boldsymbol{\theta})\\ \Rightarrow{\rm Tr}\left[\sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\right]&={\rm Tr}\left[\sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_l(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\right]={\rm Tr}\left[\sum_l\hat{B}_{lj}(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta})\hat{B}_{lk}(\boldsymbol{\theta})\right], \end{split}$$ since $\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})=\mathbb{1}$. 
Also, we have in $S+B$ space $$i\frac{\partial\hat{U}_{SB}^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{U}_{SB}(\boldsymbol{\theta})=-\hat{A}_k(\boldsymbol{\theta})\hat{U}_{SB}^\dagger(\boldsymbol{\theta})\hat{U}_{SB}(\boldsymbol{\theta}).$$ Again, tracing out the bath $B$, we get: $$i\sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_l(\boldsymbol{\theta})=-\sum_l\hat{B}_{lk}(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta}).$$ Then, we get the desired upper bound $C_Q$ to the QFIM from (\[eq:qfim3\]) as follows: $$\label{eq:qfim4} \begin{split} C_Q^{jk}&=4{\rm Re}\left[{\rm Tr}\left(\sum_l\hat{B}_{lj}(\boldsymbol{\theta})\hat{B}_{lk}(\boldsymbol{\theta})\hat{\rho}\right)\right.\\ &\left.-{\rm Tr}\left(\sum_p\hat{B}_{pj}(\boldsymbol{\theta})\hat{\rho}\right){\rm Tr}\left(\sum_r\hat{B}_{rk}(\boldsymbol{\theta})\hat{\rho}\right)\right], \end{split}$$ since $\sum_l\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\Pi}_l(\boldsymbol{\theta})=\mathbb{1}$. 
Also, here we have $$\sum_l\hat{B}_{lk}(\boldsymbol{\theta})=\sum_n\sum_{l_n}\hat{d}_{l_nk}^{[n]}=\frac{1}{L}\sum_n\sum_{l_n}\int_0^1 d\alpha e^{i\alpha\hat{\pi}_{l_n}^{[n]}}\hat{\pi}_{l_nk}^{[n]}e^{-i\alpha\hat{\pi}_{l_n}^{[n]}}.$$ Thus, (\[eq:qfim4\]) becomes: $$\begin{split} C_Q^{jk}&=4\sum_n{\rm Re}\left[{\rm Tr}\left[\sum_{l_n}\hat{\rho}^{[n]}\hat{d}_{l_nj}^{[n]}\hat{d}_{l_nk}^{[n]}\right]\right.\\ &\left.-{\rm Tr}\left[\sum_{l_{n_1}}\hat{\rho}^{[n]}\hat{d}_{l_{n_1}j}^{[n]}\right]{\rm Tr}\left[\sum_{l_{n_2}}\hat{\rho}^{[n]}\hat{d}_{l_{n_2}k}^{[n]}\right]\right]\\ &+4\sum_{n\neq m}\sum_{l_n,l_m}{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[n,m]}\left(\hat{d}_{l_nj}^{[n]}\otimes\hat{d}_{l_mk}^{[m]}\right)\right]\right.\\ &\left.-{\rm Tr}\left[\hat{\rho}^{[n]}\hat{d}_{l_nj}^{[n]}\right]{\rm Tr}\left[\hat{\rho}^{[m]}\hat{d}_{l_mk}^{[m]}\right]\right]\\ &=\sum_n C_Q^{jk,[1]}\left(\hat{\rho}^{[n]}\right) + \sum_{n\neq m} C_Q^{jk,[2]}\left(\hat{\rho}^{[n,m]}\right), \end{split}$$where $C_Q^{jk,[1]}$ depends only on one-particle reduced density matrix on subsystem $n$ and $C_Q^{jk,[2]}$ depends on two-particle reduced density matrix on subsystems $n$, $m$. 
Further, if we restrict ourselves to permutationally invariant states only, the upper bound to the QFIM from (\[eq:qfim\_1p2p\]) is as follows: $$\label{eq:qfim_bound_1p2p} C_Q^{jk} = NC_Q^{jk,[1]}\left(\hat{\rho}^{[1]}\right)+N(N-1)C_Q^{jk,[2]}\left(\hat{\rho}^{[1]},\hat{\rho}^{[2]}\right),$$where $$C_Q^{jk,[1]}=4\sum_{p,r}{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[1]}\hat{d}_{pj}\hat{d}_{pk}\right]-{\rm Tr}\left[\hat{\rho}^{[1]}\hat{d}_{pj}\right]{\rm Tr}\left[\hat{\rho}^{[1]}\hat{d}_{rk}\right]\right]$$only depends on the first order reduced density matrix, while $$\begin{split} C_Q^{jk,[2]}&=4\sum_{p,r}{\rm Re}\left[{\rm Tr}\left[\hat{\rho}^{[2]}\left(\hat{d}_{pj}\otimes\hat{d}_{rk}\right)\right]\right.\\ &\left.-{\rm Tr}\left[\hat{\rho}^{[1]}\hat{d}_{pj}\right]{\rm Tr}\left[\hat{\rho}^{[1]}\hat{d}_{rk}\right]\right] \end{split}$$depends on the second order reduced density matrix as well. Clearly, when the two-particle reduced density matrix of the initial probe state is a product state, we get $C_Q^{jk,[2]}=0$. When both the one- and two-particle reduced density matrices of the initial probe state are maximally mixed, we again get $C_Q^{jk,[2]}=0$. Thus, a precision scaling of $1/N$ cannot be achieved when there is either no correlation or too much quantum correlation in the initial state, as in the unitary channel case. Hence, any quantum enhancement to the estimation precision is provided by the two-particle reduced density matrices of the probe state.
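The vanishing of $C_Q^{jk,[2]}$ for a product two-particle marginal holds summand by summand, since ${\rm Tr}[(\hat{\rho}^{[1]}\otimes\hat{\rho}^{[1]})(\hat{d}_{pj}\otimes\hat{d}_{rk})]={\rm Tr}[\hat{\rho}^{[1]}\hat{d}_{pj}]\,{\rm Tr}[\hat{\rho}^{[1]}\hat{d}_{rk}]$. A quick numerical check, with random Hermitian matrices as hypothetical stand-ins for $\hat{d}_{pj}$ and $\hat{d}_{rk}$ (the actual operators depend on the channel):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d=2):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def rand_state(d=2):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rho1 = rand_state()
rho2 = np.kron(rho1, rho1)          # product two-particle reduced density matrix

dj, dk = rand_herm(), rand_herm()   # hypothetical stand-ins for d_{pj}, d_{rk}
term = (np.trace(rho2 @ np.kron(dj, dk))
        - np.trace(rho1 @ dj) * np.trace(rho1 @ dk)).real
assert abs(term) < 1e-12            # every summand of C_Q^{jk,[2]} vanishes
```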
Now, from (\[eq:qfim\_bound\_measure1\]), the set of POVMs comprising $$\begin{split} \hat{P}_0&=\hat{\rho}(\boldsymbol{\theta})=\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta}),\\ \hat{P}_m&=\frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m}=\sum_l\left[\hat{\Pi}_l(\boldsymbol{\theta})\hat{B}_{lm}(\boldsymbol{\theta})\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\right.\\ &\left.+\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\hat{B}_{lm}(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\right] \qquad \forall m=1,\ldots,q, \end{split}$$along with one element accounting for normalisation, saturates the upper bound (\[eq:qfim4\]) to the QFIM, provided we have (\[eq:qfim\_bound\_saturate\]), i.e. here $$\begin{split} &4\sum_l{\rm Im}\left[{\rm Tr}\left(\hat{B}_{lj}(\boldsymbol{\theta})\hat{B}_{lk}(\boldsymbol{\theta})\hat{\rho}\right)\right]=0 \quad \forall j,k\\ \Rightarrow &4\sum_n{\rm Im}{\rm Tr}\left[\sum_{l_n}\hat{\rho}^{[n]}\hat{d}_{l_nj}^{[n]}\hat{d}_{l_nk}^{[n]}\right]\\ +&4\sum_{n\neq m}\sum_{l_n,l_m}{\rm Im}{\rm Tr}\left[\hat{\rho}^{[n,m]}\left(\hat{d}_{l_nj}^{[n]}\otimes\hat{d}_{l_mk}^{[m]}\right)\right]=0\\ \Rightarrow &4\sum_n{\rm Im}{\rm Tr}\left[\sum_{l_n}\hat{\rho}^{[n]}\hat{d}_{l_nj}^{[n]}\hat{d}_{l_nk}^{[n]}\right]=0, \end{split}$$since $\sum_{l_n,l_m}{\rm Tr}\left[\hat{\rho}^{[n,m]}\left(\hat{d}_{l_nj}^{[n]}\otimes\hat{d}_{l_mk}^{[m]}\right)\right]\in\mathbb{R}$. Hence, the attainability of the quantum enhancement to the estimation precision is determined solely by the one-particle reduced density matrices of the probe state. Consider the magnetic field example again here in the context of noisy channel. The same permutationally invariant mixed input probe state is used. Thus, the first and second order marginals are the same. 
Moreover, for the purposes of this example here, each Pauli operator $\hat{\sigma}_k$ for $k=1,2,3$ (corresponding to $X$, $Y$ and $Z$ directions) can be split into a sum of two single particle Kraus operators as $\hat{\sigma}_k=\sum_{l=1}^2\hat{\pi}_{lk}$, so that $\hat{\pi}_l=\sum_{k=1}^3\theta_k\hat{\pi}_{lk}$, e.g. $$\begin{split} \hat{\sigma}_1&=\left[\begin{array}{cc}0 & 1\\1 & 0\end{array}\right]=\left[\begin{array}{cc}0 & 1\\0 & 0\end{array}\right]+\left[\begin{array}{cc}0 & 0\\1 & 0\end{array}\right],\\ \hat{\sigma}_2&=\left[\begin{array}{cc}0 & -i\\i & 0\end{array}\right]=\left[\begin{array}{cc}0 & -i\\0 & 0\end{array}\right]+\left[\begin{array}{cc}0 & 0\\i & 0\end{array}\right],\\ \hat{\sigma}_3&=\left[\begin{array}{cc}1 & 0\\0 & -1\end{array}\right]=\left[\begin{array}{cc}1 & 0\\0 & 0\end{array}\right]+\left[\begin{array}{cc}0 & 0\\0 & -1\end{array}\right]. \end{split}$$ One can verify that such a decomposition for each Pauli operator $\hat{\sigma}_k$ satisfies $\sum_l\hat{\pi}_{lk}^\dagger\hat{\pi}_{lk}=\sum_l\hat{\pi}_{lk}\hat{\pi}_{lk}^\dagger=\mathbb{1}_2$. Then, $$\hat{d}_{lk}=\frac{1}{2}\int_0^1 d\alpha e^{i\alpha\hat{\pi}_l}\hat{\pi}_{lk}e^{-i\alpha\hat{\pi}_l}.$$ Then, we get: $$\label{eq:qfim_bound_1p} C_Q^{jk,[1]}=\sum_{l,p}{\rm Re}\left[2{\rm Tr}\left[\hat{d}_{lj}\hat{d}_{lk}\right]-{\rm Tr}\left[\hat{d}_{lj}\right]{\rm Tr}\left[\hat{d}_{pk}\right]\right],$$and $$\label{eq:qfim_bound_2p} \begin{split} C_Q^{jk,[2]}&=\frac{1}{3}\sum_{l,p}\sum_{t=1}^3{\rm Re}\left[{\rm Tr}\left[\left(\sum_{r=0}^1\hat{E}_r\hat{\sigma}_t\hat{E}_r\otimes\sum_{s=0}^1\hat{E}_s\hat{\sigma}_t\hat{E}_s\right)\right.\right.\\ &\left.\left.\times\left(\hat{d}_{lj}\otimes\hat{d}_{pk}\right)\right]\right]=\frac{2}{3}\sum_{l,p}{\rm Re}\left[{\rm Tr}\left[\left(\sum_{r=0}^1\hat{E}_r\hat{d}_{lj}\hat{E}_r\right)\right.\right.\\ &\left.\left.\times\left(\sum_{s=0}^1\hat{E}_s\hat{d}_{pk}\hat{E}_s\right)\right]\right]. 
\end{split}$$ Define $\hat{g}_{lj}=\sum_{r=0}^1\hat{E}_r\hat{d}_{lj}\hat{E}_r$ and $\hat{g}_{pk}=\sum_{s=0}^1\hat{E}_s\hat{d}_{pk}\hat{E}_s$. Thus, from (\[eq:qfim\_bound\_1p2p\]), (\[eq:qfim\_bound\_1p\]) and (\[eq:qfim\_bound\_2p\]), we get: $$\begin{split} C_Q^{jk}&=N\sum_{l,p}{\rm Re}\left[2{\rm Tr}\left[\hat{d}_{lj}\hat{d}_{lk}\right]-{\rm Tr}\left[\hat{d}_{lj}\right]{\rm Tr}\left[\hat{d}_{pk}\right]\right]\\ &+\frac{2N(N-1)}{3}\sum_{l,p}{\rm Re}\left[{\rm Tr}\left[\hat{g}_{lj}\hat{g}_{pk}\right]\right], \end{split}$$where all the quantities may be explicitly calculated. Again, note that when the terms ${\rm Tr}\left[\hat{d}_{lj}\hat{d}_{pk}\right]$ are all zero, the terms ${\rm Tr}\left[\hat{g}_{lj}\hat{g}_{pk}\right]$ can in general (i.e. when $\hat{E}_0$ and $\hat{E}_1$ need not be local dephasing operators) be non-zero, so that the Heisenberg limit can be achieved in the presence of noise in the initial probe state even when it cannot be achieved in its absence. Moreover, when the terms ${\rm Tr}\left[\hat{d}_{lj}\hat{d}_{pk}\right]$ are not all zero, the terms ${\rm Tr}\left[\hat{g}_{lj}\hat{g}_{pk}\right]$ can be such that $C_Q$ with noise in the initial probe state, e.g. via the operators $\hat{E}_0$ and $\hat{E}_1$ for local dissipation, is larger than $C_Q$ without such noise, so that the estimation precision can be better with a noisy initial probe state than with a noiseless one. We next consider the more general situation, where the noisy channel need not be unital, and illustrate that noise in the channel can itself serve as a feature rather than a bug: even when the Heisenberg precision scaling cannot be achieved with a unitary channel, it is possible to attain, and in fact beat, the Heisenberg scaling with a noisy channel.
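The stated properties of the Pauli splitting used above can be checked directly; the matrices below are exactly the decompositions displayed in the text:

```python
import numpy as np

I2 = np.eye(2)

paulis = {1: np.array([[0, 1], [1, 0]], complex),
          2: np.array([[0, -1j], [1j, 0]]),
          3: np.array([[1, 0], [0, -1]], complex)}

# sigma_k = pi_{1k} + pi_{2k}, with the splittings given in the text
splits = {1: [np.array([[0, 1], [0, 0]], complex), np.array([[0, 0], [1, 0]], complex)],
          2: [np.array([[0, -1j], [0, 0]]), np.array([[0, 0], [1j, 0]])],
          3: [np.array([[1, 0], [0, 0]], complex), np.array([[0, 0], [0, -1]], complex)]}

for k, (p1, p2) in splits.items():
    assert np.allclose(p1 + p2, paulis[k])                       # sums to sigma_k
    assert np.allclose(p1.conj().T @ p1 + p2.conj().T @ p2, I2)  # sum pi† pi = 1
    assert np.allclose(p1 @ p1.conj().T + p2 @ p2.conj().T, I2)  # sum pi pi† = 1
```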
Noise in Channel as a Feature rather than a Bug {#sec:noise_feature}
===============================================

We now look at the utility of noise in a general channel for achieving, or even beating, the Heisenberg precision limit. Consider first a mixed probe state comprising $N$ particles that evolves through a unitary channel, with the $N$ particles undergoing $N$ independent $\boldsymbol{\theta}$-dependent unitary evolutions, i.e. the unitary operator of the channel is a product of $N$ independent unitary operators $\hat{U}(\boldsymbol{\theta})=\bigotimes_{n=1}^N\hat{U}_{(n)}(\boldsymbol{\theta})$. Then, as in (\[eq:noisy\_qfim1\]), the QFIM takes the form: $$\begin{split} J_Q^{jk}&=4{\rm Re}\sum_n\left[{\rm Tr}\left(\frac{\partial\hat{U}_{(n)}^{\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{U}_{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)-{\rm Tr}\left(i\frac{\partial\hat{U}_{(n)}^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\hat{U}_{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right){\rm Tr}\left(i\frac{\partial\hat{U}_{(n)}^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{U}_{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right)\right]\\ &+4{\rm Re}\sum_{n\neq m}\left[{\rm Tr}\left\lbrace\left(\frac{\partial\hat{U}_{(n)}^{\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{U}_{(n)}(\boldsymbol{\theta})\otimes\hat{U}_{(m)}^\dagger(\boldsymbol{\theta})\frac{\partial\hat{U}_{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}^{[n,m]}\right\rbrace-{\rm Tr}\left(i\frac{\partial\hat{U}_{(n)}^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\hat{U}_{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right){\rm Tr}\left(i\frac{\partial\hat{U}_{(m)}^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{U}_{(m)}(\boldsymbol{\theta})\hat{\rho}^{[m]}\right)\right]\\ &=J_Q^{jk,[n]}+J_Q^{jk,[n,m]}.
\end{split}$$ Now, note that the first term $J_Q^{jk,[n]}$ is of O($N$) and the second term $J_Q^{jk,[n,m]}$ is of O($N^2$), as they involve $N$ and $N(N-1)$ terms, respectively. For the Heisenberg scaling of $1/N$ to be attained, the term $J_Q^{jk,[n,m]}$ must therefore be non-zero, implying that quantum correlations amongst the particles play a role in attaining it. As observed earlier, if the probe state is a product state, i.e. $\hat{\rho}=\bigotimes_{n=1}^N\hat{\rho}^{[n]}$, then we have $\hat{\rho}^{[n,m]}=\hat{\rho}^{[n]}\otimes\hat{\rho}^{[m]}$, and consequently $J_Q^{jk,[n,m]}=0$, such that the Heisenberg scaling is lost and the covariance scales as $1/\sqrt{N}$ at best. Also, if both $\hat{\rho}^{[n]}$ and $\hat{\rho}^{[n,m]}$ are maximally mixed, the Heisenberg scaling is again lost and the best scaling for the covariance is $1/\sqrt{N}$, implying that too much quantum correlation harms the quantum advantage with $N$ parallel resources. Classical correlations in the initial probe state cannot be converted into quantum correlations by a unitary channel and cannot allow for an advantage over the scaling $1/\sqrt{N}$. Thus, any quantum enhancement to the estimation precision is provided by the two-particle reduced density matrices of the probe state.
Notice that the saturability condition (\[eq:qcrb\_saturate\]) here yields: $$\begin{split} &4{\rm Im}\sum_n{\rm Tr}\left(\frac{\partial\hat{U}_{(n)}^{\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{U}_{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)+4{\rm Im}\sum_{n\neq m}{\rm Tr}\left\lbrace\left(\frac{\partial\hat{U}_{(n)}^{\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{U}_{(n)}(\boldsymbol{\theta})\otimes\hat{U}_{(m)}^\dagger(\boldsymbol{\theta})\frac{\partial\hat{U}_{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}^{[n,m]}\right\rbrace=0\\ \Rightarrow &4{\rm Im}\sum_n{\rm Tr}\left(\frac{\partial\hat{U}_{(n)}^{\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{U}_{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)=0, \end{split}$$ since $\hat{U}_{(n/m)}^{\dagger}(\boldsymbol{\theta})\hat{U}_{(n/m)}(\boldsymbol{\theta})=\mathbb{1}_2 \, \forall n,m$. Clearly, the attainability of the quantum enhancement to the estimation precision is determined solely by the one-particle reduced density matrices of the initial mixed probe state. Next, consider the case of a mixed initial probe state, comprising $N$ particles, evolving through a noisy quantum channel, and that the $N$ particles of the initial probe state undergo $N$ independent $\boldsymbol{\theta}$-dependent evolutions, i.e. the Kraus operator of the noisy quantum channel is a product of $N$ independent Kraus operators $\hat{\Pi}_l(\boldsymbol{\theta})=\bigotimes_{n=1}^N\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})$, where we have $l=(l_1,l_2,\ldots,l_N)$. 
Then, (\[eq:noisy\_qfim2\]) takes the form: $$\begin{split} C_Q^{jk}&=4{\rm Re}\sum_n\left[{\rm Tr}\left(\sum_{l_n}\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)-{\rm Tr}\left(i\sum_{l_{n_1}}\frac{\partial\hat{\Pi}_{l_{n_1}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_{n_1}}^{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right){\rm Tr}\left(i\sum_{l_{n_2}}\frac{\partial\hat{\Pi}_{l_{n_2}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_{n_2}}^{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right)\right]\\ &+4{\rm Re}\sum_{n\neq m}\sum_{l_n,l_m}\left[{\rm Tr}\left\lbrace\left(\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\otimes\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}^{[n,m]}\right\rbrace\right.\\ &\left.-{\rm Tr}\left(i\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right){\rm Tr}\left(i\frac{\partial\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})\hat{\rho}^{[m]}\right)\right]\\ &=C_Q^{jk,[n]}+C_Q^{jk,[n,m]}. \end{split}$$ Again, note that the first term $C_Q^{jk,[n]}$ is of O($N$) and the second term $C_Q^{jk,[n,m]}$ is of O($N^2$), as they involve $N$ and $N(N-1)$ terms, respectively. For the Heisenberg scaling of $1/N$ or better to be attained, the term $C_Q^{jk,[n,m]}$ must therefore be non-zero, implying that quantum correlations amongst the particles play a role in attaining it. Now, if the initial probe state is separable but not a product state, this can still lead to $C_Q^{jk,[n,m]}\neq 0$.
This is because, as noted earlier, although noise is widely known to reduce quantum correlations in a system in most cases [@HHHH; @NC], noise can also introduce or increase quantum correlations [@DB; @BFP; @SKB; @OCMBRM], which may then be activated into entanglement [@MCWV; @PGACHW]. Even without quantum correlations between the particles of the initial probe state, an estimation precision scaling of $1/N$ or better can be achieved when the initial probe state has classical correlations, which can be converted into quantum correlations by non-unital noise in the channel, unlike in the cases of a mixed state evolving unitarily or unitally considered earlier. Thus, noise in the quantum channel can act as a feature rather than a bug, since the estimation precision achievable with a noisy channel in some situations is impossible with a noiseless channel. However, if both $\hat{\rho}^{[n]}$ and $\hat{\rho}^{[n,m]}$ are maximally mixed, we get $C_Q^{jk,[n,m]}=0$, so a best precision scaling of $1/\sqrt{N}$ can be achieved. Moreover, if there exists some Kraus representation $\hat{\Pi}_l(\boldsymbol{\theta})$ of the quantum channel which renders $C_Q^{jk,[n,m]}=0$, then the covariance scales as $1/\sqrt{N}$ at best, even when the particles of the initial probe state are entangled. Extending the argument from Ref. [@EFD] to the multiparameter case, the covariance also scales as $1/\sqrt{N}$ at most even in the presence of feedback control. Thus, any quantum enhancement to the estimation precision is provided by the two-particle reduced density matrices of the initial probe state.
The saturability condition (\[eq:qfim\_bound\_saturate\]) here becomes: $$\begin{split} &4{\rm Im}\sum_n{\rm Tr}\left(\sum_{l_n}\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)+4{\rm Im}\sum_{n\neq m}\sum_{l_n,l_m}{\rm Tr}\left\lbrace\left(\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\otimes\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}^{[n,m]}\right\rbrace=0\\ \Rightarrow &4{\rm Im}\sum_n{\rm Tr}\left(\sum_{l_n}\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)=0, \end{split}$$ since $\sum_{l_n/l_m}\hat{\Pi}_{l_n/l_m}^{(n/m)\dagger}(\boldsymbol{\theta})\hat{\Pi}_{l_n/l_m}^{(n/m)}(\boldsymbol{\theta})=\mathbb{1}_2 \, \forall n,m$. Clearly, the attainability of the quantum enhancement to the estimation precision is determined solely by the one-particle reduced density matrices of the probe state. Now, in terms of the evolved probe state $\hat{\rho}(\boldsymbol{\theta})$, (\[eq:noisy\_qfim2\]) takes the following form. We get the below $C_Q$ from $J_Q$ defined in the $S+B$ space by tracing out the bath $B$, and this is equivalent to $C_Q$ in terms of the initial state. 
$$\begin{split} C_Q^{jk}&=4{\rm Re}\left[{\rm Tr}\left(\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\hat{\rho}(\boldsymbol{\theta})\right)-{\rm Tr}\left(i\sum_p\hat{\Pi}_p(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_p^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\hat{\rho}(\boldsymbol{\theta})\right){\rm Tr}\left(i\sum_r\hat{\Pi}_r(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_r^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}(\boldsymbol{\theta})\right)\right]\\ &=4{\rm Re}\sum_n\left[{\rm Tr}\left(\sum_{l_n}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})\hat{\rho}^{[n]}(\boldsymbol{\theta})\right)\right.\\ &\left.-{\rm Tr}\left(i\sum_{l_{n_1}}\hat{\Pi}_{l_{n_1}}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_1}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\rho}^{[n]}(\boldsymbol{\theta})\right){\rm Tr}\left(i\sum_{l_{n_2}}\hat{\Pi}_{l_{n_2}}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_2}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}(\boldsymbol{\theta})\right)\right]\\ &+4{\rm Re}\sum_{n\neq m}\sum_{l_n,l_m}\left[{\rm Tr}\left\lbrace\left(\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\otimes\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\right)\hat{\rho}^{[n,m]}(\boldsymbol{\theta})\right\rbrace\right.\\ &\left.-{\rm 
Tr}\left(i\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\rho}^{[n]}(\boldsymbol{\theta})\right){\rm Tr}\left(i\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[m]}(\boldsymbol{\theta})\right)\right]\\ &=C_Q^{jk,[n]}+C_Q^{jk,[n,m]}. \end{split}$$ Clearly, if the final probe state is a product state, we get $C_Q^{jk,[n,m]}=0$, so that a scaling of at best $1/\sqrt{N}$ can be attained. This implies that the noise in the channel must introduce quantum correlations between the particles of the probe state in order to provide a quantum advantage, i.e. an estimation precision scaling of $1/N$ or better. Also, if both $\hat{\rho}^{[n]}(\boldsymbol{\theta})$ and $\hat{\rho}^{[n,m]}(\boldsymbol{\theta})$ are maximally mixed, we get $C_Q^{jk,[n,m]}=0$. This implies that heavy noise in the channel can introduce excessive quantum correlations between the particles of the probe state, again limiting the achievable precision scaling to at best $1/\sqrt{N}$. Thus, a moderate amount of noise in the quantum channel can act as a feature rather than a bug by introducing quantum correlations into the system, but excessive noise destroys the quantum advantage achievable with $N$ parallel resources.
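The structural reason why product and maximally mixed reduced states give no two-particle contribution can be illustrated numerically: for any two-particle term of the form ${\rm Tr}[(\hat{A}\otimes\hat{B})\hat{\rho}^{[n,m]}]$, a product state factorizes it into the one-particle data ${\rm Tr}(\hat{A}\hat{\rho}^{[n]}){\rm Tr}(\hat{B}\hat{\rho}^{[m]})$, so the covariance-like combination of the two vanishes. The sketch below is schematic, not the full $C_Q$ expression: the operators $\hat{A}=\hat{B}=\hat{\sigma}_x$ stand in for the Kraus-derivative factors, and the states are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d, rng):
    # Random full-rank density matrix from a Ginibre matrix
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def cross_term(A, B, rho_nm):
    # Schematic covariance-like combination:
    # Tr[(A x B) rho^{[n,m]}] - Tr[A rho^{[n]}] Tr[B rho^{[m]}]
    d = A.shape[0]
    r = rho_nm.reshape(d, d, d, d)
    rho_n = np.trace(r, axis1=1, axis2=3)  # partial trace over particle m
    rho_m = np.trace(r, axis1=0, axis2=2)  # partial trace over particle n
    return (np.trace(np.kron(A, B) @ rho_nm)
            - np.trace(A @ rho_n) * np.trace(B @ rho_m))

sx = np.array([[0.0, 1.0], [1.0, 0.0]])  # stand-in for the derivative factors

# Product state: the combination vanishes -> at best 1/sqrt(N) scaling
rho_prod = np.kron(random_density_matrix(2, rng), random_density_matrix(2, rng))

# Maximally mixed two-particle state: it also vanishes
rho_mm = np.eye(4) / 4

# A Bell state, by contrast, gives a nonzero value
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)

print(abs(cross_term(sx, sx, rho_prod)))  # ~0
print(abs(cross_term(sx, sx, rho_mm)))    # ~0
print(abs(cross_term(sx, sx, rho_bell)))  # ~1
```

Only correlated two-particle reduced states can make such terms deviate from the product of one-particle terms, which is the content of the discussion above.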
Beating the Heisenberg Limit {#sec:heisenberg} ============================ We show in Appendix \[sec:app11\] that unless the following condition is also satisfied by the channel Kraus operators: $$\label{eq:saturate_cond} \begin{split} \sum_l\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta})=\sum_l\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}&\Rightarrow\sum_l{\rm Tr}\left[\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_l(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}\right]=\sum_l{\rm Tr}\left[\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}\right]\\ &\Rightarrow\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{\Pi}_l^\dagger(\boldsymbol{\theta})=\mathbb{1}, \end{split}$$ i.e. unless the channel is unital, any noise in the channel may beat the Heisenberg limit when (\[eq:qfim\_bound\_saturate\]) is satisfied. The Heisenberg limit is, however, not ultimate, e.g. see Refs. [@AL; @BL; @RB; @RL; @BFCG; @WMC; @ARCPHLD; @BDDFSC; @BDFSBC; @CS; @NKDBSM; @JPJMNS; @KR; @TACLA], although this has sparked some controversy [@ZPK; @LP; @HWX; @GM; @GLM; @HBZW; @RSDHZ]. The question now is: what is the fundamental ultimate quantum limit to the achievable estimation precision in the presence of an optimal amount of noise in a non-unitary quantum channel? In other words, what should the quantity $C_Q$ look like when the achievable precision in a non-unitary channel is maximal?
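The two operator orderings in (\[eq:saturate\_cond\]) are easy to confuse: $\sum_l\hat{\Pi}_l^\dagger\hat{\Pi}_l=\mathbb{1}$ is the completeness relation satisfied by every CPTP channel, while $\sum_l\hat{\Pi}_l\hat{\Pi}_l^\dagger=\mathbb{1}$ holds only for unital channels. A quick numerical check, using the standard Kraus operators of qubit dephasing (unital) and amplitude damping (non-unital); the noise strengths are illustrative:

```python
import numpy as np

def completeness_and_unitality(kraus):
    """Residuals of sum_l K_l^† K_l = 1 (completeness, holds for any CPTP
    channel) and sum_l K_l K_l^† = 1 (unitality)."""
    I = np.eye(kraus[0].shape[0])
    complete = sum(K.conj().T @ K for K in kraus) - I
    unital = sum(K @ K.conj().T for K in kraus) - I
    return np.linalg.norm(complete), np.linalg.norm(unital)

p = gamma = 0.3  # illustrative noise strengths

# Dephasing: K0 = sqrt(1-p) 1, K1 = sqrt(p) sigma_z  -> unital
dephasing = [np.sqrt(1 - p) * np.eye(2),
             np.sqrt(p) * np.diag([1.0, -1.0])]

# Amplitude damping: K0 = diag(1, sqrt(1-gamma)), K1 = sqrt(gamma)|0><1|  -> non-unital
amp_damp = [np.diag([1.0, np.sqrt(1 - gamma)]),
            np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]

print(completeness_and_unitality(dephasing))  # (~0, ~0): complete and unital
print(completeness_and_unitality(amp_damp))   # (~0, gamma*sqrt(2)): complete, non-unital
```

So local dissipation of the amplitude-damping type is exactly the kind of channel for which (\[eq:saturate\_cond\]) fails.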
It is fairly easy to see that for optimal quantity of noise in the channel, the two-particle reduced density operators of the evolved probe state should be a maximally entangled mixed state (MEMS) and the one-particle reduced density operators of the evolved probe state should be a maximally mixed state [@AIDS; @VAM; @LZFFL]. Therefore, we must have the reduced density operators of the evolved probe state as follows: $\hat{\rho}^{[n]}(\boldsymbol{\theta})=\mathbb{1}_2/2$ and $\hat{\rho}^{[n,m]}_{MEMS}(\boldsymbol{\theta})\neq \hat{\rho}^{[n]}(\boldsymbol{\theta})\otimes\hat{\rho}^{[m]}(\boldsymbol{\theta})$. Then, the fundamental quantum limit to the achievable estimation precision in a noisy channel is given by the following: $$\begin{split} J_{SH}^{jk}&={\rm Re}\sum_n\left[2{\rm Tr}\left(\sum_{l_n}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})\right)\right.\\ &\left.-{\rm Tr}\left(i\sum_{l_{n_1}}\hat{\Pi}_{l_{n_1}}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_1}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\right){\rm Tr}\left(i\sum_{l_{n_2}}\hat{\Pi}_{l_{n_2}}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_2}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\right)\right]\\ &+{\rm Re}\sum_{n\neq m}\sum_{l_n,l_m}\left[4{\rm Tr}\left\lbrace\left(\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\otimes\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\right)\hat{\rho}^{[n,m]}_{MEMS}(\boldsymbol{\theta})\right\rbrace\right.\\ &\left.-{\rm Tr}\left(i\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\right){\rm 
Tr}\left(i\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\right)\right]=J_{SH}^{jk,[n]}+J_{SH}^{jk,[n,m]}, \end{split}$$ where we have used the subscript “$SH$” to denote the ‘super-Heisenberg’ [@NKDBSM] fundamental quantum estimation precision limit. The set of POVMs from (\[eq:qfim\_bound\_measure1\]) then saturates this ultimate limit. Note that a maximally discordant mixed state (MDMS) need not be maximally entangled [@GGZ]. In fact, it can be not entangled at all, but then it can be at best as nonclassical as (and not more nonclassical than) a maximally entangled pure state [@PGACHW], and therefore, cannot allow one to beat the Heisenberg limit. Note, however, that in order for entanglement to be activated from the quantum correlations in the probe state, multi-particle unitary maps (such as CNOT gates) are required [@MCWV; @PGACHW] if there was no entanglement in the initial probe state to begin with, or if any initial entanglement has vanished, even while leaving the probe state maximally discordant. The Kraus representation of the channel is non-unique and is invariant under arbitrary unitary maps, so the above equations are invariant under the addition of such unitary maps. But unless the quantum correlations are activated into entanglement, the above best estimation precision cannot be achieved. Thus, the active ancilla-assisted scheme from Ref. [@DDM] can be strictly better than the passive ancilla-assisted scheme, since mixed entangled states can be more nonclassical than mixed separable states [@PGACHW]. Note that a unitary operator is also a Kraus operator, and an identity operator is trivially unitary.
Now, without the additional unitary maps, that can activate entanglement from quantum correlations in the probe state, the best estimation precision limit is determined by the two-particle reduced density matrices of the evolved probe state being separable and maximally discordant (MDMS) [@GGZ], i.e. the two-particle reduced density matrices having maximal dissonance [@MPSVW]. Therefore, we must have $\hat{\rho}^{[n,m]}_{MDMS}(\boldsymbol{\theta})\neq \hat{\rho}^{[n]}(\boldsymbol{\theta})\otimes\hat{\rho}^{[m]}(\boldsymbol{\theta})$, and then the fundamental limit is given by: $$\begin{split} J_Q^{jk}&=4{\rm Re}\sum_n\left[{\rm Tr}\left(\sum_{l_n}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})\hat{\rho}^{[n]}(\boldsymbol{\theta})\right)\right.\\ &\left.-{\rm Tr}\left(i\sum_{l_{n_1}}\hat{\Pi}_{l_{n_1}}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_1}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\rho}^{[n]}(\boldsymbol{\theta})\right){\rm Tr}\left(i\sum_{l_{n_2}}\hat{\Pi}_{l_{n_2}}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_2}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}(\boldsymbol{\theta})\right)\right]\\ &+4{\rm Re}\sum_{n\neq m}\sum_{l_n,l_m}\left[{\rm Tr}\left\lbrace\left(\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\otimes\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\right)\hat{\rho}^{[n,m]}_{MDMS}(\boldsymbol{\theta})\right\rbrace\right.\\ &\left.-{\rm Tr}\left(i\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\rho}^{[n]}(\boldsymbol{\theta})\right){\rm 
Tr}\left(i\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[m]}(\boldsymbol{\theta})\right)\right]=J_Q^{jk,[n]}+J_Q^{jk,[n,m]}, \end{split}$$ which corresponds to a precision scaling of $1/N$ for maximal pairwise quantum correlations, without entanglement, amongst the final probe particles [@MCWV], since mixed separable states can be as nonclassical as entangled pure states [@PGACHW]. Since the best estimation precision achievable with quantum correlations without entanglement coincides with, and does not beat, the Heisenberg limit, we used the subscript “$Q$” above. Next, with the additional unitary maps and entanglement activated from the quantum correlations in the probe state, the super-Heisenberg limit is obtained for the two-particle reduced density operators of the evolved probe state being maximally entangled and the one-particle reduced density operators being maximally mixed, and corresponds to a precision scaling of $1/N^2$ for maximal pairwise quantum correlations, including entanglement, amongst the final probe particles [@BFCG]. This is because mixed entangled bipartite states can be twice as nonclassical as maximally entangled bipartite pure states [@PGACHW]. Note that the precision scaling that could be achieved, e.g. in Ref. [@BFCG], using two-particle Hamiltonians for a unitary channel, is achieved here using one-particle Kraus operators for a noisy channel, i.e. local noise induces quantum correlations, including entanglement, between the two particles [@SKB; @OCMBRM]. Notice that we did not get a precision scaling better than $1/N$ when we studied the unitary channel case in this paper, since we considered only one-particle Hamiltonians.
If we further consider $\gamma$-particle (instead of one-particle) Kraus operators for the noisy channel case here, with $\gamma>1$, each set of Kraus operators can generate quantum correlations, including entanglement induced by a common bath, amongst the $\gamma$ particles [@DB; @BFP]. Then, the best super-Heisenberg precision scaling of $1/N^{2\gamma}$ may be attained, which is otherwise known to be attainable only using $2\gamma$-particle Hamiltonians for a unitary channel. For example, using three-particle Kraus operators for a noisy channel, the best precision scaling of $1/N^6$ can be achieved, which is otherwise known to be possible only with six-particle Hamiltonians for a unitary channel. This is again because mixed entangled states can be twice as nonclassical as pure entangled states [@PGACHW]. Considering again one-particle Kraus operators for the noisy channel, although the quantum Cramér-Rao bound (QCRB) can be beaten in the system space, the QCRB for the enlarged system plus bath space, for which the evolution is unitary, is not beaten. This also holds for multi-particle Kraus operators for the channel, where entanglement is induced by common baths. This implies that the estimation in the system space alone is not unbiased when the QCRB, and therefore the Heisenberg limit, are beaten [@WM; @LP]. However, when the estimation involving measurements beats the QCRB, and therefore the Heisenberg limit, it does not violate Robertson’s generalized formulation of Heisenberg’s uncertainty relation [@HPR; @NC; @WM], which does not include the measurement process. Note that the QCRB can be derived from the general Heisenberg uncertainty relation upon considering that the estimator is unbiased [@WM]. Thus, beating the QCRB implies that the estimator bias is no longer zero (also see Appendix \[sec:app1\]), but does not violate the general Heisenberg uncertainty principle.
Nonetheless, without including measurements, it is noteworthy that entanglement amongst the particles of a state allows for lower bounds on the dispersions of non-commuting observables than those furnished by the traditional Heisenberg uncertainty relation, originally derived for one particle [@GR]. Finally, note that the super-Heisenberg limit will not necessarily be strictly less than the Heisenberg limit, such as when there are quantum correlations without entanglement in the evolved probe state. Moreover, if the two-particle reduced density matrices of the initial probe state are already maximally entangled, the super-Heisenberg limit will equal the Heisenberg limit. This is because it is only entanglement generated in the channel, i.e. in the evolution stage, that can contribute to a precision scaling better than the Heisenberg limit, and entanglement in the preparation and measurement stages is inessential [@RB]. Furthermore, the Heisenberg limit is not beaten when the Kraus operators of the channel satisfy the condition (\[eq:saturate\_cond\]). When the QCRB and the Heisenberg limit are not beaten, the estimator in the $S$ space alone will be unbiased. Otherwise, when they are beaten, the estimator in the $S$ space alone will be biased and may be of limited interest in practice.
The upper bound (\[eq:noisy\_qfim2\]) to the QFIM reduces to the following actual QFIM, when (\[eq:saturate\_cond\]) is satisfied: $$\begin{split} J_Q^{jk}&=4{\rm Re}\sum_n\left[{\rm Tr}\left(\sum_{l_n}\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)-{\rm Tr}\left(\sum_{l_{n_1}}\frac{\partial\hat{\Pi}_{l_{n_1}}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_{n_1}}^{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right){\rm Tr}\left(\sum_{l_{n_2}}\hat{\Pi}_{l_{n_2}}^{(n)\dagger}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_{n_2}}^{(n)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[n]}\right)\right]\\ &+4{\rm Re}\sum_{n\neq m}\sum_{l_n,l_m}\left[{\rm Tr}\left\lbrace\left(\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\otimes\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}^{[n,m]}\right\rbrace\right.\\ &\left.-{\rm Tr}\left(\frac{\partial\hat{\Pi}_{l_n}^{(n)\dagger}(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_{l_n}^{(n)}(\boldsymbol{\theta})\hat{\rho}^{[n]}\right){\rm Tr}\left(\hat{\Pi}_{l_m}^{(m)\dagger}(\boldsymbol{\theta})\frac{\partial\hat{\Pi}_{l_m}^{(m)}(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}^{[m]}\right)\right]=J_Q^{jk,[n]}+J_Q^{jk,[n,m]}. \end{split}$$ This was the case for unital channel of the form in Section \[sec:qfim\_bound\]. Notice that if the initial probe state is maximally mixed, i.e. $\hat{\rho}=\mathbb{1}_{2^N}/2^N$, we get $\hat{\rho}(\boldsymbol{\theta})=\mathbb{1}_{2^N}/2^N$ too in that section. 
This is why quantum correlations are reduced, and cannot be created from any classical correlations in the probe state, by the noise in a unital channel [@SKB], so the QCRB and the Heisenberg limit are not beaten and the estimator remains unbiased. When there are no correlations, or excessive quantum correlations, in the two-particle reduced density matrix of the initial probe state, the best achievable precision scaling is $1/\sqrt{N}$ with a unital channel, as in the unitary channel case. Thus, as long as (\[eq:saturate\_cond\]) is satisfied, a noisy channel can at best attain the Heisenberg limit but not beat it, so that the estimator remains unbiased. However, (\[eq:saturate\_cond\]) will not be satisfied by non-unital channels, such as the local dissipation of the form in Section \[sec:mag\_fld\], so that quantum correlations can be created from classical correlations in the probe state by the noise in the channel. Notice that in this case, if the initial probe state is maximally mixed, the evolved state will not be maximally mixed. Thus, it may be possible to beat the Heisenberg limit with non-unital channels, and the estimator would be biased when the Heisenberg limit is beaten. Moreover, the fact that dissonance is more robust to decoherence than entanglement [@WSFVB] suggests that it is more probable to attain the Heisenberg limit with a mixed state input than with a pure entangled state input to a unital channel. In fact, it may not be possible at all to attain the Heisenberg limit with an input pure entangled state, because of entanglement sudden death [@AMHMSWSRD; @YE]. Furthermore, since dissonance can grow and give rise to entanglement in the presence of dissipation, it is more probable to attain or surpass the Heisenberg limit with a mixed state input than with a pure entangled state input to a non-unital channel.
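The fixed-point contrast drawn above can be checked directly: a unital channel leaves the maximally mixed state invariant, while a non-unital one, such as amplitude damping, pushes it towards a pure state. A minimal sketch with standard qubit Kraus operators (noise strengths illustrative):

```python
import numpy as np

def apply_channel(kraus, rho):
    # rho -> sum_l K_l rho K_l^dagger
    return sum(K @ rho @ K.conj().T for K in kraus)

rho_mm = np.eye(2) / 2  # maximally mixed single-qubit state
p = gamma = 0.3         # illustrative noise strengths

# Unital channel (dephasing): the maximally mixed state is a fixed point
dephasing = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]
print(np.linalg.norm(apply_channel(dephasing, rho_mm) - rho_mm))  # ~0

# Non-unital channel (amplitude damping): the maximally mixed state is
# pushed towards |0><0|, so the output is no longer maximally mixed
amp_damp = [np.diag([1.0, np.sqrt(1 - gamma)]),
            np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]
print(np.diag(apply_channel(amp_damp, rho_mm)).real)  # [(1+gamma)/2, (1-gamma)/2]
```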
In fact, it is never possible to attain or surpass the Heisenberg limit with an input pure entangled state, because there are no initial classical correlations and because of entanglement sudden death. On the other hand, the fact that entanglement is the intrinsic and minimal discord, capturing nonlocal quantum correlations, as opposed to dissonance, which is the extrinsic discord, capturing local quantum correlations that cannot be shared [@SL; @SZ], is the reason why the Heisenberg limit can be surpassed only when entanglement, and not just dissonance, is generated in a non-unital channel fed with a mixed state. In summary, it may appear that noisy quantum states or channels require the same or fewer resources to achieve as much as noiseless quantum states or channels, by exploiting additional resources from the environment. That is why the overall resources required by the noisy cases in the enlarged noiseless system plus bath space are the same as those known to be required by the noiseless cases in the system space alone. However, any channel can be expressed in terms of Kraus operators, whose action has the same effect as performing a measurement and discarding the result. To have a measurement on a pure state that is the same as the measurement of the pure state after noise, one would just need a POVM that combines the POVM elements used for the mixed state with the Kraus operators of the channel, without requiring any extra resource. Thus, a precision scaling of $1/N^{2\gamma}$ can, in principle, be achieved with a pure initial probe state evolving through a unitary channel, described by $\gamma$-particle Hamiltonians, by using a POVM that combines the POVM elements used here with the $\gamma$-particle Kraus operators of the noisy channel and the Kraus operators used to prepare the initial mixed probe state considered here. Thus, entangling measurements [@RGMSSGB] may also contribute to a precision scaling surpassing the Heisenberg limit, contrary to what was noted earlier.
Similarly, a precision scaling of $1/N^{2\gamma}$ can, in principle, also be achieved with a mixed initial probe state evolving through a unitary channel, described by $\gamma$-particle Hamiltonians, by using a POVM that combines the POVM elements used here with the $\gamma$-particle Kraus operators of the noisy channel considered here. But using entangling measurements with our noisy channel, it is possible to obtain an even better precision scaling, so the noisy case is still superior. Nonetheless, it may likewise seem possible to achieve a precision scaling of $1/N^{2\gamma}$ with a pure initial probe state evolving through the noisy channel, described by $\gamma$-particle Kraus operators, by using a POVM obtained by combining the POVM elements used here with the Kraus operators used to prepare the initial mixed probe state from the pure state. That is not true, however, even if the initial pure probe state is maximally entangled and/or the channel is non-unital. This is because there are no initial classical or local quantum correlations in the probe state, and any entanglement in the probe state suffers sudden death caused by the noise in the channel, as discussed earlier. This is the distinct, important advantage unique to mixed state metrology [@MCWV]. Conclusion {#sec:conc} ========== We studied fundamental quantum limits in noisy quantum multiparameter estimation using a quantum Fisher information matrix (QFIM) defined in terms of anti-symmetric logarithmic derivatives (ALDs), which lend a convenient way to study noisy metrology. We presented a QFIM for multiparameter estimation using a mixed probe state evolving unitarily. We then considered a mixed state evolving via a noisy channel, and presented an upper bound to the QFIM for this most general case.
We found that the bounds are such that the quantum enhancement in the estimation precision is provided by the two-particle reduced density matrices, and the attainability of the quantum enhancement is solely determined by the one-particle reduced density matrices, of the initial probe state, when the channel is described by one-particle evolution operators. We presented conditions, and correspondingly measurements, that saturate these explicitly computable bounds (e.g. in terms of the Kraus operators of the channel), not known to exist with conventional symmetric logarithmic derivatives (SLDs) for these most general cases. We saw that the Heisenberg limit can be achieved even in these most general noisy cases. Moreover, for most of the past century since the inception of quantum physics, weird quantum phenomena, such as superposition and entanglement, were perceived as bugs, until the 1980s, when scientists started to exploit them as features [@CK]. Today, the biggest hurdle to quantum technologies, e.g. in building a scalable quantum computer, is noise. The results here suggest that some noise in the initial probe state or the quantum channel can actually serve as a feature rather than a bug, because the estimation precision scaling achievable in the presence of noise is not possible in the absence of any noise in the initial probe state or the quantum channel. Noise in the initial probe state or the channel provides a quantum advantage by introducing quantum correlations into the system. However, too much noise in the initial probe state or the channel is detrimental, since it introduces excessive quantum correlations into the system and, in turn, harms the quantum advantage achievable with $N$ parallel resources. Furthermore, we found that it is possible to beat the Heisenberg limit by exploiting the noise in the quantum channel.
The fundamental super-Heisenberg precision limit for a non-unitary channel is then determined by the two-particle reduced density operators of the evolved probe state being maximally entangled and the one-particle reduced density operators being maximally mixed, and corresponds to a precision scaling of $1/N^2$, achieved with one-particle Kraus operators. Further, using $\gamma$-particle (instead of one-particle) Kraus operators for a noisy channel, where $\gamma>1$, the best scaling of $1/N^{2\gamma}$ can be attained, which is otherwise known to be possible only with $2\gamma$-particle Hamiltonians for a noiseless channel. Such a precision scaling can be achieved with an initial pure or mixed probe state evolving through a unitary channel without requiring additional resources, but not with an initial pure probe state evolving through a noisy channel. These results may be demonstrated experimentally, as part of future work, with measurements that are more practically implementable than those presented here. This work was partially supported by the UK National Quantum Technologies Programme (EP/M01326X/1, EP/M013243/1). The author thanks Christos Gagatsos, Dominic Branford, Animesh Datta, Ranjith Nair, Mankei Tsang, Andy Chia, Pieter Kok, Rafał Demkowicz-Dobrzański, Dominic Berry, Pragya Shukla, Rahul Gupta, Anindya Banerji, Sai Vinjanampathy and Jamie Friel for stimulating discussions in relation to this work.
Proof for $\nu V\left[\boldsymbol{\tilde{\theta}}(m)\right]\geq \left[J_C(\boldsymbol{\theta})\right]^{-1}\geq \left[J_Q(\boldsymbol{\theta})\right]^{-1}$ {#sec:app1} ========================================================================================================================================================== Here, we prove the following quantum Cramér-Rao inequality, as claimed in Section \[sec:qcrb\]: $$\label{eq:supp_ald_qcrb} \nu V\left[\boldsymbol{\tilde{\theta}}(m)\right]\geq \left[J_C(\boldsymbol{\theta})\right]^{-1}\geq \left[J_Q(\boldsymbol{\theta})\right]^{-1},$$ where $\nu$ is the number of times the experiment is repeated, $$\label{eq:supp_est_err_cov} V\left[\boldsymbol{\tilde{\theta}}(m)\right] = \sum_m p(m|\boldsymbol{\theta}) \left(\boldsymbol{\tilde{\theta}}(m)-\boldsymbol{\theta}\right)\left(\boldsymbol{\tilde{\theta}}(m)-\boldsymbol{\theta}\right)^T =: \Sigma$$ is the estimation error covariance, $$\label{eq:supp_fim1} J_C^{jk} = \sum_m\frac{1}{p(m|\boldsymbol{\theta})}\frac{\partial}{\partial\theta_j}p(m|\boldsymbol{\theta})\frac{\partial}{\partial\theta_k}p(m|\boldsymbol{\theta})$$ is the classical Fisher information matrix (FIM), and $$\label{eq:supp_qfim1} J_Q^{jk} = \frac{1}{2}{\rm Tr}\left[\left(\hat{L}_j^\dagger \hat{L}_k+\hat{L}_k^\dagger \hat{L}_j\right)\hat{\rho}(\boldsymbol{\theta})\right]$$ is the quantum Fisher information matrix (QFIM), with the operators $\hat{L}_k$ satisfying $$\label{eq:supp_ald_diffeqn1} \frac{1}{2}\left(\hat{L}_k\hat{\rho}(\boldsymbol{\theta})+\hat{\rho}(\boldsymbol{\theta})\hat{L}_k^\dagger\right)=\frac{\partial}{\partial\theta_k}\hat{\rho}(\boldsymbol{\theta}).$$ The proof is adapted from Ref. [@TWC] for frequentist multiparameter estimation problem here. 
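For a concrete sense of the ALD equation (\[eq:supp\_ald\_diffeqn1\]), note that for an anti-Hermitian $\hat{L}_k$ it reduces to $\frac{1}{2}(\hat{L}_k\hat{\rho}-\hat{\rho}\hat{L}_k)=\partial_{\theta_k}\hat{\rho}$, which can be solved element-wise in the eigenbasis of $\hat{\rho}(\boldsymbol{\theta})$: $L_{ij}=2(\partial_\theta\rho)_{ij}/(\lambda_j-\lambda_i)$ for $\lambda_i\neq\lambda_j$, with the diagonal set to zero. A sketch for an illustrative single-parameter unitary qubit family (the state and generator below are arbitrary choices, not from the text):

```python
import numpy as np

# Unitary qubit family rho(theta) = U rho0 U^†, U = exp(-i theta H);
# rho0 and H are illustrative choices
rho0 = np.diag([0.7, 0.3]).astype(complex)
H = np.array([[0.5, 0.2 - 0.1j], [0.2 + 0.1j, -0.5]])

theta = 0.4
eH, VH = np.linalg.eigh(H)
U = VH @ np.diag(np.exp(-1j * theta * eH)) @ VH.conj().T
rho = U @ rho0 @ U.conj().T
drho = -1j * (H @ rho - rho @ H)  # d rho / d theta

# Solve (1/2)(L rho - rho L) = drho in the eigenbasis of rho:
# L_ij = 2 (drho)_ij / (lam_j - lam_i) for lam_i != lam_j, L_ii = 0
lam, V = np.linalg.eigh(rho)
drho_e = V.conj().T @ drho @ V
L_e = np.zeros_like(drho_e)
for i in range(2):
    for j in range(2):
        if abs(lam[j] - lam[i]) > 1e-12:
            L_e[i, j] = 2 * drho_e[i, j] / (lam[j] - lam[i])
L = V @ L_e @ V.conj().T

print(np.linalg.norm(L + L.conj().T))                             # anti-Hermitian: ~0
print(np.linalg.norm(0.5 * (L @ rho + rho @ L.conj().T) - drho))  # ALD equation: ~0

# Scalar case of the QFIM (\[eq:supp_qfim1\])
J_Q = np.real(np.trace(L.conj().T @ L @ rho))
print(J_Q > 0)  # a strictly positive quantum Fisher information
```

For a unitary family the diagonal of $\partial_\theta\rho$ vanishes in the eigenbasis of $\rho$, so the element-wise construction above solves the full equation whenever the spectrum is nondegenerate.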
The estimates $\boldsymbol{\tilde{\theta}}(m) = \left[\begin{array}{cccc} \tilde{\theta}_1(m) & \tilde{\theta}_2(m) & \hdots & \tilde{\theta}_q(m) \end{array}\right]^T$ of the parameters $\boldsymbol{\theta} = \left[\begin{array}{cccc} \theta_1 & \theta_2 & \hdots & \theta_q \end{array}\right]^T$ are unbiased, if $$\label{eq:supp_unbiased_est} \sum_m p(m|\boldsymbol{\theta})\tilde{\theta}_j(m) = \theta_j \qquad \forall j,$$ where $p(m|\boldsymbol{\theta})={\rm Tr}\left(\hat{P}_m\hat{\rho}(\boldsymbol{\theta})\right)$ is the conditional probability to obtain the outcome $m$ from a measurement performed on the evolved probe state $\hat{\rho}(\boldsymbol{\theta})$ via a positive operator valued measure (POVM) $\{\hat{P}_m\}$, given that the parameters have the value $\boldsymbol{\theta}$. Differentiating (\[eq:supp\_unbiased\_est\]) with respect to $\theta_k$, we get $$\label{eq:supp_deljk} %\begin{split} \delta_{jk}=\sum_m\left(\tilde{\theta}_j(m)-\theta_j\right)\frac{\partial p(m|\boldsymbol{\theta})}{\partial\theta_k} ={\rm Re}\sum_m\left(\tilde{\theta}_j(m)-\theta_j\right){\rm Tr}\left[\hat{P}_m\hat{L}_k\hat{\rho}(\boldsymbol{\theta})\right]. %\end{split}$$ Then, following Ref. [@TWC], since $\nu\geq 1$, we get: $$%\begin{split} \boldsymbol{v}^T\boldsymbol{u}=\sum_ju_jv_j\leq A^TB,\qquad \boldsymbol{w}^T\boldsymbol{u}=\sum_ku_kw_k\leq{\rm Re}\left[{\rm Tr}\left(C^\dagger D\right)\right], %\end{split}$$ where $\boldsymbol{u}$, $\boldsymbol{v}$, $\boldsymbol{w}$ are arbitrary real column vectors, and $$\begin{split} A^T&=\sum_kv_k\frac{\partial p(m|\boldsymbol{\theta})}{\partial\theta_k}\frac{1}{\sqrt{p(m|\boldsymbol{\theta})}},\qquad B=\sum_ju_j\left(\tilde{\theta}_j(m)-\theta_j\right)\sqrt{\nu}\sqrt{p(m|\boldsymbol{\theta})},\\ C^\dagger&=\sum_lw_l\sqrt{\hat{P}_m}\hat{L}_l\sqrt{\hat{\rho}(\boldsymbol{\theta})},\qquad \quad \, \, \, D=\sum_ju_j\left(\tilde{\theta}_j(m)-\theta_j\right)\sqrt{\nu}\sqrt{\hat{\rho}(\boldsymbol{\theta})}\sqrt{\hat{P}_m}. 
\end{split}$$ We assume that $\boldsymbol{v}^T\boldsymbol{u}$ and $\boldsymbol{w}^T\boldsymbol{u}$ are positive, which are valid assumptions given how we set these later. Then, $$\label{eq:supp_schwarz} \begin{split} \left(\boldsymbol{v}^T\boldsymbol{u}\right)^2&\leq\left(A^TB\right)^2\leq\left(A^TA\right)\left(B^TB\right),\\ \left(\boldsymbol{w}^T\boldsymbol{u}\right)^2&\leq\left|{\rm Tr}\left(C^\dagger D\right)\right|^2\leq{\rm Tr}\left(C^\dagger C\right){\rm Tr}\left(D^\dagger D\right), \end{split}$$ where the second inequalities in both lines are Schwarz inequalities. Now, note that $A^TA=\boldsymbol{v}^TJ_C\boldsymbol{v}$, where $J_C$ is a real, symmetric and positive semidefinite classical Fisher information matrix (FIM) as defined in (\[eq:supp\_fim1\]), ${\rm Tr}\left(C^\dagger C\right)=\boldsymbol{w}^TJ_Q\boldsymbol{w}$, where $J_Q$ is a real, symmetric and positive semidefinite quantum Fisher information matrix (QFIM) as defined in (\[eq:supp\_qfim1\]), and $B^TB={\rm Tr}\left(D^\dagger D\right)=\boldsymbol{u}^T\nu\Sigma\boldsymbol{u}$, where $\nu\Sigma$ is the estimation error covariance matrix as defined in (\[eq:supp\_est\_err\_cov\]). Substituting these in (\[eq:supp\_schwarz\]), we find that $$%\begin{split} \left(\boldsymbol{v}^TJ_C\boldsymbol{v}\right)\left(\boldsymbol{u}^T\nu\Sigma\boldsymbol{u}\right) \geq \left(\boldsymbol{v}^T\boldsymbol{u}\right)\left(\boldsymbol{u}^T\boldsymbol{v}\right),\qquad \left(\boldsymbol{w}^TJ_Q\boldsymbol{w}\right)\left(\boldsymbol{u}^T\nu\Sigma\boldsymbol{u}\right) \geq \left(\boldsymbol{w}^T\boldsymbol{u}\right)\left(\boldsymbol{u}^T\boldsymbol{w}\right). %\end{split}$$ Setting $\boldsymbol{v}=J_C^{-1}\boldsymbol{u}$ implies that $$\boldsymbol{u}^T\left(\nu\Sigma-J_C^{-1}\right)\boldsymbol{u} \geq 0,$$ for arbitrary real vectors $\boldsymbol{u}$. Since $\nu\Sigma-J_C^{-1}$ is real and symmetric, this implies that $\nu\Sigma-J_C^{-1}$ is positive semidefinite. 
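The first inequality in (\[eq:supp\_ald\_qcrb\]), $\nu V\geq J_C^{-1}$, can be sanity-checked in the scalar case by simulation. The example below (an illustrative qubit family and measurement, not from the text) computes $J_C$ from (\[eq:supp\_fim1\]) and compares it against the Monte Carlo variance of the maximum-likelihood estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar example: |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>,
# measured in the computational basis, so p(0|theta) = cos^2(theta/2)
theta = 1.0
p0 = np.cos(theta / 2) ** 2
dp0 = -np.sin(theta) / 2  # d p(0|theta) / d theta

# Classical FIM (scalar case): J_C = (dp0)^2/p0 + (dp1)^2/p1
J_C = dp0**2 / p0 + dp0**2 / (1 - p0)
print(J_C)  # = 1 analytically; this measurement saturates the pure-state QFI

# Monte Carlo check of nu * Var[theta_hat] >= 1/J_C, with the MLE
# theta_hat = 2 * arccos(sqrt(m/nu)), m = number of '0' outcomes in nu shots
nu, trials = 1000, 2000
m = rng.binomial(nu, p0, size=trials)
theta_hat = 2 * np.arccos(np.sqrt(m / nu))
print(nu * theta_hat.var())  # concentrates near 1/J_C = 1, respecting the bound
```

Here the computational-basis measurement happens to be optimal for this family, so the classical bound and the quantum bound coincide; a suboptimal measurement would give $J_C<J_Q$ and a correspondingly larger variance.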
Also, setting $\boldsymbol{w}=J_Q^{-1}\boldsymbol{u}$ implies that $$\boldsymbol{u}^T\left(\nu\Sigma-J_Q^{-1}\right)\boldsymbol{u} \geq 0.$$ Since $\nu\Sigma-J_Q^{-1}$ is real and symmetric, this implies that $\nu\Sigma-J_Q^{-1}$ is positive semidefinite. We now take $\boldsymbol{v}=\boldsymbol{w}$. Then, we have $$%\begin{split} \boldsymbol{v}^T\boldsymbol{u}\leq A^TB={\rm Re}\left[{\rm Tr}\left(C^\dagger D\right)\right] \Rightarrow\left(\boldsymbol{v}^T\boldsymbol{u}\right)\left(\boldsymbol{u}^T\boldsymbol{v}\right)\leq\left|{\rm Tr}\left(C^\dagger D\right)\right|^2\leq{\rm Tr}\left(C^\dagger C\right){\rm Tr}\left(D^\dagger D\right)=\left(\boldsymbol{v}^TJ_Q\boldsymbol{v}\right)\left(\boldsymbol{u}^T\nu\Sigma\boldsymbol{u}\right). %\end{split}$$ Then, again setting $\boldsymbol{v}=J_C^{-1}\boldsymbol{u}$ implies that $$\left(\boldsymbol{u}^TJ_C^{-1}\boldsymbol{u}\right)\left(\boldsymbol{u}^TJ_C^{-1}\boldsymbol{u}\right)\leq\left(\boldsymbol{u}^TJ_C^{-1}J_QJ_C^{-1}\boldsymbol{u}\right)\left(\boldsymbol{u}^T\nu\Sigma\boldsymbol{u}\right).$$ Now, since $\boldsymbol{u}^T\left(\nu\Sigma-J_C^{-1}\right)\boldsymbol{u} \geq 0$, we get from above $$%\begin{split} \boldsymbol{u}^TJ_C^{-1}\boldsymbol{u}\leq \boldsymbol{u}^TJ_C^{-1}J_QJ_C^{-1}\boldsymbol{u} \Rightarrow J_C^{-1}\leq J_C^{-1}J_QJ_C^{-1} \Rightarrow J_C^{-1}\geq J_Q^{-1}. %\end{split}$$ Thus, we have (\[eq:supp\_ald\_qcrb\]). Saturability of ALD-based QCRB {#sec:app2} ============================== Here, we prove that an ALD-based QCRB can be saturated when the expectation of the commutator of the ALDs vanishes, as claimed in Section \[sec:qcrb\]: $$\label{eq:supp_qcrb_saturate} {\rm Tr}\left[\left(\hat{L}_j^\dagger \hat{L}_k-\hat{L}_k^\dagger \hat{L}_j\right)\hat{\rho}(\boldsymbol{\theta})\right]={\rm Tr}\left(\left[\hat{L}_j,\hat{L}_k\right]\hat{\rho}(\boldsymbol{\theta})\right)= 0,$$ where the operators $\hat{L}_k$ are anti-Hermitian. The proof presented here is directly adapted from Ref.
[@RJD] for ALDs, and relies on the fact that it is enough to show that the QFIM bound is equivalent to the Holevo bound when (\[eq:supp\_qcrb\_saturate\]) is satisfied, because the Holevo bound is a tighter bound, known to be asymptotically saturable. Given that the operators $\hat{L}_k$ are anti-Hermitian and satisfy $$\frac{1}{2}\left(\hat{L}_k\hat{\rho}(\boldsymbol{\theta})-\hat{\rho}(\boldsymbol{\theta})\hat{L}_k\right)=\frac{\partial}{\partial\theta_k}\hat{\rho}(\boldsymbol{\theta}),$$ and the QFIM $J_Q$ is given by (\[eq:supp\_qfim1\]), then (\[eq:supp\_ald\_qcrb\]) implies that for a given cost matrix $G$, the estimation cost is bounded by $$\label{eq:supp_helstrom_bound} {\rm tr}\left(G\nu V\left[\boldsymbol{\tilde{\theta}}(m)\right]\right) \geq {\rm tr}\left(GJ_Q^{-1}\right),$$ where ${\rm tr}$ denotes the trace of a matrix in distinction from ${\rm Tr}$ for an operator. Then, the achievable estimation uncertainty is lower-bounded by the Holevo Cramér-Rao bound [@RJD; @HNMH]: $$\label{eq:supp_holevo_bound} {\rm tr}\left(G\nu V\left[\boldsymbol{\tilde{\theta}}(m)\right]\right) \geq \min_{\{\hat{X}_j\}}\left\lbrace{\rm tr}\left(G{\rm Re}W\right)+||G{\rm Im}W||_1\right\rbrace,$$ where $||\cdot||_1$ is the operator trace norm, the elements of the matrix $W$ are [@HNMH] $$W_{jk}={\rm Tr}\left(\hat{X}_j^\dagger\hat{X}_k\hat{\rho}(\boldsymbol{\theta})\right),$$ and the minimization is performed over the operators $\hat{X}_j$ satisfying $$\frac{1}{2}{\rm Tr}\left[\left(\hat{X}_j^\dagger\hat{L}_k+\hat{L}_k^\dagger\hat{X}_j\right)\hat{\rho}(\boldsymbol{\theta})\right]=\delta_{jk}.$$ In our case, the operators $\hat{X}_j$ are also anti-Hermitian. 
The bound (\[eq:supp\_holevo\_bound\]) is stronger than the bound (\[eq:supp\_helstrom\_bound\]), the right-hand side of which can be rewritten in the form [@RJD]: $$\label{eq:supp_helstrom_qcrb} {\rm tr}\left(GJ_Q^{-1}\right)=\min_{\{\hat{X}_j\}}{\rm tr}\left(G{\rm Re}W\right).$$ Then, the solution to the minimization problem in (\[eq:supp\_helstrom\_qcrb\]) is [@RJD] $$\hat{X}_j = \sum_k \left(G^{-1}\Lambda\right)_{jk}\hat{L}_k = \sum_k \left(J_Q^{-1}\right)_{jk}\hat{L}_k,$$ where $\Lambda$ is a matrix of Lagrange multipliers, chosen so that $G^{-1}\Lambda J_Q=\mathbb{1}$. Now, the cost matrix $G$ and the QFIM $J_Q$ are assumed to be strictly positive. Firstly, we assume that (\[eq:supp\_qcrb\_saturate\]) holds for all $j$, $k$. We saw that the optimal $\hat{X}_j=\sum_k\left(J_Q^{-1}\right)_{jk}\hat{L}_k$ are linear combinations of $\hat{L}_j$. This implies that ${\rm Tr}\left(\left[\hat{X}_j,\hat{X}_k\right]\hat{\rho}(\boldsymbol{\theta})\right)=0$ for all $j$, $k$. Hence, the same set of $\hat{X}_j$ minimizes the Holevo bound, since it makes the second term in (\[eq:supp\_holevo\_bound\]) vanish. Thus, (\[eq:supp\_qcrb\_saturate\]) is a sufficient condition for saturating the ALD-based QCRB corresponding to the QFIM (\[eq:supp\_qfim1\]). Secondly, we assume that the Holevo bound coincides with the QFIM bound, and so for the $\hat{X}_j$ that minimize both (\[eq:supp\_helstrom\_bound\]) and (\[eq:supp\_holevo\_bound\]), the second term in (\[eq:supp\_holevo\_bound\]) must equal zero. Since $G$ is strictly positive, the matrix ${\rm Im}W$ must be zero and hence ${\rm Tr}\left(\left[\hat{X}_j,\hat{X}_k\right]\hat{\rho}(\boldsymbol{\theta})\right)=0$ for all $j$, $k$. However, the $\hat{X}_j$ that minimize (\[eq:supp\_helstrom\_bound\]) are $\hat{X}_j=\sum_k\left(J_Q^{-1}\right)_{jk}\hat{L}_k$. Inverting this formula, we get $\hat{L}_j=\sum_k\left(J_Q\right)_{jk}\hat{X}_k$.
Hence, (\[eq:supp\_qcrb\_saturate\]) holds for all $j$, $k$ and is also a necessary condition for saturating the ALD-based QCRB corresponding to the QFIM (\[eq:supp\_qfim1\]). The states $\hat{\rho}^N$ are permutationally invariant {#sec:app3} ======================================================= Here, we show that the first order and second order reduced density matrices are as claimed in Section \[sec:mag\_fld\] for the magnetic field example. First, considering the $N=2$ case: $$\begin{split} \hat{\rho}_k^{N=2} = \frac{1}{2}&\left[\hat{E}_0^{\otimes 2}|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|\hat{E}_1^{\otimes 2}\right.\\ &\left.+\hat{E}_0^{\otimes 2}|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{-},\phi_k^{-}|\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{-},\phi_k^{-}|(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{-},\phi_k^{-}|(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{-},\phi_k^{-}|\hat{E}_1^{\otimes 2}\right.\\ &\left.+\hat{E}_0^{\otimes 2}|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{+},\phi_k^{+}|\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{+},\phi_k^{+}|(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{+},\phi_k^{+}|(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{+},\phi_k^{+}|\hat{E}_1^{\otimes 2}\right.\\ &\left.+\hat{E}_0^{\otimes 
2}|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\hat{E}_1^{\otimes 2}\right]. \end{split}$$ Then, tracing out the second qubit, we get: $$\begin{split} {\rm Tr}_{2}\left[\rho_k^{N=2}\right] =&\frac{1}{2}\left[\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{+}|\hat{E}_0^2|\phi_k^{+}\rangle +\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{+}|\hat{E}_1^2|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{+}|\hat{E}_0^2|\phi_k^{+}\rangle +\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{+}|\hat{E}_1^2|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{+}|\hat{E}_0^2|\phi_k^{-}\rangle +\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{+}|\hat{E}_1^2|\phi_k^{-}\rangle\right.\\ &\left.+\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{+}|\hat{E}_0^2|\phi_k^{-}\rangle +\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{+}|\hat{E}_1^2|\phi_k^{-}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{-}|\hat{E}_0^2|\phi_k^{+}\rangle +\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{-}|\hat{E}_1^2|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{-}|\hat{E}_0^2|\phi_k^{+}\rangle +\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{-}|\hat{E}_1^2|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{-}|\hat{E}_0^2|\phi_k^{-}\rangle 
+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{-}|\hat{E}_1^2|\phi_k^{-}\rangle\right.\\ &\left.+\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{-}|\hat{E}_0^2|\phi_k^{-}\rangle +\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{-}|\hat{E}_1^2|\phi_k^{-}\rangle\right]\\ =&\frac{1}{2}\left[\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{+}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{+}\rangle +\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{+}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{+}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{-}\rangle +\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{+}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{-}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{-}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{+}\rangle +\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{-}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{-}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{-}\rangle +\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{-}|\hat{E}_0^2+\hat{E}_1^2|\phi_k^{-}\rangle\right]\\ =&\frac{1}{2}\left[\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{+}|\phi_k^{+}\rangle +\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{+}|\phi_k^{+}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{+}|\phi_k^{-}\rangle +\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{+}|\phi_k^{-}\rangle\right.\\ &\left.+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_0\langle\phi_k^{-}|\phi_k^{+}\rangle +\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{+}|\hat{E}_1\langle\phi_k^{-}|\phi_k^{+}\rangle\right.\\ 
&\left.+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_0\langle\phi_k^{-}|\phi_k^{-}\rangle +\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_1\langle\phi_k^{-}|\phi_k^{-}\rangle\right]\\ =&\frac{1}{2}\left[\hat{E}_0|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_0+\hat{E}_1|\phi_k^{+}\rangle\langle\phi_k^{+}|\hat{E}_1+\hat{E}_0|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_0+\hat{E}_1|\phi_k^{-}\rangle\langle\phi_k^{-}|\hat{E}_1\right]\\ =&\frac{1}{2}\left[\sum_{r=0}^1\hat{E}_r\left(|\phi_k^{+}\rangle\langle\phi_k^{+}|+|\phi_k^{-}\rangle\langle\phi_k^{-}|\right)\hat{E}_r\right]=\frac{\mathbb{1}_2}{2}. \end{split}$$ Similarly, considering the $N=3$ case, and then tracing out the third qubit, we get: $$\begin{split} {\rm Tr}_3\left[\hat{\rho}_k^{N=3}\right] =& \frac{1}{2}\left[\hat{E}_0^{\otimes 2}|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|\hat{E}_1^{\otimes 2}\right.\\ &\left.+\hat{E}_0^{\otimes 2}|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\hat{E}_1^{\otimes 2}\right]\\ =&\frac{1}{2}\left[\hat{E}_0^{\otimes 2}\left(|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|+|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\right)\hat{E}_0^{\otimes 2}\right.\\ 
&\left.+(\hat{E}_0\otimes\hat{E}_1)\left(|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|+|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\right)(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)\left(|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|+|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\right)(\hat{E}_1\otimes\hat{E}_0)\right.\\ &\left.+\hat{E}_1^{\otimes 2}\left(|\phi_k^{+},\phi_k^{+}\rangle\langle\phi_k^{+},\phi_k^{+}|+|\phi_k^{-},\phi_k^{-}\rangle\langle\phi_k^{-},\phi_k^{-}|\right)\hat{E}_1^{\otimes 2}\right]\\ =&\frac{1}{4}\left[\mathbb{1}_2\otimes\mathbb{1}_2+\hat{E}_0^{\otimes 2}\left(\hat{\sigma}_k\otimes\hat{\sigma}_k\right)\hat{E}_0^{\otimes 2}+(\hat{E}_0\otimes\hat{E}_1)\left(\hat{\sigma}_k\otimes\hat{\sigma}_k\right)(\hat{E}_0\otimes\hat{E}_1)\right.\\ &\left.+(\hat{E}_1\otimes\hat{E}_0)\left(\hat{\sigma}_k\otimes\hat{\sigma}_k\right)(\hat{E}_1\otimes\hat{E}_0)+\hat{E}_1^{\otimes 2}\left(\hat{\sigma}_k\otimes\hat{\sigma}_k\right)\hat{E}_1^{\otimes 2}\right]\\ =&\frac{1}{4}\left[\mathbb{1}_2\otimes\mathbb{1}_2+\left(\sum_{r=0}^1\hat{E}_r\hat{\sigma}_k\hat{E}_r\right)\otimes\left(\sum_{s=0}^1\hat{E}_s\hat{\sigma}_k\hat{E}_s\right)\right], \end{split}$$ and so on. Proof for $C_Q(\boldsymbol{\theta}) \geq J_Q(\boldsymbol{\theta})$ {#sec:app5} ================================================================== Here, we prove that the quantity $C_Q(\boldsymbol{\theta})$ is indeed an upper bound to the quantity $J_Q(\boldsymbol{\theta})$ for the evolved probe state $\hat{\rho}(\boldsymbol{\theta})$, as claimed in Section \[sec:noisy\]. 
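As a numerical check on the preceding appendix, the identity ${\rm Tr}_{2}\left[\hat{\rho}_k^{N=2}\right]=\mathbb{1}_2/2$ can be verified directly. The sketch below assumes an illustrative concrete choice (not from the text): $k=z$, so $|\phi_k^{\pm}\rangle$ are the $\hat{\sigma}_z$ eigenstates, and Hermitian bit-flip Kraus operators $\hat{E}_0=\sqrt{1-p}\,\mathbb{1}$, $\hat{E}_1=\sqrt{p}\,\hat{\sigma}_x$, which satisfy $\hat{E}_0^2+\hat{E}_1^2=\mathbb{1}$ as used in the derivation.

```python
import numpy as np

# Illustrative Hermitian Kraus pair with E0^2 + E1^2 = 1 (bit-flip channel, assumed).
p = 0.3
E0 = np.sqrt(1 - p) * np.eye(2)
E1 = np.sqrt(p) * np.array([[0.0, 1.0], [1.0, 0.0]])

# |phi_z^+> and |phi_z^->: the sigma_z eigenstates.
phi_p, phi_m = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# rho_k^{N=2} is the GHZ-type state (|++> + |-->)/sqrt(2) sent through the
# product channel with Kraus operators {E_a (x) E_b}.
ghz = (np.kron(phi_p, phi_p) + np.kron(phi_m, phi_m)) / np.sqrt(2)
rho_in = np.outer(ghz, ghz)
rho = sum(
    np.kron(Ea, Eb) @ rho_in @ np.kron(Ea, Eb).conj().T
    for Ea in (E0, E1) for Eb in (E0, E1)
)

# Trace out the second qubit: reshape to (a, b, a', b') and contract b with b'.
rho1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
```

The reduced state `rho1` equals $\mathbb{1}_2/2$ to machine precision; the $N=3$ identity can be checked the same way by tracing out the third qubit.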
Consider the following relationship of the Bures fidelity with the quantum Fisher information matrix (QFIM), where the QFIM is real, symmetric and positive semidefinite but more general and not necessarily composed of symmetric logarithmic derivatives (SLDs): $$\label{eq:supp_bures1} F\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right) =1-\frac{1}{4}\sum_{j,k}\epsilon_j\epsilon_k{\rm Tr}\left[\frac{\hat{L}_j^\dagger\hat{L}_k+\hat{L}_k^\dagger\hat{L}_j}{2}\hat{\rho}(\boldsymbol{\theta})\right],$$ where $\boldsymbol{\theta}$ is assumed to be the actual value of the vector of unknown parameters, $\boldsymbol{\epsilon}$ is an infinitesimal increment in $\boldsymbol{\theta}$, and $0 \leq F(\hat{\rho}_1,\hat{\rho}_2)={\rm Tr}\left(\sqrt{\sqrt{\hat{\rho}_1}\hat{\rho}_2\sqrt{\hat{\rho}_1}}\right) \leq 1$ is the Bures fidelity between two given states $\hat{\rho}_1$ and $\hat{\rho}_2$ [@BC; @NC; @MGAP; @YZF; @MH; @JDN]. Here, (\[eq:supp\_bures1\]) holds, when the operators $\hat{L}_k$ are not necessarily Hermitian and satisfy: $$\label{eq:supp_bures_proof1} \frac{1}{2}\left(\hat{L}_k\hat{\rho}(\boldsymbol{\theta})+\hat{\rho}(\boldsymbol{\theta})\hat{L}_k^\dagger\right)=\frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_k}.$$ This can be seen as follows. 
When the operators $\hat{L}_k$ are Hermitian, such that $\hat{L}_k^\dagger = \hat{L}_k$, as is the convention, the Bures metric $d_B$ and Bures distance $D_B$ are defined and related to the fidelity $F$ for infinitesimal $\boldsymbol{\epsilon}$ as follows [@BC; @DRFG]: $$\label{eq:supp_bures_proof2} d_B^2\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)=D_B^2\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)=2\left[1-F\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)\right]=\frac{1}{2}\sum_{j,k}\epsilon_j\epsilon_k{\rm Tr}\left[\frac{\hat{L}_j\hat{L}_k+\hat{L}_k\hat{L}_j}{2}\hat{\rho}(\boldsymbol{\theta})\right],$$ where $\hat{L}_k$ are the SLDs satisfying: $$\frac{1}{2}\left(\hat{L}_k\hat{\rho}(\boldsymbol{\theta})+\hat{\rho}(\boldsymbol{\theta})\hat{L}_k\right)=\frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_k}.$$ However, if the operators $\hat{L}_k$ are not necessarily Hermitian and rather satisfy (\[eq:supp\_bures\_proof1\]), then (\[eq:supp\_bures\_proof2\]) becomes: $$d_B^2\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)=D_B^2\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)=2\left[1-F\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)\right]=\frac{1}{2}\sum_{j,k}\epsilon_j\epsilon_k{\rm Tr}\left[\frac{\hat{L}_j^\dagger\hat{L}_k+\hat{L}_k^\dagger\hat{L}_j}{2}\hat{\rho}(\boldsymbol{\theta})\right].$$ Then, clearly (\[eq:supp\_bures1\]) is obtained from the above. We note that the literature is inconsistent about the relationship between $d_B$, $D_B$ and $F$; we use here the relationship originally presented in Ref. [@BC]. Now, for our case in this paper, the operators $\hat{L}_k$ are anti-symmetric logarithmic derivatives (ALDs), such that $\hat{L}_k^\dagger = -\hat{L}_k$.
We have from (\[eq:supp\_bures1\]): $$\label{eq:supp_bures2} F\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)=1-\frac{1}{4}\sum_{j,k}\epsilon_j\epsilon_kJ_Q^{jk}(\boldsymbol{\theta}).$$ Now, since fidelity is non-decreasing with respect to partial trace (see Refs. [@FR; @DN; @MH; @NC], for example), we have: $$\label{eq:supp_bures3} F\left(\hat{\rho}(\boldsymbol{\theta}),\hat{\rho}(\boldsymbol{\theta+\epsilon})\right)=F\left({\rm Tr}_B\left[\hat{\rho}_{SB}(\boldsymbol{\theta})\right],{\rm Tr}_B\left[\hat{\rho}_{SB}(\boldsymbol{\theta+\epsilon})\right]\right)\geq F\left(\hat{\rho}_{SB}(\boldsymbol{\theta}),\hat{\rho}_{SB}(\boldsymbol{\theta+\epsilon})\right)=1-\frac{1}{4}\sum_{j,k}\epsilon_j\epsilon_kC_Q^{jk}(\boldsymbol{\theta}).$$ Clearly, from (\[eq:supp\_bures2\]) and (\[eq:supp\_bures3\]), we have (as in Ref. [@YZF]): $$\label{eq:supp_qfim_upper_bound} C_Q(\boldsymbol{\theta}) \geq J_Q(\boldsymbol{\theta}).$$ An alternative argument for (\[eq:supp\_qfim\_upper\_bound\]) to hold is that the quantum Fisher information (for both single and multiparameter cases) is an operator monotone function, non-increasing with respect to partial trace [@DDM; @PG], noting that the partial trace is a completely positive and trace-preserving map from $S+B$ space to $S$ space. Note that, even though we did not explicitly invoke Uhlmann’s theorem here, the inequality in (\[eq:supp\_bures3\]) is the monotonicity property of fidelity and is a consequence of Uhlmann’s theorem. Thus, extending the argument from Ref. [@EFD] to the multiparameter case, the equality in (\[eq:supp\_qfim\_upper\_bound\]) is achieved by minimizing $C_Q(\boldsymbol{\theta})$ over all Kraus representations of the quantum channel. Hence, there is an infinitude of Kraus representations of the channel that lead to $C_Q(\boldsymbol{\theta})=J_Q(\boldsymbol{\theta})$.
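The monotonicity step in (\[eq:supp\_bures3\]) can be checked numerically. The following is a minimal sketch over random bipartite states; the dimensions $d_A=2$, $d_B=3$ and the Ginibre construction are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state(d):
    # Random density matrix from a Ginibre matrix (illustrative ensemble).
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def sqrtm_psd(m):
    # Square root of a Hermitian positive semidefinite matrix via eigendecomposition.
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(r1, r2):
    # Bures fidelity F = Tr sqrt(sqrt(r1) r2 sqrt(r1)).
    s = sqrtm_psd(r1)
    return np.trace(sqrtm_psd(s @ r2 @ s)).real

def ptrace_B(r, dA, dB):
    # Partial trace over the second (B) tensor factor.
    return np.trace(r.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

dA, dB = 2, 3
ok = all(
    fidelity(ptrace_B(r, dA, dB), ptrace_B(s, dA, dB)) >= fidelity(r, s) - 1e-8
    for r, s in ((rand_state(dA * dB), rand_state(dA * dB)) for _ in range(100))
)
```

Every random pair satisfies $F({\rm Tr}_B\hat{\rho},{\rm Tr}_B\hat{\sigma})\geq F(\hat{\rho},\hat{\sigma})$, as the monotonicity property requires.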
POVM to attain QCRB for Pure State Input via Unitary Channel {#sec:app6} ============================================================ Here, we prove that, as claimed in Section \[sec:qcrb\], the set of POVMs $\{\hat{P}_{m1}\}$ of cardinality $q+2$, comprising the following $q+1$ elements, $$\label{eq:supp_qcrb_measure1} \hat{P}_0 = \hat{\rho}(\boldsymbol{\theta}) = \hat{U}(\boldsymbol{\theta})|\psi\rangle\langle\psi|\hat{U}^\dagger(\boldsymbol{\theta}),\qquad \hat{P}_m = \frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_m}|\psi\rangle\langle\psi|\frac{\partial\hat{U}^\dagger(\boldsymbol{\theta})}{\partial\theta_m} \qquad \forall m=1,\ldots,q,$$ together with one element $\hat{P}_n=\hat{P}_{q+1}:=|\phi_n\rangle\langle\phi_n|$ that accounts for the normalisation, saturates the ALD-based QCRB, provided (\[eq:supp\_qcrb\_saturate\]) is satisfied for every pair of ALDs. The proof is adapted from Ref. [@HBDW], noting that for pure state and unitary channel our ALD-based QCRB coincides with the SLD-based QCRB, and it is enough to demonstrate that using the set of POVMs $\{\hat{P}_{m1}\}$ the quantum Fisher information matrix (QFIM) equals the classical Fisher information matrix (FIM), when (\[eq:supp\_qcrb\_saturate\]) is satisfied. The set of POVMs must be complete, i.e. $\sum_{m1}\hat{P}_{m1}=\mathbb{1}$. Consider that the initial probe state is $\hat{\rho}=|\psi\rangle\langle\psi|$. 
Then, we use the short notations $$|\psi_{\boldsymbol{\theta}}\rangle=\hat{U}(\boldsymbol{\theta})|\psi\rangle, \qquad |\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle=\frac{\partial}{\partial\theta_k}|\psi_{\boldsymbol{\theta}}\rangle=\frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_k}|\psi\rangle.$$ The elements of the quantum Fisher information matrix (QFIM) are given by [@HBDW; @BD] $$J_Q^{jk} = 4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle-\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\psi_{\boldsymbol{\theta}}\rangle\langle\psi_{\boldsymbol{\theta}}|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right].$$ The elements of the corresponding classical Fisher information matrix (FIM) $J_C$ are given by [@HBDW] $$J_C^{jk}=\sum_{m=0}^{q+1}\frac{\partial_{\theta_j}p(m|\boldsymbol{\theta})\partial_{\theta_k}p(m|\boldsymbol{\theta})}{p(m|\boldsymbol{\theta})}=\sum_m\frac{4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\hat{P}_m|\psi_{\boldsymbol{\theta}}\rangle\right]{\rm Re}\left[\langle\psi_{\boldsymbol{\theta}}|\hat{P}_m|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right]}{\langle\psi_{\boldsymbol{\theta}}|\hat{P}_m|\psi_{\boldsymbol{\theta}}\rangle}.$$ The component of the FIM corresponding to the POVM element $\hat{P}_0=|\psi_{\boldsymbol{\theta}}\rangle\langle\psi_{\boldsymbol{\theta}}|$ is $$\label{eq:supp_p0} 4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\psi_{\boldsymbol{\theta}}\rangle\right]{\rm Re}\left[\langle\psi_{\boldsymbol{\theta}}|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right]=0.$$ The above quantity vanishes because ${\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\psi_{\boldsymbol{\theta}}\rangle\right]=0$ for any parameter $\theta_j$ [@HBDW; @BCM].
Next, the component of the FIM corresponding to the POVM element $\hat{P}_n=\hat{P}_{q+1}=|\phi_n\rangle\langle\phi_n|$ is $$\label{eq:supp_pn} \frac{4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\hat{P}_n|\psi_{\boldsymbol{\theta}}\rangle\right]{\rm Re}\left[\langle\psi_{\boldsymbol{\theta}}|\hat{P}_n|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right]}{\langle\psi_{\boldsymbol{\theta}}|\hat{P}_n|\psi_{\boldsymbol{\theta}}\rangle}=4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\phi_n\rangle\langle\phi_n|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right],$$ since $\langle\psi_{\boldsymbol{\theta}}|\hat{P}_n|\psi_{\boldsymbol{\theta}}\rangle$ is, by definition, real. The remaining components, corresponding to the POVM elements $\hat{P}_m$ for $m=1,\ldots,q$, may be computed similarly, and we get $$\label{eq:supp_fim} J_C^{jk}=4\sum_{m=1}^q{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\partial_{\theta_m}\psi_{\boldsymbol{\theta}}\rangle\langle\partial_{\theta_m}\psi_{\boldsymbol{\theta}}|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right]+4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\phi_n\rangle\langle\phi_n|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right].$$ Now, note that, for the completeness of the set of POVMs, we require $$\label{eq:supp_povm} \sum_{m=1}^q|\partial_{\theta_m}\psi_{\boldsymbol{\theta}}\rangle\langle\partial_{\theta_m}\psi_{\boldsymbol{\theta}}|+|\phi_n\rangle\langle\phi_n|=\mathbb{1}-|\psi_{\boldsymbol{\theta}}\rangle\langle\psi_{\boldsymbol{\theta}}|.$$ Substituting (\[eq:supp\_povm\]) in (\[eq:supp\_fim\]), we get $$\label{eq:supp_sld_fim} J_C^{jk}=4{\rm Re}\left[\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle-\langle\partial_{\theta_j}\psi_{\boldsymbol{\theta}}|\psi_{\boldsymbol{\theta}}\rangle\langle\psi_{\boldsymbol{\theta}}|\partial_{\theta_k}\psi_{\boldsymbol{\theta}}\rangle\right]=J_Q^{jk}.$$
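This construction can be exercised numerically for a simple single-parameter example; the family $|\psi_\theta\rangle=e^{-i\theta\hat{\sigma}_z/2}|+\rangle$ is an illustrative choice, not from the text. For a pure state and unitary channel the ALD-based QCRB coincides with the SLD-based one, so $J_Q$ is given by the pure-state expression above; the POVM $\{\hat{P}_0,\hat{P}_1,\hat{P}_n\}$ is built exactly as in (\[eq:supp\_qcrb\_measure1\]), and its classical FIM (computed by finite differences near $\theta_s$ to avoid the $0/0$ limit) matches $J_Q$.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def psi(t):
    # |psi_t> = exp(-i t Z / 2) |+>  (assumed single-parameter unitary family)
    return np.diag(np.exp(-1j * t * np.array([0.5, -0.5]))) @ plus

ts = 0.0                                   # reference point theta_s
dpsi = -0.5j * Z @ psi(ts)                 # (d/dt) U(t)|psi> at t = ts

# Pure-state QFI: J_Q = 4(<dpsi|dpsi> - |<psi|dpsi>|^2); here J_Q = 1.
J_Q = 4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(psi(ts), dpsi)) ** 2).real

# POVM as in the text: P0 = |psi><psi|, P1 = |dpsi><dpsi|, Pn normalising.
P0 = np.outer(psi(ts), psi(ts).conj())
P1 = np.outer(dpsi, dpsi.conj())
Pn = np.eye(2) - P0 - P1                   # positive semidefinite for this family

def fim(t, h=1e-5):
    # Classical FIM of the fixed POVM, via central finite differences in t.
    total = 0.0
    for P in (P0, P1, Pn):
        p = lambda s: np.vdot(psi(s), P @ psi(s)).real
        total += (p(t + h) - p(t - h)) ** 2 / (4 * h ** 2 * p(t))
    return total
```

For this particular family the FIM turns out to be $\theta$-independent, so evaluating it slightly away from $\theta_s$ already yields $J_C=J_Q=1$.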
POVM to attain QCRB for Mixed State Input via Unitary Channel {#sec:app7} ============================================================= Here, we prove that, as claimed in Section \[sec:qcrb\], the set of POVMs $\{\hat{P}_{m2}\}$ of cardinality $q+2$, comprising the following $q+1$ elements, $$\label{eq:supp_qcrb_measure2} \hat{P}_0 = \hat{\rho}(\boldsymbol{\theta})=\hat{U}(\boldsymbol{\theta})\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta}),\qquad \hat{P}_m = \frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m}=\left[\frac{\partial\hat{U}(\boldsymbol{\theta})}{\partial\theta_m}\hat{\rho}\hat{U}^\dagger(\boldsymbol{\theta})+\hat{U}(\boldsymbol{\theta})\hat{\rho}\frac{\partial\hat{U}^\dagger(\boldsymbol{\theta})}{\partial\theta_m}\right] \quad \forall m=1,\ldots,q,$$ together with one element $\hat{P}_n=\hat{P}_{q+1}$ that accounts for the normalisation, saturates the ALD-based QCRB, provided (\[eq:supp\_qcrb\_saturate\]) is satisfied for every pair of ALDs. The elements of the QFIM with $\hat{\rho}_{\boldsymbol{\theta}}:=\hat{\rho}(\boldsymbol{\theta})$ and $\hat{U}_{\boldsymbol{\theta}}:=\hat{U}(\boldsymbol{\theta})$ are: $$\label{eq:supp_ald_qfim} J_Q^{jk}=\frac{1}{2}{\rm Tr}\left[\left(\hat{L}_j^\dagger\hat{L}_k+\hat{L}_k^\dagger\hat{L}_j\right)\hat{\rho}_{\boldsymbol{\theta}}\right]=4{\rm Re}\left[{\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right)+{\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right){\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right)\right],$$ where we used $\hat{L}_k=2\left[\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{U}_{\boldsymbol{\theta}}^\dagger-{\rm 
Tr}\left(\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right)\right]$ (as taken in Sections \[sec:qfim\] and \[sec:noisy\]), that satisfy: $$2\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}}=\hat{L}_k\hat{\rho}_{\boldsymbol{\theta}}+\hat{\rho}_{\boldsymbol{\theta}}\hat{L}_k^\dagger, \qquad \hat{L}_k^\dagger=-\hat{L}_k,$$ noting that $\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger=-\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{U}_{\boldsymbol{\theta}}^\dagger$, arising from $\hat{U}_{\boldsymbol{\theta}}\hat{U}_{\boldsymbol{\theta}}^\dagger=\mathbb{1}$ upon differentiating both sides with respect to $\theta_k$. Also, the elements of the FIM $J_C$, as defined in (\[eq:supp\_fim1\]), are: $$\label{eq:supp_fim1_unitary} J_C^{jk}=\sum_m\frac{1}{p(m|\boldsymbol{\theta})}\frac{\partial}{\partial\theta_j}p(m|\boldsymbol{\theta})\frac{\partial}{\partial\theta_k}p(m|\boldsymbol{\theta})=\sum_m \frac{1}{{\rm Tr}\left(\hat{P}_m\hat{\rho}_{\boldsymbol{\theta}}\right)}\frac{\partial}{\partial\theta_j}{\rm Tr}\left(\hat{P}_m\hat{\rho}_{\boldsymbol{\theta}}\right)\frac{\partial}{\partial\theta_k}{\rm Tr}\left(\hat{P}_m\hat{\rho}_{\boldsymbol{\theta}}\right).$$ Consider that we are interested in saturating the bound at a specific point $\theta_s$ in the space of $\boldsymbol{\theta}$, as in Ref. [@HBDW]. Then, (\[eq:supp\_p0\]) here becomes: $$\label{eq:supp_p0_unitary} \frac{{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right){\rm Tr}\left(\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)}{{\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)}=0,$$ where ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$ for any parameter $\theta_j$, as an extension of Refs. [@HBDW; @BCM]. This can be seen as follows.
Given that $\hat{\rho}_{\boldsymbol{\theta}s}$ is not necessarily pure, we must have ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)\leq 0$, arising upon differentiation with respect to $\theta_j$ from ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)\leq 1$, for which ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)$ is clearly non-decreasing. However, since $\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}$ is a POVM element, we must have ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=\langle\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\rangle=p(j|\theta_s)$, which being a probability cannot be negative. Here, $\langle\cdot\rangle$ denotes expectation with respect to $\hat{\rho}_{\boldsymbol{\theta}s}$. Hence, we must have ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$. For example, when the state $\hat{\rho}_{\boldsymbol{\theta}s}$ is maximally mixed, i.e. $\hat{\rho}_{\boldsymbol{\theta}s}=\mathbb{1}_d/d$, where $d$ is the dimension of the Hilbert space upon which the state $\hat{\rho}_{\boldsymbol{\theta}s}$ is defined, we have ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)=1/d$, and consequently, ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$. On the other hand, if $\hat{\rho}_{\boldsymbol{\theta}s}$ is pure, we must have ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)=1$, and consequently, ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$ again. Next, proceeding in a manner similar to Ref. [@HBDW] for the terms of the FIM for $m=1,\ldots,q$, we take $\hat{\rho}_{\boldsymbol{\theta}}=\hat{\rho}_{\boldsymbol{\theta}s}+\delta\theta_r\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}$. 
Clearly, ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$ (even for $j=m$), arising from ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$ upon differentiating both sides with respect to $\theta_m$, and noting that ${\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=\langle\partial_{\theta_j}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}\rangle=0$, since $\langle\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\rangle=0$. In general, we must have ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_m\right)=0, \, \forall m=0,1,\ldots,q+1$. Thus, we have ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}}\right)=\delta\theta_r{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_m}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\right)$, ${\rm Tr}\left(\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)=\delta\theta_r{\rm Tr}\left(\partial_{\theta_m}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)$, and ${\rm Tr}\left(\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}}\hat{\rho}_{\boldsymbol{\theta}}\right)=\delta\theta_r^2{\rm Tr}\left(\partial_{\theta_m}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\right)$. 
Then, we get $$\label{eq:supp_pm_unitary} \begin{split} \sum_{m=1}^q\frac{\delta\theta_r^2{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_m}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\right){\rm Tr}\left(\partial_{\theta_m}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)}{\delta\theta_r^2{\rm Tr}\left(\partial_{\theta_m}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\right)} &=\sum_{m=1}^q\frac{{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_m}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right){\rm Tr}\left(\partial_{\theta_m}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)}{{\rm Tr}\left(\partial_{\theta_m}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)}\\ &=\sum_{m=1}^q{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}\right), \end{split}$$ since the limiting expression for the elements of the FIM at the point $\theta_s$ should be independent of the direction in which the state is expanded to calculate the above [@HBDW], such that we can choose $r=j$ or $r=k$ for our convenience. 
Also, we get $$\label{eq:supp_pn_unitary} \frac{{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right){\rm Tr}\left(\hat{P}_{q+1}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)}{{\rm Tr}\left[\hat{P}_{q+1}\left(\hat{\rho}_{\boldsymbol{\theta}s}+\delta\theta_r\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\right)\right]}=\frac{{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right){\rm Tr}\left(\hat{P}_{q+1}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)}{{\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)+\delta\theta_r{\rm Tr}\left(\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)}=0,$$ since ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)=0$ for the normalising element $\hat{P}_{q+1}$. Thus, (\[eq:supp\_fim\]) here becomes: $$\label{eq:supp_fim_unitary} J_C^{jk}=\sum_{m=1}^{q}{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{P}_m\right)=-\sum_{m=1}^{q}{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_m\right)=-\sum_{m=1}^q{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}\right),$$ where the second equality arises from ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_m\right)=0$ upon differentiating both sides with respect to $\theta_k$.
Furthermore, (\[eq:supp\_povm\]) here becomes: $$\label{eq:supp_povm2_unitary} \sum_{m=1}^q\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}=\mathbb{1}-\hat{\rho}_{\boldsymbol{\theta}s}-\hat{P}_{q+1}.$$ Then, (\[eq:supp\_sld\_fim\]) here becomes $$\label{eq:supp_ald_fim} \begin{split} J_C^{jk}&=-{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)-{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)+{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)\\ &=-{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)-{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\right){\rm Tr}\left(\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)\\ &=-4{\rm Re}\left[{\rm Tr}\left(\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\right)+{\rm Tr}\left(\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\right){\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\right)\right]\\ &=4{\rm Re}\left[{\rm Tr}\left(\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\right)+{\rm Tr}\left(\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\right){\rm Tr}\left(\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\right)\right]\\ &=4{\rm Re}\left[{\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right)+{\rm 
Tr}\left(\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right){\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{\rho}_{\boldsymbol{\theta}}\right)\right]=C_Q^{jk}. \end{split}$$ Here, we used the fact that $\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}=-\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}$, arising from $\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}=\mathbb{1}$ upon differentiating both sides with respect to $\theta_k$, and that ${\rm Tr}\left(\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\hat{U}_{\boldsymbol{\theta}}^\dagger\right)=-{\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\right)$, arising from ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\right)={\rm Tr}\left(\hat{U}_{\boldsymbol{\theta}}\hat{\rho}\hat{U}_{\boldsymbol{\theta}}^\dagger\right)=1$ upon differentiating both sides with respect to $\theta_k$, and that $2{\rm Re}\left[\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}\right]=-2{\rm Re}\left[\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}\right]$, arising from $\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}=\mathbb{1}$ upon differentiating both sides with respect to $\theta_k$ and then $\theta_j$. Also, ${\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)=0$, since ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)=0$. 
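The collapse of the classical FIM onto $C_Q^{jk}$ for a unitary channel can be checked numerically. The following is a minimal sketch, assuming only `numpy`; the generator $\hat{H}=\hat{\sigma}_x$, the probe amplitude and the working point $\theta_0$ are illustrative choices, not taken from the text. It verifies the anti-Hermiticity identity $\partial_\theta\hat{U}_{\boldsymbol{\theta}}^\dagger\hat{U}_{\boldsymbol{\theta}}=-\hat{U}_{\boldsymbol{\theta}}^\dagger\partial_\theta\hat{U}_{\boldsymbol{\theta}}$ used above, and that for a pure probe the single-parameter $C_Q$ reduces to the familiar pure-state QFIM $4\,{\rm Var}_\rho(\hat{H})$.

```python
import numpy as np

# Pauli-X generator; since X @ X = I, exp(-i t X) = cos(t) I - i sin(t) X.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def U(t):
    return np.cos(t) * I2 - 1j * np.sin(t) * X

# Pure probe state rho = |psi><psi| (amplitude a is an arbitrary choice).
a = 0.3
psi = np.array([np.cos(a), np.sin(a)], dtype=complex)
rho = np.outer(psi, psi.conj())

theta0, eps = 0.7, 1e-6
dU = (U(theta0 + eps) - U(theta0 - eps)) / (2 * eps)  # central-difference dU/dtheta
dUd = dU.conj().T

# dU^dagger U = -(U^dagger dU), from differentiating U^dagger U = 1.
lhs = dUd @ U(theta0)
rhs = -(U(theta0).conj().T @ dU)
assert np.allclose(lhs, rhs, atol=1e-6)

# Single-parameter (j = k) version of C_Q^{jk} from the derivation above.
CQ = 4 * np.real(np.trace(dUd @ dU @ rho) + np.trace(dUd @ U(theta0) @ rho) ** 2)

# For a pure state this should equal the standard QFIM 4 Var(H).
meanH = np.real(psi.conj() @ X @ psi)
varH = np.real(psi.conj() @ X @ X @ psi) - meanH ** 2
print(round(CQ, 6), round(4 * varH, 6))
```

The finite-difference derivative keeps the check independent of the closed form of $\partial_\theta\hat{U}_{\boldsymbol{\theta}}$.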
Note that ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_m\right)=0, \, \forall m=1,\ldots,q$, but ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)\geq 0$, such that $$\sum_{m=1}^q{\rm Tr}\left[\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_m\right]=0 \Rightarrow {\rm Tr}\left[\hat{\rho}_{\boldsymbol{\theta}s}\left(\mathbb{1}-\hat{\rho}_{\boldsymbol{\theta}s}-\hat{P}_{q+1}\right)\right]=0 \Rightarrow {\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)=1-{\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)\leq 1,$$ where the equality holds, when ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)=0$, and consequently, $\hat{\rho}_{\boldsymbol{\theta}s}$ is pure. However, we have from above that ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)=p\left(q+1|\theta_s\right)=1-{\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}^2\right)$, which upon differentiation with respect to $\theta_j$ yields ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)+{\rm Tr}\left[\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_j}\hat{P}_{q+1}\right]=-2{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$. Clearly, from (\[eq:supp\_povm2\_unitary\]), upon differentiating both sides with respect to $\theta_j$, multiplying both sides by $\hat{\rho}_{\boldsymbol{\theta}s}$, which is positive definite, and then taking trace of both sides, we get ${\rm Tr}\left[\partial_{\theta_j}\hat{P}_{q+1}\hat{\rho}_{\boldsymbol{\theta}s}\right]=-{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)-\sum_{m=1}^q{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}\hat{\rho}_{\boldsymbol{\theta}s}\right)=0$. Thus, we indeed have ${\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\hat{P}_{q+1}\right)=0$, as used earlier. 
Condition to saturate Upper Bound to QFIM {#sec:app8} ========================================= Here, we prove that, as claimed in Section \[sec:noisy\], the following is a necessary and sufficient condition $$\label{eq:supp_qfim_bound_saturate} {\rm Im}\left[\sum_l{\rm Tr}\left\lbrace\left(\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\right)\hat{\rho}\right\rbrace\right]=0 \qquad \forall j,k$$ for the following upper bound to the ALD-based QFIM to be saturated: $$\label{eq:supp_qfim_bound} C_Q^{jk}=4{\rm Re}\left[{\rm Tr}\left(\sum_l\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}\hat{\rho}\right)+{\rm Tr}\left(\sum_p\frac{\partial\hat{\Pi}_p^\dagger(\boldsymbol{\theta})}{\partial\theta_j}\hat{\Pi}_p(\boldsymbol{\theta})\hat{\rho}\right){\rm Tr}\left(\sum_r\frac{\partial\hat{\Pi}_r^\dagger(\boldsymbol{\theta})}{\partial\theta_k}\hat{\Pi}_r(\boldsymbol{\theta})\hat{\rho}\right)\right].$$ Consider that our initial probe state is pure, i.e. $\hat{\rho}=|\psi\rangle\langle\psi|$. Then, the unitary evolution $\hat{U}_{SB}(\boldsymbol{\theta})$ in the $S+B$ space can be considered equivalent to the output impure state $\sum_l\hat{\Pi}_l(\boldsymbol{\theta})|\psi\rangle\langle\psi|\hat{\Pi}_l^\dagger(\boldsymbol{\theta})$ of the noisy channel in the system $S$ space, subsequently purified by extending the $S$ space, introducing ancillas $B$. For the sake of clarity, we use the notation $\hat{U}_{\boldsymbol{\theta}}^{(S+B)}:=\hat{U}_{S+B}(\boldsymbol{\theta})$ here, in distinction from $\hat{U}_{SB}(\boldsymbol{\theta})$. The overall output is then a pure state denoted as $\hat{\rho}_{\boldsymbol{\theta}}^{S+B}=|\psi_{\boldsymbol{\theta}}^{S+B}\rangle\langle\psi_{\boldsymbol{\theta}}^{S+B}|$. 
Then, the QCRB (\[eq:supp\_ald\_qcrb\]) in the $S+B$ space can be saturated when the condition (\[eq:supp\_qcrb\_saturate\]), which here takes the form $$\label{eq:supp_qfim_sb_saturate} {\rm Im}\left[{\rm Tr}\left\lbrace\left(\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^{(S+B)\dagger}\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^{(S+B)}\right)\left(|\psi\rangle\langle\psi|\otimes|0_B\rangle\langle 0_B|\right)\right\rbrace\right]=0$$ is satisfied, where $|0_B\rangle$ is an ancillary bath in the vacuum state. Tracing out $B$ in (\[eq:supp\_qfim\_sb\_saturate\]), we get (\[eq:supp\_qfim\_bound\_saturate\]) as a necessary condition for the set of POVMs $\{\hat{P}_{n2}\}$ to result in (\[eq:supp\_povm\_noisy\]) (See Appendix \[sec:app9\]), since the operators $\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}$ do not act on $B$. Now, consider that the initial probe state $\hat{\rho}$ is not pure. It can be purified by extending the system $S$ space, introducing ancillas $S'$. Then, (\[eq:supp\_qfim\_bound\_saturate\]) can be applied to the pure state $|\psi^{S+S'}\rangle$ in the initial enlarged $S+S'$ space. Since the operators $\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}$ do not act on $S'$, we get (\[eq:supp\_qfim\_bound\_saturate\]) again as a necessary condition for the set of POVMs $\{\hat{P}_{n3}\}$ to result in (\[eq:supp\_ald\_fim\_ss\]) (See Appendix \[sec:app10\]). Next, we assume that the condition (\[eq:supp\_qfim\_bound\_saturate\]) saturates the upper bound (\[eq:supp\_qfim\_bound\]) to the QFIM. We consider that the initial probe state is pure. Then, the output impure state of the noisy channel in the $S$ space can be purified by extending the final system $S$ space by introducing ancillas $B$. Since both the input and output states are pure, the channel in the $S+B$ space is unitary $\hat{U}_{\boldsymbol{\theta}}^{(S+B)}$.
Then, since the operators $\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}$ do not act on $B$, (\[eq:supp\_qfim\_bound\_saturate\]) saturating (\[eq:supp\_qfim\_bound\]) in the $S$ space implies that (\[eq:supp\_qfim\_sb\_saturate\]) saturates the QCRB (\[eq:supp\_ald\_qcrb\]) in the $S+B$ space. Thus, (\[eq:supp\_qfim\_bound\_saturate\]) is a sufficient condition for the set of POVMs $\{\hat{P}_{n2}\}$ to result in (\[eq:supp\_povm\_noisy\]). Now, considering that the initial probe state is not pure, it can be purified by extending the initial system $S$ space by introducing ancillas $S'$. Then, since the operators $\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}$ do not act on $S'$, (\[eq:supp\_qfim\_bound\_saturate\]) saturating (\[eq:supp\_qfim\_bound\]) in the $S$ space implies that (\[eq:supp\_qfim\_sb\_saturate\]) saturates the QCRB (\[eq:supp\_ald\_qcrb\]) in the $S+B+S'$ space, with $\hat{U}_{\boldsymbol{\theta}}^{(S+B)}$ replaced by $\hat{U}_{\boldsymbol{\theta}}^{(S+B+S')}$ and $|\psi\rangle$ replaced by $|\psi^{S+S'}\rangle$. Thus, we again get (\[eq:supp\_qfim\_bound\_saturate\]) as a sufficient condition for the set of POVMs $\{\hat{P}_{n3}\}$ to result in (\[eq:supp\_ald\_fim\_ss\]). 
POVM to attain QFIM Upper Bound for Pure State Input via Noisy Channel {#sec:app9} ====================================================================== Here, we prove that, as claimed in Section \[sec:noisy\], the set of POVMs $\{\hat{P}_{n2}\}$ of cardinality $q+2$, comprising the following $q+1$ elements, $$\hat{P}_0 = \hat{\rho}(\boldsymbol{\theta})=\sum_l\hat{\Pi}_l(\boldsymbol{\theta})|\psi\rangle\langle\psi|\hat{\Pi}_l^\dagger(\boldsymbol{\theta}),\quad \hat{P}_m = \frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m}=\sum_l\left[\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_m}|\psi\rangle\langle\psi|\hat{\Pi}_l^\dagger(\boldsymbol{\theta})+\hat{\Pi}_l(\boldsymbol{\theta})|\psi\rangle\langle\psi|\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_m}\right] \quad \forall m=1,\ldots,q,$$ together with one element accounting for normalisation, saturates (\[eq:supp\_qfim\_bound\]), provided (\[eq:supp\_qfim\_bound\_saturate\]) is satisfied. We again consider an initial pure state $\hat{\rho}=|\psi\rangle\langle\psi|$, and the unitary evolution in the $S+B$ space, $\hat{U}_{\boldsymbol{\theta}}^{(S+B)}$ here, in distinction from $\hat{U}_{SB}(\boldsymbol{\theta})$ used in Section \[sec:noisy\].
Then, the elements of the QFIM, as in (\[eq:supp\_ald\_qfim\]), in terms of the initial pure state $|\psi\rangle$ here are $$\begin{split} J_Q^{jk,S+B}&=4{\rm Re}\left[{\rm Tr}\left(\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^{(S+B)\dagger}\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^{(S+B)}\left(|\psi\rangle\langle\psi|\otimes|0_B\rangle\langle 0_B|\right)\right)\right.\\ &\left.+{\rm Tr}\left(\partial_{\theta_j}\hat{U}_{\boldsymbol{\theta}}^{(S+B)\dagger}\hat{U}_{\boldsymbol{\theta}}^{(S+B)}\left(|\psi\rangle\langle\psi|\otimes|0_B\rangle\langle 0_B|\right)\right){\rm Tr}\left(\partial_{\theta_k}\hat{U}_{\boldsymbol{\theta}}^{(S+B)\dagger}\hat{U}_{\boldsymbol{\theta}}^{(S+B)}\left(|\psi\rangle\langle\psi|\otimes|0_B\rangle\langle 0_B|\right)\right)\right], \end{split}$$ where $|0_B\rangle$ is an ancillary bath in the vacuum state. Tracing out $B$ from above, we get the upper bound (\[eq:supp\_qfim\_bound\]) to the QFIM in terms of the initial pure state $|\psi\rangle$ in the $S$ space: $$C_Q^{jk}=4{\rm Re}\left[{\rm Tr}\left(\sum_l\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}|\psi\rangle\langle\psi|\right)+{\rm Tr}\left(\sum_p\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}p}^\dagger\hat{\Pi}_{\boldsymbol{\theta}p}|\psi\rangle\langle\psi|\right){\rm Tr}\left(\sum_r\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}r}^\dagger\hat{\Pi}_{\boldsymbol{\theta}r}|\psi\rangle\langle\psi|\right)\right],$$ where we used the short notations $\hat{\Pi}_{\boldsymbol{\theta}l}=\hat{\Pi}_l(\boldsymbol{\theta})$ and $\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}=\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_k}$.
Now, since the operators $\hat{\Pi}_{\boldsymbol{\theta}l}$ and $\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}$ do not act on $B$, when (\[eq:supp\_qfim\_bound\_saturate\]) is satisfied, the bound $C_Q$ is the actual QFIM $J_Q$ in the $S$ space, with the set of POVMs $\{\hat{P}_{n2}\}$ saturating the corresponding QCRB for a unital channel (see Appendix \[sec:app11\]). Also, the elements of the FIM $J_C$, as defined in (\[eq:supp\_fim1\]), are again as in (\[eq:supp\_fim1\_unitary\]). Consider again that we are interested in saturating the bound at a specific point $\theta_s$ in the space of $\boldsymbol{\theta}$, as in Ref. [@HBDW]. Then, (\[eq:supp\_p0\_unitary\]) and (\[eq:supp\_pm\_unitary\]) remain the same. But, (\[eq:supp\_pn\_unitary\]) becomes $$\frac{{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle\langle\Phi_n|\right){\rm Tr}\left(|\Phi_n\rangle\langle\Phi_n|\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)}{{\rm Tr}\left[|\Phi_n\rangle\langle\Phi_n|\left(\hat{\rho}_{\boldsymbol{\theta}s}+\delta\theta_r\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}\right)\right]}=\frac{\langle\Phi_n|\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle\langle\Phi_n|\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle}{\langle\Phi_n|\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle+\delta\theta_r\langle\Phi_n|\partial_{\theta_r}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle}=0,$$ since $\langle\Phi_n|\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle=0$. Here, we used the normalising element $|\Phi_n\rangle\langle\Phi_n|$, in distinction from $|\phi_n\rangle\langle\phi_n|$ used in (\[eq:supp\_pn\]). Then, (\[eq:supp\_fim\_unitary\]) remains the same.
But, (\[eq:supp\_povm2\_unitary\]) here becomes: $$\label{eq:supp_povm2} \sum_{m=1}^q\partial_{\theta_m}\hat{\rho}_{\boldsymbol{\theta}s}=\mathbb{1}-\hat{\rho}_{\boldsymbol{\theta}s}-|\Phi_n\rangle\langle\Phi_n|.$$ Then, (\[eq:supp\_ald\_fim\]) here becomes $$\label{eq:supp_povm_noisy} \begin{split} J_C^{jk}&=-{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)-{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)+{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle\langle\Phi_n|\right)\\ &=-{\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)-{\rm Tr}\left(\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}\right){\rm Tr}\left(\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}\right)\\ &=-4{\rm Re}\left[{\rm Tr}\left(\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}|\psi\rangle\langle\psi|\right)\right]-4{\rm Tr}\left(\sum_p\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}p}^\dagger\hat{\Pi}_{\boldsymbol{\theta}p}|\psi\rangle\langle\psi|\right){\rm Tr}\left(\sum_r\hat{\Pi}_{\boldsymbol{\theta}r}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}r}|\psi\rangle\langle\psi|\right)\\ &=4{\rm Re}\left[{\rm Tr}\left(\sum_l\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}|\psi\rangle\langle\psi|\right)+{\rm Tr}\left(\sum_p\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}p}^\dagger\hat{\Pi}_{\boldsymbol{\theta}p}|\psi\rangle\langle\psi|\right){\rm Tr}\left(\sum_r\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}r}^\dagger\hat{\Pi}_{\boldsymbol{\theta}r}|\psi\rangle\langle\psi|\right)\right]=C_Q^{jk}, \end{split}$$ noting that $\langle\psi|\hat{O}|\psi\rangle$ is, by definition, real for some operator $\hat{O}$. 
Here, we used the fact that $\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=-\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}$, arising from $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=\mathbb{1}$ upon differentiating both sides with respect to $\theta_k$, and that $\sum_l{\rm Tr}\left(\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}|\psi\rangle\langle\psi|\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right)=-\sum_l{\rm Tr}\left(\hat{\Pi}_{\boldsymbol{\theta}l}|\psi\rangle\langle\psi|\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right)$, arising from ${\rm Tr}\left(\hat{\rho}_{\boldsymbol{\theta}s}\right)=\sum_l{\rm Tr}\left(\hat{\Pi}_{\boldsymbol{\theta}l}|\psi\rangle\langle\psi|\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right)=1$ upon differentiating both sides with respect to $\theta_k$, and that $2{\rm Re}\left[\sum_l\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\right]=-2{\rm Re}\left[\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}\right]$, arising from $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=\mathbb{1}$ upon differentiating both sides with respect to $\theta_k$ and then $\theta_j$. Also, ${\rm Tr}\left(\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle\langle\Phi_n|\right)=\langle\Phi_n|\partial_{\theta_j}\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle=0$, since $\langle\Phi_n|\partial_{\theta_j}\hat{\rho}_{\boldsymbol{\theta}s}|\Phi_n\rangle=0$. 
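The identity $\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=-\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}$ invoked above is easy to confirm numerically. The sketch below assumes only `numpy` and uses a hypothetical single-parameter Kraus family chosen purely for illustration (it is not a channel from the text); derivatives are taken by central finite differences.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)

# Toy theta-dependent channel (hypothetical, for illustration):
# Pi_0 = cos(t) I, Pi_1 = sin(t) e^{i t} Z, satisfying sum_l Pi_l^dag Pi_l = 1.
def kraus(t):
    return [np.cos(t) * I2, np.sin(t) * np.exp(1j * t) * Z]

t0, eps = 0.4, 1e-6
K = kraus(t0)
dK = [(a - b) / (2 * eps) for a, b in zip(kraus(t0 + eps), kraus(t0 - eps))]

# Completeness at the working point.
assert np.allclose(sum(k.conj().T @ k for k in K), I2)

# Differentiating sum_l Pi_l^dag Pi_l = 1 gives
# sum_l dPi_l^dag Pi_l = - sum_l Pi_l^dag dPi_l.
lhs = sum(dk.conj().T @ k for dk, k in zip(dK, K))
rhs = -sum(k.conj().T @ dk for k, dk in zip(K, dK))
assert np.allclose(lhs, rhs, atol=1e-6)  # lhs = rhs = -i sin^2(t0) * identity here
print(np.allclose(lhs, rhs, atol=1e-6))
```

The complex phase $e^{i t}$ is included so that both sides are a nonzero anti-Hermitian operator rather than vanishing trivially.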
POVM to attain QFIM Upper Bound for Mixed State Input via Noisy Channel {#sec:app10} ======================================================================= Here, we prove that, as claimed in Section \[sec:noisy\], the set of POVMs $\{\hat{P}_{n3}\}$ of cardinality $q+2$, comprising the following $q+1$ elements, $$\hat{P}_0 = \hat{\rho}(\boldsymbol{\theta})=\sum_l\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta}),\qquad \hat{P}_m = \frac{\partial\hat{\rho}(\boldsymbol{\theta})}{\partial\theta_m}=\sum_l\left[\frac{\partial\hat{\Pi}_l(\boldsymbol{\theta})}{\partial\theta_m}\hat{\rho}\hat{\Pi}_l^\dagger(\boldsymbol{\theta})+\hat{\Pi}_l(\boldsymbol{\theta})\hat{\rho}\frac{\partial\hat{\Pi}_l^\dagger(\boldsymbol{\theta})}{\partial\theta_m}\right] \quad \forall m=1,\ldots,q,$$ together with one element accounting for normalisation, saturates (\[eq:supp\_qfim\_bound\]), provided (\[eq:supp\_qfim\_bound\_saturate\]) is satisfied. Consider that the initial probe state $\hat{\rho}$ is impure. It can be purified by extending the system $S$ space, introducing ancillas $S'$. Then, proceeding in a similar manner as in the previous section for the pure state $|\psi^{S+S'}\rangle$ in the initial enlarged $S+S'$ space, the upper bound (\[eq:supp\_qfim\_bound\]) to the QFIM in terms of the initial state $\hat{\rho}$ in the $S$ space is given by $$\label{eq:supp_qfim_bound_noisy} C_Q^{jk}=4{\rm Re}\left[{\rm Tr}\left(\sum_l\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\right)+{\rm Tr}\left(\sum_p\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}p}^\dagger\hat{\Pi}_{\boldsymbol{\theta}p}\hat{\rho}\right){\rm Tr}\left(\sum_r\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}r}^\dagger\hat{\Pi}_{\boldsymbol{\theta}r}\hat{\rho}\right)\right],$$ since the operators, $\hat{\Pi}_{\boldsymbol{\theta}l}$ and $\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}$ do not act on $S'$. 
Moreover, when (\[eq:supp\_qfim\_bound\_saturate\]) is satisfied, the bound $C_Q$ is the actual QFIM $J_Q$ in the $S$ space, with the set of POVMs $\{\hat{P}_{n3}\}$ saturating the corresponding QCRB for a unital channel (see Appendix \[sec:app11\]). Also, again the elements of the FIM $J_C$, as defined in (\[eq:supp\_fim1\]), are as in (\[eq:supp\_fim1\_unitary\]). Consider again that we are interested in saturating the bound at a specific point $\theta_s$ in the space of $\boldsymbol{\theta}$, as in Ref. [@HBDW]. Then, (\[eq:supp\_p0\_unitary\]), (\[eq:supp\_pm\_unitary\]), (\[eq:supp\_pn\_unitary\]), (\[eq:supp\_fim\_unitary\]), (\[eq:supp\_povm2\_unitary\]) remain the same, with the normalising element again being $\hat{P}_{q+1}$. But, (\[eq:supp\_ald\_fim\]), which was (\[eq:supp\_povm\_noisy\]) in the last section, becomes: $$\label{eq:supp_ald_fim_ss} J_C^{jk}=4{\rm Re}\left[{\rm Tr}\left(\sum_l\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\right)+{\rm Tr}\left(\sum_p\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}p}^\dagger\hat{\Pi}_{\boldsymbol{\theta}p}\hat{\rho}\right){\rm Tr}\left(\sum_r\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}r}^\dagger\hat{\Pi}_{\boldsymbol{\theta}r}\hat{\rho}\right)\right]=C_Q^{jk},$$ which is as in (\[eq:supp\_qfim\_bound\_noisy\]).
Noise in the Channel can allow Beating the Heisenberg Limit {#sec:app11} ======================================================= Here, we prove that noise in the quantum channel can make it possible to beat the Heisenberg precision limit, as claimed in Section \[sec:heisenberg\], when the following condition is satisfied by the Kraus operators $\hat{\Pi}_{\boldsymbol{\theta}l}=\hat{\Pi}_{l}(\boldsymbol{\theta})$ of the quantum channel: $$\label{eq:supp_saturate_cond1} {\rm Im}\left[\sum_l{\rm Tr}\left\lbrace\left(\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\right)\hat{\rho}\right\rbrace\right]=0, \qquad \forall \, j, k.$$ We have $$\label{eq:supp_partial_rho1} \hat{\rho}_{\boldsymbol{\theta}}=\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\Rightarrow\partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}}=\sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger+\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right].$$ Next, (\[eq:supp\_saturate\_cond1\]) saturates an ALD-based QCRB, corresponding to: $$\label{eq:supp_partial_rho2} \partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}}=\frac{1}{2}\left[\hat{O}_k\hat{\rho}+\hat{\rho}\hat{O}_k^\dagger\right]=\sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}+\hat{\rho}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right],$$ where the ALDs are chosen to be: $$\hat{O}_k=2\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}.$$ Note that the choice of ALD need not be unique. For our purposes here, it is enough to find one instance where the Heisenberg limit can be beaten. Also, strictly speaking, the above is not a valid ALD, since it is not a function of the probe state, hence our choice of the ALDs in the main text.
Moreover, (\[eq:supp\_partial\_rho2\]) is expressed in terms of the initial probe state and not the evolved probe state. However, for the purposes of our proof, it suffices to consider the above for simplicity, without loss of generality. We start by assuming that, when (\[eq:supp\_saturate\_cond1\]) is satisfied, the upper bound (\[eq:supp\_qfim\_bound\]) to the QFIM equals the actual QFIM. In other words, when (\[eq:supp\_saturate\_cond1\]) is satisfied, the corresponding lower bound to the Heisenberg limit equals the Heisenberg limit itself, which is possible when we have: $$\label{eq:supp_saturate_cond2} \sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger+\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right]=\sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}+\hat{\rho}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right],$$ following from (\[eq:supp\_partial\_rho1\]) and (\[eq:supp\_partial\_rho2\]).
Now, the above is possible only when the following condition is satisfied: $$\label{eq:supp_saturate_cond3} \begin{split} \sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger&=\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\\ \Rightarrow{\rm Tr}\left(\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right)&={\rm Tr}\left(\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\right)\\ \Rightarrow{\rm Tr}\left(\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\right)&={\rm Tr}\left(\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\right)\\ \Rightarrow{\rm Tr}\left(\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\right)&={\rm Tr}\left(\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\rho}\right)\\ \Rightarrow\sum_l{\rm Tr}\left[\left(\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\right)\hat{\rho}\right]&=\sum_l{\rm Tr}\left[\left(\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\right)\hat{\rho}\right], \end{split}$$ which is possible when we have $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger=\mathbb{1}$, i.e. when the channel is unital. Thus, for unital channels the upper bound (\[eq:supp\_qfim\_bound\]) to the QFIM equals the actual QFIM. Hence, if the channel is non-unital, then the upper bound (\[eq:supp\_qfim\_bound\]) to the QFIM can be strictly larger than the actual QFIM, so that the Heisenberg limit may be beaten. 
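The distinction drawn here between $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=\mathbb{1}$ (completeness, always true) and $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger=\mathbb{1}$ (unitality) can be made concrete with two standard qubit channels. The sketch below assumes only `numpy`; the dephasing and amplitude-damping Kraus sets with parameters $p$ and $\gamma$ are textbook examples, not taken from the text.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)
g, p = 0.3, 0.2

# Dephasing (unital): sum_l Pi_l Pi_l^dag = 1 as well as sum_l Pi_l^dag Pi_l = 1.
deph = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]
assert np.allclose(sum(k.conj().T @ k for k in deph), I2)
assert np.allclose(sum(k @ k.conj().T for k in deph), I2)

# Amplitude damping (non-unital): trace-preserving, but sum_l Pi_l Pi_l^dag != 1.
ad = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
      np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]
assert np.allclose(sum(k.conj().T @ k for k in ad), I2)   # completeness holds
print(np.round(sum(k @ k.conj().T for k in ad).real, 2))  # diag(1+g, 1-g), not 1
```

So amplitude damping is precisely the kind of non-unital channel for which the upper bound (\[eq:supp\_qfim\_bound\]) can exceed the actual QFIM.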
Note that (\[eq:supp\_saturate\_cond3\]) modifies (\[eq:supp\_saturate\_cond1\]) to: $$\label{eq:supp_saturate_cond4} {\rm Im}\left[\sum_l{\rm Tr}\left\lbrace\left(\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\right)\hat{\rho}\right\rbrace\right]=0, \qquad \forall \, j, k.$$ The fact that the channel indeed needs to be unital for the last line in (\[eq:supp\_saturate\_cond3\]) to hold may not be evident without an extra summation index. Let us, therefore, reconfirm this. First, note that (\[eq:supp\_saturate\_cond4\]) saturates an ALD-based QCRB, corresponding to: $$\label{eq:supp_partial_rho3} \partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}}=\frac{1}{2}\left[\hat{L}_k\hat{\rho}+\hat{\rho}\hat{L}_k^\dagger\right]=\sum_l\left[\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}+\hat{\rho}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\right],$$ where the ALDs are chosen to be: $$\hat{L}_k=2\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}.$$ Now, in terms of the evolved probe state $\hat{\rho}_{\boldsymbol{\theta}}=\hat{\rho}(\boldsymbol{\theta})$, the condition (\[eq:supp\_saturate\_cond1\]) becomes: $$\label{eq:supp_saturate_cond5} {\rm Im}\left[\sum_l{\rm Tr}\left\lbrace\left(\hat{\Pi}_{\boldsymbol{\theta}l}\partial_{\theta_j}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right)\hat{\rho}_{\boldsymbol{\theta}}\right\rbrace\right]=0, \qquad \forall \, j, k.$$ This is obtained from the saturability condition corresponding to $\hat{\rho}^{(S+B)}_{\boldsymbol{\theta}}$ in the $S+B$ space, by tracing out the bath $B$. And this is equivalent to the saturability condition (\[eq:supp\_saturate\_cond1\]). 
Next, (\[eq:supp\_saturate\_cond5\]) saturates an ALD-based QCRB, corresponding to: $$\label{eq:supp_partial_rho4} \partial_{\theta_k}\hat{\rho}_{\boldsymbol{\theta}}=\frac{1}{2}\left[\hat{Q}_k\hat{\rho}_{\boldsymbol{\theta}}+\hat{\rho}_{\boldsymbol{\theta}}\hat{Q}_k^\dagger\right]=\sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\rho}_{\boldsymbol{\theta}}+\hat{\rho}_{\boldsymbol{\theta}}\hat{\Pi}_{\boldsymbol{\theta}l}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right],$$ where the ALDs are chosen to be: $$\hat{Q}_k=2\sum_l\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger.$$ Then, (\[eq:supp\_saturate\_cond2\]) holds, when the following holds: $$\label{eq:supp_saturate_cond6} \sum_l\left[\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}+\hat{\rho}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\right]=\sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\rho}_{\boldsymbol{\theta}}+\hat{\rho}_{\boldsymbol{\theta}}\hat{\Pi}_{\boldsymbol{\theta}l}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right],$$ that follows from (\[eq:supp\_partial\_rho3\]) and (\[eq:supp\_partial\_rho4\]). Now, let us consider both $\hat{\rho}$ and $\hat{\rho}_{\boldsymbol{\theta}}$ to be maximally mixed. 
Then, (\[eq:supp\_saturate\_cond6\]) becomes: $$\label{eq:supp_saturate_cond7} \begin{split} \sum_l\left[\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}+\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\right]&=\sum_l\left[\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger+\hat{\Pi}_{\boldsymbol{\theta}l}\partial_{\theta_k}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right]\\ \Rightarrow\sum_l\partial_{\theta_k}\left[\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}\right]&=\sum_l\partial_{\theta_k}\left[\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\right]\\ \Rightarrow\mathbb{1}&=\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger \qquad \because \, \sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=\mathbb{1}, \end{split}$$ where the last line follows from the previous line without an additional constant, since we must also have: $$\hat{\rho}_{\boldsymbol{\theta}}=\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\rho}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger \Rightarrow\mathbb{1}=\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger,$$ when both $\hat{\rho}$ and $\hat{\rho}_{\boldsymbol{\theta}}$ are maximally mixed. Indeed, both the initial and evolved probe states can be maximally mixed, only if the noisy channel is unital. Thus, the channel indeed needs to be unital for (\[eq:supp\_saturate\_cond6\]), and therefore, (\[eq:supp\_saturate\_cond3\]) to hold.
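The unitality condition derived above, $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger=\mathbb{1}$ in addition to the completeness relation $\sum_l\hat{\Pi}_{\boldsymbol{\theta}l}^\dagger\hat{\Pi}_{\boldsymbol{\theta}l}=\mathbb{1}$, is easy to check numerically for a given set of Kraus operators. A minimal sketch; the phase-damping and amplitude-damping operators below are standard textbook examples chosen for illustration, not operators appearing in this derivation:

```python
import numpy as np

# Phase damping (unital) vs. amplitude damping (non-unital), p = 0.3.
p = 0.3
K_pd = [np.sqrt(1 - p) * np.eye(2),
        np.sqrt(p) * np.diag([1.0, -1.0])]
K_ad = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]]),
        np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])]

def completeness(Ks):
    """sum_l K_l^dag K_l -- equals 1 for every trace-preserving channel."""
    return sum(K.conj().T @ K for K in Ks)

def dual_sum(Ks):
    """sum_l K_l K_l^dag -- equals 1 only for unital channels."""
    return sum(K @ K.conj().T for K in Ks)

I2 = np.eye(2)
# Both channels are trace preserving ...
assert np.allclose(completeness(K_pd), I2) and np.allclose(completeness(K_ad), I2)
# ... but only the unital one also maps the maximally mixed state to itself.
assert np.allclose(dual_sum(K_pd), I2)
assert not np.allclose(dual_sum(K_ad), I2)
```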
--- abstract: 'We investigate the simplest gauge theory for spontaneous R-parity breaking and its testability at the LHC. This theory, based on a local B-L gauge symmetry, can be considered as the simplest framework for understanding the origin of the R-parity violating interactions, giving rise to potential lepton number violating signals and suppressed baryon number violating operators. The full spectrum of the theory and the constraints coming from neutrino masses are analyzed in detail. We discuss the proton decay issue and the possible dark matter candidates. In order to assess the testability of the theory we study the properties of the new gauge boson, the neutralino decays and the main production channels for the charged sleptons at the LHC. We find that final states with four charged leptons, three of them with the same sign, and four jets are the most striking signals for the testability of the lepton number violation associated with spontaneous R-parity violation at the LHC.' address: - | Center for Cosmology and Particle Physics (CCPP)\ New York University, 4 Washington Place, NY 10003, USA - | International School for Advanced Studies (SISSA)\ Via Bonomea 265, 34136 Trieste, Italy author: - 'Pavel Fileviez P[é]{}rez' - Sogee Spinner --- Introduction ============ The Large Hadron Collider (LHC) will hopefully soon discover the underlying theory for the TeV scale and might allow us to understand a more fundamental law of nature. For more than three decades the idea of Supersymmetry has attracted the attention of many experts in the particle physics community and the minimal supersymmetric extension of the Standard Model [@MSSM1; @MSSM2; @MSSM3] (MSSM) is still considered one of the most appealing candidates for the theory of particle physics at the TeV scale.
It is well known that the MSSM provides an understanding of why the SM-like Higgs boson is light, contains a cold dark matter candidate, allows for the unification of the gauge couplings and allows for the mechanism of electroweak baryogenesis to explain the baryon asymmetry in the universe. There are several open issues in the MSSM, one of them being the origin of the discrete symmetry R-parity [@R1; @R2]. This symmetry plays a major role in the MSSM and it is defined as $R=(-1)^{3(B-L)+2S}$, where $B$, $L$ and $S$ stand for baryon number, lepton number and spin, respectively. In many MSSM studies it is assumed that R-parity is conserved or explicitly broken without understanding the origin of this symmetry. However, the fate of R-parity is crucial for the discovery of supersymmetry since, as is well known, R-parity conservation gives rise to channels with multi-jets, multi-leptons and missing energy at the LHC, while signatures of broken R-parity are multi-leptons, multi-jets, and missing energy due to the SM neutrinos only. The simplest and most elegant framework for the origin of R-parity is based on local B-L symmetry. This connection was explored for the first time in Ref. [@Hayashi], and in Ref. [@Mohapatra] a simpler scenario was studied. See also Ref. [@Martin] for a complete discussion of how to gauge R-parity.[^1] Recently, we have investigated the simplest B-L models in Refs. [@Rp1; @Rp2; @Rp5; @Rp6] and found the following main result: [*[The simplest theories based on local B-L make the following prediction:\ R-parity must be spontaneously broken at the TeV scale and\ one expects to observe lepton number violation at the LHC! ]{}*]{} In this letter we study in detail the theory proposed in Ref. [@Rp2] which can be considered as the simplest gauge theory for R-parity violation.
In this context the only way to break local B-L and obtain the MSSM after symmetry breaking is to give a vacuum expectation value to one of the right-handed sneutrinos required by anomaly cancellation. One of the most important features of this theory is that the B-L and R-parity breaking scales are determined by the soft supersymmetric breaking scale. This idea was studied for the first time in Ref. [@Rp1], which defined the simplest left-right symmetric model. We review the theory and symmetry breaking mechanism in Sections II and III. The full spectrum of the theory [@Rp2] is discussed in Section IV and the constraints coming from neutrino masses in Section VI. We discuss the proton decay issue and the possible dark matter candidates in Section V. In order to understand the testability of the theory we study the properties of the new gauge boson and the neutralino decays in Section VII, and in Section VIII the main production channels for the charged sleptons at the Large Hadron Collider. We find that the channels with four charged leptons (three with the same electric charge) and four jets give the most striking signals for the testability of lepton number violation at the LHC. The Minimal Gauge Theory for Spontaneous R-parity Violation =========================================================== The simplest gauge theory for spontaneous R-parity breaking was proposed in Ref. [@Rp2]. In this context one can understand dynamically the origin of the R-parity violating terms in the MSSM. Here we discuss the structure of the theory and the full spectrum.
- [**Gauge group and matter content**]{}: This theory is based on the gauge group $$SU(3)_C \otimes SU(2)_L \otimes U(1)_Y \otimes U(1)_{B-L},$$ and the different matter chiral superfields are given by $$\hat{Q} = \left( \begin{array} {c} \hat{u} \\ \hat{d} \end{array} \right) \ \sim \ (2,1/3,1/3), \ \ \hat{L} = \left( \begin{array} {c} \hat{\nu} \\ \hat{e} \end{array} \right) \ \sim \ (2,-1,-1),$$ $$\hat{u}^c \ \sim \ (1,-4/3,-1/3), \ \ \hat{d}^c \ \sim \ (1,2/3,-1/3), \ \ \hat{e}^c \ \sim \ (1,2,1).$$ In order to cancel the $B-L$ anomalies one introduces three chiral superfields for the right-handed neutrinos: $$\hat{\nu}^c \ \sim \ (1,0,1).$$ - [**Higgs sector**]{}: The Higgs sector is composed of two Higgs chiral superfields as in the MSSM $$\hat{H}_u = \left( \begin{array} {c} \hat{H}_u^+ \\ \hat{H}_u^0 \end{array} \right) \ \sim \ (2, 1, 0), \ \ \ \hat{H}_d = \left( \begin{array} {c} \hat{H}_d^0 \\ \hat{H}_d^- \end{array} \right) \ \sim \ (2, -1, 0).$$ - [*Superpotential:*]{} With this field content the superpotential reads as $${\cal W}_{BL}={\cal W}_{MSSM} \ + \ Y_\nu \ \hat{L}^T \ i \sigma_2 \ \hat{H}_u \ \hat{\nu}^c,$$ where $$\begin{aligned} {\cal W}_{MSSM} &=& Y_u \ \hat{Q}^T \ i \sigma_2 \ \hat{H}_u \ \hat{u}^c \ + \ Y_d \ \hat{Q}^T \ i \sigma_2 \ \hat{H}_d \ \hat{d}^c \ + \ Y_e \ \hat{L}^T \ i \sigma_2 \ \hat{H}_d \ \hat{e}^c \ + \ \mu \ \hat{H}_u^T \ i \sigma_2 \ \hat{H}_d. \nonumber \\\end{aligned}$$ In addition to the superpotential, the model is also specified by the soft terms: $$\begin{aligned} \nonumber V_{soft} & = & m_{\tilde \nu^c}^2 { \mathopen{}\left| {\tilde{\nu}^c}\right| }^2 \ + \ m_{\tilde L}^2 \ { \mathopen{}\left| {\tilde L}\right| }^2 \ + \ m_{\tilde e^c}^2 \ { \mathopen{}\left| {\tilde e^c}\right| }^2 \ + \ m_{H_u}^2 { \mathopen{}\left| {H_u}\right| }^2 + m_{H_d}^2 { \mathopen{}\left| {H_d}\right| }^2 \ + \ \left( \frac{1}{2} M_{BL} \tilde{B^{'}} \tilde{B^{'}} \right. \nonumber \\ & + & \left.
A_\nu \ \tilde{L}^T \ i \sigma_2 \ H_u \ \tilde{\nu}^c \ + \ B\mu \ H_u^T \ i \sigma_2 \ H_d \ + \ \mathrm{h.c.} \right) \ + \ V_{soft}^{MSSM}, \label{soft}\end{aligned}$$ where the terms not shown here correspond to terms in the soft MSSM potential. Since we have a new gauge symmetry in the theory we need to modify the kinetic terms for all MSSM matter superfields, and include the kinetic term for right-handed neutrino superfields $${\cal{L}}_{Kin} (\nu^c) = \int d^2 \theta d^2 \bar{\theta} \ (\hat{\nu}^c)^\dagger e^{g_{BL} \hat{V}_{BL}} \hat{\nu}^c.$$ Here $\hat{V}_{BL}$ is the B-L vector superfield. Using these interactions we can study the full spectrum of the theory. Electroweak and B-L Symmetry Breaking ===================================== As in the MSSM, electroweak symmetry is broken by the vevs of $H_u^0$ and $H_d^0$, while $U(1)_{B-L}$ is broken due to the vev of right-handed sneutrinos. Notice that this is the only field which can break local $B-L$ and give mass to the new neutral gauge boson in the theory. Therefore, the theory predicts spontaneous R-parity violation. It is important to mention that the $B-L$ and R-parity breaking scales are determined by the soft supersymmetric breaking scale, and one must expect lepton number violation at the LHC. 
The neutral fields are defined as $$\begin{aligned} H_u^0 &=& \frac{1}{\sqrt{2}} \left( v_u \ + \ h_u \right) \ + \ \frac{i}{\sqrt{2}} A_u, \\ H_d^0 &=& \frac{1}{\sqrt{2}} \left( v_d \ + \ h_d \right) \ + \ \frac{i}{\sqrt{2}} A_d, \\ \tilde{\nu}^i &=& \frac{1}{\sqrt{2}} \left( v_{L}^{i} \ + \ h_L^{i} \right) \ + \ \frac{i}{\sqrt{2}} A_L^{i} , \\ \tilde{\nu}^c_i &=& \frac{1}{\sqrt{2}} \left( v_{R}^{i} \ + \ h_R^{i} \right) \ + \ \frac{i} {\sqrt{2}} A_R^{i}, \end{aligned}$$ and the relevant scalar potential reads as $$\begin{aligned} V &=& V_F \ + \ V_D \ + \ V_{soft}, \\ V_F &=& |\mu|^2 |H_u^0|^2 \ + \ | - \mu H_d^0 + \tilde{\nu}_i Y_\nu^{ij} \tilde{\nu}^c_j |^2 \ + \ \sum_{i} | Y_\nu^{ij} \tilde{\nu}^c_j|^2 |H_u^0|^2 \ + \ \sum_{j} |\tilde{\nu}_i Y_\nu^{ij}|^2 |H_u^0|^2, \\ V_D &=& \frac{(g_1^2 + g_2^2)}{8} \left( |H_u^0|^2 - |H_d^0|^2 - \sum_{i} |\tilde{\nu}_i|^2 \right)^2 \ + \ \frac{g_{BL}^2}{8} \left( \sum_{i} ( |\tilde{\nu}^c_i|^2 - |\tilde{\nu}_i|^2 ) \right)^2, \\ V_{soft} &=& (\tilde{\nu}^c_i)^\dagger m_{\tilde{\nu}^c_{ij}}^2 \tilde{\nu}^c_j \ + \ \tilde{\nu}_i^\dagger m_{\tilde{L}_{ij}}^2 \tilde{\nu}_j \ + \ m_{H_u}^2 |H_u^0|^2 \ + \ m_{H_d}^2 |H_d^0|^2 \ + \ \left( \tilde{\nu}_i a_\nu^{ij} \tilde{\nu}_j^c H_u^0 - B \mu H_u^0 H_d^0 \ + \ \rm{h.c.}\right). 
\nonumber \\\end{aligned}$$ Using the above scalar potential and assuming that all parameters are real we can find the minimization conditions $$\begin{aligned} & v_u \left[ \mu^2 \ + \ \frac{1}{2} Y_\nu^{ij} v_R^j Y_\nu^{ik} v_R^k \ + \ \frac{1}{2} v_L^i Y_\nu^{ij} v_L^k Y_\nu^{kj} \ + \ \frac{g_1^2 + g_2^2}{8} \left( v_u^2 - v_d^2 - v_L^i v_L^i \right) \ + \ m_{H_u}^2 \right] \nonumber \\ \label{Eq1} & + \frac{1}{\sqrt{2}} v_L^i a_\nu^{ij} v_R^j \ - \ B \mu v_d =0, \\ \label{Eq2} & v_d \left[ \mu^2 \ - \ \frac{(g_1^2 + g_2^2)}{8} \left( v_u^2 - v_d^2 - v_L^i v_L^i \right) + m_{H_d}^2 \right] \ - \ \frac{1}{\sqrt{2}} \mu v_L^i Y_\nu^{ij} v_R^j \ - \ B \mu v_u =0,\end{aligned}$$ $$\begin{aligned} & \frac{1}{2} v_L^i Y_\nu^{ij} v_R^j v_L^m Y_\nu^{mk} \ - \ \frac{1}{\sqrt{2}} \mu v_d v_L^i Y_\nu^{ik} \ + \ \frac{1}{2} v_u^2 Y_\nu^{ij} v_R^j Y_\nu^{ik} \ + \ \frac{g_{BL}^2}{8} \left( v_R^i v_R^i - v_L^i v_L^i \right) v_R^k \nonumber \\ \label{Eq3} & + \ \frac{1}{2} v_R^i \left[ (m_{\tilde{\nu}^c}^2)_{ki} + (m_{\tilde{\nu}^c}^2)_{ik} \right] \ + \ \frac{1}{\sqrt{2}} v_L^i a_\nu^{ik} v_u =0, \\ & \frac{1}{2} v_L^i Y_\nu^{ij} v_R^j Y_\nu^{km} v_R^m \ - \ \frac{1}{\sqrt{2}} \mu v_d Y_\nu^{kj} v_R^j \ + \ \frac{1}{2} v_u^2 v_L^i Y_\nu^{ij} Y_\nu^{kj} - \frac{(g_1^2 + g_2^2)}{8} \left( v_u^2-v_d^2 - v_L^i v_L^i \right) v_L^k \nonumber \\ \label{Eq4} & - \frac{g_{BL}^2}{8} \left( v_R^i v_R^i - v_L^i v_L^i \right) v_L^k \ + \ \frac{1}{2} v_L^i \left[ (m_{\tilde{L}}^2)_{ki} + (m_{\tilde{L}}^2)_{ik} \right] \ + \ \frac{1}{\sqrt{2}} a_\nu^{kj} v_R^j v_u=0.\end{aligned}$$ In order to have phenomenologically allowed solutions the $v_L^i$ have to be small, and the $v_R^i$ have to be much larger than $v_u, v_d$ and $v_L^i$. Up to negligibly small terms[^2] the right-handed sneutrinos acquire a vev in only one family. A possible solution and the one used throughout this paper is $v_R^i=(0,0,v_R)$.
In this case: $$\begin{aligned} \label{vR.sln} v_R^2 &\approx& -\frac{8 (m_{\tilde{\nu}^c}^2)_{33}}{g_{BL}^2}, \\ v_L^k &\approx & \frac{v_R}{\sqrt{2}} \frac{\left( \mu v_d Y_\nu^{k3} - a_\nu^{k3} v_u \right)}{\left[ (m_{\tilde{L}}^2)_{kk} - \frac{(g_1^2 + g_2^2)}{8} (v_u^2 - v_d^2) - \frac{g_{BL}^2}{8} v_R^2\right]}.\end{aligned}$$ Notice that the minimization conditions for $v_u$, Eq. (\[Eq1\]), and $v_d$, Eq. (\[Eq2\]), are not greatly altered from their MSSM equivalents since the extra terms are very small. Radiative Symmetry Breaking --------------------------- In the MSSM, the large top Yukawa coupling drives the up-type soft Higgs mass squared parameter to negative values for generic boundary conditions leading to radiative electroweak symmetry breaking [@Ibanez:1982fr], a celebrated success of the MSSM. A valid question is then whether the same success is possible in achieving a tachyonic right-handed sneutrino mass in this $B-L$ model as required by Eq. (\[vR.sln\]). Unfortunately, this is not possible through a large Yukawa coupling since the Yukawa couplings of the right-handed neutrino are all dictated to be small by neutrino masses. However, there is an alternate possibility whereby a positive mass squared parameter for the right-handed sneutrino at the high scale will run to a tachyonic value at the low scale. This is due to the presence of the so-called $S$-term (due to $D$-term contributions to the RGE) in the soft mass RGE, as discussed for this $B-L$ model in [@Ambroso:2009jd; @Ambroso:2009sc; @Ambroso:2010pe]. A short outline of the mechanism follows.
The RGE for the right-handed sneutrino soft mass squared parameter is $$16 \pi^2 \frac{d m_{\tilde \nu^c}^2}{d t} = - 3 g_{BL}^2 \left|M_{BL}\right|^2 + \frac{3}{4} g_{BL}^2 S_{BL},$$ with $$S_{BL} = \text{Tr}\left(2 m_{\tilde Q}^2 - m_{\tilde u^c}^2 - m_{\tilde d^c}^2 - 2 m_{\tilde L}^2 + m_{\tilde e^c}^2 + m_{\tilde \nu^c}^2\right),$$ where the trace is over the three generations of the fermions and the soft mass parameters in the trace are for the squark doublet, right-handed up squark, right-handed down squark, slepton doublet, right-handed charged slepton and right-handed sneutrino, respectively. The gaugino mass term always drives the sneutrino mass parameter positive at the low scale, but if the overall sign of the $S$-term is positive, it could lead to the opposite effect. Such an effect would require a non-zero $S$-term at the high scale, which is not possible if the soft masses are universal across the generations of each flavor. An example of a suitable boundary condition with minimal variation from the popular MSUGRA Ansatz is universal boundary conditions for all sfermions except for the right-handed sneutrinos, which might have the boundary conditions $$m_{\tilde \nu^c_1}^2 = m_{\tilde \nu^c_2}^2 = P \, m_0^2, \quad \quad m_{\tilde \nu^c_3}^2 = Q \, m_0^2,$$ where $m_0$ is the universal mass and $P > 1$ and $Q<1$. The boundary condition for $S_{BL}$ is then $$S_{BL} = (2 P + Q - 1)m_0^2,$$ having the necessary sign to contribute negatively to the sneutrino soft mass parameter as it is evolved from the high scale down. The necessary sizes of $P$ and $Q$ depend on the size of the gaugino mass parameter, which has the opposite effect; see [@Ambroso:2009jd; @Ambroso:2009sc; @Ambroso:2010pe] for more details. So while the traditional radiative symmetry breaking from universal boundary conditions is not possible in these models, it is possible to radiatively break $B-L$ through this $S$-term starting from a positive value.
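A crude leading-log estimate illustrates how a positive $S_{BL}$ can drive $m_{\tilde\nu^c_3}^2$ tachyonic. All inputs below are assumed benchmark values chosen only so that the $S$-term dominates over the gaugino term, and $g_{BL}$, $M_{BL}$ and $S_{BL}$ are (unrealistically) frozen during the running; a full analysis requires solving the coupled RGEs as in the references.

```python
import math

# Assumed benchmark inputs (GeV and GeV^2 units).
m0   = 1000.0        # universal scalar mass
M_BL = 100.0         # B-L gaugino mass, kept small so the S-term wins
g_BL = 0.4           # B-L gauge coupling, held fixed for this estimate
P, Q = 5.0, 0.2      # boundary-condition parameters, P > 1 and Q < 1

S_BL  = (2.0 * P + Q - 1.0) * m0**2     # high-scale S-term, positive as required
m2_hi = Q * m0**2                       # (m_{nu^c_3}^2) at the high scale

# One-loop RGE with frozen coefficients:
# 16 pi^2 dm^2/dt = -3 g^2 |M|^2 + (3/4) g^2 S,  t = ln(mu).
beta = (-3.0 * g_BL**2 * M_BL**2 + 0.75 * g_BL**2 * S_BL) / (16.0 * math.pi**2)
t_span = math.log(1e16 / 1e3)           # run from ~10^16 GeV down to ~1 TeV
m2_lo = m2_hi - beta * t_span           # running down subtracts beta * dt

print(f"m^2 at the low scale: {m2_lo:.3e} GeV^2")   # negative: tachyonic
```

With these (assumed) numbers the positive $S$-term overwhelms the gaugino contribution and the low-scale mass squared comes out negative, as Eq. (\[vR.sln\]) requires.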
For the implementation of the radiative mechanism in the non-minimal model see Ref. [@Fate]. Mass Spectrum and Lepton Number Violation ========================================= R-Parity Violating Interactions -------------------------------- After symmetry breaking lepton number is spontaneously broken in the form of bilinear R-parity violating interactions. There are no trilinear R-parity violating interactions at the renormalizable level. These bilinear interactions mix the leptons with the Higgsinos and gauginos: $$\frac{1}{2} g_{BL} v_{R} ( \nu_3^c \tilde B^{'} ), \quad \frac{1}{2} g_{2} v_L^i ( \nu_i \tilde W^0 ), \quad \frac{1}{\sqrt{2}} g_{2} v_L^i (e_i \tilde W^+ ),$$ $$\frac{1}{2} g_{1} v_L ^i( \nu_i \tilde B ), \quad \frac{1}{\sqrt{2}} Y_\nu^{i3} v_R ( L^T_i i\sigma_2 \ \tilde{H}_u ), \quad \frac{1}{\sqrt{2}} Y_\nu^{i3} v_L^i ( \tilde{H}_u^0 \ \nu^c_3 ), \quad \frac{1}{\sqrt{2}} Y_e^i v_L^i ( \tilde{H}_d^- \ e^c_i ).$$ The first term is new and is the only term not suppressed by neutrino masses. The fifth term corresponds to the so-called $\epsilon$ term, and the second, third and fourth terms are small but important for the decay of neutralinos and charginos. See Section V for the discussion of the baryon number violating operators. There are also lepton number violating interactions coming from the soft terms and the B-L D-term. From $V_{soft}$ one gets $$A_\nu^{i3} \frac{v_R}{\sqrt{2}} \tilde{L}_i^T \ i \sigma_2 \ H_u,$$ while from the D-term one finds $$g_{BL}^2 v_R \ \tilde{\nu}^c \left( \tilde{q}^\dagger \frac{1}{6} \tilde{q} \ - \ \tilde{l}^\dagger \frac{1}{2} \tilde{l} \right).$$ As one might expect, these terms are important for understanding the scalar sector of the theory. Mass Spectrum ------------- The neutral gauge boson associated with the $B-L$ gauge group is $Z_{BL}$.
Using the covariant derivative for the right-handed sneutrinos, $D_\mu \tilde{\nu}^c = \partial_\mu \tilde{\nu}^c + i \frac{g_{BL}}{2} B_\mu^{'} \tilde{\nu}^c$, the mass term for $Z_{BL}$ is: $$M_{Z_{BL}}=\frac{g_{BL}}{2} v_R.$$ Now, using the experimental collider constraint [@Carena:2004xs]: $$\label{ZBL.const} \frac{M_{Z_{BL}}}{g_{BL}} \geq 3 \text{ TeV},$$ and Eq. (\[vR.sln\]) one finds the condition $$| (m_{\tilde{\nu}^c})_{33} | > 2.12 \ g_{BL} \ \rm{TeV}.$$ Then, if $g_{BL} =0.1$ the soft mass above has to be larger than 200 GeV. This condition can be easily satisfied without assuming a very heavy spectrum for the supersymmetric particles. As in any supersymmetric theory where $R$-parity is broken, all the fermions with the same quantum numbers mix and form physical states which are linear combinations of the original fields. The neutralinos in this theory are linear combinations of the fields, $\left(\nu_i, \ \nu^c_j, \ \tilde B', \ \tilde B, \ \tilde W^0, \ \tilde H_d^0, \ \tilde H_u^0\right)$. Then, the neutralino mass matrix is given by $${\cal M}_{N} = \begin{pmatrix} 0 & \frac{1}{\sqrt{2}} \ Y_\nu^{ij} v_u & -\frac{1}{2} g_{BL} \ v_L^i & -\frac{1}{2} g_1 \ v_L^i & \frac{1}{2} g_2 \ v_L^i & 0 & \frac{1}{\sqrt{2}} \ Y_\nu^{ij} \ v_R^j \\ \frac{1}{\sqrt{2}} \ Y_\nu^{ij} \ v_u & 0 & \frac{1}{2} g_{BL} \ v_R^j & 0 & 0 & 0 & \frac{1}{\sqrt{2}} \ Y_\nu^{ij} \ v_L^i \\ -\frac{1}{2} g_{BL} \ v_L^i & \frac{1}{2} g_{BL} \ v_R^j & M_{BL} & 0 & 0 & 0 & 0 \\ -\frac{1}{2} g_1 \ v_L^i & 0 & 0 & M_1 & 0 & -\frac{1}{2} g_1 v_d & \frac{1}{2} g_1 v_u \\ \frac{1}{2} g_2 \ v_L^i & 0 & 0 & 0 & M_2 & \frac{1}{2} g_2 v_d & -\frac{1}{2} g_2 v_u \\ 0 & 0 & 0 & -\frac{1}{2} g_1 v_d & \frac{1}{2} g_2 v_d & 0 & -\mu \\ \frac{1}{\sqrt{2}} \ Y_\nu^{ij} \ v_R^j & \frac{1}{\sqrt{2}} \ Y_\nu^{ij} \ v_L^i & 0 & \frac{1}{2} g_1 v_u & -\frac{1}{2} g_2 v_u & -\mu & 0 \end{pmatrix}. \label{neutralino}$$ We have discussed above that only one right-handed sneutrino gets a vev, $v_R^i=(0,0,v_R)$.
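The numerical bound quoted above follows directly from the collider constraint and Eq. (\[vR.sln\]); a quick arithmetic check:

```python
import math

# M_ZBL = g_BL * v_R / 2 together with M_ZBL / g_BL >= 3 TeV gives
# v_R >= 6 TeV, independently of g_BL.
v_R = 6.0  # TeV, saturating the collider bound

# Eq. (vR.sln): v_R^2 = -8 (m^2)_33 / g_BL^2, so the soft mass satisfies
# |m_33| = g_BL * v_R / (2 * sqrt(2)).
coeff = v_R / (2.0 * math.sqrt(2.0))       # the 2.12 quoted in the text
g_BL = 0.1
bound_GeV = 1000.0 * coeff * g_BL          # ~212 GeV for g_BL = 0.1
print(f"|m_33| > {coeff:.2f} g_BL TeV  ->  {bound_GeV:.0f} GeV at g_BL = 0.1")
```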
Now, integrating out the neutralinos one can find the mass matrix for the light neutrinos. In this case one has three active neutrinos and two sterile neutrinos, and the mass matrix in the basis $(\nu_e, \nu_\mu, \nu_\tau, \nu^c_e, \nu^c_\mu)$ is given by $$\label{Mnu} M_\nu = \begin{pmatrix} A \ v_{L}^i v_{L}^j + B \ \left[Y_\nu^{i3} v_{L}^j + Y_\nu^{j3} v_{L}^i \right] + C \ Y_\nu^{i3} Y_\nu^{j3} & \frac{1}{\sqrt{2}} v_u Y_\nu^{i \beta} \\ \frac{1}{\sqrt{2}} v_u Y_\nu^{\alpha j} & O_{2\times2} \end{pmatrix},$$ where $$\begin{aligned} A & = \frac{2 \mu^2}{\tilde m^3}, \ \ \ B = \left(\frac{v_u}{\sqrt{2} v_R} + \frac{\sqrt{2} \mu v_d v_R}{\tilde m^3}\right), \ \ \ C = \left(\frac{2 M_{BL} v_u^2}{g_{BL}^2 v_R^2} + \frac{v_d^2 v_R^2}{\tilde m^3}\right), \end{aligned}$$ $$\begin{aligned} \tilde m^{3} & = \frac { 4 \left[ \mu v_u v_d \left(g_1^2 M_2 + g_2^2 M_1 \right) - 2 M_1 M_2 \mu^2 \right] } {g_1^2 M_2 + g_2^2 M_1} .\end{aligned}$$ Here $\alpha$ and $\beta$ take only the values 1 and 2. From the experimental limits on active neutrino masses we obtain $(Y_\nu)_{i \alpha} \lesssim 10^{-12}$. This can be compared to $(Y_\nu)_{i 3} \lesssim 10^{-5}$, which is less constrained because of the TeV scale seesaw suppression. It has been pointed out in Ref. [@Barger:2010iv; @Ghosh:2010hy] (and earlier in a different context [@Mohapatra]) that this theory predicts the existence of two light sterile neutrinos which are degenerate with or lighter than the active neutrinos, a so-called $3+2$ neutrino model. A sample spectrum is displayed in Fig. \[nu.spec\]. ![Sample spectra for neutrino masses in the normal and inverted hierarchies.[]{data-label="nu.spec"}](imgNuSpec) Recently, it has been shown in [@Hamann:2010bk] that precision cosmology and big-bang nucleosynthesis mildly favor extra radiation in the universe beyond photons and ordinary neutrinos, lending support to the existence of sub-eV sterile neutrinos.
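The quoted bound $(Y_\nu)_{i3} \lesssim 10^{-5}$ can be understood from a simple order-of-magnitude TeV-scale seesaw estimate. The benchmark inputs below are assumptions chosen for illustration, not fitted values:

```python
import math

# Assumed benchmark inputs for an order-of-magnitude seesaw estimate.
v_u  = 246.0      # GeV, electroweak vev (large tan(beta) limit)
Y_nu = 1e-6       # (Y_nu)_{i3}
M_R  = 1000.0     # GeV, heavy-state scale set by v_R ~ TeV

m_D     = Y_nu * v_u / math.sqrt(2.0)   # Dirac mass
m_nu_eV = 1e9 * m_D**2 / M_R            # seesaw-suppressed light mass, in eV
print(f"m_nu ~ {m_nu_eV:.2e} eV")       # ~0.03 eV, comfortably sub-eV
```

A Yukawa an order of magnitude larger pushes the estimate two orders of magnitude up, which is why couplings much above $10^{-5}$ are in tension with the active neutrino mass limits.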
In this theory the chargino mass matrix, in the basis $\left(e^c_j, \ \tilde W^+, \ \tilde H_u^+\right)$ and $\left(e_i, \ \tilde W^-, \ \tilde H_d^-\right)$, is given by $$\label{chargino} {\cal M}_{{\tilde \chi}^{\pm}} = \left( \begin{array}{cc} 0 & M_C \\ M_C^T & 0 \end{array} \right),$$ with $$M_C = \begin{pmatrix} -\frac{1}{\sqrt{2}} Y_e^{ij} v_d & 0 & \frac{1}{\sqrt{2}} Y_e^{ij} v_L^j \\ \frac{1}{\sqrt{2}} \ g_2 v_L^i & M_2 & \frac{1}{\sqrt{2}} g_2 v_d \\ -\frac{1}{\sqrt{2}} Y_\nu^{ij} v_R^j & \frac{1}{\sqrt{2}} g_2 v_u & \mu \end{pmatrix}.$$ In the sfermion sector, the mass matrices $\mathcal{M}_{\tilde u}^2$, and $\mathcal{M}_{\tilde d}^2$ for squarks, and $\mathcal{M}_{\tilde e}^2$ for charged sleptons, in the basis $\left(\tilde f, \ {\tilde f}^{c*} \right)$, are given by $$\begin{aligned} \mathcal{M}_{\tilde u}^2&=&\left(\begin{array}{cc} m_{\tilde Q}^2 \ + \ m_{u}^2 \ + \ \left(\frac{1}{2} \ - \ \frac{2}{3} s_W^2 \right) \ M_Z^2 \ c_{2\beta} \ + \ \frac{1}{3} D_{BL} & \frac{1}{\sqrt 2} \left(a_u \ v_u - Y_u \ \mu \ v_d\right) \\ \frac{1}{\sqrt 2} \left(a_u \ v_u - Y_u \ \mu \ v_d\right) & m_{\tilde u^c}^2 \ + \ m_{u}^2 \ + \ \frac{2}{3} M_Z^2 \ c_{2\beta}\ s_W^2 \ - \ \frac{1}{3} D_{BL} \end{array}\right), \nonumber \\\end{aligned}$$ $$\begin{aligned} \mathcal{M}_{\tilde d}^2 &=& \left(\begin{array}{cc} m_{\tilde Q}^2 \ + \ m_{d}^2 \ - \ \left(\frac{1}{2} \ - \ \frac{1}{3} \ s^2_W \right) M_Z^2 \ c_{2 \beta} \ + \ \frac{1}{3} D_{BL} & \frac{1}{\sqrt 2} \left(Y_d \ \mu \ v_u - a_d \ v_d\right) \\ \frac{1}{\sqrt 2} \left(Y_d \ \mu \ v_u - a_d \ v_d\right) & m_{\tilde d^c}^2 \ + \ m_{d}^2 \ - \ \frac{1}{3} \ M_Z^2 \ c_{2\beta} \ s^2_W \ - \ \frac{1}{3} D_{BL} \end{array}\right), \nonumber \\ \\ \mathcal{M}_{\tilde e}^2 &=& \left(\begin{array}{cc} \label{Selectron.Mass} m_{\tilde L}^2 \ + \ m_{e}^2 \ - \ \left( \frac{1}{2} \ - s_W^2 \right) M_Z^2 \ c_{2\beta} \ - \ D_{BL} & \frac{1}{\sqrt 2} \left(Y_e \ \mu \ v_u - a_e \ v_d\right) \\ \frac{1}{\sqrt 2} 
\left(Y_e \ \mu \ v_u - a_e \ v_d\right) & m_{\tilde e^c}^2 \ + \ m_{e}^2 \ - \ M_Z^2 \ c_{2\beta}\ s_W^2 \ + \ D_{BL} \end{array}\right), \nonumber \\ \end{aligned}$$ where $c_{2\beta} = \cos 2\beta$, $s_W = \sin\theta_W$ and $$D_{BL} \equiv \frac{1}{8} \ g_{BL}^2 v^2_R= \frac{1}{2} M_{Z_{BL}}^2.$$ $m_u, \ m_d$ and $m_e$ are the respective fermion masses and $a_u, \ a_d$ and $a_e$ are the trilinear $a$-terms corresponding to the Yukawa couplings $Y_u, \ Y_d$ and $Y_e$. Typically, it is assumed that substantial left-right mixing occurs only in the third generation. Regardless, the physical states are related to the gauge states by $$\begin{aligned} \label{eq_squarkmixing} \begin{pmatrix} \tilde f_1 \\ \tilde f_2 \end{pmatrix} & = \begin{pmatrix} \cos \theta_{\tilde f} & \sin \theta_{\tilde f} \\ - \sin \theta_{\tilde f} & \cos \theta_{\tilde f} \end{pmatrix} \begin{pmatrix} \tilde f \\ \tilde f^{c*} \end{pmatrix}.\end{aligned}$$ The masses in the sneutrino sector are given by $$\begin{aligned} M_{\tilde{\nu}_i}^2 &=& m_{\tilde{L}_i}^2 \ + \ \frac{1}{2} M_{Z}^2 \cos 2 \beta - \frac{1}{2} M_{Z_{BL}}^2, \\ M_{\tilde{\nu}_3^c}^2 &=& M_{Z_{BL}}^2, \\ M_{\tilde{\nu}_\alpha^c}^2 &=& m_{\tilde{\nu}^c_\alpha}^2 \ + \ D_{BL},\end{aligned}$$ and $\alpha=1,2$. For simplicity we have listed the mass matrices in the limit $v_L^i, a_\nu, Y_\nu \to 0$. For the most general expressions see Appendix A. It is important to mention that all sfermion masses are modified due to the existence of the B-L D-term. In order to understand the properties of the spectrum we assume a simplified spectrum for the superpartners. In the case of the sfermions we will assume for simplicity the same value for all soft masses. In this case, if we neglect the left-right mixing, the full spectrum of sfermions will be defined by $M_{SUSY}$ (the universal soft supersymmetry breaking mass), $\tan \beta$ and the mass of the B-L gauge boson.
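The rotation in Eq. (\[eq\_squarkmixing\]) diagonalizes a $2\times 2$ sfermion mass matrix when $\tan 2\theta_{\tilde f} = 2 m_{12}^2/(m_{11}^2 - m_{22}^2)$, which is easy to verify numerically. The matrix entries below are arbitrary illustrative values, not outputs of the model:

```python
import math

# Illustrative (assumed) entries of a 2x2 sfermion mass matrix, in GeV^2,
# in the (f, f^c*) basis.
m11, m22, m12 = 1.0e6, 8.0e5, 1.5e5

# Mixing angle for the rotation of Eq. (eq_squarkmixing):
# tan(2 theta) = 2 m12 / (m11 - m22).
theta = 0.5 * math.atan2(2.0 * m12, m11 - m22)
c, s = math.cos(theta), math.sin(theta)

m1_sq = c * c * m11 + 2.0 * c * s * m12 + s * s * m22
m2_sq = s * s * m11 - 2.0 * c * s * m12 + c * c * m22
off   = (c * c - s * s) * m12 - c * s * (m11 - m22)   # off-diagonal after rotation

assert abs(off) < 1.0                  # rotation indeed diagonalizes the matrix
assert abs(m1_sq + m2_sq - (m11 + m22)) < 1e-3        # trace is preserved
```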
Using this simplified spectrum we show in Fig. \[spectrum\] the values for the sfermion masses for different values of $M_{Z_{BL}}$. Notice that the condition that left-handed slepton masses have to be positive imposes a bound on $M_{Z_{BL}}$ for a given value of $M_{SUSY}$, i.e., $M_{Z_{BL}} < \sqrt{2} M_{\tilde{L}}$. In this way, we can appreciate that the spectrum can be very constrained. As is well known, in the general case one cannot predict the spectrum since the soft masses are unknown. ![Spectrum for sfermion masses assuming the same value for all soft masses, $M_{SUSY}=1$ TeV and $\tan \beta=5$.[]{data-label="spectrum"}](Spectrum) Nucleon Stability and Dark Matter ================================= It is well known that when R-parity is broken in a supersymmetric theory one has to understand the possible constraints coming from proton decay [@proton]. In the MSSM one has several interactions which could mediate proton decay at tree level and one-loop level. At the renormalizable level one has the lepton and baryon number violating interactions $${\cal{W}}_{RpV}=\epsilon \hat{L} \hat{H}_u \ + \ \lambda \hat{L} \hat{L} \hat{e}^c \ + \ \lambda^{'} \hat{Q} \hat{L} \hat{d}^c \ + \ \lambda^{''} \hat{u}^c \hat{d}^c \hat{d}^c,$$ which are not allowed in our theory before B-L breaking, and in general one has the dimension five operators $${\cal{W}}_{RpC}^5= \frac{\lambda_\nu}{\Lambda} \hat{L} \hat{L} \hat{H}_u \hat{H}_u \ + \ \frac{\lambda_L}{\Lambda} \hat{Q} \hat{Q} \hat{Q} \hat{L} \ + \ \frac{\lambda_R}{\Lambda} \hat{u}^c \hat{d}^c \hat{u}^c \hat{e}^c \ + \ \frac{\lambda_{\nu^c}}{\Lambda} \hat{u}^c \hat{d}^c \hat{d}^c \hat{\nu}^c.$$ Notice that the first term in the above equation is not allowed in our theory, but the last terms can mediate proton decay. The operators $QQQL$ and $u^c d^c u^c e^c$ mediate proton decay at one-loop level and typically the scale $\Lambda$ should be larger than $10^{17}$ GeV in order to satisfy the experimental bounds on proton decay.
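The positivity bound on the left-handed slepton masses can be checked directly from the sneutrino mass formula above, with the simplified-spectrum inputs used in the figure ($M_{SUSY}=1$ TeV, $\tan\beta=5$; both assumed benchmark values):

```python
# Left-handed sneutrino mass squared from the simplified spectrum.
M_SUSY, M_Z, tan_b = 1000.0, 91.19, 5.0         # GeV, GeV, dimensionless
c2b = (1.0 - tan_b**2) / (1.0 + tan_b**2)       # cos(2 beta)

def m_snu_sq(M_ZBL):
    # M_{nu_i}^2 = m_L^2 + (1/2) M_Z^2 cos(2 beta) - (1/2) M_ZBL^2
    return M_SUSY**2 + 0.5 * M_Z**2 * c2b - 0.5 * M_ZBL**2

# Positivity reproduces M_ZBL < sqrt(2) M_L (up to the small M_Z^2 piece),
# i.e. a bound near 1.41 TeV for M_SUSY = 1 TeV:
assert m_snu_sq(1400.0) > 0.0     # just below the bound: allowed
assert m_snu_sq(1500.0) < 0.0     # above it: tachyonic left-handed slepton
```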
For a detailed discussion see Ref. [@proton]. Once B-L is broken by the vev of the right-handed sneutrinos one finds new contributions to proton decay at tree level. From the Yukawa coupling $Y_\nu \hat{L} \hat{H}_u \hat{\nu}^c$ one gets the lepton number violating interaction $L \tilde{H}_u$ and from the last term, $\hat{u}^c \hat{d}^c \hat{d}^c \hat{\nu}^c$, in the above equation one gets the interaction $\tilde{u}^c d^c s^c$. Using these interactions and integrating out the neutralinos and squarks we find the following constraint $$\frac{\lambda^{1123}_{\nu^c}}{\Lambda} \frac{Y_u Y_\nu^{i3}}{M_{\tilde{u}}^2} \frac{v_R^2}{2 M_{\tilde{\chi}^0}} \ < \ 10^{-30} \ \rm{GeV}^{-2},$$ from the channel $p \to K^+ \nu$. Then, assuming that $\lambda^{1123}_{\nu^c} \sim 1$, and $Y_{\nu}^{i3} \sim 10^{-6}$ (see Fig. \[vy\]) one gets $\Lambda > 10^{17}$ GeV. This constraint is similar to the one we get from the dimension five operators. Therefore, one can say that if the above couplings are order one the cutoff of the theory has to be large. Also one can think about possible scenarios where the couplings are small, see for example [@Yuval]. At first glance, finding a dark matter candidate in $R$-parity violating theories seems hopeless. But while the traditional neutralino LSP case is no longer valid, the situation is not lost. As first discussed in [@Takayama:2000uz], such models can have an unstable LSP gravitino, with a lifetime longer than the age of the universe. Its long lifetime is due to both the Planck mass ($M_P$) suppression associated with its interaction strength and the bilinear $R$-parity violation, which is small due to neutrino masses and must facilitate the decay of the LSP. In the mass insertion approximation, this can be understood as the gravitino going into a photon and neutralino which then has some small mixing with the neutrino due to $R$-parity violation ($m_{\chi \, \nu}$), thereby allowing $\tilde G \to \gamma \nu$ as in Fig. \[gravitino\].
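Evaluating the tree-level proton-decay constraint above for a set of assumed benchmark values reproduces the quoted cutoff scale (the numbers below are illustrative choices, not fits):

```python
# Tree-level proton-decay constraint for assumed benchmark values:
# (lambda / Lambda) * (Y_u Y_nu / M_su^2) * (v_R^2 / (2 M_chi)) < 1e-30 GeV^-2.
lam   = 1.0
Y_u   = 1.3e-5      # up-quark Yukawa, ~ sqrt(2) m_u / v
Y_nu  = 1e-6        # (Y_nu)_{i3}
M_su  = 1000.0      # GeV, squark mass
M_chi = 1000.0      # GeV, neutralino mass
v_R   = 6000.0      # GeV, near the collider lower bound

coeff = lam * (Y_u * Y_nu / M_su**2) * (v_R**2 / (2.0 * M_chi))  # GeV^-1
Lambda_min = coeff / 1e-30                                       # GeV
print(f"Lambda > {Lambda_min:.1e} GeV")      # ~10^17 GeV, as quoted
```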
Adopting approximations made in [@Takayama:2000uz], the lifetime for the gravitino decaying into a photon and a neutrino is about ![Gravitino decay into a photon and a neutrino. []{data-label="gravitino"}](imgGravitinoDK "fig:") $$\tau(\tilde G \to \gamma \nu) \sim 2 \times 10^{10} \left(\frac{m_{3/2}}{100 \text{ GeV}} \right)^{-3} \left(\frac{m_{\chi \nu}/m_\chi}{10^{-6}} \right)^{-2} \text{ years},$$ which for appropriate values of the gravitino mass leads to a long enough lifetime. Unlike in $R$-parity conserving models with a gravitino LSP, there are no issues with big bang nucleosynthesis from slow NLSP decay since the NLSP decays more promptly through $R$-parity violating interactions. Several interesting studies have been conducted on the signatures and constraints of unstable gravitino dark matter, see for example [@Takayama:2000uz; @Buchmuller:2007ui]. Experimental Constraints ======================== In order to understand the different lepton number violating decays in the theory we need to understand which are the main constraints from neutrino experiments. Today, we know well the numerical values of two of the neutrino mixing angles and of the mass squared differences. The neutrino mixing matrix $V_{PMNS}$ is defined as $$V_{PMNS} = \begin{pmatrix} c_{12} \, c_{13} & c_{13} \, s_{12} & s_{13} \\ -c_{23} \, s_{12} - c_{12} \, s_{13} \, s_{23} & c_{12} \, c_{23} - s_{12} \, s_{13} \, s_{23} & c_{13} \, s_{23} \\ s_{12} \, s_{23} - c_{12} \, c_{23} \, s_{13} & -c_{12} \, s_{23} - c_{23} \, s_{12} \, s_{13} & c_{13} \, c_{23} \end{pmatrix},$$ where $c_{ij} = \cos \theta_{ij}$ and $s_{ij} = \sin \theta_{ij}$ with $0 \leq \theta_{ij} \leq \pi/2$. The physical neutrino masses are contained in $m_\nu=diag(m_{\nu_1},m_{\nu_2},m_{\nu_3})$.
As is well known, there are two possible neutrino spectra: $$\begin{aligned} \begin{split} \text{Normal Hierarchy (NH):} & \quad m_{\nu_1}, \ \ m_{\nu_2} = \sqrt{ m_{\nu_1}^2 + \Delta m_{21}^2}, \ \ m_{\nu_3} = \sqrt{ m_{\nu_1}^2 + |\Delta m_{31}^2| }, \\ \text{Inverted Hierarchy (IH):} & \quad m_{\nu_1} = \sqrt{m_{\nu_3}^2 + |\Delta m_{31}^2|}, \ \ m_{\nu_2} = \sqrt{ m_{\nu_1}^2 + \Delta m_{21}^2}, \ \ m_{\nu_3}, \end{split}\end{aligned}$$ where [@arXiv:1103.0734] $$\begin{aligned} 7.27 \times 10^{-5} \text{ eV}^2 \leq & \, \Delta m_{21}^2 \, \leq 8.03 \times 10^{-5} \text{ eV}^2, \\ 2.17 \times 10^{-3} \text{ eV}^2 < & \, |\Delta m_{31}^2| \, < 2.54 \times 10^{-3} \text{ eV}^2,\end{aligned}$$ are the solar and atmospheric mass squared differences, respectively. In order to understand the allowed values for the vevs of the left-handed sneutrinos and the Dirac Yukawa couplings we assume for simplicity that the off-diagonal block matrices in Eq. (\[Mnu\]) are zero, hence decoupling the two light sterile neutrinos from the active ones. In this case, all neutrino masses and mixings originate from the upper-left block matrix in Eq. (\[Mnu\]), which we label $M_\nu$. The flavor pattern, and hence the rank of this matrix, dictates that one neutrino will be massless. This matrix is diagonalized by the PMNS matrix $$m_\nu = V_\text{PMNS}^T \ M_\nu \ V_\text{PMNS},$$ where $m_\nu = \text{diag}(0, \sqrt{\Delta m_{21}^2}, \sqrt{|\Delta m_{31}^2|})$ in the Normal Hierarchy and $m_\nu = \text{diag}(\sqrt{|\Delta m_{31}^2|}, \sqrt{|\Delta m_{31}^2| + \Delta m_{21}^2}, 0)$ in the Inverted Hierarchy. This yields a system of six equations quadratic in the vevs of the left-handed sneutrinos and the Dirac Yukawa couplings, although solving for these directly is not the most efficient way to proceed.
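The two spectra above fix the heavier masses once the lightest one is chosen. As a minimal numerical sketch, the following evaluates both hierarchies with the lightest mass set to zero (the rank argument above forces one massless state); the splittings are central values within the quoted ranges.

```python
import math

# Sketch: physical neutrino masses from the mass-squared differences above.
DM21SQ = 7.6e-5   # eV^2, solar splitting (within the quoted range)
DM31SQ = 2.35e-3  # eV^2, atmospheric splitting (within the quoted range)

def nh_masses(m1=0.0):
    """Normal Hierarchy: m1 < m2 < m3."""
    m2 = math.sqrt(m1 ** 2 + DM21SQ)
    m3 = math.sqrt(m1 ** 2 + DM31SQ)
    return m1, m2, m3

def ih_masses(m3=0.0):
    """Inverted Hierarchy: m3 < m1 < m2."""
    m1 = math.sqrt(m3 ** 2 + DM31SQ)
    m2 = math.sqrt(m1 ** 2 + DM21SQ)
    return m1, m2, m3

print(nh_masses())  # massless state plus ~8.7e-3 eV and ~4.8e-2 eV
print(ih_masses())  # two quasi-degenerate states near ~4.9e-2 eV
```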
Instead, notice that the product above yields the following six quantities ($j=1,2,3$) $$\begin{aligned} \begin{split} \label{VvRelation} V^j & \equiv v_{L}^i \, {V_\text{PMNS}}^{ij}, \\ Y^j & \equiv Y_\nu^{i3} \, V_\text{PMNS}^{ij}. \end{split}\end{aligned}$$ - [Normal Hierarchy]{} In the Normal Hierarchy one obtains the following six equations $$\begin{aligned} A V_1^2 + 2 B V_1 Y_1 + C Y_1^2 = & \, 0, \\ A V_1 V_2 + B \left(V_1 Y_2 + V_2 Y_1 \right)+ C Y_1 Y_2 = & \, 0, \\ A V_1 V_3 + B \left(V_1 Y_3 + V_3 Y_1 \right)+ C Y_1 Y_3 = & \, 0, \\ A V_2 V_3 + B \left(V_2 Y_3 + V_3 Y_2 \right) + C Y_2 Y_3 = & \, 0, \\ A V_2^2 + 2 B V_2 Y_2 + C Y_2^2 = &\, m_2, \\ A V_3^2 + 2 B V_3 Y_3 + C Y_3^2 = & \ m_3.\end{aligned}$$ In order for these equations to be consistent, $V_1 = Y_1 = 0$. While this condition represents some fine-tuning between parameters, it is a result of the simplifying assumption that the sterile and active light states decouple. In a more general scenario, this condition would not be necessary. The remaining system of equations, the last three, is underdetermined with three equations and four unknowns. Solving in terms of $Y_3$ yields $$\begin{aligned} Y_2 & = \epsilon_1 \sqrt{\frac{m_2}{m_3}} \sqrt { \frac { - Y_3^2 R - A m_3 } { R } }, \\ V_3 & = \frac { -B Y_3 + \epsilon_3 \sqrt{Y_3^2 R + A m_3} } { A }, \\ V_2 & = \frac { -B Y_2 + \epsilon_2 \sqrt{Y_2^2 R + A m_2} } { A }, \end{aligned}$$ with $$R \equiv B^2 - A C, \ \ \epsilon_1 = \pm 1, \ \ \epsilon_2 = \pm 1, \ \ \epsilon_3= \frac{\epsilon_1}{\epsilon_2} \frac{R Y_3}{\sqrt{R^2 Y_3^2}}.$$ Inverting Eq. (\[VvRelation\]) one translates these solutions to the variables of interest. The result is that specifying the SUSY spectrum, the B-L parameters ($M_{Z_{BL}}$ and $g_{BL}$), and $Y_3$ fixes all the values for $v_{L}^{i}$ and $Y_\nu^{i3}$. - Inverted Hierarchy In the case of the inverted spectrum one can use the same procedure.
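The closed-form solution above can be checked numerically. The sketch below evaluates it for hypothetical toy values of $A$, $B$, $C$ and $Y_3$ (in the text these are fixed by the SUSY spectrum and the B-L parameters) and verifies that the last three equations of the system are reproduced.

```python
import math

# Sketch: the Normal Hierarchy solution above, with toy inputs.

def solve_nh(A, B, C, Y3, m2, m3, eps1=1.0, eps2=1.0):
    R = B * B - A * C
    eps3 = (eps1 / eps2) * (R * Y3) / math.sqrt((R * Y3) ** 2)
    Y2 = eps1 * math.sqrt(m2 / m3) * math.sqrt((-Y3 ** 2 * R - A * m3) / R)
    V3 = (-B * Y3 + eps3 * math.sqrt(Y3 ** 2 * R + A * m3)) / A
    V2 = (-B * Y2 + eps2 * math.sqrt(Y2 ** 2 * R + A * m2)) / A
    return V2, V3, Y2

def quad(A, B, C, Va, Ya, Vb, Yb):
    """The quadratic form A Va Vb + B (Va Yb + Vb Ya) + C Ya Yb."""
    return A * Va * Vb + B * (Va * Yb + Vb * Ya) + C * Ya * Yb

A, B, C, Y3, m2, m3 = 1.0, 0.0, 1.0, 1.0, 1.0, 2.0  # toy inputs
V2, V3, Y2 = solve_nh(A, B, C, Y3, m2, m3)
print(quad(A, B, C, V2, Y2, V2, Y2))  # recovers m2
print(quad(A, B, C, V3, Y3, V3, Y3))  # recovers m3
print(quad(A, B, C, V2, Y2, V3, Y3))  # the cross equation vanishes
```

The reality condition on $Y_2$ restricts the allowed sign combinations and the range of $Y_3$, which is what produces the bands seen in Fig. \[vy\].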
However, one needs to make the following replacements: $m_1 \leftrightarrow m_3$, $Y_1 \leftrightarrow Y_3$, $V_1 \leftrightarrow V_3$. In this way, when the equations are solved for $V_i$ and $Y_i$, one obtains $V_3=0$, $Y_3=0$, and the solutions above with the previous substitutions applied.

  $\quad$ Parameter $\quad$   $\quad \quad$ Range $\quad \quad$
 --------------------------- -----------------------------------
  $M_1$                       100 - 1200 GeV
  $M_2$                       100 - 1200 GeV
  $|\mu|$                     100 - 1200 GeV
  $\tan \beta$                3 - 50
  $|Y_3|$                     $10^{-7} - 10^{-5}$
  $M_{BL}$                    100 - 1200 GeV
  $M_{Z_{BL}}$                1000 GeV
  $g_{BL}$                    0.33

  : Parameters and ranges scanned for the plots in Figs. \[vy\], \[LSP.DL.Bino\]– \[LSP.BR.Higgsino\].[]{data-label="tab:scan"}

In Fig. \[vy\] we show the allowed values for the vevs of the left-handed sneutrinos and the Dirac Yukawa couplings for a scan over the parameters listed in Table \[tab:scan\]. As one can see, the allowed values for $v_L^i$ are in the range $(10^{-2}-10)$ MeV, while the Yukawa couplings vary between $10^{-7}$ and $10^{-5}$. Now, using these results, we are ready to discuss all R-parity violating decays. ![ Allowed values for the $v_L^i$ versus $(Y_\nu)_{i3}$ in agreement with the neutrino masses and mixings constraints in the NH (IH) in a,c and e (b,d and f) for a scan over the parameters listed in Table \[tab:scan\]. []{data-label="vy"}](v1Y1plotNH "fig:") (-40,-4)[(a)]{} ![ Allowed values for the $v_L^i$ versus $(Y_\nu)_{i3}$ in agreement with the neutrino masses and mixings constraints in the NH (IH) in a,c and e (b,d and f) for a scan over the parameters listed in Table \[tab:scan\]. []{data-label="vy"}](v1Y1plotIH "fig:") (-40,-4)[(b)]{}\ ![ Allowed values for the $v_L^i$ versus $(Y_\nu)_{i3}$ in agreement with the neutrino masses and mixings constraints in the NH (IH) in a,c and e (b,d and f) for a scan over the parameters listed in Table \[tab:scan\].
[]{data-label="vy"}](v2Y2plotNH "fig:") (-40,-4)[(c)]{} ![ Allowed values for the $v_L^i$ versus $(Y_\nu)_{i3}$ in agreement with the neutrino masses and mixings constraints in the NH (IH) in a,c and e (b,d and f) for a scan over the parameters listed in Table \[tab:scan\]. []{data-label="vy"}](v2Y2plotIH "fig:") (-40,-4)[(d)]{}\ ![ Allowed values for the $v_L^i$ versus $(Y_\nu)_{i3}$ in agreement with the neutrino masses and mixings constraints in the NH (IH) in a,c and e (b,d and f) for a scan over the parameters listed in Table \[tab:scan\]. []{data-label="vy"}](v3Y3plotNH "fig:") (-40,-4)[(e)]{} ![ Allowed values for the $v_L^i$ versus $(Y_\nu)_{i3}$ in agreement with the neutrino masses and mixings constraints in the NH (IH) in a,c and e (b,d and f) for a scan over the parameters listed in Table \[tab:scan\]. []{data-label="vy"}](v3Y3plotIH "fig:") (-40,-4)[(f)]{}

Lepton Number Violation and Decays
==================================

At this point, the relevant pieces of this model have been laid out and the question of interesting signals can now be tackled. Since lepton number is violated, same-sign dileptons plus multijets are possible final states. Such signatures are interesting since they have no SM background. However, the final states depend critically on the nature of the LSP, and since R-parity is violated the possibilities are more numerous than usual, *e.g.* colored and charged fields. These possibilities are briefly discussed in Appendix \[LSPs\].
We find that the clearest signal of lepton number violation (and therefore the most interesting for us) results from the decays of a neutralino LSP through the process (see Fig. \[Signal\]) [^3]: $$pp \ \to \gamma^*, Z^*, Z_{BL}^* \ \to \ \tilde{e}^*_i \tilde{e}_i \ \to \ e^{\pm}_i \ e^{\mp}_i \ e^{\mp}_j \ e^{\mp}_k \ 4j.$$ In order to quantify this signal we will continue by investigating the decays of the $Z_{BL}$ gauge boson, the charged slepton and the neutralino and, in the next section, the production mechanism for the charged sleptons at the LHC. ![ Topology of the signals with multi-leptons. []{data-label="Signal"}](imgSignal "fig:") (-67,20)[$\tilde e^-_i$]{}

B-L Gauge Boson Decays
----------------------

The $Z_{BL}$ boson can decay into a pair of charged fermions, light neutrinos, or sfermions. The partial widths for the decay into particles $P_1, P_2$ of masses $m_1, m_2$ are given by $$\Gamma(Z_{BL} \to P_1 P_2) = \frac{1}{16 \pi M_{Z_{BL}}} \, \left| \overline{\mm} ({Z_{BL}}\to P_1 P_2) \right|^2 \, \sqrt{ \left( 1 - \frac{(m_1 +m_2)^2}{M_{Z_{BL}}^2} \right) \left( 1 - \frac{(m_1 -m_2)^2}{M_{Z_{BL}}^2} \right) },$$ where the squared matrix elements for specific final states are given in Appendix \[ZPDecay\]. The $Z_{BL}$ branching ratios are plotted in Fig. \[ZBL.BR\] for a fixed soft universal mass for all sfermions, $M_{SUSY}=1$ TeV, versus the $Z_{BL}$ mass. Decay channels into SUSY particles only open up for a $Z_{BL}$ mass of around $1.2$ TeV, and at about 1.4 TeV the sleptons become tachyonic. As can be appreciated from Fig. \[spectrum\], tachyonic sleptons are reached before decay channels into the squarks can open. The branching ratios for the sleptons are divided up into sneutrinos, smuons plus selectrons, and staus in anticipation of the associated signals. However, each individual slepton pair has the same branching ratio, about $2.5\%$ at $m_{Z_{BL}} = 1.4$ TeV.
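The kinematic structure of the partial width above is generic for any two-body decay. The sketch below evaluates it for a hypothetical squared matrix element (the process-dependent values live in Appendix \[ZPDecay\]), showing how the width shuts off at threshold.

```python
import math

# Sketch: the two-body partial width formula above, with an illustrative
# squared matrix element msq (GeV^2); only the kinematics is meaningful here.

def width(M, m1, m2, msq):
    """Gamma(Z_BL -> P1 P2) for daughters of mass m1, m2 (all in GeV)."""
    lam = (1 - (m1 + m2) ** 2 / M ** 2) * (1 - (m1 - m2) ** 2 / M ** 2)
    return msq / (16 * math.pi * M) * math.sqrt(lam)

# Massless daughters: no kinematic suppression.
print(width(1200.0, 0.0, 0.0, 1.0))
# Near threshold (m1 + m2 -> M_ZBL): the phase-space factor kills the width,
# which is why SUSY channels only open well below M_ZBL / 2 per sfermion.
print(width(1200.0, 599.0, 599.0, 1.0))
```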
![ $Z_{BL}$ branching ratios versus $Z_{BL}$ mass for a universal soft mass for all sfermions, $M_{SUSY}=1$ TeV. Here the subscript one refers to the lightest eigenstate in each family and this case corresponds to the purely left-handed slepton (zero mixing angle is assumed). Given the universal soft mass for the sfermions, only the left-handed slepton channels can be open. Right-handed squark channels can open for larger values of $M_{Z_{BL}}$ but only at the cost of unphysical tachyonic slepton masses. []{data-label="ZBL.BR"}](BrZBL1) In Fig. \[GZBL\] we show the predictions for the total decay width and the invisible decays of the $Z_{BL}$ gauge boson. Since the new gauge boson can decay into five light neutrinos, the invisible decay width can be large, a few GeV when the mass is above 1 TeV. These properties of the $Z_{BL}$ are very important in order to discover this theory at the LHC. In summary, one can say that the new neutral gauge boson is B-L like, with branching ratios $$\rm{Br}( Z_{BL} \to e^+_i e^-_i) \sim 40 \%, \ \rm{Br} (Z_{BL} \to \nu \nu) \sim 35 \%, \ \rm{Br}(Z_{BL} \to jj) \sim 20 \%, \ \rm{and} \ \rm{Br} (Z_{BL} \to \bar{t} t) \sim 5 \%,$$ since the branching ratios for the SUSY decays are small and the invisible decay can be large. For example, when $M_{Z_{BL}} = 1.2 \ \rm{TeV}$ the invisible decay width is $\Gamma_{Z_{BL}}(\rm{invisible}) \sim 3$ GeV.\ ![The total decay width, $\Gamma(Z_{BL})$, versus $Z_{BL}$ mass for a universal soft mass for all sfermions, $M_{SUSY}=1$ TeV.
[]{data-label="GZBL"}](GZBL)

Charged Slepton Decays {#slep.dk}
----------------------

The leading decay channels for the charged left-handed sleptons, $\tilde e_i^\pm$, are the decays into neutralinos and charginos $$\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_a, \ \ \ \tilde e_i^\pm \to \nu \tilde \chi^\pm_A, \ \ \ \tilde e_i^\pm \to \tilde \nu_j W^\pm,$$ where $i,j$ are lepton generational indices, $a$ labels the neutralinos from lightest to heaviest and $A$ labels the charginos from lightest to heaviest. In addition to these decay modes there are various $R$-parity violating decays, which only dominate when the slepton is the LSP. The last channel above usually involves an off-shell product particle (a three-body decay) and is therefore suppressed. The decay widths for the remaining two relevant channels are given by $$\begin{aligned} \label{slepton.neutralino} \Gamma(\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_a) & = \frac{m_{\tilde e_i}}{ 32 \pi} \left( \left| g_{BL} N_{2a} + g_1 N_{3a} + g_2 N_{4a} \right|^2 + \left| \frac{2 \sqrt{2} m_{e_i}}{v_d} N_{5a}\right|^2 \right) \left(1 - \frac{m_{\tilde \chi^0_a}^2} {m_{\tilde e_i}^2}\right)^2, \\ \label{slepton.chargino} \Gamma(\tilde e_i^\pm \to \nu_j \tilde \chi^\pm_A) & = \frac{m_{\tilde e_i}}{ 16 \pi} \left| g_2 V^-_{1A} \right|^2 \left(1 - \frac{m_{\tilde \chi^-_A}^2} {m_{\tilde e_i}^2}\right)^2,\end{aligned}$$ where $N_{ab}$ diagonalizes the neutralino mass matrix and in the chargino sector one has $V^- X V^+ = \text{diag} \left(m_{\tilde \chi_1}, m_{\tilde \chi_2}\right).$ Due to the presence of the right-handed neutrino and $\tilde B'$, positions 2, 3, 4 and 5 in $N$ refer to $\tilde B', \tilde B, \tilde W$ and $\tilde{H}_d$, respectively. While it is hard to make predictions for the branching ratios of the charged sleptons decaying into a charged lepton and the LSP without knowing the details of the spectrum, we briefly outline the best case scenario.
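The two widths in Eqs. (\[slepton.neutralino\]) and (\[slepton.chargino\]) can be compared directly in the pure-wino limit. The sketch below uses illustrative coupling values and hypothetical mixing-matrix entries ($N_{ab}$, $V^-_{1A}$); it reproduces the factor-of-2 ratio discussed next.

```python
import math

# Sketch of the slepton partial widths above; couplings and mixings are
# illustrative numbers, not a fit to any spectrum.
G1, G2, GBL = 0.36, 0.65, 0.33

def gamma_to_neutralino(msl, mchi, N2, N3, N4, N5=0.0, me=0.0, vd=24.0):
    c = abs(GBL * N2 + G1 * N3 + G2 * N4) ** 2 \
        + abs(2 * math.sqrt(2) * me / vd * N5) ** 2
    return msl / (32 * math.pi) * c * (1 - mchi ** 2 / msl ** 2) ** 2

def gamma_to_chargino(msl, mchi, Vm1):
    return msl / (16 * math.pi) * abs(G2 * Vm1) ** 2 \
        * (1 - mchi ** 2 / msl ** 2) ** 2

# Pure-wino limit (N4 = 1, V^-_11 = 1, degenerate chargino and neutralino):
# the neutralino channel carries half the chargino rate, i.e. a ~33%
# branching ratio into the LSP.
gn = gamma_to_neutralino(200.0, 100.0, 0.0, 0.0, 1.0)
gc = gamma_to_chargino(200.0, 100.0, 1.0)
print(gn / gc)  # 0.5
```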
Our final results will be given both with this best case scenario in mind and with arbitrary $\text{Br}(\tilde e_i \to e_i \tilde \chi_1^0)$. For a mostly bino LSP, it is possible that the charged selectrons decay one hundred percent of the time into the LSP, since the charginos and other neutralinos could be heavier. For a wino LSP, however, the lightest chargino channel is very likely to be open. Because of the factor of 2 difference between Eqs. (\[slepton.neutralino\]) and (\[slepton.chargino\]), if all other neutralinos and charginos are kinematically disallowed, one can expect $$\frac{\Gamma(\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_1)}{\Gamma(\tilde e_i^\pm \to \nu_j \tilde \chi^\pm_1)} \sim \frac{1}{2},$$ meaning a $33 \%$ branching ratio for a charged slepton into a wino LSP. Meanwhile, the left-handed selectron does not couple to the charged Higgsino. Therefore, in the same limit where all other neutralinos and charginos are out of kinematic range, the charged sleptons decay one hundred percent of the time into the LSP for a Higgsino LSP.

Neutralino Decays {#chi.decay}
-----------------

The leading decay channels for the lightest neutralino, $\tilde{\chi}_1^0$, include $$\begin{aligned} \label{chi.decays} \tilde{\chi}_1^0 \to e^{\pm}_i W^{\mp}, \ \ \tilde{\chi}_1^0 \to \nu_i Z, \ \ \tilde{\chi}_1^0 \to \nu_i h_k, \ \ \tilde{\chi}_1^0 \to e_i^\pm H^\mp.\end{aligned}$$ The amplitudes for the first two channels are proportional to the mixing between the leptons and neutralinos, while the last one is proportional to the Dirac-like Yukawa terms. While decays to all the MSSM Higgses are possible, typically only the lightest MSSM Higgs, $h$ ($k = 1$), is light enough for the scenario we consider here, and so we will only take it into account. A naive estimate of the decay width yields $$\Gamma (\tilde{\chi}_1^0) \sim \frac{g_2^2}{32 \pi} |V_{\chi \nu }|^2 M_{\chi},$$ where $V_{\chi \nu}$ is the mixing between the neutralino and neutrino, which is proportional to $\sqrt{m_\nu/M_{\chi}}$.
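The naive width estimate above translates directly into a decay length via $L = \hbar c / \Gamma$. The sketch below evaluates it with illustrative inputs; with $m_\nu = 0.1$ eV and a 100 GeV neutralino the result is a fraction of a millimetre, and lighter neutrinos push it to larger, displaced-vertex scales.

```python
import math

# Sketch: naive neutralino decay length from the width estimate above,
# with |V_chi_nu|^2 ~ m_nu / M_chi. Inputs are illustrative.
HBARC = 1.9733e-16  # GeV * m
G2 = 0.65

def decay_length_m(m_nu_gev, m_chi_gev):
    mixing_sq = m_nu_gev / m_chi_gev                       # |V_chi_nu|^2
    gamma = G2 ** 2 / (32 * math.pi) * mixing_sq * m_chi_gev  # width in GeV
    return HBARC / gamma

# m_nu = 0.1 eV (= 1e-10 GeV), M_chi = 100 GeV:
print(decay_length_m(1e-10, 100.0) * 1e3, "mm")
```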
Assuming that $m_\nu < 0.1$ eV, one finds for the decay length $L(\tilde{\chi}_1^0) \gg 0.6$ mm. Therefore, even without making a detailed analysis of the decays of the lightest neutralino, one expects signals with lepton number violation and displaced vertices in part of the parameter space. For a recent analysis of the neutralino decays in R-parity violating models see Refs. [@Porod; @Bobrovskyi:2011vx]. The specific decay width expressions are $$\begin{aligned} \label{lWL} \Gamma^{e_i W_L}&\equiv& \Gamma(\tilde{\chi}_a \to e_i^\pm W_L^\mp)= \frac{g_2^2}{64\pi M_W^2}|V_{i a}|^2 m_{\tilde{\chi}_a}^3 \left(1- \frac{m_W^2}{m_{\tilde{\chi}_a}^2} \right)^2, \\ \label{lWT} \Gamma^{e_i W_T}&\equiv&\Gamma(\tilde{\chi}_a \to e_i^\pm W_T^\mp)= \frac{g_2^2}{32\pi }|V_{i a}|^2 m_{\tilde{\chi}_a} \left(1- \frac{m_W^2}{m_{\tilde{\chi}_a}^2} \right)^2, \\ \label{nuZL} \Gamma^{\nu_i Z_L}&\equiv&\Gamma(\tilde{\chi}_a\to \nu_i Z_L)=\frac{g_2^2}{ 64\pi M_W^2}|V_{i a}|^2 m_{\tilde{\chi}_a}^3 \left(1- \frac{m_Z^2}{m_{\tilde{\chi}_a}^2} \right)^2, \\ \label{nuZT} \Gamma^{\nu_i Z_T}&\equiv&\Gamma(\tilde{\chi}_a \to \nu_i Z_T)=\frac{g_2^2}{ 32\pi c_W^2}|V_{i a}|^2 m_{\tilde{\chi}_a} \left(1- \frac{m_Z^2}{m_{\tilde{\chi}_a}^2} \right)^2, \\ \label{nuh} \Gamma^{\nu_i h}&\equiv&\Gamma(\tilde{\chi}_a \to \nu_i h) = \frac{g_2^2}{64\pi M_W^2} |V_{i a}|^2 \cos^2 \alpha \; m_{\tilde{\chi}_a}^3 \left(1- \frac{m_h^2}{m_{\tilde{\chi}_a}^2} \right)^2.\end{aligned}$$ Here $\alpha$ is the mixing angle in the Higgs sector and in the decoupling limit, $M_A^2 \gg M_Z^2$, which we assume: $\cos \alpha = \sin \beta$. The index $i$ indicates the generation of the lepton and $a$ the neutralino, with $a = 6$ the heaviest and $a=1$ the lightest. These expressions depend on the mixing between the light neutrinos and the neutralinos, $V_{i a}$, which is derived in Appendix \[Vla\].
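Since the common mixing $|V_{ia}|^2$ cancels in branching-ratio-type quantities, the relative sizes of the partial widths above can be sketched with it factored out. The inputs below (masses, $c_W^2$, $\tan\beta = 5$ in the decoupling limit) are illustrative; note how the longitudinal channels dominate at large $m_{\tilde\chi_a}$ through the $m_{\tilde\chi_a}^3/M_W^2$ enhancement.

```python
import math

# Sketch: relative neutralino partial widths from Eqs. (lWL)-(nuh) above,
# with the overall |V_ia|^2 dropped (it cancels in ratios).
G2, MW, MZ, MH, CW2 = 0.65, 80.4, 91.2, 125.0, 0.77  # illustrative inputs

def widths(mchi, cos_alpha):
    f = lambda mv: (1 - mv ** 2 / mchi ** 2) ** 2
    lWL = G2 ** 2 / (64 * math.pi * MW ** 2) * mchi ** 3 * f(MW)
    lWT = G2 ** 2 / (32 * math.pi) * mchi * f(MW)
    nZL = G2 ** 2 / (64 * math.pi * MW ** 2) * mchi ** 3 * f(MZ)
    nZT = G2 ** 2 / (32 * math.pi * CW2) * mchi * f(MZ)
    nh = G2 ** 2 / (64 * math.pi * MW ** 2) * cos_alpha ** 2 * mchi ** 3 * f(MH)
    return lWL, lWT, nZL, nZT, nh

# Decoupling limit: cos(alpha) = sin(beta), here with tan(beta) = 5.
sin_beta = 5 / math.sqrt(26)
ws = widths(300.0, sin_beta)
total = sum(ws)
print([round(w / total, 3) for w in ws])  # relative weights of the channels
```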
Of course, only the decays of the LSP are relevant, since the decays of the other neutralinos will be dominated by $R$-parity conserving decays; hence $a=1$ for our purposes.

  Hierarchy   LSP Personality                  Bino      Wino      Higgsino
 ----------- -------------------------------- --------- --------- -----------------------
              Decay Length                     1.1 mm    0.03 mm   $1 \times 10^{-4}$ mm
              2Br$(\chi_1^0 \to e^- W^+)$      4 $\%$    2 $\%$    13 $\%$
              2Br$(\chi_1^0 \to \mu^- W^+)$    26 $\%$   12 $\%$   27 $\%$
  NH          2Br$(\chi_1^0 \to \tau^- W^+)$   61 $\%$   54 $\%$   30 $\%$
              Br$(\chi_1^0 \to \nu Z^0)$       10 $\%$   29 $\%$   28 $\%$
              Br$(\chi_1^0 \to \nu h)$         0 $\%$    3 $\%$    1 $\%$
              Decay Length                     0.6 mm    0.01 mm   $1 \times 10^{-5}$ mm
              2Br$(\chi_1^0 \to e^- W^+)$      17 $\%$   3 $\%$    25 $\%$
              2Br$(\chi_1^0 \to \mu^- W^+)$    36 $\%$   32 $\%$   19 $\%$
  IH          2Br$(\chi_1^0 \to \tau^- W^+)$   38 $\%$   34 $\%$   26 $\%$
              Br$(\chi_1^0 \to \nu Z^0)$       10 $\%$   29 $\%$   28 $\%$
              Br$(\chi_1^0 \to \nu h)$         0 $\%$    3 $\%$    1 $\%$

  : Values of interest for a sample point in parameter space: $\epsilon_1 = \epsilon_2 = 1, Y_3 = Y_1 = 10^{-6}, M_{Z_{BL}} = M_{\tilde{B}'} = 1 \text{ TeV}, \tan \beta =5 \text{ and } m_h = 125 \text{ GeV}$. Here $M_1, M_2 \text{ and } \mu$ are 100 GeV, 500 GeV and 500 GeV for the bino LSP case, 500 GeV, 150 GeV and 500 GeV for the wino LSP case and 500 GeV, 500 GeV and 100 GeV for the Higgsino LSP case respectively. []{data-label="tab:test.points"}

Table \[tab:test.points\] displays the values of interest for a specific point in parameter space, to give an appreciation for possible values. Notice that in these scenarios the branching ratios for the channels with charged leptons can be large. In Figs. \[LSP.DL.Bino\]-\[LSP.DL.Higgsino\] we show the decay lengths versus LSP mass resulting from a scan over all the possible values of $\epsilon_1$ and $\epsilon_2$ and over the parameters and ranges specified in Table \[tab:scan\].
The points are divided according to the dominant component of the LSP (bino, wino or Higgsino) and the neutrino hierarchy, with the NH shown in (a) and the IH in (b) of each figure. The relevant decay lengths can be understood by studying the mixings in Eq. (\[neutralino\]). Since the Higgsino-neutrino mixing, $\sim Y_\nu v_R$, is the largest, the Higgsino LSP has the shortest decay length. It is followed by the wino LSP, with mixing $\sim g_2 v_L$, and finally the bino LSP, with mixing $\sim g_1 v_L$ and therefore the largest possible decay lengths. Displaced vertices associated with the lifetime of the LSP will only be discernible in a very limited part of the parameter space. ![ Decay length in millimeters versus LSP mass for a dominantly bino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.DL.Bino"}](imgDLNHBino "fig:") (-40,-4)[(a)]{} ![ Decay length in millimeters versus LSP mass for a dominantly bino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.DL.Bino"}](imgDLIHBino "fig:") (-40,-4)[(b)]{} ![ Decay length in millimeters versus LSP mass for a dominantly wino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.DL.Wino"}](imgDLNHWino "fig:") (-40,-4)[(a)]{} ![ Decay length in millimeters versus LSP mass for a dominantly wino LSP in (a) for a NH and in (b) for an IH.
Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.DL.Wino"}](imgDLIHWino "fig:") (-40,-4)[(b)]{} ![Decay length in millimeters versus LSP mass for a dominantly Higgsino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.DL.Higgsino"}](imgDLNHHiggsino "fig:") (-40,-4)[(a)]{} ![Decay length in millimeters versus LSP mass for a dominantly Higgsino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.DL.Higgsino"}](imgDLIHHiggsino "fig:") (-40,-4)[(b)]{} The LSP branching ratios into the various possible channels versus the LSP mass are displayed in Figs. \[LSP.BR.Bino\]-\[LSP.BR.Higgsino\], scanning over the parameters in Table \[tab:scan\] and plotting the dominantly bino, wino and Higgsino LSP in (a) for a NH and in (b) for an IH. The lack of variance with the scanned parameters displayed in the $\nu \; Z$ and $\nu \; h$ channels is due to the sum over all three flavors of neutrinos; it also exists for the sum over the three charged lepton plus $W^\pm$ channels, which total about $50 \%$ (or more if the other channels have not fully opened yet). Although it is not obvious from Figs. \[LSP.BR.Bino\]-\[LSP.BR.Higgsino\], the branching ratio to the electron $W^\pm$ channel is always smaller than either the $\mu^\mp W^\pm$ or the $\tau^\mp W^\pm$ channel in the NH. Since we now know the properties of the neutralino and selectron decays, we are ready to study the production channels. ![ LSP branching ratios versus LSP mass for a dominantly bino LSP in (a) for a NH and in (b) for an IH.
Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$. []{data-label="LSP.BR.Bino"}](imgBrNHBino "fig:") (-40,-4)[(a)]{} ![ LSP branching ratios versus LSP mass for a dominantly bino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$. []{data-label="LSP.BR.Bino"}](imgBrIHBino "fig:") (-40,-4)[(b)]{} ![LSP branching ratios versus LSP mass for a dominantly wino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.BR.Wino"}](imgBrNHWino "fig:") (-40,-4)[(a)]{} ![LSP branching ratios versus LSP mass for a dominantly wino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$.[]{data-label="LSP.BR.Wino"}](imgBrIHWino "fig:") (-40,-4)[(b)]{} ![ LSP branching ratios versus LSP mass for a dominantly Higgsino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$. []{data-label="LSP.BR.Higgsino"}](imgBrNHHiggsino "fig:") (-40,-4)[(a)]{} ![ LSP branching ratios versus LSP mass for a dominantly Higgsino LSP in (a) for a NH and in (b) for an IH. Parameters are scanned according to the ranges specified in Table \[tab:scan\] and over all values of $\epsilon_1$ and $\epsilon_2$. []{data-label="LSP.BR.Higgsino"}](imgBrIHHiggsino "fig:") (-40,-4)[(b)]{} Production Mechanisms and Signals ================================= The lepton number violating signal discussed in the beginning of the previous section proceeds from the pair production of charged sleptons. 
While the MSSM contributions to this production drop rapidly with the charged slepton mass (since that mass must be above the $Z$ threshold), in this model further contributions due to the $Z_{BL}$ resonance can significantly increase the cross section. These enhancements are discussed first, followed by a study of the expected number of events at the LHC.

Slepton Production Mechanisms
-----------------------------

The main production channel for the charged sleptons is through the photon, the $Z$ gauge boson and the $Z_{BL}$ boson $$q(p_1) \bar{q}(p_2) \ \to \ \gamma^*, Z^*, Z^*_{BL} \ \to \ \tilde{e}^* (p_3) \tilde{e} (p_4).$$ The hadronic cross section is given by $$d \sigma_{pp \to \tilde{e}^* \tilde{e}} (s) = \sum_{q=u,d,c,s} \int_{\tau_0}^{1} d \tau \ \frac{d {\cal{L}}^{pp}_{q \bar{q}}}{d \tau} d \hat{\sigma}_{q \bar{q} \to \tilde{e}^* \tilde{e}} (\hat{s}),$$ where $\tau_0=4 M_{\tilde{e}}^2 / s$ and the differential partonic cross section is $$d \hat{\sigma}_{q \bar{q} \to \tilde{e}^* \tilde{e}} (\hat{s}) = | \overline{{\cal M}}_{q \bar{q} \to \tilde{e}^* \tilde{e}} (\hat{s})|^2 \frac{\rm{d PS}^{(2)}}{2 \hat{s}}.$$ Here $\rm{d PS}^{(2)}=d \hat{t}/ 8 \pi \hat{s}$ is the two-particle phase-space element and $\hat{s}= \tau s$, where $s$ is the hadronic center-of-mass energy squared. As is well known, the parton luminosities are given by $$\frac{\rm{d} {\cal L}_{ab}^{AB} }{\rm{d} \tau}= \frac{1}{1 + \delta_{ab}} \int_{\tau}^1 \frac{dx}{x} \left( f_{a/A} (x,\mu) f_{b/B} (\frac{\tau}{x}, \mu) + f_{a/B} (\frac{\tau}{x}, \mu) f_{b/A} (x, \mu) \right),$$ where the functions $f_{a/A} (x,\mu)$ are the parton distribution functions (PDFs).
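The parton-luminosity integral above can be evaluated numerically once PDFs are supplied. The sketch below uses a toy PDF shape $f(x) \sim (1-x)^3/x$, which is NOT a real fit (actual analyses use tabulated PDF sets); it only illustrates the steep fall of the luminosity with $\tau = \hat s / s$, which is why heavier slepton pairs are produced much more rarely.

```python
# Sketch: midpoint-rule evaluation of the parton-luminosity integral above
# with a toy PDF shape. Real computations use fitted PDF sets.

def f_toy(x):
    return (1 - x) ** 3 / x

def dlumi_dtau(tau, fa=f_toy, fb=f_toy, same=False, steps=2000):
    """1/(1+delta_ab) * Integral_tau^1 dx/x [fa(x) fb(tau/x) + (a <-> b)]."""
    total, dx = 0.0, (1 - tau) / steps
    for i in range(steps):
        x = tau + (i + 0.5) * dx
        total += (fa(x) * fb(tau / x) + fa(tau / x) * fb(x)) / x * dx
    return total / (1 + (1 if same else 0))

# Larger tau_0 = 4 M_slepton^2 / s means sampling a much smaller luminosity.
print(dlumi_dtau(0.01), dlumi_dtau(0.1))
```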
The amplitude squared for these processes can be written as $$| \overline{{\cal M}}_{q \bar{q} \to \tilde{e}^* \tilde{e}} (\hat{s})|^2 = \frac{2}{3} \left( \hat{u} \hat{t} - M_{\tilde{e}}^4 \right) | {{\cal A}}_{q \bar{q} \to \tilde{e}^* \tilde{e}} (\hat{s})|^2,$$ with $\hat{s}=(p_1 + p_2)^2$, $\hat{t}=(p_1-p_3)^2$, $\hat{u}=(p_1-p_4)^2$ and $${{\cal A}}_{q \bar{q} \to \tilde{e}^* \tilde{e}} (\hat{s})=\frac{C_{q\bar{q}\gamma} C_{\gamma \tilde{e}^* \tilde{e} }}{\hat{s}} \ + \ \frac{2C_{q\bar{q} Z} C_{Z \tilde{e}^* \tilde{e} }}{\hat{s}-M_Z^2 + i M_Z \Gamma_Z} \ + \ \frac{C_{q\bar{q} Z_{BL}} C_{Z_{BL} \tilde{e}^* \tilde{e} }}{\hat{s}-M_{Z_{BL}}^2 + i M_{Z_{BL}} \Gamma_{Z_{BL}}},$$ where $$\begin{aligned} C_{\bar{q} q \gamma}&=&e_q \ e, \ \ C_{\gamma \tilde{e}^*_L \tilde{e}_L}=e_l \ e, \ \ C_{Z \tilde{e}^*_L \tilde{e}_L} = \frac{e}{\sin 2 \theta_W} L_e, \\ C_{\bar{q}_L q_L Z} &=& \frac{e L_q}{\sin 2 \theta_W}, \ \ C_{\bar{q}_R q_R Z} = \frac{e R_q}{\sin 2 \theta_W}, \ \ C_{\bar{f} f Z_{BL}}= g_{BL} \frac{n_{BL}^f}{2}, \ \ C_{\tilde{f}^* \tilde{f} Z_{BL}}= g_{BL} \frac{n_{BL}^f}{2}.\end{aligned}$$ Here $L_f=I_f^3 - e_f \sin^2 \theta_W$ and $R_f=-e_f \sin^2 \theta_W$, where $I_f^3$ is the isospin of the fermion $f$. Now, using the relations $$\begin{aligned} \hat{u}&=& 2 M_{\tilde{e}}^2 - \hat{t}-\hat{s}, \\ \hat{t} &=& M_{\tilde{e}}^2 - \frac{\hat{s}}{2} + y \sqrt{\hat{s} \left( \frac{\hat{s}}{4} - M_{\tilde{e}}^2 \right)} ,\end{aligned}$$ we can compute the cross section $$\sigma_{pp \to \tilde{e}^*_L \tilde{e}_L} (s)= \sum_{q=u,d,c,s} \ \int_{-1}^1 dy \int_{\tau_0}^1 d \tau \frac{d {\cal L}^{pp}_{q \bar{q}}}{d \tau} \ \sigma (M_{\tilde{e}}, y, \tau, s).$$ ![Drell-Yan production cross sections for the charged sleptons in our model. The dashed line corresponds to the prediction in the MSSM and the solid lines show the results in our model for different values of the $Z_{BL}$ mass.
The gauge coupling $g_{BL}$ is assumed to be at the maximum value allowed by the experimental constraints, Eq. (\[ZBL.const\]). []{data-label="Slepton.sigma"}](imgppToe1e1LHC7 "fig:") (-40,-4)[(a)]{} ![Drell-Yan production cross sections for the charged sleptons in our model. The dashed line corresponds to the prediction in the MSSM and the solid lines show the results in our model for different values of the $Z_{BL}$ mass. The gauge coupling $g_{BL}$ is assumed to be at the maximum value allowed by the experimental constraints, Eq. (\[ZBL.const\]). []{data-label="Slepton.sigma"}](imgppToe1e1LHC14 "fig:") (-40,-4)[(b)]{} The numerical results for the selectron production cross sections are shown in Fig. \[Slepton.sigma\] for different scenarios, with $g_{BL}$ assumed to be at the maximum value allowed by the experimental constraints, Eq. (\[ZBL.const\]). We have compared our analytical results for the cross section with those in Ref. [@Dawson] and found agreement in the case of the MSSM. In Fig. \[Slepton.sigma\] we can see that even in the MSSM the cross section can be large, and when the $Z_{BL}$ is included the cross section can be even larger due to the resonance enhancement. For example, when $M_{Z_{BL}}=1$ TeV one can have a cross section above 1 fb when the selectron mass is below 450 GeV and $\sqrt{s}=7$ TeV.

Signals with Multi-Leptons
--------------------------

In this paper we wish to investigate the most promising signals associated with lepton number violation, through the process $$q \bar{q} \ \to \ \gamma^*, Z^*, Z^*_{BL} \ \to \ \tilde{e}_i^* \tilde{e}_i \to e_i^+ e_i^- \tilde{\chi}^0_1 \tilde{\chi}^0_1 \ \to \ e_i^+ e_i^- e^{\pm}_j e^{\pm}_k \ 4j,$$ where $i,j,k=1,2,3$ are generational indices. See Fig. \[Signal\] for an illustration of these signals.
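A rough sketch of the event-rate arithmetic for this process: the pair-production cross section times the integrated luminosity times a combinatoric branching factor of the form $2(2-\delta_{jk})\,\text{Br}(\tilde e \to e \tilde\chi_1^0)^2\,\text{Br}(\tilde\chi_1^0 \to e_j W)\,\text{Br}(\tilde\chi_1^0 \to e_k W)\,\text{Br}(W \to jj)^2$. The branching-ratio inputs below are illustrative numbers within the ranges quoted in this section.

```python
# Sketch: combinatoric factor F_jk and event-count estimate
# N = sigma x integrated luminosity x F_jk. Inputs are illustrative.

BR_W_JJ = 0.67  # hadronic W branching ratio

def f_jk(br_slep_chi, br_chi_ejW, br_chi_ekW, same_flavor):
    delta = 1 if same_flavor else 0
    return 2 * (2 - delta) * br_slep_chi ** 2 \
        * br_chi_ejW * br_chi_ekW * BR_W_JJ ** 2

def n_events(sigma_fb, lumi_ifb, f):
    return sigma_fb * lumi_ifb * f

# e.g. a bino LSP with Br(slepton -> e chi) = 100% and
# Br(chi -> mu W) = 30% in the mu mu channel:
f_mumu = f_jk(1.0, 0.3, 0.3, same_flavor=True)
print(f_mumu)                       # ~0.08
print(n_events(1.0, 10.0, f_mumu))  # events for 1 fb and 10 fb^-1
```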
However, we will focus on the electron and muon channels, since hadronic activity associated with the tau would obscure the signal of lepton number violation, namely three same-sign leptons, one lepton with opposite sign, four jets and no missing energy. Taking a cue from Eq. (\[Selectron.Mass\]), which shows that the left-handed (right-handed) sleptons receive a negative (positive) contribution to their mass from the $B-L$ $D$-term, we assume that only the left-handed sleptons are producible through this process. We begin by giving an estimate of the number of events in the limit of a mostly bino, wino or Higgsino LSP. We present results for a 7 TeV LHC with $10 \text{ fb}^{-1}$ of data. The combinatoric factors for the channels of interest are given by $$\begin{aligned} \begin{split} \mathcal F_{jk} = & 2 \left(2 - \delta_{jk} \right) \, \text{Br}\left(\tilde e^\pm_i \to e^\pm_i \tilde \chi_1^0\right)^2 \times \text{Br}\left(\tilde \chi_1^0 \to e^\pm_j W^\mp \right) \\ & \times \text{Br}\left(\tilde \chi_1^0 \to e^\pm_k W^\mp \right) \times \text{Br}\left(W^\pm \to jj\right)^2, \end{split}\end{aligned}$$ so that the final states are $$e^\pm e^\mp e^\mp e^\mp 4j, \quad e^\pm e^\mp e^\mp \mu^\mp 4j, \quad e^\pm e^\mp \mu^\mp \mu^\mp 4j,$$ $$\mu^\pm \mu^\mp e^\mp e^\mp 4j, \quad \mu^\pm \mu^\mp e^\mp \mu^\mp 4j, \quad \mu^\pm \mu^\mp \mu^\mp \mu^\mp 4j.$$

  Hierarchy   LSP                        $\text{Br}(\tilde \chi_1^0 \to e^\pm W^\mp)$   $\text{Br}(\tilde \chi_1^0 \to \mu^\pm W^\mp)$   $\quad \quad \ \mathcal{F}_{ee} \quad \quad \ $   $\ \quad \quad \mathcal{F}_{e \mu} \quad \quad \ $   $\quad \quad \ \mathcal{F}_{\mu \mu} \quad \quad \ $
 ----------- -------------------------- ---------------------------------------------- ------------------------------------------------ ------------------------------------------------- ---------------------------------------------------- ------------------------------------------------------
  NH          Bino                       1-20%                                          10-50%                                           0.0001-0.036                                      0.002-0.18                                           0.01-0.22
  NH          Wino                       1-20 %                                         10-50%                                           0.00001-0.004                                     0.0002-0.02                                          0.001-0.024
  NH          $\quad$ Higgsino $\quad$   2-25 %                                         10-50%                                           0.0004-0.056                                      0.004-0.22                                           0.008-0.24
  IH          Bino                       10-60%                                         10-30 %                                          0.009-0.32                                        0.018-0.32                                           0.009-0.08
  IH          Wino                       10-60%                                         13-30%                                           0.001-0.035                                      0.003-0.034                                          0.002-0.009
  IH          Higgsino                   10-60%                                         13-35%                                           0.008-0.32                                        0.024-0.37                                           0.016-0.11

  : Ranges for the branching ratios of the LSP into a charged lepton and $W$ boson, taken from the corresponding dense regions in Figs. \[LSP.BR.Bino\] - \[LSP.BR.Higgsino\]. These are used to calculate the overall combinatoric factor $\mathcal{F}_{jk}$ for the final state $e_i^\pm e_i^\mp e^\pm_j e^\pm_k 4j$. Values are separated by the composition of the LSP: mostly bino, wino and Higgsino, and for both the normal and inverted hierarchies.[]{data-label="Comb"}

We assume that $\text{Br}\left(\tilde{e}^\pm_i \to e^\pm_i \tilde{\chi}^0_1 \right) \sim 100 \%, 33 \%, 100 \%$ for a bino, wino and Higgsino LSP, respectively, following the discussion in Section \[slep.dk\]. The branching ratio of the $W$ boson into jets is about 67%. For the RPV neutralino decays we pick the most prominent regions from Figs. \[LSP.BR.Bino\] - \[LSP.BR.Higgsino\] and display these values along with $\mathcal{F}_{jk}$ in Table \[Comb\]. Values are shown for both the normal and inverted hierarchies. To calculate the number of events expected after 10 $\text{fb}^{-1}$ of data, we convolute the combinatorics in Table \[Comb\] with the cross sections for a 1 TeV $Z_{BL}$ shown in Fig. \[Slepton.sigma\] and multiply by ten; the results are displayed in Figs. \[Num.Events.Bino\] - \[Num.Events.Higgsino\]. Notice that in most of the scenarios one can have several events which are basically background free. The main backgrounds, coming from $t\bar{t} W Z$ and $jjjj W^{\pm} W^{\pm} Z$, are very suppressed. ![ Number of $e_i^\pm e_i^\mp e_j^\pm e_k^\pm 4j$ at a 7 TeV LHC for 10 $\text{fb}^{-1}$ of data for a bino LSP.
Branching ratio values are shown in Table \[Comb\], while cross section values are taken from Fig. \[Slepton.sigma\]. Data is divided into (a) for the NH, and (b) for the IH.[]{data-label="Num.Events.Bino"}](imgNumNHBino7 "fig:") (-40,-4)[(a)]{} ![ Number of $e_i^\pm e_i^\mp e_j^\pm e_k^\pm 4j$ at a 7 TeV LHC for 10 $\text{fb}^{-1}$ of data for a bino LSP. Branching ratio values are shown in Table \[Comb\], while cross section values are taken from Fig. \[Slepton.sigma\]. Data is divided into (a) for the NH, and (b) for the IH.[]{data-label="Num.Events.Bino"}](imgNumIHBino7 "fig:") (-40,-4)[(b)]{} ![ Number of $e_i^\pm e_i^\mp e_j^\pm e_k^\pm 4j$ at a 7 TeV LHC for 10 $\text{fb}^{-1}$ of data for a wino LSP. Branching ratio values are shown in Table \[Comb\], while cross section values are taken from Fig. \[Slepton.sigma\]. Data is divided into (a) for the NH, and (b) for the IH. []{data-label="Num.Events.Wino"}](imgNumNHWino7 "fig:") (-40,-4)[(a)]{} ![ Number of $e_i^\pm e_i^\mp e_j^\pm e_k^\pm 4j$ at a 7 TeV LHC for 10 $\text{fb}^{-1}$ of data for a wino LSP. Branching ratio values are shown in Table \[Comb\], while cross section values are taken from Fig. \[Slepton.sigma\]. Data is divided into (a) for the NH, and (b) for the IH. []{data-label="Num.Events.Wino"}](imgNumIHWino7 "fig:") (-40,-4)[(b)]{} ![Number of $e_i^\pm e_i^\mp e_j^\pm e_k^\pm 4j$ at a 7 TeV LHC for 10 $\text{fb}^{-1}$ of data for a Higgsino LSP. Branching ratio values are shown in Table \[Comb\], while cross section values are taken from Fig. \[Slepton.sigma\]. Data is divided into (a) for the NH, and (b) for the IH. []{data-label="Num.Events.Higgsino"}](imgNumNHHiggsino7 "fig:") (-40,-4)[(a)]{} ![Number of $e_i^\pm e_i^\mp e_j^\pm e_k^\pm 4j$ at a 7 TeV LHC for 10 $\text{fb}^{-1}$ of data for a Higgsino LSP. Branching ratio values are shown in Table \[Comb\], while cross section values are taken from Fig. \[Slepton.sigma\]. Data is divided into (a) for the NH, and (b) for the IH.
[]{data-label="Num.Events.Higgsino"}](imgNumIHHiggsino7 "fig:") (-40,-4)[(b)]{} In order to understand the testability of the model at the LHC we show curves of constant number of $e_i^\pm e_i^\mp e_j^\pm e_j^\pm 4j$ events per 10 $\text{fb}^{-1}$ of data in Fig. \[Const.Num.Events\] in the $\text{Br}(\tilde \chi^0_1 \to e_j^\pm W^\mp)-\text{Br}(\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_1)$ plane. Values are shown for a 7 TeV LHC, with a 1 TeV $Z_{BL}$ and $m_{\tilde e_i}=200$ GeV. In the case of the observation of such events at the LHC, the $Z_{BL}$ mass can be reconstructed from electron-electron and muon-muon events, and the selectron mass may be reconstructible from its decay into two leptons and two jets, thereby allowing the calculation of the cross section for charged slepton pair production. A plot such as Fig. \[Const.Num.Events\] can then be used to get a better handle on the two unknown branching ratios and shed further light on the model. In order to estimate the reach of the LHC we also present curves of constant number of $e_i^\pm e_i^\mp e_j^\pm e_j^\pm 4j$ events per 10 $\text{fb}^{-1}$ of data in the $\text{Br}\left(\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_1 \right)-m_{\tilde e_i}$ plane in Fig. \[Const.Num.Events2\]. This is again for a 7 TeV LHC with a 1 TeV $Z_{BL}$, and we show two possible values of $\text{Br}\left(\tilde \chi^0_1 \to e_j^\pm W^\mp \right)$, representing the upper (lower) part of that range in blue (green). One can see that even if the slepton mass is around 450 GeV one could observe a few events with multileptons and four jets. It is important to mention that we satisfy the recent bounds coming from ATLAS [@Atlas1; @Atlas2]. ![ Curves of constant number of events for the final state $e_i^\pm e_i^\mp e_j^\pm e_j^\pm 4j$ in the $\text{Br}\left(\tilde \chi^0_1 \to e_j^\pm W^\mp \right)-\text{Br}\left(\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_1 \right)$ plane.
Values are shown for a 7 TeV LHC, $M_{Z_{BL}}=1$ TeV and $m_{\tilde e_i}=200$ GeV. []{data-label="Const.Num.Events"}](imgNumEvBrBr7) ![ Curves of constant number of events for the final state $e_i^\pm e_i^\mp e_j^\pm e_j^\pm 4j$ in the $\text{Br}\left(\tilde e_i^\pm \to e_i^\pm \tilde \chi^0_1 \right)-m_{\tilde e_i}$ plane. Values are shown for a 7 TeV LHC, $M_{Z_{BL}}=1$ TeV and for two different values of $\text{Br}\left(\tilde \chi^0_1 \to e_j^\pm W^\mp \right)$, representing the upper (lower) part of that range in blue (green). []{data-label="Const.Num.Events2"}](imgNumEvBrme7)

Summary and Outlook
===================

In this article we have studied in detail the theory proposed in Ref. [@Rp2], which we consider the simplest gauge theory for R-parity violation. This theory makes a clear prediction for the LHC: in order to break the $B-L$ gauge symmetry the right-handed sneutrino must acquire a vacuum expectation value, so one should observe lepton number violation at colliders. We have found the following results:

- In Fig. 2 we have illustrated in a simple way that one can have a realistic scenario for all sfermion masses even if some of the sfermion masses receive a large negative contribution from the $B-L$ $D$-term of the theory. We have shown that, in order to avoid tachyonic masses, one should satisfy the condition $M_{Z_{BL}} < \sqrt{2} m_{\tilde{L}}$. This simple result helps us to understand the constraints on the spectrum.

- The full spectrum of the theory and the constraints coming from neutrino masses were analyzed in detail. The spectrum for neutrinos is interesting since it contains five light neutrinos: three active neutrinos and two sterile neutrinos. Using the experimental constraints on the masses and mixing of the active neutrinos, we show in Fig. 4 the allowed values for the vacuum expectation values of the left-handed sneutrinos and the Yukawa couplings.
As we have discussed in the text, these results are crucial for understanding the decays of the lightest supersymmetric particle in the theory.

- In Figs. 5 and 6 we have shown the properties of the new neutral gauge boson in the theory, the $B-L$ gauge boson. Since one has two extra light neutrinos in the theory, the invisible decay width is larger in this case. The contributions of the supersymmetric particles to the decay width are small, so the $Z_{BL}$ behaves like the $B-L$ gauge boson of the non-SUSY scenarios.

- We have investigated the neutralino decays in great detail. In Figs. 7-9 we have shown the results for the decay length in the different cases. As one can appreciate in Figs. 7-9, there are some scenarios in the bino limit where one could expect displaced vertices. The branching ratios have been investigated in Figs. 10-12, and we can summarize the results in the following way: $$\rm{Br}( \tilde{\chi}^0_1 \to \tau W), \rm{Br}(\tilde{\chi}^0_1 \to \mu W)> \rm{Br}(\tilde{\chi}^0_1 \to e W),$$ in the normal hierarchy, and $$\rm{Br}( \tilde{\chi}^0_1 \to e W), \rm{Br}(\tilde{\chi}^0_1 \to \mu W)> \rm{Br}(\tilde{\chi}^0_1 \to \tau W),$$ in the inverted hierarchy, in the majority of the parameter space.

- We have studied the main production channels for the charged sleptons at the LHC. In this case one can produce the charged sleptons through the photon and the $Z$, as in the MSSM, and through the new neutral gauge boson, $Z_{BL}$, of our model. As we have shown in Fig. 13, the production cross section can be large, and thanks to the presence of the $Z_{BL}$ one can have even larger values of the cross section due to the resonance enhancement. We should point out that this production channel (through the photon and the $Z$) is very important for understanding the signals in any model for R-parity violation.

- The most striking signals for lepton number violation in this context are the channels with three leptons with the same electric charge and four jets. In Figs.
\[Num.Events.Bino\]-\[Const.Num.Events2\] we have shown that one can have a large number of events at the LHC with only 10 $\rm{fb}^{-1}$. The background for these channels is suppressed, therefore there is a hope to test or rule out this theory in the near future. [*Acknowledgments*]{}: The work of P.F.P. is supported by the James Arthur Fellowship, CCPP-New York University. P. F. P. thanks A. Haas for a discussion about the searches for multi-leptons at the LHC. Mass Matrices ============= In the case of the CP-odd neutral scalars, in the basis $(A_L, A_R, A_d, A_u)$, one finds that the mass matrix reads as $$\label{CP-odd} {\cal M}_{odd}^2 = \begin{pmatrix} \frac{v_R}{v_L} B_\nu & B_\nu & -\frac{1}{\sqrt{2}} Y_\nu \mu v_R & -\frac{1}{\sqrt{2}} a_\nu v_R \\ B_\nu & \frac{v_L}{v_R} B_\nu & -\frac{1}{\sqrt{2}} Y_\nu \mu v_L & -\frac{1}{\sqrt{2}} a_\nu v_L \\ -\frac{1}{\sqrt{2}} Y_\nu \mu v_R & -\frac{1}{\sqrt{2}} Y_\nu \mu v_L & \frac{v_u}{v_d} B\mu \ + \frac{Y_\nu \mu v_L v_R}{\sqrt{2} v_d} & B\mu \\ -\frac{1}{\sqrt{2}} a_\nu v_R & -\frac{1}{\sqrt{2}} a_\nu v_L & B\mu & \frac{v_d}{v_u} B\mu \ - \frac{a_\nu \mu v_L v_R}{\sqrt{2} v_u} \end{pmatrix},$$ while for the CP-even scalars, in the basis $(h_L, h_R, h_d, h_u)$, one finds: $$\label{CP-even} {\cal M}_S^2 = \begin{pmatrix} S_{\nu}^2 & S_{\nu H}^2 \\ \left(S_{\nu H}^{2}\right)^T & S_{H}^2 \end{pmatrix},$$ where $$\begin{aligned} S_\nu^2 \equiv & \begin{pmatrix} \frac{1}{4} \left(g_1^2 + g_2^2 + g_{BL}^2 \right) v_L^2 + \frac{v_R}{v_L} B_\nu & -\frac{1}{4} \left(g_{BL}^2 - 2 Y_\nu^{2}\right) v_L v_R - B_\nu \\ -\frac{1}{4} \left(g_{BL}^2 - 2 Y_\nu^{2}\right) v_L v_R - B_\nu & \frac{1}{4} g_{BL}^2 v_R^2 + \frac{v_L}{v_R} B_\nu \end{pmatrix},\end{aligned}$$ $$\begin{aligned} S_{\nu H}^2 \equiv & \begin{pmatrix} \frac{1}{4} \left(g_1^2 + g_2^2 \right) v_L v_d - \frac{1}{\sqrt{2}} Y_\nu \mu v_R & -\frac{1}{4} \left(g_1^2 + g_2^2 - 4 Y_\nu^{2} \right) v_L v_u + \frac{1}{\sqrt{2}} a_\nu v_R \\ -\frac{1}{\sqrt{2}} Y_\nu 
\mu v_L & Y_\nu^{2} v_R v_u + \frac{1}{\sqrt{2}} a_e v_L \end{pmatrix}, \\ \nonumber \\ S_{H}^2 \equiv & \begin{pmatrix} \frac{1}{4} \left( g_1^2 + g_2^2 \right)v_d^2 + \frac{v_u}{v_d} B\mu + \frac{Y_\nu \mu v_R v_L}{\sqrt{2} v_d} & - \frac{1}{4} \left(g_1^2 + g_2^2 \right) v_u v_d - B\mu \\ - \frac{1}{4} \left(g_1^2 + g_2^2 \right) v_u v_d - B\mu & \frac{1}{4} \left(g_1^2 + g_2^2 \right) v_u^2 + \frac{v_d}{v_u} B\mu - \frac{a_\nu v_L v_R}{\sqrt{2} v_u} \end{pmatrix}. \end{aligned}$$ In the case of the charged scalars, in the basis $(\tilde{e}, (\tilde{e}^c)^*, H_u^-, H_d^-)$ the mass matrix reads as: $$\label{Charged} M_C^2 = \begin{pmatrix} C_e^2 & C_{e C}^2 \\ \left(C_{e H}^2 \right)^T & C_{H}^2 \end{pmatrix},$$ with $$\begin{aligned} C_e^2 \equiv & \begin{pmatrix} C_{11}^2 & B_e \\ B_e & C_{22}^2 \end{pmatrix},\end{aligned}$$ $$\begin{aligned} C_{e H}^2 \equiv & \begin{pmatrix} \frac{1}{4} g_2^2 v_L v_d - \frac{1}{2} Y_e^2 v_L v_d - \frac{1}{\sqrt{2}} Y_\nu \mu v_R & \frac{1}{4} g_2^2 v_L v_u - \frac{1}{2} Y_\nu^2 v_L v_u - \frac{1}{\sqrt{2}} a_\nu v_R \\ \frac{1}{2} Y_e Y_\nu v_R v_u + \frac{1}{\sqrt{2}} a_e v_L & \frac{1}{2} Y_e Y_\nu v_R v_d + \frac{1}{\sqrt{2}} Y_e \mu v_L \end{pmatrix}, \\ \nonumber \\ C_{H}^2 \equiv & \begin{pmatrix} \frac{1}{4} g_2^2\left(v_u^2-v_L^2 \right) + B\mu \frac{v_u}{v_d} + \frac{1}{2} Y_e^2 v_L^2 + \frac{Y_\nu \mu v_R v_L}{\sqrt{2} v_d} & B\mu + \frac{1}{4} g_2^2 v_u v_d \\ B\mu + \frac{1}{4} g_2^2 v_u v_d & \frac{1}{4} g_2^2 \left(v_d^2 + v_L^2 \right) + \frac{v_d}{v_u}B\mu - \frac{1}{2} Y_\nu^{2} v_L^2 - \frac{a_\nu v_L v_R}{\sqrt{2} v_u} \end{pmatrix}.\end{aligned}$$ In the above equations $C_{11}^2$ and $C_{22}^2$ are given by $$\begin{aligned} C_{11}^2 & = & \frac{1}{4} g_2^2 \left(v_u^2 - v_d^2 \right) + \frac{1}{2}Y_e^2 v_d^2 - \frac{1}{2} Y_\nu^{2} v_u^2 + \frac{v_R}{v_L} B_e,\end{aligned}$$ and $$\begin{aligned} C_{22}^2 &=& M_{\tilde e^c}^2 + \frac{1}{4} g_1^2 \left(v_u^2 - v_d^2 -v_L^2 \right) + \frac{1}{8} g_{BL}^2 
\left(v_R^2 - v_L^2 \right) + \frac{1}{2} Y_e^2 \left( v_d^2 + v_L^2 \right).\end{aligned}$$ We also define for convenience: $B_\nu = \frac{1}{\sqrt{2}} \left(Y_\nu \mu v_d - a_\nu v_u \right)$ and $B_e = \frac{1}{\sqrt{2}} \left(Y_e \mu v_u - a_e v_d \right)$. Decay Amplitude {#ZPDecay} =============== The amplitude for $Z_{BL}$ are $$\begin{aligned} \label{Zff} \left| \overline{\mm} (Z_{BL} \to f_i \bar{f_i}) \right|^2 &=&~ \frac{4}{3} c_{f} \left( \frac{g_{BL} }{2}n_{BL}^{f}\right)^2 {m_{Z_{BL}}}^2 \left(1+\frac{2 m_{f_i}^2}{{m_{Z_{BL}}}^2} \right), \ \ \ f_i = u,d,c,s,b, t, e, \mu, \tau; \\ \left| \overline{\mm} (Z_{BL} \to \nu_i \bar{\nu_i}) \right|^2 &=&~ \frac{2}{3} \left( \frac{g_{BL} }{2}n_{BL}^{\nu}\right)^2 {m_{Z_{BL}}}^2, \\ \left| \overline{\mm} (Z_{BL} \to \bar{N} N) \right|^2 &=&~ \frac{2}{3} \left( \frac{g_{BL} }{2}n_{BL}^{\nu_R}\right)^2 \, {m_{Z_{BL}}}^2 \, \left(1 - 4\frac{m_{N}^2}{{m_{Z_{BL}}}^2}\right), \\ \left| \overline{\mm} (Z_{BL} \to \tilde{f}_{\alpha} \tilde{f}^\ast_{\beta}) \right|^2 &=&~ \frac{1}{3} c_{\tilde{f}} \left( \frac{g_{BL} }{2}n_{BL}^{\tilde{f}}\right)^2 {m_{Z_{BL}}}^2 \left( 1 - \frac{2 m_{\tilde{f}_{\alpha}}^2 + 2 m_{\tilde{f}_{\beta}}^2}{{m_{Z_{BL}}}^2} + \frac{\left(m_{\tilde{f}_{\alpha}}^2 - m_{\tilde{f}_{\beta}}^2\right)^2}{{m_{Z_{BL}}}^4} \right) \\&& \times~ \left( U_{\alpha 1}^{\tilde{f}} U_{\beta 1}^{\tilde{f}} + U_{\alpha 2}^{\tilde{f}} U_{\beta 2}^{\tilde{f}} \right)^2, \qquad\quad \tilde{f}_{\alpha} \tilde{f}_{\beta}^* = \tilde{q}_{i\alpha} \tilde{q}_{i\beta}^{*}, \tilde{l}_{i\alpha} \tilde{l}_{i\beta}^{*}, \tilde{\nu}_i \tilde{\nu}_{i}^{*}, \tilde{\nu}_{Ri} \tilde{\nu}_{Ri}^{*}.\end{aligned}$$ Here, $i$ is a generation index, $c_f$ are color factors ($c_{q_i} = 3$, $c_{l_i}=1$) and $U^{\tilde{f}}$ are the unitary sfermion mixing matrices. 
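For orientation, the squared amplitudes above can be turned into partial widths with the standard two-body decay relation $\Gamma = |\overline{\mm}|^2\,\beta/(16\pi m_{Z_{BL}})$, with $\beta = \sqrt{1-4m_f^2/m_{Z_{BL}}^2}$. The sketch below does this for a charged-lepton channel; the numerical inputs $g_{BL}=0.3$ and $n_{BL}^{e}=-1$ are illustrative assumptions, not values taken from this paper:

```python
import math

# Partial width Gamma(Z_BL -> f fbar) for a Dirac fermion f, built from the
# spin-averaged squared amplitude quoted above and the standard two-body
# formula Gamma = |M|^2 * beta / (16 * pi * m).
# g_bl = 0.3 and n_bl = -1 (charged-lepton B-L charge) are illustrative.

def width_to_fermion_pair(m_zp, m_f, g_bl=0.3, n_bl=-1.0, c_f=1):
    if 2.0 * m_f >= m_zp:
        return 0.0  # channel kinematically closed
    msq = (4.0 / 3.0) * c_f * (g_bl * n_bl / 2.0) ** 2 * m_zp ** 2 \
        * (1.0 + 2.0 * m_f ** 2 / m_zp ** 2)
    beta = math.sqrt(1.0 - 4.0 * m_f ** 2 / m_zp ** 2)
    return msq * beta / (16.0 * math.pi * m_zp)

# For a 1 TeV Z_BL and an essentially massless lepton this reduces to
# (g_bl * n_bl / 2)^2 * m / (12 * pi), roughly 0.6 GeV for these inputs.
print(width_to_fermion_pair(1000.0, 0.0))
```

For quarks one would set $c_f = 3$ and use the appropriate $B-L$ charge; the heavy-neutrino channel carries the relative factor of $1/2$ and the stronger threshold suppression visible in the amplitudes above.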
Neutrino-Neutralino Mixing Matrix {#Vla}
=================================

In the basis $\psi^T = \left(\nu, \chi\right)$, the mass matrix has the general form $$\mathcal{M} = \begin{pmatrix} 0_{3\times3} & m_D \\ m_D^T & M_\chi \end{pmatrix},$$ where $\mathcal{M}$ is diagonalized by $\mathcal{N}$: $$\label{diagonalM} \mathcal N^\dagger \mathcal{M} \mathcal{N}^* = \begin{pmatrix} m_\nu^D & 0 \\ 0 & M_\chi^D \end{pmatrix}.$$ Here $m_\nu^D$ is the diagonal mass matrix for the light neutrinos, $M_\chi^D$ is the diagonal mass matrix for the neutralinos and $$\mathcal{N} = \begin{pmatrix} U & V \\ V_c & U_c \end{pmatrix}.$$ Eq. (\[diagonalM\]) yields $$\begin{aligned} \label{nu.mass} m_\nu^D & = U^\dagger m_D V_c^* + V_c^\dagger m_D^T U^* + V_c^\dagger M_\chi V_c^*, \\ \label{chi.mass} M_\chi^D & = V^\dagger m_D U_c^* + U_c^\dagger m_D^T V^* + U_c^\dagger M_\chi U_c^*, \\ \label{zero.mass} 0 & = U^\dagger m_D U_c^* + V_c^\dagger M_\chi U_c^* + V_c^\dagger m_D^T V^*,\end{aligned}$$ and $$U \sim U_c \sim \mathcal{O}(1); \quad \quad \quad V \sim V_c \sim \mathcal{O}(\frac{m_\nu^D}{m_D}).$$ The unitarity condition yields the following expressions: $$\begin{aligned} \begin{split} & U U^\dagger + V V^\dagger = U^\dagger U + V_c^\dagger V_c = V_c V_c^\dagger + U_c U_c^\dagger = V^\dagger V + U_c^\dagger U_c = I, \\ & U V_c^\dagger + V U_c^\dagger = U^\dagger V + V_c^\dagger U_c = 0. \end{split}\end{aligned}$$ In Eq. \[zero.mass\], the term $V_c^\dagger m_D^T V^*$ is negligible, whereas in Eq. \[chi.mass\] the $M_\chi$ term dominates. Therefore $$\begin{aligned} \label{um} U^\dagger m_D + V_c^\dagger M_\chi & = 0, \\ \label{chi.mass.diag} U_c^\dagger M_\chi U_c^* & = M_\chi^D, \\ m_D U_c^* & = V M_\chi^D,\end{aligned}$$ where the last expression is a result of inverting Eq. \[chi.mass.diag\], substituting it into Eq. \[um\] and making use of the unitarity conditions. Substituting Eq. \[um\] into Eq.
\[nu.mass\] yields $$\label{mnuU} V_c^\dagger m_D^T = m_\nu^D U^T.$$ These results can be used to manipulate the seesaw relation: $$m_\nu = U m_\nu^D U^T = m_D M_\chi^{-1} m_D^T,$$ where $m_\nu$ is the nondiagonal light neutrino mass matrix diagonalized by $U$. Substituting Eq. (\[mnuU\]) for $m_\nu^D U^T$, rearranging using the unitarity conditions, and solving for $V$ yields $$V = m_D M_\chi^{-1} U_c.$$ This can be rewritten by inverting Eq. (\[chi.mass.diag\]): $$V = m_D U_c^* \left(M_\chi^D\right)^{-1}.$$ Here $V$ can be identified with $V_{i a}$, the matrix that describes the mixing between the neutrinos and the neutralinos, which is necessary for computing the neutralino decay properties. This result agrees with the naive expectation from the mass insertion approximation. While factors of $U$ and $E$ (the matrix that diagonalizes the charged lepton mass matrix) appear in the decay widths, Eqs. (\[lWL\])-(\[nuh\]), they do so in combinations constrained by unitarity and are therefore either zeroes or ones.

LSP Candidates and Their Final States {#LSPs}
=====================================

The violation of R-parity enlarges the space of possible LSPs, which is no longer restricted to electrically neutral fields. We therefore take the time here to make a quick survey of the possible final states in this model. For each possible LSP, we consider its production, whether lepton number violation is observable in principle, and whether there are any obstructions to this observation. Finally, we judge which LSP leads to the most interesting signals. For us, these are the signals where lepton number violation is explicit: same-sign leptons with no missing energy (missing energy might be due to neutrinos, thereby confounding the counting of lepton number). Of course, this can only arise from neutral LSPs.

- Gluino $(\tilde{g})$ LSP: Gluino pairs are produced through strong cross sections at the LHC.
Their possible decays are $$pp \ \to \ \tilde{g} \tilde{g} \ \to \ t t \ \bar{b} \bar{b} \ e^{-}_i e^{-}_j, \ t \bar t \, t \bar t \, \nu \nu,$$ $$pp \ \to \tilde{g} \tilde{g} \ \to 4j \ e^{\pm}_i \ e^{\pm}_j,\ 4j \, \nu \nu,$$ where the former is favored if the third-generation squarks are lighter than the first two. The gluino decay width can be estimated as $$\Gamma (\tilde{g} \to f^{'} \bar{f} e^{\pm}_i) \sim \alpha_s \frac{M_{\tilde{g}}^5 (v_L^i)^2}{ M_{\tilde{q}}^4 M_{\tilde{\chi}^+}^2 64 \pi^2}.$$ For $M_{\tilde{g}} =100$ GeV, $M_{\tilde{q}}=500$ GeV, $M_{\tilde{\chi}^+}=500$ GeV and $v_L^i=10$ MeV, one finds that the decay width is smaller than $10^{-13}$ GeV: a lifetime long enough for the gluino to form bound states but short enough that it decays within the detector. In principle, these channels can yield spectacular signals at the LHC. However, a recent inclusive analysis in the search for isolated same-sign muons published by the ATLAS collaboration has placed a model-independent upper bound on the gluino pair-production cross section of 58 fb [@ATLAS1]. Imposing this bound translates into a lower bound on the gluino mass of around 1 TeV, indicating a heavy SUSY spectrum. Since we are interested in scenarios with low-energy supersymmetry, we do not pursue this scenario further.

- Squark $(\tilde{q})$ LSP: A stop LSP allows for final states with two third-generation quarks of the same type and two leptons: $$pp \ \to \ \tilde{t}^* \tilde{t} \ \to \ \bar{b} b \ e^{\pm}_i \ e^{\mp}_j, \ \rm{or} \ \bar t t \, \nu \nu,$$ while a first- or second-generation squark LSP has channels with two jets and two leptons: $$pp \ \to \ \tilde q^* \tilde q \ \to \ 2j \ e^{\pm}_i \ e^{\mp}_j, \ 2j \, \nu \nu.$$ These channels have strong cross sections but do not provide information on the violation of the total lepton number.
Since squarks in this case act as leptoquarks (each decaying into a quark and a lepton), bounds on this scenario can be derived from leptoquark searches.

- Charged slepton $(\tilde{e}_i)$ LSP: Charged sleptons can be pair produced through the $Z$ and $Z_{BL}$, with signals $$pp \ \to \ \tilde{e}^*_i \tilde{e}_i \ \to \ \bar{t} t \ b \bar{b}, \ e_i^+ e_i^- \, \nu \nu,$$ where, once more, lepton number violation is not discernible. The $t \bar b$ final state is due to the mixing of the charged sleptons with the charged Higgs boson, which typically decays in this way. The leptonic channel arises due to the R-parity violating mixing between the charged leptons (neutrinos) and the charginos (neutralinos).

- Sneutrino $(\tilde{\nu}_i)$ LSP: Sneutrino pair production also proceeds through the $Z$ and $Z_{BL}$, with the following possible final states: $$pp \ \to \ \tilde{\nu}^* \tilde{\nu} \ \to \ b \bar{b} \ b \bar{b}, \ \nu \nu \nu \nu, \ e_i^+ e_j^- e_i^+ e_k^-.$$ The first final state results from the R-parity violating mixing of the sneutrino with the Higgs boson, while the latter two are due to the R-parity violating mixing between the charged leptons (neutrinos) and the charginos (neutralinos).

- Chargino $(\tilde{\chi}^{\pm})$ LSP: Chargino pair production is possible through the $Z$ and leads to channels with two charged leptons due to the R-parity violating mixing between the charged leptons (neutrinos) and the charginos (neutralinos): $$pp \ \to \ \tilde{\chi}^+ \tilde{\chi}^- \ \to \ e_i^+ e_j^- Z Z, \ \nu \nu \, W^+ W^-.$$ While in this case lepton flavor violation is observable, total lepton number cannot be probed.

- Neutralino $(\tilde{\chi}_1^0)$ LSP: This scenario allows for several interesting channels with lepton number violation.
If the neutralino is Higgsino-like, pair production (see Section \[chi.decay\] for information on neutralino decays), $$pp \ \to \ \tilde{\chi}_1^0 \tilde{\chi}^0_1 \ \to \ 4j \ e^{\pm}_i \ e^{\pm}_j,$$ is possible through the $Z$, as well as associated production, which gives rise to channels with three charged leptons: $$pp \ \to \ \tilde{\chi}_1^0 \tilde{\chi}^{\pm}_1 \ \to \ 4j \ \nu \ e^{\pm}_i \ e^{\pm}_j \ e^{\pm}_k.$$ Unfortunately, these channels are interesting only in the Higgsino-like neutralino scenario, and in general the cross sections can be small. However, striking channels with three same-sign charged leptons, multijets and no missing energy through the pair production of selectrons are generally present: $$pp \ \to \ \tilde{e}^*_i \tilde{e}_i \ \to \ e^{\pm}_i \ e^{\mp}_i \ e^{\mp}_j \ e^{\mp}_k \ 4j.$$ Such striking signals may be the signatures that help test this model at the LHC. As we showed above, the production cross section can be large and there are no relevant backgrounds. [000]{} P. Fayet, “Supersymmetry and Weak, Electromagnetic and Strong Interactions,” Phys. Lett. B [**64**]{} (1976) 159. P. Fayet, “Spontaneously Broken Supersymmetric Theories of Weak, Electromagnetic and Strong Interactions,” Phys. Lett. B [**69**]{} (1977) 489. S. Dimopoulos and H. Georgi, “Softly Broken Supersymmetry and SU(5),” Nucl. Phys. B [**193**]{} (1981) 150. A. Salam and J. A. Strathdee, “Supersymmetry and Fermion Number Conservation,” Nucl. Phys. B [**87**]{} (1975) 85. P. Fayet, “Supergauge Invariant Extension of the Higgs Mechanism and a Model for the electron and Its Neutrino,” Nucl. Phys. B [**90**]{} (1975) 104. M. J. Hayashi and A. Murayama, “Radiative Breaking of SU(2)-R x U(1)-(B-L) gauge symmetry induced by broken N=1 supersymmetry in a left-right symmetry model,” Phys. Lett. B [**153**]{} (1985) 251. R. N. Mohapatra, “Mechanism For Understanding Small Neutrino Mass In Superstring Theories,” Phys. Rev. Lett.  [**56**]{} (1986) 561. S. P.
Martin, “Some simple criteria for gauged R-parity,” Phys. Rev. D [**46**]{} (1992) 2769 \[hep-ph/9207218\]. C. S. Aulakh and R. N. Mohapatra, “Neutrino as the Supersymmetric Partner of the Majoron,” Phys. Lett. B [**119**]{} (1982) 136. S. P. Martin, “Implications of supersymmetric models with natural R-parity conservation,” Phys. Rev. D [**54**]{} (1996) 2340 \[hep-ph/9602349\]. A. Masiero and J. W. F. Valle, “A Model For Spontaneous R Parity Breaking,” Phys. Lett. B [**251**]{} (1990) 273. C. S. Aulakh, A. Melfo, A. Rasin and G. Senjanovic, “Seesaw and supersymmetry or exact R-parity,” Phys. Lett. B [**459**]{} (1999) 557 \[hep-ph/9902409\]. C. S. Aulakh, B. Bajc, A. Melfo, A. Rasin and G. Senjanovic, “SO(10) theory of R-parity and neutrino mass,” Nucl. Phys. B [**597**]{} (2001) 89 \[hep-ph/0004031\]. P. Fileviez Perez and S. Spinner, “Spontaneous R-Parity Breaking and Left-Right Symmetry,” Phys. Lett. B [**673**]{} (2009) 251 \[arXiv:0811.3424 \[hep-ph\]\]. V. Barger, P. Fileviez Perez and S. Spinner, “Minimal gauged U(1)(B-L) model with spontaneous R-parity violation,” Phys. Rev. Lett.  [**102**]{} (2009) 181802 \[arXiv:0812.3661 \[hep-ph\]\]. P. Fileviez Perez, M. Gonzalez-Alonso and S. Spinner, “Gauge Origin of M-Parity and the mu-Term in Supersymmetry,” Phys. Rev. D [**84**]{} (2011) 095014 \[arXiv:1109.1823 \[hep-ph\]\]. D. Feldman, P. Fileviez Perez and P. Nath, “R-parity Conservation via the Stueckelberg Mechanism: LHC and Dark Matter Signals,” arXiv:1109.2901 \[hep-ph\]. L. Alvarez-Gaume, J. Polchinski and M. B. Wise, “Minimal Low-Energy Supergravity,” Nucl. Phys.  B [**221**]{}, 495 (1983); L. E. Ibanez and G. G. Ross, “SU(2)-L X U(1) Symmetry Breaking As A Radiative Effect Of Supersymmetry Breaking In Guts,” Phys. Lett.  B [**110**]{}, 215 (1982). M. Ambroso and B. Ovrut, “The B-L/Electroweak Hierarchy in Heterotic String and M-Theory,” JHEP [**0910**]{} (2009) 011 \[arXiv:0904.4509 \[hep-th\]\]. M. Ambroso and B. A. 
Ovrut, “The B-L/Electroweak Hierarchy in Smooth Heterotic Compactifications,” Int. J. Mod. Phys. A [**25**]{} (2010) 2631 \[arXiv:0910.1129 \[hep-th\]\]. M. Ambroso and B. A. Ovrut, “The Mass Spectra, Hierarchy and Cosmology of B-L MSSM Heterotic Compactifications,” arXiv:1005.5392 \[hep-th\]. P. Fileviez Perez and S. Spinner, “The Fate of R-Parity,” Phys. Rev. D [**83**]{} (2011) 035004 \[arXiv:1005.4930 \[hep-ph\]\]. M. S. Carena, A. Daleo, B. A. Dobrescu and T. M. P. Tait, “Z-prime gauge bosons at the Tevatron,” Phys. Rev.  D [**70**]{} (2004) 093009 \[arXiv:hep-ph/0408098\]. V. Barger, P. Fileviez Perez, S. Spinner, “Three Layers of Neutrinos,” Phys. Lett.  [**B696** ]{} (2011) 509-512. \[arXiv:1010.4023 \[hep-ph\]\]. D. K. Ghosh, G. Senjanovic and Y. Zhang, “Naturally Light Sterile Neutrinos from Theory of R-parity,” Phys. Lett. B [**698**]{} (2011) 420 \[arXiv:1010.3968 \[hep-ph\]\]. J. Hamann, S. Hannestad, G. G. Raffelt, I. Tamborra and Y. Y. Y. Wong, “Cosmology seeking friendship with sterile neutrinos,” Phys. Rev. Lett.  [**105**]{} (2010) 181301 \[arXiv:1006.5276 \[hep-ph\]\]. P. Nath and P. Fileviez Perez, “Proton stability in grand unified theories, in strings and in branes,” Phys. Rept.  [**441**]{} (2007) 191 \[hep-ph/0601023\]. C. Csaki, Y. Grossman and B. Heidenreich, “MFV SUSY: A Natural Theory for R-Parity Violation,” arXiv:1111.1239 \[hep-ph\]. S. Borgani, A. Masiero and M. Yamaguchi, “Light gravitinos as mixed dark matter,” Phys. Lett. B [**386**]{} (1996) 189 \[hep-ph/9605222\]; F. Takayama and M. Yamaguchi, “Gravitino dark matter without R-parity,” Phys. Lett. B [**485**]{}, 388 (2000) \[hep-ph/0005214\]. W. Buchmuller, L. Covi, K. Hamaguchi, A. Ibarra and T. Yanagida, “Gravitino Dark Matter in R-Parity Breaking Vacua,” JHEP [**0703**]{}, 037 (2007) \[hep-ph/0702184 \[HEP-PH\]\]; W. Buchmuller, “Gravitino Dark Matter,” AIP Conf. Proc.  [**1200**]{} (2010) 155 \[arXiv:0910.1870 \[hep-ph\]\]. T. Schwetz, M. Tortola and J. W. F. 
Valle, “Global neutrino data and recent reactor fluxes: status of three-flavour oscillation parameters,” New J. Phys.  [**13**]{} (2011) 063004 \[arXiv:1103.0734 \[hep-ph\]\]. F. Thomas and W. Porod, “Determining R-parity violating parameters from neutrino and LHC data,” JHEP [**1110**]{} (2011) 089 \[arXiv:1106.4658 \[hep-ph\]\]. S. Bobrovskyi, W. Buchmuller, J. Hajer and J. Schmidt, “Quasi-stable neutralinos at the LHC,” JHEP [**1109**]{} (2011) 119 \[arXiv:1107.0926 \[hep-ph\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], “Search for anomalous production of prompt like-sign muon pairs and constraints on physics beyond the Standard Model with the ATLAS detector,” arXiv:1201.1091 \[hep-ex\]. S. Dawson, E. Eichten and C. Quigg, “Search for Supersymmetric Particles in Hadron - Hadron Collisions,” Phys. Rev. D [**31**]{} (1985) 1581. The ATLAS Collaboration “Search for New Phenomena in Events with Four Charged Leptons,” http://cdsweb.cern.ch/record/1388601/files/ATLAS-CONF-2011-144.pdf The ATLAS Collaboration “Search for New Phenomena in Events with Three or more Charged Leptons,” http://cdsweb.cern.ch/record/1399618/files/ATLAS-CONF-2011-158.pdf [^1]: It is important to mention that the breaking of B-L in the context of the MSSM was studied for the first time in Ref. [@AM]. See also Refs. [@Martin2; @Masiero; @Goran1; @Goran2] for the study of R-parity in other models. [^2]: The size would go as $\frac{Y_\nu \mu v_d v_L}{m_{\tilde \nu^c}^2} < 10^{-10}$. The maximum values for $Y_\nu$ and $v_L$ are about $10^{-6}$ and $10^{-2}$ GeV, respectively, see Fig. \[vy\]. [^3]: Throughout this paper, shorthand such as $pp \to \tilde e^* \tilde e$ represents the process $pp \to \tilde e^* \tilde e + X$, where the activity associated with $X$ has low transverse momentum and is not associated with the relevant physics of interest. 
Alternatively, this notation represents all possible production methods of $\tilde e^* \tilde e$ from the partons inside the proton taking their respective parton distribution functions into account.
--- abstract: 'We study the orbital properties of stars in four (published) simulations of thick disks formed by: $i)$ accretion from disrupted satellites, $ii)$ heating of a pre-existing thin disk by a minor merger, $iii)$ radial migration and $iv)$ gas-rich mergers. We find that the distribution of orbital eccentricities is predicted to be different for each model: a prominent peak at low eccentricity is expected for the heating, migration and gas-rich merging scenarios, while the eccentricity distribution is broader and shifted towards higher values for the accretion model. These differences can be traced back to whether the bulk of the stars in each case is formed [*in-situ*]{} or is [*accreted*]{}, and are robust to the peculiarities of each model. A simple test based on the eccentricity distribution of nearby thick disk stars may thus help elucidate the dominant formation mechanism of the Galactic thick disk.' author: - | \ \ $^{1}$ Kapteyn Astronomical Institute, P.O. Box 800, Groningen, The Netherlands.\ $^{2}$ Universidad Nacional de Córdoba, Laprida 854, 5000 Córdoba, Argentina and Instituto de Astronomía Teórica y Experimental, Conicet, Argentina.\ $^{3}$ Centre for Astrophysics, University of Central Lancashire, Preston, PR1 2HE, UK.\ $^{4}$ Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195.\ $^{5}$ Astrophysikalisches Institut Potsdam, An der Sternwarte 16, Potsdam 14482, Germany.\ bibliography: - 'references.bib' nocite: - '[@steinmetz06]' - '[@abadi03a; @abadi03b]' title: Orbital Eccentricity as a Probe of Thick Disk Formation Scenarios ---

Introduction {#sec:intro}
============

Several mechanisms have been proposed to explain the formation of thick disks in galaxies [see @majewski93]. However, it is still unclear by which of these mechanisms thick disks preferentially form.
This is despite the fact that more than 25 years have passed since the thick disk was first detected in the Milky Way [@gilmore_reid83], and that it has been established that this component appears to be ubiquitous in late-type systems [e.g. @yoachim08 and references therein]. Amongst the scenarios proposed to explain the formation of thick disks are the direct accretion of stars from disrupted satellites [e.g. @abadi03b], the thickening of a pre-existing thin disk through a minor merger [e.g. @quinn93; @villalobos08; @kazantzidis08], the scattering or migration of stars by spiral arms [e.g. @schoenrich09a; @roskar08a; @schoenrich09b], and [*in-situ*]{} triggered star formation during/after gas-rich mergers [e.g. @brook05; @bournaud07]. Even though studies of external galaxies have been fundamental to establish the statistical properties of thick disks, it is likely that only for the Galactic thick disk will we be able to unravel its evolutionary path. For example, measurements of the phase-space coordinates of nearby thick disk stars allow the reconstruction of their orbits, which contain imprints of the dynamical history, while their chemical abundances encode information about their sites of origin. The time is ripe to delve into more detailed predictions for the above-mentioned scenarios, because these have reached a level of maturity and detail such that they warrant, and permit, a nearly direct comparison to observations. In this [*Letter*]{} we investigate how the orbits of thick disk stars can be used to distinguish between the various formation channels. In particular, we focus on the predicted eccentricity distributions. We expect our findings to be applied soon to samples of nearby thick disk stars from SEGUE [@yanny09; @smith09] and RAVE (Steinmetz et al. 2006, Breddels et al. submitted), and in the long term, to the [*Gaia*]{} dataset, which will provide much more accurate information for much larger samples of stars spanning a wide range of distances from the Sun.
The orbital eccentricity test should help to elucidate the dominant mechanism by which the Galactic thick disk formed.

  ----------------- ------------------------- ------------------------- ------------------------- -------------- --------------- ------------- ------------- ---------------------------
  Scenario          $M_{vir}$                 $M_{bulge}$               $M_{disk}$                $z_0$ (thin)   $z_0$ (thick)   $R_d$         $\epsilon$    Reference
                    $[10^{10} \rm M_\odot]$   $[10^{10} \rm M_\odot]$   $[10^{10} \rm M_\odot]$   $[\rm kpc]$    $[\rm kpc]$     $[\rm kpc]$   $[\rm kpc]$
  [*accretion*]{}   87                        6.7                       2.8                       0.5            2.3             4.1           0.50          Abadi et al. (2003a,b)
  [*heating*]{}     50                        –                         1.2                       –              1.2             –             0.01          Villalobos & Helmi (2008)
  [*migration*]{}   100                       4.8                       3.0                       0.3            0.9             3.5           0.05          Roskar et al. (2008)
  [*merger*]{}      71                        2.1                       3.4                       0.3            1.0             2.9           0.40          Brook et al. (2004)
  [Milky Way]{}     60-200                    1.                        7-10                      0.3            0.9             3.5           –             @turon08
  ----------------- ------------------------- ------------------------- ------------------------- -------------- --------------- ------------- ------------- ---------------------------

\[table:simu\]

Numerical Experiments {#sec:simu}
=====================

We have gathered four existing numerical simulations of late-type galaxies that, having all developed a thick disk component, clearly differ in the dominant formation mechanism. These are:

1. accretion and disruption of satellites [@abadi03b],

2. disk heating by a minor merger [@villalobos08],

3. radial migration via resonant scattering [@roskar08a],

4. [*in-situ*]{} formation during/after a gas-rich merger [@brook04; @brook05].

Thick disk formation models {#sec:description}
---------------------------

Because each of the simulations mentioned above has already been introduced in the literature, here we will only review their main relevant features, referring the reader to the original papers for further details. Table \[table:simu\] summarizes their key parameters.
### Accretion scenario {#ssec:accretion}

@abadi03b showed that within the $\Lambda$CDM paradigm, the accretion of stars from disrupting satellites in approximately co-planar orbits may give rise to an old thick disk component that comprises about one-third of the mass of the much younger thin disk. In our sample we include the galaxy presented in @abadi03b, which formed in a cosmological N-body/SPH simulation. This object, with a virial mass of the order of that of the Milky Way, was selected from a low-resolution simulation of a large volume of the Universe, and later re-simulated with much higher resolution. In this high resolution run, the mass per baryonic particle is $\sim 3 \times 10^6 M_\odot$. The final mass of the thick disk in this galaxy (derived via a dynamical decomposition) is $1.1 \times 10^{10} M_\odot$.

### Heating scenario {#ssec:heating}

In this model, a thick disk is formed by the dynamical heating that is induced by a massive satellite merging with a primordial, rotationally supported thin disk. This scenario has been explored recently by e.g. @villalobos08 [@kazantzidis08], who have shown that 5:1 mergers with a wide range of orbital inclinations generate thick disks whose properties are in reasonable agreement with observations. In such a model, the bulk of the stars that end up in the thick disk originate from the primordial disk rather than from the accreted satellite [@villalobos08]. In our analysis we include one of the numerical experiments presented in Villalobos & Helmi (2008). In these simulations, the mass ratio between the satellite and the host is 0.2, and the satellite's initial orbit is prograde and inclined by 30$^\circ$ with respect to the host disk. The mass per stellar particle in the simulation is $m_p = 1.2\times 10^5 M_\odot$, and the thick disk has a final mass of 1.2$\times 10^{10} M_\odot$.
It is important to clarify that only a small fraction of the thin disk component is present at the end of the simulation ($\sim 15-20\%$ of the mass of the original disk). This implies that, for this remnant to be the thick disk of a late-type galaxy, a new thin disk should form later from the cooling of fresh gas. This will lead to structural changes in the thicker component, which are not considered here. Nevertheless, if the growth of the new disk is adiabatic, then many characteristics, and in particular the eccentricities, are not expected to be dramatically different [@villalobos-thesis].

### Radial Migration scenario {#ssec:migration}

Stars in the thin disk may be trapped into resonant corotation with spiral arms, and may migrate inwards and outwards along the spiral waves while approximately conserving the circularity of their orbits (and hence their eccentricity) and without leading to significant heating in the disk [@selwood_binney02]. However, since the vertical velocity dispersion of stellar disks correlates with their surface brightness [@kregel05], the [*radial migration*]{} of stars from the inner (kinematically hotter) regions will result in the formation of a thicker disk component. Although this process has not been formally proposed as a thick disk formation scenario, numerical simulations suggest that a modest thick component may be built. Therefore, we include in our sample the simulation presented in @roskar08a, run with the goal of characterizing the migration that takes place in galactic disks. The simulation starts with a dark matter halo of $\sim 10^{12} M_\odot$ where 10% of this mass is in the form of a hot halo gas component. This gas is allowed to cool and form stars self-consistently, mimicking the quiescent growth of disk galaxies, over a period of 10 Gyr. The initial mass resolution is $10^5 M_\odot$ for the baryons, and stellar particles have masses of $3 \times 10^4 M_\odot$ on average.
### Gas-rich merger scenario {#ssec:merger}

The last scenario we explore consists of the formation of a thick rotating component during an active epoch of gas-rich mergers early in the history of a galaxy [@brook04; @brook05]. This formation channel differs fundamentally from the [*accretion*]{} model because the bulk of the thick disk stars are born [*in-situ*]{} rather than being accreted from satellites. In this sense, this scenario might show certain similarities with the [*heating*]{} model. However, the latter requires the existence of a thin disk at early times, $z \geq 1$, in contrast to the [*merger*]{} scenario, where the stars are already born in a hotter component. Here we analyze the simulated galaxy introduced in @brook04. It formed in a semi-cosmological N-body/SPH simulation that includes heating/cooling of gas, star formation, feedback and chemical enrichment. Its dark halo has a quiescent merger history after $z \!\sim\!2$, and the final baryonic content of the galaxy is $\sim 5 \times 10^{10} M_\odot$ (see Table \[table:simu\]). The mass per baryonic particle is $\sim 2 \times 10^{5} M_\odot$. The mass of the thick disk in this galaxy is $\sim 2.2 \times 10^9 M_\odot$, identified as old stars ($8.5<\rm age<10.5$ Gyr) with relatively high rotation velocity ($V_{\rm rot}>50$ km/s).

![Vertical density profile of stars for each of the scenarios discussed in Section \[sec:description\]. The best-fit mass-weighted double-exponential profiles are shown with black solid lines, and the individual contributions of the “thin” and “thick” components are indicated by the red dotted curves. Notice that there is no significant thin disk in the [*heating*]{} scenario, thus only the vertical profile for the thick component is shown. For the accretion scenario kinematical cuts have been applied in order to avoid contamination from the stellar halo (see text for details).](figs/z_profile_all_err.ps){width="84mm"}
\[fig:zprof\]

The scenarios described in this Section are capable of producing a rotationally supported hot component whose properties resemble the ‘thick disks’ in galaxies. However, the relative preponderance of such thick components does vary from galaxy to galaxy in our simulations. This is illustrated in Figure \[fig:zprof\], where we show with solid dots the vertical mass profiles for each case in a cylindrical shell $2<R/R_d<3$, which minimizes the contribution from bars and bulges. Additionally, in the accretion model, particles identified as “spheroid” in Abadi et al. (2003b) have been removed (see Section 3). The error bars correspond to the rms obtained from one hundred bootstrap re-samples of the data, and they are generally smaller than the dots' sizes. The black solid lines show the best-fit double-exponential profile found for each galaxy, together with its decomposition into the “thin” and the “thick” disk contributions, in red dotted lines. This decomposition is generally robust, although correlations exist between the relative density of each component and the scale-height of the thick disk. The scale-heights $z_0$ obtained by minimizing $\chi^2$ are quoted in each panel, and the typical errors are of order $\sim 5-10\%$ for $z_0^{thin}$ and $\sim 20-30\%$ for the thick component. For the migration scenario the uncertainties are larger, due to the stronger dominance of the thin disk: 15% and 60% for $z_0^{thin}$ and $z_0$, respectively. Nevertheless, in all cases the results presented below are robust to changes in the value of $z_0$ within the uncertainties. Differences in the relevance of the thick component depend not only on the net efficiency of the respective formation process, but may also be influenced by the different initial conditions and simulation techniques (e.g. only the [*accretion*]{} and [*merger*]{} scenarios actually account for the full cosmological framework).
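The decomposition described above boils down to fitting a sum of two exponentials to the vertical mass profile. A minimal sketch on synthetic data (all numbers below are invented for illustration, not the simulation profiles) could look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(z, rho_thin, z0_thin, rho_thick, z0_thick):
    """Sum of two exponential vertical profiles (thin + thick disk)."""
    return (rho_thin * np.exp(-np.abs(z) / z0_thin)
            + rho_thick * np.exp(-np.abs(z) / z0_thick))

# Synthetic profile: thin disk z0 = 0.3 kpc, thick disk z0 = 1.0 kpc
rng = np.random.default_rng(42)
z = np.linspace(0.05, 4.0, 60)                       # heights in kpc
truth = (1.0, 0.3, 0.05, 1.0)
rho = double_exp(z, *truth) * rng.normal(1.0, 0.02, z.size)

# Fit in log-density, as is natural for a profile spanning many decades
popt, _ = curve_fit(lambda z, *p: np.log(double_exp(z, *p)),
                    z, np.log(rho), p0=(0.5, 0.2, 0.1, 1.5))
print("z0_thin = %.2f kpc, z0_thick = %.2f kpc" % (popt[1], popt[3]))
```

The degeneracy noted in the text (relative density versus thick-disk scale-height) shows up here as correlated entries in the covariance matrix that `curve_fit` also returns.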
Because of these basic differences between simulations, global properties such as the mass, rotation and size of each formed thick disk are expected to be diverse. Our focus, however, is on contrasting the specific dynamical properties of the stars in the thick component for each case and, in particular, their orbital eccentricities, which, as we shall see below, are fundamentally related to the physical mechanism by which this component was built. In order to facilitate comparisons between the thick disks in our galaxies, and in particular to that of the Milky Way, we re-scale the radial and vertical distances of the stellar particles in each galaxy by their corresponding thin disk scale-lengths and thick disk scale-heights[^1]. In what follows, we will focus our attention on “solar neighbourhood volumes”, equivalent to cylindrical shells between two and three scale-radii of the thin disk ($2<R/R_d<3$). For comparison, $R_\odot/R_d \sim 2.2-2.4$, assuming a scale radius of $R_d = 3.5$ kpc for the Milky Way.

Modelling of the orbits
-----------------------

Kinematical surveys such as RAVE, SEGUE and ultimately [*Gaia*]{} provide phase-space coordinates of stars around the position of the Sun. This instantaneous information may be used to recover their plausible past orbits. This requires modelling the (unknown) Galactic potential, and possibly its evolution, which implies that the orbital parameters derived for each star generally suffer from a certain degree of uncertainty, even if measurement errors are neglected. On the other hand, our numerical simulations allow us to track each particle in time, so that their full orbits are known. Nevertheless, we prefer to mimic observations, and therefore we use the present-day position and velocity of each stellar particle as initial conditions for the integration of their orbits in the best-fit potential of their host galaxy.
We model each galaxy as a four-component system with an NFW [@nfw97] dark halo, a Hernquist profile [@hernquist90] for the bulge and two Miyamoto-Nagai disks [@miyamoto_nagai] corresponding to the thin and thick disk contributions. The mass associated with each of these components is known for each simulation (see Table \[table:simu\]), and their various scale-lengths are chosen by requiring a good match to the circular velocity profile of the system up to a distance of 20 kpc[^2]. This is along the lines of previous work, where the circular velocity of the Milky Way is often used to constrain the model parameters [e.g. @helmi06]. We define eccentricity as $\epsilon=(r_{ap}-r_{pe})/(r_{ap}+r_{pe})$, where $r_{ap}$ and $r_{pe}$ correspond, respectively, to the apo- and pericenter distance of the last orbit of each particle. With this definition, for a circular velocity curve modeled with $\sim 10\%$ accuracy, the eccentricities obtained by numerical integration show a scatter of $\pm 0.1-0.2$ around their true value with no systematic trends (see Fig. \[fig:abadi\_allz\]). Larger deviations are found as the eccentricity increases.

![Eccentricity distribution of all stellar particles in a cylindrical shell with $2<R/R_d<3$ for the [*accretion*]{} scenario [@abadi03b]. The various panels are for different heights above/below the plane (normalized to the thick disk scale-height, $z_0$). The thick solid black line shows the total distribution per $z$-bin, while blue empty and red shaded histograms distinguish between [*in-situ*]{} and accreted stars. The effect introduced by the numerical integration of the orbits is rather small: the $\epsilon$-distribution obtained from direct tracking of the particle orbits in the simulation is shown as a dotted black line. The fractional contributions from the thin disk (magenta dot-dashed), thick disk (green long-dashed) and the spheroid (solid black) to each eccentricity bin are shown below each histogram.
The number of particles $N$ included in each box is also quoted.[]{data-label="fig:abadi_allz"}](figs/e_hist_rborn_norm.ps){width="84mm"}

Results {#sec:analysis}
=======

Baryons in galaxies are generally sorted into several components: a bulge, a disk (thin+thick) and a more extended and diffuse spheroidal distribution, the stellar halo, which might extend well beyond the luminous edge of the disk. Although each of these components has defining characteristics, some of their properties may change smoothly from component to component. For example, the eccentricity of the orbits changes as we move away from the disk plane into the realms of the stellar halo. This can be seen in Figure \[fig:abadi\_allz\], where the eccentricity distribution of all stellar particles within cylindrical radii $2<R/R_d<3$ is plotted for different heights above/below the plane. Vertical distances are normalized to the scale-height of the thick disk, $z_0$. The eccentricity distributions for stars formed [*in-situ*]{} (defined as those born within a distance of 20 kpc from the main progenitor) and for those [*accreted*]{} are given by empty long-dashed blue and shaded red histograms, respectively. For comparison, the eccentricity distribution measured directly from the simulation (i.e. by tracking individual particle orbits) is given by the dotted histogram. This shows that no systematic errors are introduced by the orbital integration in the model host potential. The lowermost bin, $|z/z_0|<0.5$, is largely dominated by thin disk stars with circular motions, as can be seen from the strong peak around $\epsilon \sim 0.15$ in the top left panel of Figure \[fig:abadi\_allz\]. These stellar particles formed [*in-situ*]{} within the main galaxy, through the local conversion into stars of gas that had settled into a disk (blue empty histogram). As we move away from the plane, the thick disk gains importance and the eccentricity distributions become flatter as the fraction of accreted stars increases.
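The eccentricity $\epsilon=(r_{ap}-r_{pe})/(r_{ap}+r_{pe})$ used throughout can be illustrated by integrating a test orbit. The sketch below uses a simple logarithmic (flat rotation curve) potential rather than the paper's fitted four-component model, so it is a toy example only; all parameter values are invented:

```python
import numpy as np

def integrate_orbit(pos, vel, v0=220.0, dt=1e-4, n_steps=60_000):
    """Leapfrog (kick-drift-kick) integration in a logarithmic potential
    Phi = v0^2 ln(r), whose circular-velocity curve is flat at v0.
    Units: kpc and km/s, so dt is in kpc/(km/s)."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    radii = np.empty(n_steps)
    acc = -v0**2 * pos / np.dot(pos, pos)
    for i in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = -v0**2 * pos / np.dot(pos, pos)
        vel += 0.5 * dt * acc
        radii[i] = np.hypot(*pos)
    return radii

def eccentricity(radii):
    """epsilon = (r_ap - r_pe)/(r_ap + r_pe) from the radial extremes."""
    r_ap, r_pe = radii.max(), radii.min()
    return (r_ap - r_pe) / (r_ap + r_pe)

# A circular orbit at R = 8 kpc (tangential speed v0) has eps ~ 0;
# lowering the tangential speed makes the orbit eccentric.
eps_circ = eccentricity(integrate_orbit([8.0, 0.0], [0.0, 220.0]))
eps_hot = eccentricity(integrate_orbit([8.0, 0.0], [0.0, 130.0]))
print(eps_circ, eps_hot)   # small value, then a substantially larger one
```

In the paper the same quantity is measured on the last radial oscillation of each particle's orbit in the best-fit host potential; here the max/min over the whole (many-period) integration serves the same purpose.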
Further above and below the plane, the distribution is dominated by particles from the spheroidal component, with even higher characteristic eccentricities, i.e. $\epsilon \sim 0.7$. These changes in the relative preponderance of the thin, thick and spheroidal components can be seen from their fractional contributions (magenta dot-dashed, green long-dashed and black solid, respectively) to each eccentricity bin in the bottom panels of all $|z|$-bins. Here, we have used the dynamical decomposition performed in @abadi03b to assign stars to a given component. This galaxy, introduced in Abadi et al. (2003a,b), has a large stellar spheroid that contains more than $\sim$70% of the total luminous mass. Figure \[fig:abadi\_allz\] shows that its contribution dominates the high-eccentricity bins at all heights above/below the plane. To highlight the properties of the thick disk, and also to avoid confusion in the interpretation of our results and their comparison with the other simulated galaxies (which lack such a prominent spheroidal component), in what follows we will exclude any stellar particle that has been assigned to the spheroid by the analysis performed in Abadi et al. (2003b). Figure \[fig:e\_all\] shows how the eccentricity distributions vary according to the different formation channels of the thick disk. Each panel corresponds to one particular model: [*accretion*]{} (top left), [*heating*]{} (top right), [*migration*]{} (bottom left) and [*merger*]{} (bottom right). When relevant, the contributions from stars formed “[*in-situ*]{}” or “[*accreted*]{}” have been highlighted. In order to minimize the contribution from thin disk stars, we have focused on the vertical bin $1<|z/z_0|<3$. On the other hand, to avoid contamination from the spheroids in our simulations (this is unlikely to be important for the Galactic stellar halo because of its very low density), we only consider stars with rotational velocity $v_\phi > 50$ km s$^{-1}$.
This threshold corresponds to the azimuthal velocity above which there is a clear excess of stars compared to the distribution at $v_\phi < -50$ km s$^{-1}$ in our simulations. Under the assumption of a non-rotating spheroidal component, this criterion will minimize the contribution of the stellar halo to our samples. Nonetheless, we have checked that our results are not strongly sensitive to this assumption. The $v_\phi > 50$ km s$^{-1}$ cut removes 33 per cent of the stars in Brook’s model, but has a negligible effect in the simulations by Villalobos & Helmi and Ro[š]{}kar et al., due to the suppressed contribution of satellite accretion. Recall that for Abadi’s galaxy we have removed all the spheroid stars identified by the dynamical decomposition. Cuts on the cylindrical radii ($2<R/R_d<3$) also help to avoid the contributions from central bars and bulges.

![Comparison of the eccentricity distributions of each thick disk formation model for stars in the range 1-3 (thick-disk) scale-heights and cylindrical distance $2 < R/R_d < 3$. The color and line coding are the same as introduced in Figure \[fig:abadi\_allz\].[]{data-label="fig:e_all"}](figs/e_hist_allsimus.ps){width="84mm"}

Figure \[fig:e\_all\] shows that the eccentricity distributions of stellar particles in these “solar neighbourhood” regions, and between one and three thick disk scale-heights above/below the plane, are different for each model. For the [*accretion*]{} scenario, the distribution is very broad, with a median eccentricity $\langle \epsilon \rangle \sim 0.5$ [in good agreement with the accreted component in @reed08]. On the other hand, the heating of a pre-existing thin disk by a minor merger gives rise to a bimodal distribution. The dominant peak is at low eccentricity, $\epsilon \sim 0.2 - 0.3$, and is associated with the stars from the progenitor disk, while the second peak, at $\epsilon \sim 0.8$, is contributed by the disrupted satellite.
[*Radial migration*]{} tends to preserve the initial (low) eccentricity distribution, with only one peak present at $\langle \epsilon \rangle \sim 0.2$ and a sharp cut-off at $\epsilon \sim 0.6$. Finally, in the [*merger*]{} scenario a prominent peak around $\epsilon \sim 0.2$ is visible, associated with stars formed [*in-situ*]{} during the epoch of active gas-rich mergers. As in the [*accretion*]{} scenario, however, accreted stars from infalling satellites contribute a high-eccentricity tail. Interestingly, Figure \[fig:e\_all\] shows, as expected, that stars formed [*in-situ*]{} have low-eccentricity orbits regardless of the mechanism ([*heating, migration, merger*]{}) that places them at their current height above/below the plane. On the other hand, accreted stars from satellites always dominate the high-eccentricity tails of the distributions. Changes in the relative proportion of [*in-situ*]{} versus accreted stars drive the differences seen in the histograms of each thick disk formation scenario. In other words, the analysis of the stellar eccentricities off the plane helps to unravel whether the bulk of the stars was formed locally in the main progenitor ([*heating, migration, merger*]{}) or, on the contrary, was accreted from infalling satellites ([*accretion*]{}). Kolmogorov-Smirnov tests performed over randomly generated subsamples of stars selected from each simulation show that $\sim 150$ stars are enough to distinguish between all the scenarios investigated here at the 90% confidence level. Moreover, in samples containing more than $\sim 550$ stars, the probability that the stars are drawn from the same distribution is found to be lower than 1%. Clearly, a smaller number of stars ($\sim 200$) is sufficient to distinguish the accretion model from the rest. Note, however, that these estimates are only indicative, since they are derived for particular realizations of a general class of models.
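The machinery behind these sample-size estimates can be sketched with `scipy.stats.ks_2samp`. The mock eccentricity distributions below are invented stand-ins for the simulation samples, chosen only to mimic a broad versus a narrow distribution:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def mock_eps(mean, sigma, size):
    """Mock eccentricity sample, clipped to the physical range [0, 1)."""
    return np.clip(rng.normal(mean, sigma, size), 0.0, 0.99)

# Stand-ins for a broad 'accretion'-like distribution and a
# narrow 'migration'-like one, with ~150 stars per subsample.
accretion = mock_eps(0.5, 0.20, 150)
migration = mock_eps(0.2, 0.10, 150)
migration2 = mock_eps(0.2, 0.10, 150)   # second draw from the same model

_, p_diff = ks_2samp(accretion, migration)
_, p_same = ks_2samp(migration, migration2)
print(f"different models: p = {p_diff:.2g}; same model: p = {p_same:.2g}")
```

With samples this different, already $\sim 150$ stars each yield a vanishingly small KS probability, while two draws from the same model remain consistent with the null hypothesis.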
It is important to recall that each of these simulations has produced a galaxy with a different morphology. Furthermore, the simulations representative of the [*heating*]{} and [*migration*]{} scenarios have, by construction, a suppressed contribution from accreted satellite galaxies. Nevertheless, the general behaviour of the [*in-situ*]{} vs [*accreted*]{} populations is robust to the idiosyncrasies of the different simulations and can be traced back to the different physical mechanisms related to where and how the stars were formed. Therefore, we do not expect the global properties of the eccentricity distribution (e.g. bimodality, high-eccentricity tails associated with accreted populations) to differ significantly in other realizations of the same models, but only in the details. Although we have focused on “solar-neighbourhood” regions ($2<R/R_d<3$), our conclusions do not depend fundamentally on this choice.

Conclusions {#sec:concl}
===========

In this [*Letter*]{} we have analyzed four numerical simulations of galaxies hosting a thick disk component of fundamentally different origin: $(i)$ accretion of satellites, $(ii)$ heating of a pre-existing disk by a 5:1 mass-ratio merger, $(iii)$ radial migration by resonant scattering and $(iv)$ gas-rich mergers at high redshift. We have compared the eccentricity distributions predicted by these different models for stellar particles in the “solar neighbourhood”, i.e. located in a cylindrical shell of radius $2<R/R_d<3$ and with heights $1 <|z/z_0|<3$. Thick disk stars formed [*in-situ*]{} have low orbital eccentricities, $\epsilon \sim 0.2-0.3$, independently of the mechanism that brought them high above/below the plane: gas-rich mergers, heating or migration. On the other hand, and again regardless of the particular model, accreted stars always dominate the high-eccentricity tail of the distributions.
Therefore, the characterization of the eccentricity distribution of the thick disk can be used to establish whether this component was formed by the accretion of satellites (Abadi et al. 2003) or, alternatively, locally within the main galaxy [@brook05; @villalobos08; @roskar08a]. However, given the various limitations of our set of simulations, we cannot claim that it will be possible to make an unequivocal classification among models $(i)-(iv)$ based only on eccentricity. Nevertheless, the differences between the orbital eccentricities of [*in-situ*]{} and accreted stellar particles are encouraging in view of the various kinematic surveys mapping our Galaxy today and in the near future. We believe that, with a reasonable guess of the Milky Way potential, the analysis of the eccentricity distribution of thick disk stars at approximately 1-3 scale-heights should shed light on the formation path of the Galactic thick disk.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors thank the hospitality of the KITP, Santa Barbara, where this work was started, and in particular Juna Kollmeier for encouragement and for her contagious enthusiasm. LVS and AH gratefully acknowledge NWO and NOVA for financial support. This research was supported in part by the National Science Foundation under Grant No. PHY05-51164. We also thank the anonymous referee for useful suggestions and comments.

[^1]: For the simulation by Villalobos & Helmi (2008) we assume the scale-length to be that of the Milky Way thin disk, $R_d=3.5$ kpc, although our conclusions do not fundamentally depend on this choice.

[^2]: For the migration and merger scenarios only the [*total*]{} (thin + thick) mass of the disk is known. For these cases the relative mass ratio between the thin and thick components is also a free parameter.
--- abstract: | In this Paper we report on phase-resolved $I$–band optical spectroscopic and photometric observations of Cir X–1 obtained with the Very Large Telescope. The spectra are dominated by Paschen absorption lines at nearly all orbital phases, except near phase zero (coinciding with the X–ray dip), when the absorption lines are filled in by broad Paschen emission lines. The radial velocity curve of the absorption lines corresponds to an eccentric orbit ($e=0.45$) whose period and time of periastron passage are consistent with the period and phase predicted by the most recent X–ray dip ephemeris. We find that the $I$–band magnitude decreases from 17.6 to $\sim 16.8$ near phase 0.9–1.0; this brightening coincides in phase with the X–ray dip. Even though it is likely that the absorption line spectrum is associated with the companion star of Cir X–1, we cannot exclude the possibility that the spectrum originates in the accretion disc. However, if the spectrum belongs to the companion star, it must be a supergiant of spectral type B5–A0. If we assume that the compact object does not move through the companion star at periastron, the companion star mass is constrained to $\approxlt$10 M$_\odot$ for a 1.4 M$_\odot$ neutron star, whereas the inclination has to be $\approxgt 13.7^\circ$. Alternatively, the measured absorption lines and their radial velocity curve can be associated with the accretion disc surrounding a 1.4 M$_\odot$ neutron star and its motion around the centre of mass. An absorption line spectrum from an accretion disc is typically found when our line–of–sight passes through the accretion disc rim, implying a high inclination. In this scenario the companion star mass is found to be $\sim$0.4 M$_\odot$. However, from radio observations it was found that the angle between the line–of–sight and the jet axis is smaller than 5$^\circ$. This would mean that the jet ploughs through the accretion disc in this scenario, making this solution less probable.
author: - | P.G. Jonker$^{1,2,3}$[^1], G. Nelemans$^{4}$, C.G. Bassa$^3$\ $^1$SRON, Netherlands Institute for Space Research, Sorbonnelaan 2, 3584 CA, Utrecht, The Netherlands\ $^2$Harvard–Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, Massachusetts, U.S.A.\ $^3$Astronomical Institute, Utrecht University, P.O.Box 80000, 3508 TA, Utrecht, The Netherlands\ $^4$Department of Astrophysics, IMAPP, Radboud University Nijmegen, Toernooiveld 1, 6525 ED, Nijmegen, The Netherlands\ title: 'Detection of the radial velocity curve of the B5–A0 supergiant companion star of Cir X–1?' --- stars: individual (Cir X–1) — accretion: accretion discs — stars: binaries — stars: neutron — X-rays: binaries

Introduction
============

X–ray binaries are binary systems harbouring a neutron star or black hole compact object that accretes matter from either a low–mass or a high–mass companion star (LMXB and HMXB, respectively). LMXBs are typically old systems, whereas the early-type companion star of HMXBs precludes systems older than a few times 10$^7$ yr. Recently, it has been realised that many X–ray binaries might have started off as intermediate-mass X–ray binaries (cf. Cyg X–2; @2002ApJ...565.1107P). One intriguing system that has so far defied classification is Cir X–1. It has a 16.6 day orbital period (@1976ApJ...208L..71K; see @2004MNRAS.348..458C for the latest X–ray ephemeris). Owing to the detection of type I X–ray bursts, the compact object in Cir X–1 is likely a neutron star (@1986MNRAS.219..871T; @1986MNRAS.221P..27T). The X–ray and radio behaviour of the source is complex. The long–term X–ray lightcurve of the source has been discussed in detail by @2003HEAD....7.1723S. One of the striking features of the X–ray lightcurve is the periodic appearance of dips. In the radio, an arcminute-scale radio nebula was found (@1993MNRAS.261..593S).
Furthermore, a relativistic outflow has been detected on scales of arcseconds that is aligned with the arcminute-scale jet (@1998ApJ...506L.121F; @2004Natur.427..222F). The inclination of the jet with respect to the line–of–sight has to be less than 5$^\circ$ (@2004Natur.427..222F). The presence of relativistic outflows detectable as synchrotron emission in the radio band argues against a high magnetic field neutron star in Cir X–1 (cf. @2000MNRAS.317....1F). Furthermore, the detection of type I X–ray bursts from Cir X–1 suggests that the neutron star has a low magnetic field, since the thermonuclear instability giving rise to type I X–ray bursts is suppressed by dipole magnetic fields $\approxgt 10^{12}$ Gauss (@1980ApJ...238..287J; @1995ApJ...438..852B). In accordance with this, type I X–ray bursts are not found in HMXBs. The low magnetic field of the neutron star in Cir X–1 suggests that it is old, and hence that the companion star is not an early-type star. So far, clear spectroscopic evidence on the nature of the companion star is lacking. It has been suggested that Cir X–1 has a supergiant companion. However, it can be argued (cf. @1999MNRAS.308..415J) that the source is too faint in the optical bands for the distance of 8–10 kpc that has been derived from the type I X–ray bursts (@2004MNRAS.354..355J). Nevertheless, emission lines consistent with a mid–B supergiant were found in the near–infrared spectrum, although it was concluded that these features must have arisen in the accretion disc and/or in outflows from the disc. In this Manuscript we present phase-resolved Very Large Telescope (VLT) photometric and spectroscopic observations obtained with the FOcal Reducer/low dispersion Spectrograph 2 (FORS2).

Observations, analysis and results
==================================

We have obtained VLT/FORS2 spectra of the peculiar X–ray binary Cir X–1 using the 1028z holographic grism with a slit width of 1 arcsec.
The observations were obtained in service mode on 21 different nights in the period March 15–May 15, 2005 (MJD 53446–53507). In order to sample the $\approx$16.6 day binary orbital period of Cir X–1, we obtained one spectrum per night with an exposure time of 1730 seconds. Exceptions were May 13 and April 4. On May 13 we obtained 9 spectra with an exposure time of 1675 seconds, since that night Cir X–1 was close to periastron (according to the ephemeris of @2004MNRAS.348..458C). On April 4, three spectra were obtained, since the seeing exceeded the specified conditions on 2 of the 3 occasions. To minimise the light coming from an unrelated nearby field star (star 2), observations were obtained under good seeing conditions (seeing between 0.4 and 1.3 arcsec, as measured from the point-spread-function full-width-at-half-maximum of the acquisition images). The dispersion was 0.86 Å per pixel. With the 1 arcsec slit width the resolution is about 120 km s$^{-1}$ at 8800 Å. The spectra have been reduced with <span style="font-variant:small-caps;">iraf</span>[^2]. We used the overscan area of the Charge Coupled Device (CCD) for bias subtraction. The data were flatfield-corrected and optimally extracted (@1986PASP...98..609H). Wavelength calibration was done using lines from He, Ar & Ne lamp spectra that were obtained during daytime the day after the observations with the same instrument set–up, as is customary for VLT service mode observations. The rms scatter of the wavelength calibration was in the range 0.05–0.08 Å. The extracted spectra were further reduced and analysed using the software package <span style="font-variant:small-caps;">molly</span>. We corrected the wavelength calibration for potential shifts caused by flexure by cross-correlating the spectra over the wavelength ranges 9050–9160 Å and 9300–9500 Å with the first object spectrum.
That part of the spectrum is dominated by features from the night sky which should have the same wavelength in each spectrum. Note that this does not account for uncertainties in the wavelength calibration caused by centroiding or star tracking inaccuracies. We did this for both the spectra of Cir X–1 and that of star 2 that was also in the slit. Next, the observation times were corrected to the Heliocentric Julian Date time frame (using UTC times) and we normalised and rebinned the spectra to a uniform velocity scale removing the Earth’s velocity. Cross–correlation of the spectra of star 2 with the first spectrum of that star over the range 8000–9000 Å shows that the remaining rms velocity differences are 2.5 km s$^{-1}$. In Fig. \[fig:sample\] we show the normalised spectra as a function of the orbital phase. To fold the spectra we used our best–fit epoch of periastron (T, see below) and an orbital period of ${\rm P_{orb}=16.54}$ days as found by extrapolating the X–ray dip ephemeris of @2004MNRAS.348..458C. We binned the data in 15 phase bins. However, none of the spectra fall in the phase range 0.60–0.85, hence only 11 phase bins are shown. The most striking feature is the large change in the profile of the Paschen lines. At phase zero, strong, broad (possibly double peaked) emission lines are present. Superposed is an absorption line spectrum that could originate in the companion star. This absorption line spectrum can be seen clearly at phases 0.2–0.6. Besides the Paschen absorption lines that are often seen in B and A stars, small absorption lines often seen in supergiants are present as well, e.g. near 8777 Å (; ; compare the Cir X–1 spectrum with that of the supergiant system XTE J1739–302 in @2006ApJ...638..982N). This feature is most likely due to He I. We investigated the behaviour of the equivalent width of the luminosity indicator Paschen–12 at 8750 Å as a function of the orbital phase.
At phase 0.4–0.6 the equivalent width of the absorption line is largest. Assuming that the Paschen lines are formed in the companion star, the contamination of the accretion disc at phase 0.4–0.6 is thus minimal but not necessarily zero. Hence, the measured equivalent width of 2 Å at phase 0.4–0.6 is a lower limit. Using @1994PASP..106..382D, @1995AJ....109.1379A, @1992ApJS...81..865S and we find that the spectra at phase 0.4–0.6 most closely resemble those of late B/early A stars. As can be seen in Fig. \[fig:comp\], the lines are very narrow and from this we conclude that the star must be a supergiant. However, we cannot exclude that the absorption line spectrum is from an accretion disc. Nevertheless, in order to test what spectral type would describe the observed spectrum best under the assumption that the absorption lines are stellar, we obtained template stellar spectra from . We optimally subtracted the spectra of 9 stars (see Table \[tab:ref\]) with spectral types ranging from B1Ib–F5Ia from the Cir X–1 $\approx 8250-9300$ Å spectrum at phase 0.4–0.6. The optimal subtraction is performed on normalised spectra, minimising the residuals using the following recipe: ${\rm f^{CirX-1}_\lambda=A+bf^{temp}_\lambda}$, where A is the assumed accretion disc contribution and b is the fraction of light from the companion star. From the residuals of the optimal subtraction it was clear that the Ia supergiants matched the narrow absorption lines better than those of the Ib stars. Among the Ia supergiants, those of spectral type mid–B provided the best fit. The multiplicative factor in the optimal subtraction was consistent with unity. However, systematic uncertainties prevent us from obtaining formal (reduced chi–squared, $\chi_\nu^2$) fits. Since the template star spectra and the Cir X–1 spectrum are not obtained with the same telescope–instrument combination, the instrumental broadening of the line profile is different.
Furthermore, the broadening due to the rotational velocity of the stars is different for the different stars.

  --------------- ----------------------------- --------------- -----------------------------
  Spectral type   Henry Draper identification   Spectral type   Henry Draper identification
  B1Ib            HD 091316                     B2Ia            HD 268623
  B5Ib            HD 164353                     B3Ia            HD 271163
  B7Iab           HD 268749                     B9Ia            HD 032034
  A0Ib            HD 087737                     A3Ia            HD 033579
  –               –                             F5Ia            HD 269697
  --------------- ----------------------------- --------------- -----------------------------

  : Template stars from .[]{data-label="tab:ref"}

![VLT/FORS2 spectra of Cir X–1 ranging from 8300–8900 Å for the different orbital phases as indicated on the right hand side in the plot. The Paschen lines are easily discernible. They change from absorption to strong emission lines (with an absorption core) near phase zero. An arbitrary offset has been applied to the spectra for display purposes. The X–es mark regions where cosmic ray hits have been incompletely corrected for. The absorption line spectrum likely reveals the companion star. The line at 8619.5 Å is due to a diffuse interstellar absorption band (DIB). The feature near 8650 Å is also due to DIB, although an He I line at 8648 Å found in B stars may contribute to the broad nature of the line. []{data-label="fig:sample"}](in15phasebinsavern.ps){width="8cm"}

![From [*bottom*]{} to [*top*]{}: Spectra ranging from 8300–8900 Å of Cir X–1 (at $\sim$phase 0.5; this work), the template star HD 33579 (spectral type A3Ia), HD 87737 (spectral type A0Ib) and HD 77350 (spectral type A0III; the latter three spectra are from ). If the absorption line spectrum is from the companion star of Cir X–1, the narrow Paschen lines show that the companion star must be a supergiant. The X–es mark regions where cosmic ray hits have been incompletely corrected for. []{data-label="fig:comp"}](CirX-1_A0III-HD77350_A0Ia-HD033579_A0Ib-HD87737.ps){width="8cm"}

Next, we investigated the velocity of the absorption line spectrum as a function of the binary orbital phase.
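The velocities were measured by cross–correlation (performed with <span style="font-variant:small-caps;">molly</span> in our analysis). As an illustration of the technique only — synthetic data with a made-up Gaussian line depth and width, not the reduction code used here — a shift of 25 km s$^{-1}$ (comparable to the velocities measured below) can be recovered as follows:

```python
import numpy as np

def xcor_velocity(wave, spec, template, vmax=300.0):
    """Return the velocity shift (km/s) of `spec` relative to `template`
    by cross-correlating on a grid uniform in log(wavelength), where a
    Doppler shift becomes a constant pixel offset."""
    c = 299792.458  # speed of light, km/s
    loggrid = np.linspace(np.log(wave[0]), np.log(wave[-1]), wave.size)
    s = np.interp(loggrid, np.log(wave), spec) - np.mean(spec)
    t = np.interp(loggrid, np.log(wave), template) - np.mean(template)
    step = c * (loggrid[1] - loggrid[0])            # km/s per pixel
    lags = np.arange(-int(vmax / step), int(vmax / step) + 1)
    cc = [np.sum(s * np.roll(t, k)) for k in lags]  # CCF over integer lags
    return lags[int(np.argmax(cc))] * step

# Synthetic test: a Gaussian absorption line near Paschen-12 (8750.5 A),
# redshifted by 25 km/s (line depth and width are arbitrary).
c = 299792.458
wave = np.linspace(8700.0, 8900.0, 4000)
template = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 8750.5) / 2.0) ** 2)
shifted = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 8750.5 * (1 + 25.0 / c)) / 2.0) ** 2)
v = xcor_velocity(wave, shifted, template)  # recovers ~25 km/s to within a pixel
```

The quantisation of the lag to whole pixels limits the precision to half a velocity pixel; in practice (as in <span style="font-variant:small-caps;">molly</span>) the peak of the correlation function is interpolated to obtain sub-pixel velocities.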
Since we did not obtain early type supergiant template star spectra, we cross–correlated the spectra of Cir X–1 with one of the Cir X–1 spectra itself. In the cross–correlation it is vital that the continuum level has been determined accurately. Furthermore, one has to avoid regions of the spectrum affected by skylines. In order to avoid problems with the continuum normalisation and skylines we have cross–correlated the region between 8700–8900 Å. In Fig. \[fig:radvel\] we show the observed radial velocity curve. The solid line in Fig. \[fig:radvel\] is the best–fit elliptical orbit (see Table \[tab:radvel\] and  \[tab:ell\]). However, in the fit shown in Fig. \[fig:radvel\] we have increased the errors by 6.5 km s$^{-1}$ in order to obtain a $\chi_\nu^2$ of 1.0. As mentioned above, the rms residual velocities measured by cross correlating the spectra of the nearby star with itself are 2.5 km s$^{-1}$. We used this 2.5 km s$^{-1}$ as a measure of the amplitude of systematic effects. This systematic error dominates the statistical error of the cross–correlation of the Cir X–1 spectra. However, even when we take this uncertainty into account, the formal $\chi^2_\nu\approx 6$ for the 25 degrees of freedom. Possible other systematic effects that are not included are: (i) the effects of differences in X–ray heating on the absorption line spectrum; e.g. it can be seen comparing the [*top and bottom left panel*]{} of Fig. \[fig:radvel\] that the first three velocity measurements are somewhat below the fit; these observations were performed when the X–ray flux was still relatively high. (ii) From Fig. \[fig:sample\] it can be seen that near phase zero strong Paschen emission lines are present.
These emission lines are redshifted with respect to their rest wavelengths and, since the Paschen absorption lines at those orbital phases are redshifted by a different amount, the emission lines fill in the absorption in an asymmetric way, possibly skewing the cross correlation velocities. (iii) Slit centroiding and tracking errors can have introduced errors in the wavelength calibration.

  ------------   ---------------------   ---------------------   -------------------
  MJD            Orbital phase           Orbital phase           Radial velocity
                 ($P_{orb}=16.53$ d)     ($P_{orb}=16.68$ d)     (km s$^{-1}$)$^a$
  53445.3868     0.323                   0.326                   -8.2$\pm$1.0
  53447.3749     0.443                   0.446                   -18.1$\pm$1.0
  53448.2275     0.495                   0.497                   -17.7$\pm$1.1
  53455.3966     0.929                   0.926                   11.1$\pm$0.8
  53456.3680     0.988                   0.985                   35.3$\pm$1.0
  53457.4003     0.050                   0.047                   23.4$\pm$1.0
  53459.3877     0.171                   0.166                   0.4$\pm$1.2
  53461.1724     0.279                   0.273                   -8.9$\pm$1.1
  53465.3265     0.530                   0.522                   -9.5$\pm$1.3
  53465.3517     0.531                   0.523                   -6.9$\pm$1.4
  53465.3763     0.533                   0.525                   -7.3$\pm$1.1
  53466.1605     0.580                   0.572                   -12.4$\pm$1.1
  53471.2432     0.888                   0.876                   22.0$\pm$0.8
  53472.2192     0.947                   0.935                   30.4$\pm$0.8
  53473.2753     0.011                   0.998                   49.7$\pm$1.1
  53491.1779     0.094                   0.072                   27.2$\pm$1.0
  53492.2715     0.160                   0.137                   8.1$\pm$1.1
  53493.2927     0.222                   0.198                   11.5$\pm$1.6
  53495.2467     0.340                   0.316                   -3.9$\pm$0.9
  53496.2661     0.402                   0.377                   -4.3$\pm$1.0
  53498.3358     0.527                   0.501                   -5.7$\pm$1.2
  53504.0955     0.875                   0.846                   1.5$\pm$0.9
  53504.1153     0.877                   0.847                   2.4$\pm$0.8
  53504.1351     0.878                   0.848                   3.5$\pm$0.9
  53504.1549     0.879                   0.850                   6.1$\pm$0.8
  53504.1748     0.880                   0.851                   7.1$\pm$0.8
  53504.1946     0.881                   0.852                   9.8$\pm$0.9
  53504.2144     0.883                   0.853                   14.9$\pm$0.9
  53504.2343     0.884                   0.854                   14.7$\pm$1.0
  53504.2541     0.885                   0.856                   17.5$\pm$1.0
  53506.3040     0.009                   0.978                   38.0$\pm$0.8
  ------------   ---------------------   ---------------------   -------------------

  : Relative radial velocities of the counterpart of Cir X–1 as displayed in Fig. \[fig:radvel\]a.
To convert these velocities to absolute velocities, add the systemic velocity of $\sim -26\pm 3$ km s$^{-1}$.[]{data-label="tab:radvel"} [$^a$ The errors have to be increased by 6.5 km s$^{-1}$ to obtain a reduced $\chi^2$ of $\approx$1 in the fit (see text).]{}\
Since we cross correlated the spectra with a spectrum of Cir X–1 itself, the systemic velocity that is derived from the elliptical orbit fit (3.7$\pm$1.3 km s$^{-1}$) is not the true systemic velocity. In fact, as one would expect, it is consistent with 0 within 3 $\sigma$. The systemic velocity is $\sim -26\pm 3$ km s$^{-1}$. We derived this by comparing the best–fit central wavelengths of the Gaussian absorption lines with rest wavelengths of 8467.25, 8598.4, 8750.5 and 8862.8 Å. The Gaussians were fitted to the spectrum that was taken as our template spectrum in the cross–correlation (we have excluded the three Paschen lines that are possibly blended with absorption lines of the Ca II triplet). Hence, in order to convert the relative velocities in Fig. \[fig:radvel\] and Table \[tab:radvel\] to absolute velocities one has to add the systemic velocity.

  --------------------------------- ----------------- -----------------
  P (days)                          16.68$\pm$0.15    16.53 (fixed)
  K (km s$^{-1}$)                   25$\pm$2          25$\pm$2
  e                                 0.45$\pm$0.07     0.47$\pm$0.06
  $\omega$ (deg)                    2$\pm$12          -7$\pm$11
  T (+53,475.0 days; MJD)           -1.7$\pm0.4$      -1.9$\pm0.4$
  a $\sin$ [*i*]{} (lightseconds)   16.9$\pm$1.2      16.8$\pm$2.0
  f(m) (M$_\odot$)                  0.019$\pm$0.007   0.019$\pm$0.007
  --------------------------------- ----------------- -----------------

  : The best–fit orbital parameters. We provide both the solution for the fit with the orbital period as a free fit–parameter and that where it was fixed to the orbital period extrapolated from the ephemeris of @2004MNRAS.348..458C. []{data-label="tab:ell"}

We have used the 32 $I$–band images that were obtained to acquire the source to construct a lightcurve.
The images were corrected for bias using the values from the overscan regions and flatfielded using sky flats taken within one or two days of the science images. To determine instrumental magnitudes of Cir X–1 and stars in its relatively crowded vicinity, we used the point–spread–function (psf) fitting routines from <span style="font-variant:small-caps;">DAOPHOT II</span> (@1987PASP...99..191S), running inside <span style="font-variant:small-caps;">midas</span>. The instrumental magnitudes were placed in a common photometric system by removing small magnitude differences between the different images, due to e.g. differences in the psf and the exposure times, by matching stars between these images and comparing their magnitudes. This common system was then calibrated using observations of standard star fields (PG0942$-$029 and L110; using the calibrated magnitudes by @2000PASP..112..925S), which were imaged during photometric nights (March 27, April 11 and 12, 2005). As Cir X–1 is only imaged in the $I$–band, we have not determined colour terms. The uncertainty in the zeropoint is estimated to be about 0.2 mag. The derived evolution of the $I$–band magnitude as a function of time and orbital phase is shown in the [*middle panels*]{} of Fig. \[fig:radvel\]. It is clear that the $I$–band magnitude is constant except for a brief period that coincides with the X–ray dip and periastron passage.

Discussion
==========

We have obtained VLT phase resolved optical $I$–band spectra and images of the X–ray binary Cir X–1. The observed X–ray flux was low compared to that observed during earlier spectroscopic observations (e.g. @1999MNRAS.308..415J; @2001MNRAS.328.1193J; ). If this lower X–ray flux corresponds to a lower intrinsic X–ray luminosity, the optical light from the accretion disc, thought to be caused in large part by reprocessing of the X–ray luminosity, might be lower as well.
Hence, the putative companion star might be more readily observable now compared with high X–ray flux episodes. The spectrum we observed varies strongly as a function of orbital phase. Strong emission lines appear near periastron passage, filling in the absorption line spectrum observable at other orbital phases. Cross–correlation of the absorption features in the 8700–8900 Å range gives a radial velocity curve with an orbital eccentricity of $e=0.45$, $a\,\sin\,i=16.9$ lightseconds and a mass function of 0.019 M$_\odot$. The question is: is this the radial velocity curve of the companion star, is it associated with the compact object/neutron star, or are the absorption lines caused by a circumbinary disc? The detected absorption line spectrum is consistent with a stellar spectrum if that star is a B5–A0 supergiant. This provides evidence for a supergiant companion star in Cir X–1. Similarly, found emission lines in the near–infrared spectrum of Cir X–1 that often occur in mid–B supergiants. Those results are consistent with our findings (but favoured a scenario where the emission features arise in the accretion disc/flow). Previously, proposed that the companion star of Cir X–1 is an early type supergiant. However, the then known counterpart was later resolved into three stars, reducing the magnitude associated with the companion star (), which led people to discard the supergiant model for Cir X–1. On the other hand, an interstellar extinction of A$_V=5$ was used. This value was derived on the basis of the lowest N$_H$ measured in the seventies. That N$_H$ was determined from an X–ray spectrum obtained from Aerobee rocket data. No error was given on N$_H$ (@1971ApJ...169L..23M). More recent N$_H$ measurements from X–ray spectral fits with more sensitive satellites with a good soft response such as ROSAT, ASCA and [*Chandra*]{} give an N$_H$ in the range 1.6–2.2$\times10^{22}$ cm$^{-2}$ (; @1996MNRAS.283.1071B; @1999ApJ...511..304S; @2002ApJ...572..971S).
However, @2005ApJ...619..503I again found a significantly lower value for N$_H$ modelling [*Beppo*]{}–SAX data. As we will show below there are three independent measurements that are consistent with the higher N$_H$ for Cir X–1. First of all, as mentioned by @1996MNRAS.283.1071B, a significantly lower value for N$_H$ would not be compatible with the strength of the dust scattering halo in Cir X–1 as found by . Secondly, the distance that would be derived from the low N$_H$ value is $\sim$4 kpc (@2005ApJ...619..503I). This distance is too low to explain the type I X–ray bursts, which require a distance of d=7.8–10.5 kpc (@2004MNRAS.354..355J). Finally, the equivalent width of the diffuse interstellar band at 8620 Å that we find in the optical spectrum is correlated with $E_{(B-V)}$ (@2000msl..work..179M). This provides an A$_V=12.2$ for Cir X–1 (taking R=3.1), consistent with the higher value derived from N$_H$, which using the conversion of N$_H$ to A$_V$ from gives $9<{\rm A}_V<12$. For these reasons, and since the modelling of the X–ray spectrum is model dependent as mentioned in @2005ApJ...619..503I, we think that the higher value for N$_H$ and hence A$_V$ is more likely to be correct. From the distance to Cir X–1 from the type I X–ray bursts (d=7.8–10.5 kpc), A$_V\approx 9-12$, and $I$=17.6, one gets an absolute magnitude $-4.9<{\rm M}_V<-2.5$ (taking into account that $V-I$ is nearly 0 for mid–B/early A type stars, and using the relative extinction from @scfida1998 A$_I=0.6\times$A$_V$). An absolute magnitude of $-2.5$ would be too faint and even $-4.9$ is on the faint side for standard B5–A0 supergiants (@1974MNRAS.166..203B). However, considering the preceding binary evolution, involving significant mass transfer from the neutron star progenitor to the initially less massive companion star, and the current X–ray heating, especially at periastron, the companion star of Cir X–1 is not likely to be a standard supergiant.
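The absolute magnitude estimate above is distance-modulus arithmetic; a minimal sketch using the numbers quoted in the text (the exact endpoints of the quoted range depend on how distance and extinction are paired, so the values below agree with the quoted $-4.9<{\rm M}_V<-2.5$ only to $\sim$0.2 mag):

```python
import math

def abs_mag_I(I_app, d_pc, A_V, AI_over_AV=0.6):
    """Absolute I-band magnitude from apparent magnitude, distance (pc)
    and visual extinction, using A_I = 0.6 A_V; since V-I ~ 0 for
    mid-B/early-A stars, M_V ~ M_I."""
    return I_app - 5.0 * math.log10(d_pc / 10.0) - AI_over_AV * A_V

# Extremes of d = 7.8-10.5 kpc and A_V = 9-12, for I = 17.6:
M_faint = abs_mag_I(17.6, 7800.0, 9.0)     # ~ -2.3 (nearest, least extinguished)
M_bright = abs_mag_I(17.6, 10500.0, 12.0)  # ~ -4.7 (farthest, most extinguished)
```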
Let us investigate the consequences of the assumption that the absorption line spectrum and the radial velocity curve track the companion star of Cir X–1. Kepler’s third law, the measured a$_{opt}\, \sin$ [*i*]{} and P$_{orb}$ define a relation between the inclination of the orbit with respect to the line–of–sight and the mass of the companion star (the solid lines in Fig. \[fig:massinc\]). From the spectral classification B5–A0I we find $2.1\approxlt \log\,g\, \approxlt 2.4$ (@2000asqu.book.....C). This gives $\frac{R_{comp}}{R_\odot}=C\sqrt\frac{M_{comp}}{M_\odot}$ (where $C$ is in the range 10–15). To avoid the neutron star going through the companion star at periastron, we constrain the radius of the companion star to be smaller than or equal to the periastron distance $a\,(1-e)$, with $a\,\sin\,i={\rm \frac{M_X+M_{comp}}{M_X}} a_{opt}\,\sin\,i$ and $a_{opt}\,\sin\,i =7.28$ R$_\odot$. From Fig. \[fig:massinc\] this yields ${\rm M_{comp} \approxlt 18.9\,M_\odot}$, $\approxlt 15\,{\rm M_\odot}$, $\approxlt 11.9\,{\rm M_\odot},$ and $\approxlt 10.4\,{\rm M_\odot}$ for a compact object mass M$_X$ of 10, 5, 2 and 1.4 M$_\odot$, respectively. The inclination is constrained to be $ 7.1^\circ\approxlt i \approxlt 14.9^\circ$, $8.9^\circ\approxlt i \approxlt 23.5^\circ$, $12.1^\circ\approxlt i \approxlt 50^\circ$ and $i \approxgt 13.7^\circ$ for a 10, 5, 2, and 1.4 M$_\odot$ compact object, respectively. A normal B5–A0 supergiant has a mass of $\sim$10 M$_\odot$ which would hence fit in with a neutron star compact object for Cir X–1. On the other hand it is conceivable that if the companion star of Cir X–1 has indeed spectral type B5–A0I, it is no ordinary star due to the preceding evolution (as mentioned above). The observed type I X–ray bursts (@1986MNRAS.219..871T) are evidence of the neutron star nature of the compact object. Furthermore, as mentioned in the Introduction, the neutron star has to have a magnetic field $\approxlt 10^{12}$ Gauss.
Also the presence of a strong radio jet is not compatible with a high magnetic field neutron star. These findings seem to be at odds with a supergiant (thus young) companion star. We conclude that if the presence of a supergiant companion star in Cir X–1 is confirmed the neutron star magnetic field either decayed quickly to below $\approxlt 10^{12}$ Gauss or the neutron star was born with such a low field. Assuming that a neutron star kick at birth did not give the binary system a large systemic velocity, one can calculate the radial velocity for Local Standards of Rest along the direction of Cir X–1 (see for instance @2001ApJ...555..364B) to find that the systemic radial velocity of $\sim -26\pm 3$ km s$^{-1}$ at the location of Cir X-1 gives a distance of either $\sim 1.6$ kpc or $\sim$11.8 kpc. A distance of 11.8 kpc is close to the distance derived from the observed type I X–ray bursts (@1986MNRAS.221P..27T; 7.8–10.5 kpc @2004MNRAS.354..355J). The optical I–band lightcurve (see Fig. \[fig:radvel\]e) shows a clear brightening near periastron. This is similar in shape to the lightcurves published by and @1994MNRAS.268..742G, although one has to bear in mind that those lightcurves were obtained when the observed X–ray luminosity of the source was much higher and that the data was phase folded using different ephemerides. The increase in I–band light corresponds to the phase where Paschen emission lines start to become apparent in the spectrum. Hence, it is probable that the enhanced mass transfer rate near periastron is responsible for both effects. For instance, the emission lines could be formed in the accretion stream from the companion star to the compact object and the enhanced I–band emission could be both due to these emission lines as well as due to enhanced continuum emission related to the stream impact site. ![The mass of the companion star as a function of the orbital inclination with respect to the line–of–sight. 
The drawn lines are derived assuming that the compact object is, from left to right, a 10, 5, 2, or 1.4 M$_\odot$ compact object. The dots are solutions when imposing the radius of the companion star to be smaller than or equal to the periastron distance $a\,(1-e)$ (we took the least constraining case of B5I; the radius of an A0I is somewhat larger, hence the dots would all move to lower inclinations). For each solid line–dot combination the allowed parameter space is that on the solid line and below the dot. This yields the following constraints on the companion star mass for a 10, 5, 2 and 1.4 M$_\odot$ compact object: ${\rm M_{comp} \approxlt 18.9\,M_\odot}$, $\approxlt 15\,{\rm M_\odot}$, $\approxlt 11.9\,{\rm M_\odot},$ and $\approxlt 10.4\,{\rm M_\odot}$, respectively. The inclination is constrained to be $ 7.1^\circ\approxlt i \approxlt 14.9^\circ$, $8.9^\circ\approxlt i \approxlt 23.5^\circ$, $12.1^\circ\approxlt i \approxlt 50^\circ$ and $i \approxgt 13.7^\circ$ for a 10, 5, 2, and 1.4 M$_\odot$ compact object, respectively. Taking an A0I star instead of a B5I would change the inclination and companion star mass constraints to $ 7.1^\circ\approxlt i \approxlt 10^\circ$ and ${\rm M_{comp}\approxlt 7.3\,M_\odot}$ for the 10 M$_\odot$ compact object. Under the assumption that the measured $a\, \sin\,i$ is not $a_{opt}\, \sin\, i$ but $a_{NS}\, \sin\, i$, the constraint on the companion star mass as a function of inclination is given by the dotted line (taking ${\rm M_{NS}=1.4\,M_\odot}$).[]{data-label="fig:massinc"}](massincl.ps){width="7cm"} In principle it cannot be ruled out that the absorption line spectrum originates in the accretion disc. E.g. the $I$–band spectrum of the high inclination (accretion disc corona) source 2S 0921–630 shows Paschen absorption lines at orbital phases near 0.9 that could well be caused by the line of sight passing through the accretion disc rim (@2005MNRAS.356..621J).
Those Paschen absorption lines are not present in the K1III spectral type that has been derived for the companion star in 2S 0921–630 (although X–ray heating effects may result in different spectral types being observed at different orbital phases; cf. the observed spectral type in Cyg X–2, which changes between A5 and F2; @1979ApJ...231..539C). However, in radio observations of Cir X–1 superluminal motion has been observed which limits the inclination of the jet axis to the line of sight to $i<5^\circ$ (@2004Natur.427..222F; the limit $i<5^\circ$ has been derived assuming a distance of 6.5 kpc, if Cir X–1 is indeed further away as is indicated by the burst properties, then the limit on $i$ is more stringent still). If our line–of–sight also goes through the accretion disc rim (at all orbital phases) it implies that the jet–axis is nearly in the plane of the orbit. Hence, it implies that the jet ploughs through the accretion disc if it originates close to the compact object. Nevertheless, if we assume that the $a\, \sin\,i$ we measured is associated with the binary motion of the accretion disc/neutron star, we can derive a limit on the mass of the companion star assuming the neutron star mass is 1.4 M$_\odot$. This limit is a function of the binary inclination and is given by the dotted line in Fig. \[fig:massinc\]. From the discussion above it would seem that the companion star mass is $\approx$0.4 M$_\odot$, since the binary inclination must be high in this scenario. A star of such a mass cannot have evolved off the main–sequence in a Hubble time unless the star was more massive initially (at least 0.8 M$_\odot$) and more than 0.4 M$_\odot$ has been transferred already. However, it is unclear whether such an amount of mass can have been transferred while the orbital eccentricity is still 0.45. Finally, in this paragraph we consider other possible formation scenarios for the absorption line spectrum. E.g.
in the peculiar X–ray binary SS 433 evidence for a circumbinary disc has been found (@2001ApJ...562L..79B); perhaps the absorption lines are formed in such a disc. However, @1988AJ.....96..242F have obtained I–band spectra of SS 433 and their spectra are markedly different from those that we observe from Cir X–1. In SS 433 the Paschen lines are double peaked and in emission, whereas we find them to be in absorption except near periastron passage. On the other hand, the inclinations at which we observe SS 433 and Cir X–1 are different. In this respect it is interesting to note that, according to @2006astro.ph..7612T, we will be observing the system through the material that is swept up by the approaching radio jet. However, it is difficult to imagine that this material is dense enough and has a velocity low enough to cause the narrow absorption lines. Finally, in all these scenarios the velocity changes of the absorption lines with orbital phase are difficult to explain. A potential way to determine whether the measured radial velocity curve is associated with the companion star is to obtain a high resolution spectrum. Such a spectrum could reveal narrow spectral features and allow for a better spectral classification; e.g. the observed absorption feature near 8685 Å consists of a blend of several narrow lines if the spectrum is from the companion star () and, if the companion star is an A–star, the Ca II absorption triplet can be separated from the Paschen lines. Furthermore, the rotational broadening of the spectral lines could be measured. Finally, due to precession of the orbit (i.e. apsidal motion), the anomalistic period as measured by the time between periastron passages is different from the period that would be measured by eclipse timing (i.e. the sidereal period). Hence, if the X–ray dips in Cir X–1 are due to (grazing) eclipses, the sidereal period will over time differ from the anomalistic period.
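As a closing consistency check, the derived quantities in Table \[tab:ell\] and the minimum inclinations quoted above follow from the standard Keplerian relations. A minimal sketch (not the analysis code used for this work; constants are standard values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30    # solar mass, kg
C = 2.99792458e8   # speed of light, m/s

def mass_function(K_kms, P_days, e):
    """Spectroscopic mass function f(m) = P K^3 (1 - e^2)^(3/2) / (2 pi G),
    in solar masses."""
    K = K_kms * 1e3
    P = P_days * 86400.0
    return P * K**3 * (1.0 - e * e)**1.5 / (2.0 * math.pi * G) / MSUN

def a_sini_lightsec(K_kms, P_days, e):
    """Projected semi-major axis a sin(i) = K P sqrt(1 - e^2) / (2 pi),
    in light-seconds."""
    return K_kms * 1e3 * P_days * 86400.0 * math.sqrt(1.0 - e * e) \
        / (2.0 * math.pi) / C

def i_min_deg(f_msun, M_X_msun):
    """Minimum inclination for a compact object of mass M_X, from
    f(m) = (M_X sin i)^3 / (M_X + M_comp)^2 with M_comp -> 0."""
    return math.degrees(math.asin((f_msun / M_X_msun) ** (1.0 / 3.0)))

f = mass_function(25.0, 16.68, 0.45)    # ~0.019 Msun, as in Table [tab:ell]
a = a_sini_lightsec(25.0, 16.68, 0.45)  # ~17 light-seconds
```

With $f(m)=0.019$ M$_\odot$, `i_min_deg` returns $\approx 7.1^\circ$, $9.0^\circ$, $12.2^\circ$ and $13.8^\circ$ for 10, 5, 2 and 1.4 M$_\odot$, within $\sim$0.1$^\circ$ of the lower inclination limits quoted in the Discussion.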
Acknowledgments {#acknowledgments .unnumbered}
===============

The authors are grateful to the referee, Dr. D. Gies, for his comments that helped improve the paper. We would like to thank the Director of ESO for approving these DDT observations. The use of the spectral analysis software package <span style="font-variant:small-caps;">molly</span> written by Prof. Tom Marsh is acknowledged. PGJ acknowledges support from NASA grants NNG05GN20G and NNG05GN27G. PGJ, GN and CGB acknowledge support from the Netherlands Organisation for Scientific Research.

, L. E., Strom, K. M., 1995, , 109, 1379\
, L., Crampton, D., 1974, , 166, 203\
, D. H., Gies, D. R., 2001, , 555, 364\
, L., 1995, , 438, 852\
, K. M., Mioduszewski, A. J., Muxlow, T. W. B., Podsiadlowski, P., Rupen, M. P., 2001, , 562, L79\
, W. N., Fabian, A. C., Dotani, T., Nagase, F., Inoue, H., Kotani, T., Segawa, Y., 1996, , 283, 1071\
, J. S., Charles, P. A., Clarkson, W. I., Coe, M. J., 2003, , 400, 655\
, J. S., Negueruela, I., Crowther, P. A., Goodwin, S. P., 2005, , 434, 949\
, W. I., Charles, P. A., Onyett, N., 2004, , 348, 458\
, A. P., Crampton, D., Hutchings, J. B., 1979, , 231, 539\
, A. N., 2000, Allen’s astrophysical quantities, 4th ed., New York: AIP Press; Springer. Edited by Arthur N. Cox. ISBN: 0387987460\
, A. C., Dennefeld, M., 1994, , 106, 382\
, R., Spencer, R., Tzioumis, T., Wu, K., van der Klis, M., van Paradijs, J., Johnston, H., 1998, , 506, L121\
, R., Wu, K., Johnston, H., Tzioumis, T., Jonker, P., Spencer, R., van der Klis, M., 2004, , 427, 222\
, R. P., Hendry, M. A., 2000, , 317, 1\
, A. V., Romani, R. W., Sargent, W. L. W., Blandford, R. D., 1988, , 96, 242\
, I. S., 1994, , 268, 742\
, K., 1986, , 98, 609\
, R., Spanò, M., Di Salvo, T., Robba, N. R., Burderi, L., Fender, R., van der Klis, M., Frontera, F., 2005, , 619, 503\
, H. M., Fender, R., Wu, K., 1999, , 308, 415\
, H. M., Wu, K., Fender, R., Cullen, J. G., 2001, , 328, 1193\
, P. G., Nelemans, G., 2004, , 354, 355\
, P. G., Steeghs, D., Nelemans, G., van der Klis, M., 2005, , 356, 621\
, P. C., Li, F. K., 1980, , 238, 287\
, L. J., Holt, S. S., Boldt, E. A., Serlemitsos, P. J., 1976, , 208, L71\
, J.-F., et al., 2003, , 402, 433\
, B., Lampton, M., Bowyer, S., Cruddace, R., 1971, , 169, L23\
, A., 1992, , 260, L7\
, U., 2000, in Porceddu, I., Aiello, S., eds., Molecules in Space and in the Laboratory, p. 179\
, U., Tomasella, L., 1999, , 137, 521\
, P., Jauncey, D. L., Lerche, I., Nicolson, G. D., Kaluzienski, L. J., Holt, S. S., Haynes, R. F., 1980, , 87, 292\
, I., Smith, D. M., Harrison, T. E., Torrejón, J. M., 2006, , 638, 982\
, P., Rappaport, S., Pfahl, E. D., 2002, , 565, 1107\
, P., Schmitt, J. H. M. M., 1995, , 293, 889\
, P. M., et al., 2003, AAS/High Energy Astrophysics Division, 7, astroph0303402\
, D. J., Finkbeiner, D. P., Davis, M., 1998, , 500, 525\
, N. S., 1999, , 511, 304\
, N. S., Brandt, W. N., 2002, , 572, 971\
, D. R., Cornell, M. E., 1992, , 81, 865\
, P. B., 1987, , 99, 191\
, P. B., 2000, , 112, 925\
, R. T., Caswell, J. L., Haynes, R. F., Nelson, G. J., 1993, , 261, 593\
, T. M., Savonije, G. J., 1999, , 350, 928\
, A. F., Fabian, A. C., Shafer, R. A., 1986, , 221, 27P\
, A. F., Fabian, A. C., Shafer, R. A., 1986, , 219, 871\
, V., Fender, R. P., Kaiser, C. R., Tzioumis, A. K., van der Klis, M., Spencer, R., 2006, ArXiv Astrophysics e-prints

[^1]: email : [email protected].
Based on observations made with ESO telescopes at the Paranal Observatories under programme ID 274.D–5047(A) [^2]: <span style="font-variant:small-caps;">iraf</span> is distributed by the National Optical Astronomy Observatories
[**Continuity of KMS States\ for Quantum Fields on Manifolds**]{}\ [Jacek Damek]{}\ Institute of Physics,\ University of Zielona Góra,\ ul. Szafrana 4a,\ 65–516 Zielona Góra, Poland\ e-mail: [email protected] ${}$\ [ We show that pure, quasifree states, as well as regular (i.e., those with a unique vacuum) quasifree ground and KMS states, for linear quantum fields in a curved spacetime, are always continuous in the sense of distributions, and provide certain applications of this fact. ]{} ${}$\ The vast majority of papers on quantum fields in a curved spacetime exploit the assumption that the two-point functions of the fields are distributions (see, e.g., [@passive; @Schlieder; @stroh; @analitic]). This property is, for instance, crucial for the powerful methods of microlocal analysis and, consequently, for the study of Hadamard states. Nevertheless, this fact does not seem to have been established generally for the most prominent states of the physical theory, namely ground and KMS states. It is even suggested in places in the literature that all quasifree states are regular in this sense. However, that is not the case, as may be seen from the examples in this article. In the setting of a model linear scalar field theory, we prove the continuity for quasifree pure states and, subsequently, demonstrate this feature to hold for quasifree regular ground and KMS states, thereby exposing its inherent character, independent of any peculiarities of the operator theory usually involved here. These results allow one, for instance, to drop the continuity assumption, in the above cases, in a remarkable theorem due to Sahlmann and Verch [@passive] and claim that regular ground and KMS states of a scalar field are always of the Hadamard form. We also provide certain exemplary consequences of the established continuity and point out some other useful omissions of this assumption in the literature.
We will be concerned only with a linear hermitian scalar field, as possible generalizations of our results are rather straightforward. Let $(M,g)$ be a globally hyperbolic spacetime (second countable, paracompact, orientable, n–dimensional) with a time orientation chosen. Assume that $\varSigma$ is a smooth Cauchy surface in $M$. For the wave equation $$\label{wave} \left(\square_g + V\right)\varphi = 0,\qquad V\in C^\infty(M),$$ we may define the space, $S$, of compactly supported Cauchy data $C^\infty_0(\varSigma) \oplus C^\infty_0(\varSigma)$ for solutions of (\[wave\]), and the usual classical symplectic form $\sigma$ on $S$. Now, let $\mathcal{W}$ be the Weyl algebra of canonical commutation relations over the symplectic space $(S,\sigma)$. For any real scalar product $\mu$ on $S$, one can define a state $\omega_\mu$ on $\mathcal{W}$ [@K-W] by $$\label{qfree} \omega_\mu\left(W(\phi)\right)= \exp(-\frac{1}{2}\mu(\phi,\phi)), \qquad \phi \in S,$$ where $W(\phi)$ stands for the Weyl generators of $\mathcal{W}$, provided that $\mu$ satisfies the condition $$\left(\sigma(\phi_1,\phi_2)\right)^2 \leq 4\mu(\phi_1,\phi_1)\,\mu(\phi_2,\phi_2), \qquad \phi_1, \phi_2 \in S. \label{cond}$$ A quasifree state is just a state obtained in this way. As is well–known [@K-W], there exists a one–particle Hilbert space structure for a state $\omega_\mu$. This consists of a Hilbert space $\mathcal{H}$, and a real–linear mapping $K\colon S \to {\mathcal{H}}$ such that, for all $\phi_1,\phi_2 \in S$, 1. $\mu(\phi_1,\phi_2) = \operatorname{Re}\langle K\phi_1,K\phi_2 \rangle$, 2. $\sigma(\phi_1,\phi_2) = 2 \operatorname{Im}\langle K\phi_1,K\phi_2 \rangle$, 3. $KS + iKS\;\;$is dense in ${\mathcal{H}}$. In the sequel, we shall also need the fact that $KS$ is dense in ${\mathcal{H}}$ itself if and only if the state $\omega_\mu$ is pure [@K-W].
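As a simple finite-dimensional illustration of a one–particle structure (a toy example, not needed in the sequel), take a single mode: $S = \mathbb{R}^2$ with $\sigma\left((q_1,p_1),(q_2,p_2)\right) = q_1 p_2 - p_1 q_2$ and $\mu\left((q_1,p_1),(q_2,p_2)\right) = \frac{1}{2}\left(q_1 q_2 + p_1 p_2\right)$. Setting ${\mathcal{H}} = \mathbb{C}$ and $K(q,p) = (q + ip)/\sqrt{2}$, one computes $$\langle K\phi_1, K\phi_2 \rangle = \frac{1}{2}\left(q_1 q_2 + p_1 p_2\right) + \frac{i}{2}\left(q_1 p_2 - p_1 q_2\right),$$ so that conditions 1 and 2 hold, while condition 3 holds trivially since $KS = \mathbb{C}$; the state is therefore pure, and the condition (\[cond\]) reduces here to the Cauchy–Schwarz inequality.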
After those preliminaries, we are now able to present our definition of continuity of states on ${\mathcal{W}}$, as well as its convenient equivalents in the forthcoming proposition. A state $\omega$ over $\mathcal{W}$ will be called continuous if the function $\phi \to \omega \left(W(\phi)\right)$ is continuous w.r.t. the Schwartz topology of $S$. In the quasifree case, this definition amounts to the so–called $C^\infty$–regularity of a state (cf. [@vector]), which, notably, ensures the existence of Wightman distributions for the quantum field theory resulting from such a state via the GNS construction. \[equiv\] For a quasifree state $\omega_\mu$, the following conditions are equivalent: 1. $\omega_\mu$ is continuous, 2. $\mu$ is continuous on $S\times S$, 3. $K$ is continuous (in the norm of ${\mathcal{H}}$). This easily follows from the formula (\[qfree\]), polarization for $\mu$, and condition (i) of the definition of a one–particle structure for $\mu$. The inequality (\[cond\]) forces the continuity of $\sigma$ whenever a state associated with $\mu$ is continuous, but the converse is false. Take an arbitrary $\mu_0$ defining a continuous quasifree state and, therefore, satisfying (\[cond\]), and some discontinuous linear functional $s$ on $S$. Then $\mu_s = \mu_0 + s \otimes s$ defines a discontinuous quasifree state on ${\mathcal{W}}$, since such a $\mu_s$ clearly fulfills the desired condition (\[cond\]) if $\mu_0$ does. In view of the next theorem, we note that this exemplary singular state is not a pure state. We now state and prove our basic theorem. \[pure\] The pure quasifree states on ${\mathcal{W}}$ are continuous. Let $\omega_\mu$ be an arbitrary pure quasifree state with a one–particle Hilbert space structure $({\mathcal{H}},K)$. By Proposition \[equiv\], it suffices to prove the continuity of the mapping $K$.
For any compact set $C \subset \Sigma$, we define $S_C \subset S$ by $$S_C = C^\infty_0(C) \oplus C^\infty_0(C).$$ Then $S_C$ is a Fréchet (metrizable) space. We will prove that $K \upharpoonright S_C$ is continuous, which, of course, implies the desired continuity of $K$ on all of $S$. To show this, we employ the closed graph theorem for the spaces $S_C$ and ${\mathcal{H}}$. Accordingly, suppose that $\phi_n \to \phi,\ K\phi_n \to x\ (\phi_n \in S_C,\; x \in {\mathcal{H}})$ in the respective topologies. We only need to show that $x = K\phi$. By the definition of $K$, for any $\psi \in S$, we have $$\label{im} \sigma(\psi,\phi_n) = 2 \operatorname{Im}\langle K\psi,K\phi_n \rangle.$$ Passing to the limit yields $$\sigma(\psi,\phi) = 2 \operatorname{Im}\langle K\psi,x \rangle,$$ as the classical form $\sigma$ is continuous. On applying (\[im\]) with $\phi$ instead of $\phi_n$, we then obtain $$\label{im2} \operatorname{Im}\langle K\psi,K\phi \rangle = \operatorname{Im}\langle K\psi,x \rangle.$$ By purity of the state, there is a sequence $\{\psi_n\} \subset S$ such that $iK\psi = \lim\limits_{n \to \infty} K\psi_n$. Substituting $K\psi_n$ for $K\psi$ in (\[im2\]) and taking the limit, we arrive at $$\label{re} \operatorname{Re}\langle K\psi,K\phi \rangle = \operatorname{Re}\langle K\psi,x \rangle.$$ Finally, it follows from (\[im2\]), (\[re\]), and the denseness of $KS$ in ${\mathcal{H}}$ that $K\phi = x$, which completes the proof. Now, we shall infer a few consequences of the above general theorem, which are of more direct physical interest. To this end, we assume that our spacetime is stationary, which allows one to define the quasifree symplectic time evolution on the Weyl algebra considered here. Then, the standard notions of ground or KMS states make sense with respect to the just introduced time evolution. Further, let us explain that by regular quasifree ground or KMS states we mean those whose one–particle Hamiltonian has no zero modes.
The regular, quasifree ground states are continuous. This is true due to Theorem \[pure\] and the fact that such states are pure [@K-W]. The regular, quasifree KMS states are continuous. Such a KMS state, say $\omega$, necessarily arises from a unique regular ground one–particle Hilbert space structure $({\mathcal{H}}, K, h)$, where $h$ is the one–particle Hamiltonian (see [@K-W; @Kay]). Next, we may double the underlying spacetime $M$, so as to obtain a stationary spacetime $\tilde{M} = M_L \cup M_R\:$, $M_L,\: M_R$ being copies of $M$. Then, on the Weyl algebra for $\tilde{M}$, one can build a double quasifree KMS state by doubling the structure $({\mathcal{H}}, K, h)$ (see [@Kay]) in order to obtain the double KMS one–particle structure in ${\mathcal{H}}\oplus {\mathcal{H}}$, which then defines that state. This state is both quasifree and pure, and coincides with $\omega$ on $M_R$. Thus, by Theorem \[pure\], the state $\omega$ is continuous. In view of, e.g., Lemma 6.1 in ref. [@K-W], we note that for a one–particle Hilbert space structure $({\mathcal{H}}, K, h)$ of such a KMS or ground state, the condition $KS \subset \operatorname{dom}(h)$ always holds. To see this, we write $$\frac{d}{dt}\, e^{iht}K\phi = \frac{d}{dt} K\phi_t = K\frac{d}{dt} \phi_t,$$ where $\phi_t$ stands for the time translate of $\phi$. The existence of the derivative inside the formula is guaranteed by the continuity of $K$ and the fact that $\frac{\phi_t - \phi}{t}$ converges to $\frac{d}{dt}\phi_t$ in the Schwartz topology of $S$. In the case of a continuous quasifree state, the one–particle Hilbert space of that state (and thus the Fock space of the GNS representation) must be separable. This stems from the continuity of $K$, the denseness of $KS + iKS$, the separability of each $S_C$ (the space defined at the beginning of the proof of Theorem \[pure\]), and the fact that a Cauchy surface in our spacetime is a countable union of compact sets.
Although there is no doubt that explicitly constructed states, such as those for a Schwarzschild black hole rigorously introduced by Kay [@Kay], are continuous, this property has now been shown to hold almost automatically, apart from any assumptions on self–adjointness or spectral expansions. This can be helpful, especially in the context of abstract, general theorems, where one can thus drop the continuity assumption in the above-noted, common cases. For example, Sahlmann and Verch’s [@passive] general theorem on the microlocal spectrum condition for passive (e.g., ground or KMS) states implies, in view of Radzikowski’s result [@Radzik], that these states, if defined on our Weyl algebra, are of the Hadamard form on condition that their two–point functions are continuous. However, we may drop this requirement in the important case of regular, quasifree states, as explained in this article. In ref. [@Schlieder] the Reeh–Schlieder property has been established for quasifree ground and KMS states under the continuity assumption, which may again be dropped for regular states. Finally, we remark that there is no obstacle to generalizing our results to vector–valued fields of Verch’s type [@vector].\ It is a pleasure to thank Roman Gielerak for discussions related to this work. [3]{} Sahlmann, H., Verch, R., “Passivity and Microlocal Spectrum Condition”, Commun. Math. Phys. [**214**]{} (2000) 705. Strohmaier, A., “The Reeh–Schlieder Property for Quantum Fields on Stationary Spacetimes”, Commun. Math. Phys. [**215**]{} (2000) 105. Strohmaier, A., “On the local structure of the Klein–Gordon field on curved spacetimes”, Lett. Math. Phys. [**54**]{} (2000) 249. Strohmaier, A., Verch, R., Wollenberg, M., “Microlocal analysis of quantum fields on curved spacetimes: Analytic wavefront sets and Reeh–Schlieder theorems”, J. Math. Phys. [**43**]{} (2002) 5514.
Kay, B. S., Wald, R. M., “Theorems on the uniqueness and thermal properties of stationary, nonsingular, quasifree states on spacetimes with a bifurcate Killing horizon”, Phys. Rep. [**207**]{} (1991) 49. Kay, B. S., “The Double–Wedge Algebra for Quantum Fields on Schwarzschild and Minkowski Spacetimes”, Commun. Math. Phys. [**100**]{} (1985) 57. Sahlmann, H., Verch, R., “Microlocal spectrum condition and Hadamard form for vector–valued quantum fields in curved spacetime”, Rev. Math. Phys. [**13**]{} (2001) 1203. Radzikowski, M. J., “Micro-local approach to the Hadamard condition in quantum field theory in curved spacetime”, Commun. Math. Phys. [**179**]{} (1996) 529.
[ Naoyuki Haba,  Tsuneharu Omija  and  Toshifumi Yamada ]{} [ *Graduate School of Science and Engineering, Shimane University, Matsue 690-8504, Japan* ]{} We investigate charged lepton flavor violating (CLFV) processes in the ‘neutrinophilic Higgs+seesaw model’, in which the right-handed neutrinos couple only with an extra Higgs field which develops a tiny VEV, and the right-handed neutrinos also have a Majorana mass. The model realizes a seesaw mechanism around the TeV scale without extremely small Dirac Yukawa couplings. A phenomenological feature of the model is CLFV processes induced by loop diagrams of the charged scalar particles and heavy neutrinos. Therefore, we first constrain the model’s parameter space from the search for $\mu\to e\gamma$. Next, we predict the branching ratios of other CLFV processes including the $\mu\to3e$, $\mu+{\rm Al}\to e+{\rm Al}$, $\mu+{\rm Ti}\to e+{\rm Ti}$, $Z\to e\mu$, $Z\to e\tau$, $Z\to \mu\tau$, $h\to e\tau$, $h\to\mu\tau$ processes, and discuss their detectability in future experiments. Introduction ============ The origin of the smallness of the neutrino mass is one of the prime open questions in particle physics. One candidate solution to this mystery is the neutrinophilic Two Higgs Doublet Model [@Gabriel:2006ns], in which an extra Higgs doublet (the ‘neutrinophilic Higgs’) couples to the lepton doublets and right-handed neutrinos while the coupling of the Standard Model (SM) Higgs doublet to right-handed neutrinos is forbidden by a $Z_2$ symmetry; the smallness of the vacuum expectation value (VEV) of the neutrinophilic Higgs then explains the smallness of the neutrino mass. In the original proposal [@Gabriel:2006ns], Majorana mass for right-handed neutrinos is absent and the neutrinos are purely Dirac particles. However, the $Z_2$ symmetry that forbids the coupling of the SM Higgs and right-handed neutrinos does not exclude the possibility that the right-handed neutrinos have a Majorana mass term.
If Majorana mass for right-handed neutrinos is introduced, the model becomes a low-scale realization of the seesaw mechanism [@seesaw1]-[@seesaw4], where the smallness of the neutrino mass is accounted for by the seesaw mechanism in addition to the tininess of the neutrinophilic Higgs VEV. We call the new model the ‘neutrinophilic Higgs+seesaw model’ for the obvious reason. Important experimental signatures of the neutrinophilic Higgs+seesaw model are (i) the presence of new scalar particles $H^{\pm}, H, A$ originating dominantly from the neutrinophilic Higgs field, and (ii) charged lepton flavor violating (CLFV) processes, such as $\mu\to e\gamma$, mediated by a loop of the charged scalar $H^{\pm}$ and a heavy neutrino. In this paper, we investigate CLFV processes in the neutrinophilic Higgs+seesaw model in detail. First, we constrain the parameter space of the neutrinophilic Higgs+seesaw model from current experimental bounds on CLFV processes, the most stringent bound coming from the $\mu\to e\gamma$ decay. Next, we predict branching ratios (or conversion rates) of various CLFV processes and discuss whether it is possible to detect these processes in the future. Previously, Ref. [@Bertuzzo:2015ada] studied CLFV processes in the neutrinophilic Two Higgs Doublet Model, but in that work a Majorana mass term is not considered and the neutrinos are purely Dirac particles. Our work extends it by introducing Majorana mass for right-handed neutrinos. Also, Ref. [@Toma:2013zsa] studied CLFV processes in a model with similar phenomenological features, but CLFV decays of the SM-like Higgs particle and $Z$ boson are not included there, unlike in our paper. The rest of the paper is organized as follows: In Section 2, we describe the neutrinophilic Higgs+seesaw model. In Section 3, we give the formulas for the branching ratios of CLFV processes.
In Section 4, we present our numerical results, which include the current constraints on the neutrinophilic Higgs+seesaw model and predictions for various CLFV processes. Finally, we summarize our results in Section 5.\ Neutrinophilic Higgs+Seesaw Model ================================= The model contains two Higgs doublet fields, $H_1$, $H_2$, left-handed leptons, $\ell_L^\alpha$, right-handed charged leptons, $e_R^\alpha$, and right-handed neutrinos, $\nu_R^i$, where $\alpha=e,\mu,\tau$ is the flavor index for charged leptons and $i=1,2,3$ is another flavor index. It also contains quarks, $q_L^k,\,u_R^k,\,d_R^k$, but they play no role in this study. The fields are charged under the SM $SU(3)_C\times SU(2)_L\times U(1)_Y$ gauge group and a $Z_2$ symmetry as in Table \[fields\].

  Field             $SU(3)_C$    $SU(2)_L$    $U(1)_Y$    $Z_2$
  ----------------- ------------ ------------ ----------- -------
  $H_1$             [**1**]{}    [**2**]{}    $-1/2$      $+$
  $H_2$             [**1**]{}    [**2**]{}    $-1/2$      $-$
  $\ell_L^\alpha$   [**1**]{}    [**2**]{}    $-1/2$      $+$
  $e_R^\alpha$      [**1**]{}    [**1**]{}    $-1$        $+$
  $\nu_R^i$         [**1**]{}    [**1**]{}    $0$         $-$
  $q_L^k$           [**3**]{}    [**2**]{}    $1/6$       $+$
  $u_R^k$           [**3**]{}    [**1**]{}    $2/3$       $+$
  $d_R^k$           [**3**]{}    [**1**]{}    $-1/3$      $+$

  : The field content and charge assignments. \[fields\]

Note that the above $Z_2$ charge assignment allows Majorana mass for right-handed neutrinos, while it forbids the Yukawa couplings of SM fermions with $H_2$ and the Yukawa coupling of right-handed neutrinos with $H_1$. We assume that the $Z_2$ symmetry is softly broken in the scalar potential.
The most general scalar potential and Yukawa couplings are then $$\begin{aligned} -{\cal L} &= m_1^2 \, H_1^\dagger H_1 + m_2^2 \, H_2^\dagger H_2 - m_3^2 \, (H_1^\dagger H_2 + H_2^\dagger H_1) \nonumber \\ &+ \frac{\lambda_1}{2}(H_1^\dagger H_1)^2 + \frac{\lambda_2}{2}(H_2^\dagger H_2)^2 + \lambda_3(H_1^\dagger H_1)(H_2^\dagger H_2) + \lambda_4(H_1^\dagger H_2)(H_2^\dagger H_1) + \lambda_5(H_1^\dagger H_2)^2 + \lambda_5^*(H_2^\dagger H_1)^2 \nonumber \\ &+ (Y_e)_{\alpha\beta} \, \ell_L^{\alpha\dagger} \, \epsilon_g H_1^* \, e_R^\beta + (Y_D)_{\alpha i} \, \ell_L^{\alpha\dagger} \, H_2 \, \nu_R^i + \frac{1}{2}M_{N_i} \, \nu_R^{iT} \epsilon_s \nu_R^i + {\rm H.c.}, \label{lagrangian}\end{aligned}$$ where $\epsilon_g$ denotes the antisymmetric tensor acting on $SU(2)_L$ indices and $\epsilon_s$ denotes that acting on spinor indices. Here, we have taken the flavor basis in which the Majorana mass for right-handed neutrinos is diagonal, and we have made $m_3^2$ real positive by a phase redefinition. Note that only the $m_3^2$ term explicitly breaks the $Z_2$ symmetry, and so the limit with $m_3^2/m_1^2\to0$ and $m_3^2/m_2^2\to0$ can be taken naturally at the quantum level. We also assume $$\begin{aligned} m_1^2 &< 0, \ \ \ \ \ m_2^2 >0, \label{m1m2}\end{aligned}$$ so that $H_1$ develops a VEV around the scale $\sqrt{\vert m_1^2\vert}$, and then $H_2$ gains a VEV through the term $m_3^2 (H_1^\dagger H_2 + H_2^\dagger H_1)$. Consequently, the VEV of $H_2$ is proportional to $m_3^2$ and is controlled by the explicit breaking of the $Z_2$ symmetry, and therefore the VEV of $H_2$ can naturally take a small value. In the neutrinophilic Higgs+seesaw model, we take the limit with $m_3^2/m_2^2\to0$. 
Then, writing the Higgs VEVs as $\langle H_1^0 \rangle = v_1/\sqrt{2}$ and $\langle H_2^0 \rangle = v_2/\sqrt{2}$, we find $$\begin{aligned} v_1 &\simeq \sqrt{-2m_1^2/\lambda_1}, \\ v_2 &\simeq \frac{2m_3^2}{m_2^2+(\lambda_3+\lambda_4)v_1^2/2}v_1, \label{vevs}\end{aligned}$$ and hence $v_2\ll v_1$. The physical particles are the lighter CP-even scalar, $h$, which is identified with the observed 125 GeV scalar particle, the heavier CP-even scalar, $H$, the $CP$-odd scalar, $A$, and the charged scalar, $H^\pm$. The masses of $A$ and $H^\pm$ are given by $$\begin{aligned} m_A^2 &= \frac{m_3^2}{\sin\beta\cos\beta}, \ \ \ \ \ m_{H^\pm}^2 = \frac{m_3^2}{\sin\beta\cos\beta} - \frac{\lambda_4}{2}v^2\end{aligned}$$ and the masses of $h$ and $H$ are given, in the limit with $v_1\gg v_2$, by $$\begin{aligned} m_h^2 &\simeq \lambda_1 v_1^2, \ \ \ \ \ m_H^2 \simeq m_A^2.\end{aligned}$$ where $\tan\beta \equiv v_1/v_2$. In terms of $h,H,A,H^\pm$ and would-be Nambu-Goldstone modes, $G^0,G^\pm$, the Higgs fields are decomposed as $$\begin{aligned} H_1 &= \begin{pmatrix} \frac{1}{\sqrt{2}}\left(\sin\beta \ v+\cos\alpha \ h+\sin\alpha \ H -i \sin\beta \ G^0 -i\cos\beta \ A\right) \\ -\sin\beta \ G^- - \cos\beta \ H^- \\ \end{pmatrix}, \nonumber \\ H_2 &= \begin{pmatrix} \frac{1}{\sqrt{2}}\left(\cos\beta \ v-\sin\alpha \ h+\cos\alpha \ H -i \cos\beta \ G^0 +i\sin\beta \ A\right) \\ -\cos\beta \ G^- +\sin\beta \ H^- \\ \end{pmatrix},\end{aligned}$$ where $\alpha$ is the mixing angle of the CP-even scalars. $\alpha$ satisfies $$\begin{aligned} 0&>\alpha>-\frac{\pi}{2}, \ \ \ \tan2\alpha \simeq -\frac{2}{\tan\beta}\end{aligned}$$ in the limit with $\tan\beta \gg 1$, and hence $\alpha\simeq0$ in the neutrinophilic Higgs+seesaw model. The interaction of the charged scalar $H^\pm$ is the dominant source of CLFV processes and is particularly important. 
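The VEV and mass relations above are easy to verify numerically. The following is a minimal sketch in which every potential parameter is an arbitrary illustrative choice (not the benchmark point used later in this paper); it checks that a tiny $m_3^2$ indeed yields $v_2\ll v_1$ and a SM-like $h$:

```python
# Illustrative check of the VEV and mass formulas above, in GeV units.
# All parameter values are arbitrary choices for this sketch.
import math

lam1, lam3, lam4 = 0.26, 0.10, 0.10   # quartic couplings
m1sq = -7867.0     # m_1^2 < 0, chosen so that v_1 ~ 246 GeV
m2sq = 9.0e4       # m_2^2 > 0, here (300 GeV)^2
m3sq = 1.0e-3      # tiny soft Z_2-breaking parameter

v1 = math.sqrt(-2.0 * m1sq / lam1)
v2 = 2.0 * m3sq * v1 / (m2sq + (lam3 + lam4) * v1**2 / 2.0)
v = math.sqrt(v1**2 + v2**2)
tanb = v1 / v2
sinb, cosb = v1 / v, v2 / v            # tan(beta) = v1/v2 >> 1

mA_sq = m3sq / (sinb * cosb)           # m_A^2
mHpm_sq = mA_sq - lam4 / 2.0 * v**2    # m_{H+-}^2
mh = math.sqrt(lam1 * v1**2)           # m_h ~ sqrt(lambda_1) v_1

print(f"v1 = {v1:.1f} GeV, v2 = {v2:.2e} GeV, tan(beta) = {tanb:.2e}")
print(f"m_h = {mh:.1f} GeV, m_A = {math.sqrt(mA_sq):.1f} GeV, "
      f"m_H+ = {math.sqrt(mHpm_sq):.1f} GeV")
```

With these inputs the hierarchy $v_2\ll v_1$ emerges automatically from the smallness of $m_3^2$, as described in the text.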
The three point interaction term for $H^+ H^- h$ is given by $$\begin{aligned} -{\cal L} &\supset v \lambda_3 \, h H^+H^-\end{aligned}$$ in the limit with $\tan\beta\gg1$. The Yukawa interaction terms of $H^\pm$ are $$\begin{aligned} -{\cal L} &\supset -(Y_e)_{\alpha\beta} \ \nu_L^{\alpha\dagger} e_R^\beta \ \cos\beta \ H^+ + (Y_D)_{\alpha i} \ e_L^{\alpha\dagger} \nu_R^i \ \sin\beta \ H^- +{\rm H.c.} \nonumber\end{aligned}$$ We turn our attention to the lepton mass. The Dirac and Majorana mass terms are given by $$\begin{aligned} -{\cal L} &\supset \frac{v_1}{\sqrt{2}}(Y_e)_{\alpha\beta} \ e_L^{\alpha\dagger} e_R^\beta + \frac{v_2}{\sqrt{2}}(Y_D)_{\alpha i} \ \nu_L^{\alpha\dagger} \nu_R^i +\frac{1}{2}M_{N_i} \ \nu_R^{iT} \epsilon_s \nu_R^i+{\rm H.c.}\end{aligned}$$ Then, the mass matrix for neutrinos is obtained as $$\begin{aligned} -{\cal L} &\supset \frac{1}{2}\begin{pmatrix} \nu_L^{\beta T} & \nu_R^{j\dagger}\epsilon_s^T \end{pmatrix} \epsilon_s \begin{pmatrix} O & -\frac{v_2}{\sqrt{2}}(Y_D^*)_{\beta i} \\ -\frac{v_2}{\sqrt{2}}(Y_D^*)_{\alpha j} & \delta_{ij} \, M_{N_i}^* \\ \end{pmatrix} \begin{pmatrix} \nu_L^\alpha \\ \epsilon_s\, \nu_R^{i*} \\ \end{pmatrix}+{\rm H.c.}\end{aligned}$$ The above mass matrix is diagonalized by a unitary matrix, $U$, as $$\begin{aligned} \begin{pmatrix} O & -\frac{v_2}{\sqrt{2}}(Y_D^*)_{\beta i} \\ -\frac{v_2}{\sqrt{2}} (Y_D^*)_{\alpha j} & \delta_{ij} \, M_{N_i}^* \\ \end{pmatrix} = U^* \, {\rm diag}\left(m_{\nu_1}, \, m_{\nu_2}, \, m_{\nu_3}, \, m_{\nu_4}, \, m_{\nu_5}, \, m_{\nu_6} \right) \, U^\dagger,\end{aligned}$$ where $m_{\nu_1}, \, m_{\nu_2}, \, m_{\nu_3}$ correspond to the tiny active neutrino masses, and $m_{\nu_4}, \, m_{\nu_5}, \, m_{\nu_6}$ to the masses of heavy neutrinos. We assume $v_2\ll M_{N_j}$. 
The unitary matrix $U$ is then approximated by $$\begin{aligned} U &\simeq \begin{pmatrix} U_{PMNS} & O \\ O & I_3 \\ \end{pmatrix}\end{aligned}$$ where $U_{PMNS}$ denotes the PMNS mixing matrix [@mns; @p] and $I_3$ denotes the 3-dimensional identity matrix, and we obtain the following seesaw formula: $$\begin{aligned} -\frac{v_2^2}{2} (Y_D^*)_{\beta i} (Y_D^*)_{\alpha i} \frac{1}{M_{N_i}} &\simeq \left[ \, U_{PMNS}^* \begin{pmatrix} m_{\nu_1} & 0 & 0 \\ 0 & m_{\nu_2} & 0 \\ 0 & 0 &m_{\nu_3} \\ \end{pmatrix} U_{PMNS}^\dagger \, \right]_{\alpha \beta \, {\rm component}}. \label{seesaw}\end{aligned}$$ Inverting the relation Eq. (\[seesaw\]), one can express the neutrino Dirac Yukawa coupling $Y_D$ as $$\begin{aligned} Y_D &= i\frac{\sqrt{2}}{v_2} U_{PMNS} \begin{pmatrix} \sqrt{m_{\nu_1}} & 0 & 0 \\ 0 & \sqrt{m_{\nu_2}} & 0 \\ 0 & 0 & \sqrt{m_{\nu_3}} \\ \end{pmatrix} R_{3\times3} \begin{pmatrix} \sqrt{M_{N_1}} & 0 & 0 \\ 0 & \sqrt{M_{N_2}} & 0 \\ 0 & 0 & \sqrt{M_{N_3}} \\ \end{pmatrix} \label{yukawa}\end{aligned}$$ where $R_{3\times3}$ is an arbitrary complex-valued $3\times3$ rotation matrix satisfying $R_{3\times3}R_{3\times3}^T=I_3$ [@ci]. The masses of the heavy neutrinos are approximated as $$\begin{aligned} m_{\nu_4}\simeq M_{N_1}^*, \ \ \ m_{\nu_5}\simeq M_{N_2}^*, \ \ \ m_{\nu_6}\simeq M_{N_3}^*,\end{aligned}$$ and the mass eigenstates belonging to $m_{\nu_4}, \, m_{\nu_5}, \, m_{\nu_6}$ are mostly given by the right-handed neutrinos, namely, we find $$\begin{aligned} \nu_4\simeq\epsilon_s\, \nu_R^{1*}, \ \ \ \nu_5\simeq\epsilon_s\, \nu_R^{2*}, \ \ \ \nu_6\simeq\epsilon_s\, \nu_R^{3*}.\end{aligned}$$\ Branching Ratios of Charged Lepton Flavor Violating Processes {#branchingratios} ============================================================= The limits with $m_\beta/m_\alpha \to 0$ and $m_\alpha/M_Z \to 0$ are taken throughout this section. We only consider the dominant contribution coming from one-loop diagrams of the charged scalar $H^\pm$ and heavy neutrinos $\nu_4,\nu_5,\nu_6$.
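Before proceeding, the inversion formula Eq. (\[yukawa\]) can be checked numerically against the seesaw relation. The sketch below uses placeholder inputs (the mixing angles, phase, masses, and the trivial choice $R_{3\times3}=I_3$ are illustrative, not the fit values used later):

```python
# Numerical check: the Y_D built from the Casas-Ibarra-type inversion must
# reproduce the light-neutrino mass matrix through the seesaw formula.
# All numerical inputs are illustrative placeholders, in eV.
import numpy as np

def pmns(th12, th13, th23, delta):
    """PMNS matrix in the standard parametrization (Majorana phases = 0)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12*c13,                      s12*c13,                      s13*em],
        [-s12*c23 - c12*s23*s13*ep,    c12*c23 - s12*s23*s13*ep,     s23*c13],
        [s12*s23 - c12*c23*s13*ep,     -c12*s23 - s12*c23*s13*ep,    c23*c13],
    ])

U = pmns(0.59, 0.15, 0.84, 3.8)          # illustrative angles and phase
mnu = np.array([0.0, 8.6e-3, 5.0e-2])    # NH with m_nu1 = 0, in eV
MN = np.array([1.0e12, 1.0e12, 1.0e12])  # M_N = 1 TeV (degenerate), in eV
v2 = 1.0e6                               # v_2 = 1e-6 TeV, in eV
R = np.eye(3)                            # trivial R_{3x3}

YD = 1j * np.sqrt(2.0) / v2 * U @ np.diag(np.sqrt(mnu)) @ R @ np.diag(np.sqrt(MN))

# seesaw: -(v2^2/2) Y_D^* M_N^{-1} Y_D^{*T} should reproduce the light mass matrix
lhs = -(v2**2 / 2.0) * YD.conj() @ np.diag(1.0 / MN) @ YD.conj().T
rhs = U.conj() @ np.diag(mnu) @ U.conj().T
print(np.max(np.abs(lhs - rhs)))
```

Note that the resulting Yukawa entries are of order $0.1$, so a TeV-scale seesaw is obtained without tiny couplings, as emphasized in the text.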
$e_\alpha \to e_\beta \gamma$ ----------------------------- CLFV decays of a charged lepton into a charged lepton and a photon, $e_\alpha \to e_\beta \gamma$, arise from the following dipole term, induced by loop diagrams of the charged scalar $H^\pm$ and heavy neutrinos $\nu_4,\nu_5,\nu_6$: $$\begin{aligned} {\cal L}_{eff} &= \frac{1}{2}e\, A_D^{\beta\alpha}\, m_\alpha\bar{e}_\beta \sigma_{\mu\nu}e_\alpha F^{\mu\nu}, \\ A_D^{\beta\alpha} &= \frac{1}{16\pi^2}\frac{1}{2M_{H^\pm}^2} \sum_{i=1}^3 \, (Y_D)_{\beta i}F_2(r_i)(Y_D^\dagger)_{i\alpha}, \ \ \ \ \ r_i \equiv\frac{M_{N_i}^2}{M_{H^\pm}^2}, \label{AD} \\ F_2(x) &= \frac{1-6x+3x^2+2x^3-6x^2\log x}{6(1-x)^4}. \label{F2}\end{aligned}$$ The branching ratio is given by $$\begin{aligned} Br(e_\alpha \to e_\beta \gamma) &= 48\pi^3\frac{\alpha}{G_F^2}\vert A_D^{\beta\alpha}\vert^2 Br(e_\alpha \to e_\beta \nu_\alpha \bar{\nu}_\beta). \label{br}\end{aligned}$$\ $e_\alpha \to e_\beta \bar{e}_\beta e_\beta$ -------------------------------------------- CLFV decays of a charged lepton into three charged leptons, $e_\alpha \to e_\beta \bar{e}_\beta e_\beta$, arise from the following dipole, non-dipole and box-induced terms, induced by loop diagrams of the charged scalar and heavy neutrinos: $$\begin{aligned} {\cal L}_{eff} &= \frac{1}{2}e\, A_D^{\beta\alpha}\, m_\alpha\bar{e}_\beta \sigma_{\mu\nu}P_R e_\alpha F^{\mu\nu} +e\, A_{ND}^{\beta\alpha}\, q^2 \, \bar{e}_\beta \gamma_\mu P_L e_\alpha A^\mu +e^2\, B^{\beta\alpha}\, (\bar{e}_\beta \gamma_\mu P_L e_\beta)(\bar{e}_\beta \gamma^\mu P_L e_\alpha), \label{eff2}\\ A_{ND}^{\beta\alpha} &= \frac{1}{16\pi^2}\frac{1}{6M_{H^\pm}^2} \sum_{i=1}^3 \, (Y_D)_{\beta i}G_2(r_i)(Y_D^\dagger)_{i\alpha}, \\ e^2B^{\beta\alpha} &= \frac{1}{16\pi^2}\frac{1}{4M_{H^\pm}^2} \sum_{i,j=1}^3 \, \left\{ \frac{1}{2}(Y_D)_{\beta i}(Y_D^\dagger)_{i\alpha}(Y_D)_{\beta j}(Y_D^\dagger)_{j\beta}D_1(r_i,r_j)\right. 
\nonumber \\ &\left.+(Y_D^*)_{\beta i}(Y_D^\dagger)_{i\alpha}(Y_D)_{\beta j}(Y_D^T)_{j\beta}\sqrt{r_i r_j}D_2(r_i,r_j) \right\}, \\ G_2(x) &= \frac{2-9x+18x^2-11x^3+6x^3\log x}{6(1-x)^4}, \label{g2}\\ D_1(x,y)&=-\frac{x^2\log x}{(1-x)^2(x-y)}-\frac{1}{(1-x)(1-y)}-\frac{y^2\log y}{(1-y)^2(y-x)}, \label{D1} \\ D_2(x,y)&=-\frac{x\log x}{(1-x)^2(x-y)}-\frac{1}{(1-x)(1-y)}-\frac{y\log y}{(1-y)^2(y-x)}. \label{D2}\end{aligned}$$ The branching ratio is given by $$\begin{aligned} Br(e_\alpha \to e_\beta \bar{e}_\beta e_\beta) &= 6\pi^2\frac{\alpha^2}{G_F^2} \left\{ \ \vert A_D^{\beta\alpha}\vert^2\left(\frac{16}{3}\log\left(\frac{m_\alpha}{m_\beta}\right)-\frac{22}{3}\right)+\vert A_{ND}^{\beta\alpha}\vert^2+\frac{1}{6}\vert B^{\beta\alpha}\vert^2 \right. \nonumber\\ &\left. +\left(-2A_D^{\beta\alpha} A_{ND}^{\beta\alpha*}+\frac{1}{3}A_{ND}^{\beta\alpha}B^{\beta\alpha*}-\frac{2}{3}A_D^{\beta\alpha} B^{\beta\alpha*}+{\rm H.c.}\right) \ \right\} Br(e_\alpha \to e_\beta \nu_\alpha \bar{\nu}_\beta).\end{aligned}$$ Here, the contribution from the $Z$-penguin diagram is neglected because it is suppressed by $m_\alpha m_\beta/M_Z^2$ compared to the contribution from the photon-penguin diagram.\ $\mu N \to e N$ --------------- $\mu\to e$ conversion processes in a muonic atom arise from the dipole term $A_D$ and the non-dipole term $A_{ND}$. 
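The loop functions $F_2$, $G_2$, $D_1$, $D_2$ defined above are straightforward to code. The sketch below implements them together with the $\mu\to e\gamma$ branching ratio of Eq. (\[br\]); the Yukawa entries and masses are placeholders, and the $x\to0$, $x\to1$ limits are handled explicitly:

```python
# Sketch of the loop functions F2, G2, D1, D2 and the mu -> e gamma branching
# ratio; the Yukawa entries below are placeholders, not fitted values.
import math

def F2(x):
    """Dipole loop function; F2(0) = 1/6 and F2(1) = 1/12 by continuity."""
    if abs(x - 1.0) < 1e-6:
        return 1.0 / 12.0
    if x == 0.0:
        return 1.0 / 6.0
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2*math.log(x)) / (6 * (1 - x)**4)

def G2(x):
    """Non-dipole loop function; G2(0) = 1/3 and G2(1) = 1/4 by continuity."""
    if abs(x - 1.0) < 1e-6:
        return 1.0 / 4.0
    if x == 0.0:
        return 1.0 / 3.0
    return (2 - 9*x + 18*x**2 - 11*x**3 + 6*x**3*math.log(x)) / (6 * (1 - x)**4)

def D1(x, y):
    """Box loop function D1; symmetric in (x, y), for x != y and x, y != 1."""
    return (-x**2 * math.log(x) / ((1 - x)**2 * (x - y))
            - 1.0 / ((1 - x) * (1 - y))
            - y**2 * math.log(y) / ((1 - y)**2 * (y - x)))

def D2(x, y):
    """Box loop function D2; symmetric in (x, y), for x != y and x, y != 1."""
    return (-x * math.log(x) / ((1 - x)**2 * (x - y))
            - 1.0 / ((1 - x) * (1 - y))
            - y * math.log(y) / ((1 - y)**2 * (y - x)))

# Br(mu -> e gamma) = 48 pi^3 (alpha/G_F^2) |A_D|^2, Eq. (br), GeV units;
# Br(mu -> e nu nubar) ~ 1 is used.
alpha, GF = 1.0 / 137.0, 1.166e-5        # fine-structure and Fermi constants
MHpm, MN = 300.0, 1000.0                 # charged scalar / heavy neutrino masses
r = (MN / MHpm)**2
yde, ydmu = 1e-4, 1e-4                   # placeholder (Y_D)_{e i}, (Y_D)_{mu i}
AD = (1.0 / (16 * math.pi**2)) * (1.0 / (2 * MHpm**2)) * 3 * yde * F2(r) * ydmu
Br = 48 * math.pi**3 * alpha / GF**2 * abs(AD)**2
print(f"Br(mu -> e gamma) ~ {Br:.2e}")
```

Guarding the $x\to1$ limit explicitly matters in practice, since the closed-form expressions suffer severe numerical cancellation for degenerate masses $M_{N_i}\simeq M_{H^\pm}$.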
The conversion rate divided by the muon capture rate, $CR(\mu\to e)$, reads $$\begin{aligned} CR(\mu\to e) &= \frac{1}{\Gamma_{\rm capture}}\frac{p_e E_e m_\mu^3 \alpha^3 G_F^2}{8\pi^2 Z}Z_{eff}^4F_p^2 \left\vert (Z+N)g_{LV}^{(0)}+(Z-N)g_{LV}^{(1)} \right\vert^2, \label{cr}\\ g_{LV}^{(0)} &= \frac{1}{2}\sum_{q=u,d}\left(g_{LV}^{(q)}G_V^{(q,p)}+g_{LV}^{(q)}G_V^{(q,n)}\right), \ \ \ g_{LV}^{(1)} = \frac{1}{2}\sum_{q=u,d}\left(g_{LV}^{(q)}G_V^{(q,p)}-g_{LV}^{(q)}G_V^{(q,n)}\right), \\ g_{LV}^{(q)} &= \frac{\sqrt{2}}{G_F}e^2 Q_q (A_{ND}^{\mu e}-A_D^{\mu e}) \ \ \ \ \ \ \ \ (Q_q:{\rm electric \ charge \ of \ quark} \ q),\end{aligned}$$ where $p_e$ and $E_e$ are the momentum and energy of the final state electron, and $Z$ and $N$ are the number of protons and neutrons, respectively. $Z_{eff}$ is the effective atomic charge, $F_p$ is the nuclear matrix element, and $g_{LV}^{(0)},g_{LV}^{(1)}$ are effective charges. $\Gamma_{\rm capture}$ denotes the muon capture rate. Here, the contribution from the $Z$-penguin diagram is again neglected, and that from the Higgs-penguin diagram is neglected because the up and down quark Yukawa couplings are tiny. Also, since $\cos\beta\simeq0$, box diagrams involving two quarks and two leptons do not contribute.\ $Z \to \bar{e}_\alpha e_\beta$ ------------------------------ CLFV decays of a Z boson arise from the non-dipole term $A_{ND}$. In the leading order of $M_Z^2/M_{H^\pm}^2$, the effective Lagrangian contributing to $Z\to \bar{e}_\alpha e_\beta$ decay is given by $$\begin{aligned} {\cal L}_{eff} &= -A_{ND}^{\beta\alpha} \left(-\frac{1}{2}+\sin^2\theta_W\right)g_Z \ \bar{e}_\beta\gamma_\mu P_L e_\alpha \ Z^\mu.\end{aligned}$$ The branching ratio for $Z\to \bar{e}_\alpha e_\beta$ is $$\begin{aligned} Br(Z\to \bar{e}_\alpha e_\beta) &= Br(Z\to \bar{e}_\alpha e_\alpha)\frac{g_{eL}^2}{g_{eL}^2+g_{eR}^2} M_Z^4\left\vert A_{ND}^{\beta\alpha}\right\vert^2, \\ (\ g_{eL}&=-\frac{1}{2}+\sin^2\theta_W, \ \ \ g_{eR}=\sin^2\theta_W \ ). 
\nonumber\end{aligned}$$\ $h \to \bar{e}_\alpha e_\beta$ ------------------------------ In the leading order of $m_h^2/M_{H^\pm}^2$, the effective Lagrangian contributing to $h\to \bar{e}_\alpha e_\beta$ decay is given by $$\begin{aligned} {\cal L}_{eff} &= \frac{1}{16\pi^2} \frac{\lambda_3v \, m_\alpha}{M_{H^\pm}^2}\sum_{i=1}^3(Y_D)_{\beta i}G_H(r_i)(Y_D^\dagger)_{i\alpha} \ \bar{e}_\beta P_R e_\alpha \ h, \\ G_H(x) &= \frac{1-4x+3x^2-2x^2\log x}{4(1-x)^3},\end{aligned}$$ where $\lambda_3$ is the scalar quartic coupling that appears in Eq. (\[lagrangian\]). $G_H$ is a new loop function, different from $F_2$ in $A_D$ and $G_2$ in $A_{ND}$. The branching ratio for $h\to \bar{e}_\alpha e_\beta$ is $$\begin{aligned} Br(h\to \bar{e}_\alpha e_\beta) &= Br(h\to \bar{e}_\alpha e_\alpha)\frac{1}{2} \left\vert \frac{1}{16\pi^2}\frac{1}{M_{H^\pm}^2}\frac{\lambda_3 v^2}{\sqrt{2}}\sum_{i=1}^3(Y_D)_{\beta i}G_H(r_i)(Y_D^\dagger)_{i\alpha} \right\vert^2. \label{brh}\end{aligned}$$\ Numerical Study =============== We investigate CLFV processes in the neutrinophilic Higgs+seesaw model, based on the branching ratio formulas in Section \[branchingratios\]. First, we use current experimental upper limits on CLFV branching ratios to constrain the parameters of the model. Next, under the above constraint, we predict the branching ratios of various CLFV processes including $\mu\to3e$, $\mu+{\rm Al}\to e+{\rm Al}$, $\mu+{\rm Ti}\to e+{\rm Ti}$, $Z\to e\mu$, $Z\to e\tau$, $Z\to \mu\tau$, $h\to \mu\tau$, $h\to e\tau$, and assess their detectability in the future. Assumptions on the Model Parameters {#assumptions} ----------------------------------- The branching ratio formulas of CLFV processes depend on the neutrino Dirac Yukawa matrix Eq. (\[yukawa\]), the charged scalar mass $m_{H^{\pm}}$ and the right-handed neutrino Majorana masses $M_{N_1}, M_{N_2}, M_{N_3}$.
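Like the other loop functions, $G_H$ can be coded directly; a small sketch with its limiting values handled explicitly:

```python
# Sketch of the loop function G_H entering the h -> e_alpha e_beta amplitude.
import math

def GH(x):
    """Higgs-decay loop function; GH(0) = 1/4 and GH(1) = 1/6 by continuity."""
    if abs(x - 1.0) < 1e-6:
        return 1.0 / 6.0
    if x == 0.0:
        return 1.0 / 4.0
    return (1 - 4*x + 3*x**2 - 2*x**2*math.log(x)) / (4 * (1 - x)**3)

print(GH(0.5), GH(2.0))
```
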
The neutrino Dirac Yukawa matrix depends on $v_2$, $M_{N_1}$, $M_{N_2}$, $M_{N_3}$, $m_{\nu_1}$, $m_{\nu_2}$, $m_{\nu_3}$, $U_{PMNS}$ as well as a complex-valued $3\times3$ rotation matrix $R_{\rm 3 \times 3 }$. There are too many parameters, and it is not easy to gain physical insight into the phenomenology of the model. So, we reduce the number of parameters by considering the following situation: For the charged scalar mass $m_{H^\pm}$, the most phenomenologically interesting situation is when the charged scalar particle is detectable at the LHC. Hence, we assume $$\begin{aligned} m_{H^\pm}=0.3 ~{\rm TeV}.\end{aligned}$$ For the tiny active neutrino masses $m_{\nu_1}, ~m_{\nu_2} ~{\rm and}~ m_{\nu_3}$, we consider both the Normal Hierarchy (NH) and Inverted Hierarchy (IH) cases, while focusing on the case where the lightest neutrino mass is 0, namely, we assume $$\begin{aligned} m_{\nu_1} = 0 ~({\rm NH}),~m_{\nu_3} = 0 ~({\rm IH}).\end{aligned}$$ The values of $m_{\nu_2}$ and $m_{\nu_3}$ ($m_{\nu_1}$ and $m_{\nu_2}$) in the NH (IH) case are obtained from the mass differences measured in neutrino oscillation experiments. In this paper, we employ the central values of the mass differences in NuFIT 4.1 [@Esteban:2018azc; @nufit]. For the parameters of $U_{PMNS}$, we employ the central values of the three mixing angles in NuFIT 4.1 [@Esteban:2018azc; @nufit]. As benchmark values of the Dirac phase $\delta$, we take the 3$\sigma$ bounds and the central value of the NuFIT 4.1 result [@Esteban:2018azc; @nufit] as $$\begin{aligned} \delta&=144^\circ,~221^\circ,~357^\circ ~({\rm NH}),\\ \delta&=205^\circ,~282^\circ,~348^\circ ~({\rm IH}).\end{aligned}$$ We set the Majorana phase to 0. For the Majorana masses of right-handed neutrinos, we assume them to be degenerate as $$\begin{aligned} M_{N_1}=M_{N_2}=M_{N_3}=M_N, \label{a2}\end{aligned}$$ where $M_N$ is taken to be real and positive by a phase redefinition.
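With the lightest neutrino massless, the two remaining masses follow directly from the measured splittings. A small sketch (the $\Delta m^2$ values below are approximate central values quoted for illustration only, not a substitute for the NuFIT 4.1 inputs used in the analysis):

```python
# With the lightest neutrino massless, the remaining masses follow from the
# measured splittings.  The Delta m^2 values are approximate central values,
# used here only for illustration.
import math

dm21sq = 7.39e-5   # eV^2, solar splitting
dm3lsq = 2.52e-3   # eV^2, atmospheric splitting (magnitude)

# Normal Hierarchy: m_nu1 = 0
m2_nh = math.sqrt(dm21sq)
m3_nh = math.sqrt(dm3lsq)

# Inverted Hierarchy: m_nu3 = 0
m2_ih = math.sqrt(dm3lsq)
m1_ih = math.sqrt(dm3lsq - dm21sq)

print(f"NH: m2 = {m2_nh*1e3:.2f} meV, m3 = {m3_nh*1e3:.1f} meV")
print(f"IH: m1 = {m1_ih*1e3:.1f} meV, m2 = {m2_ih*1e3:.1f} meV")
```
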
We have found numerically that the branching ratios of the CLFV processes do not change significantly even when the Majorana masses are hierarchical, e.g., $ M_{N_1}=0.1M_N$, $M_{N_2}=M_{N_3}=M_N$ or $M_{N_1}=M_{N_2}=0.1M_N$, $M_{N_3}=M_N$. For the neutrinophilic Higgs VEV $v_2$, we take it to be proportional to $\sqrt{M_N}$ as $$\begin{aligned} v_2=1\times10^{-6}\times&\sqrt{\frac{M_N}{{\rm TeV}}} ~ {\rm TeV}~({\rm NH}),\label{v2-nh}\\ v_2=2\times10^{-6}\times&\sqrt{\frac{M_N}{{\rm TeV}}} ~ {\rm TeV}~({\rm IH}). \label{v2-ih}\end{aligned}$$ These values of $v_2$ ensure $\left|Y_D\right|\sim0.05$ in each hierarchy, where we have defined $\left|Y_D\right|$ as the minimum absolute value of the Yukawa matrix components when Im$\theta_1=$Im$\theta_2=$Im$\theta_3=0$. Note that the motivation for the neutrinophilic Higgs+seesaw model is to realize a low-scale seesaw without taking very small values for the neutrino Dirac Yukawa coupling, and hence it is essential that $\left|Y_D\right|$ not be much smaller than 1. For $R_{3\times3}$, we parametrize it in terms of three complex rotation angles $\theta_j ={\rm Re}\theta_j+\mathrm{i}{\rm Im}\theta_j$ $(j=1,2,3)$ as $$\begin{aligned} R_{3\times3} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos{\theta_1} & -\sin{\theta_1} \\ 0 & \sin{\theta_1} & \cos{\theta_1} \end{array} \right) \left( \begin{array}{ccc} \cos{\theta_2} & 0 & -\sin{\theta_2} \\ 0 & 1 & 0 \\ \sin{\theta_2} & 0 & \cos{\theta_2} \end{array} \right) \left( \begin{array}{ccc} \cos{\theta_3} & -\sin{\theta_3} & 0 \\ \sin{\theta_3} & \cos{\theta_3} & 0 \\ 0 & 0 & 1 \end{array} \right). \label{r33}\end{aligned}$$ To simplify the analysis, we vary each $\theta_j$ separately while fixing the other complex angles at zero. When we vary each $\theta_j$, its real part Re$\theta_j$ does not affect the branching ratios of the CLFV processes. Therefore, we regard only the imaginary parts Im$\theta_1$, Im$\theta_2$ and Im$\theta_3$ as the parameters of $R_{3\times3}$.
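The parametrization in Eq. (\[r33\]) can be checked numerically. The sketch below (our illustration, not code from the analysis) builds $R_{3\times3}$ from the three complex angles and verifies that it remains complex orthogonal, $R R^T=1$, even for ${\rm Im}\,\theta_j\neq0$; in that case its entries grow like $\cosh({\rm Im}\,\theta_j)$ rather than staying bounded, which is why larger $|{\rm Im}\,\theta_j|$ yields a larger $Y_D$.

```python
import cmath

def rot(theta: complex, axis: int):
    """3x3 complex rotation about `axis` (0, 1 or 2), with the sign
    conventions of Eq. (r33)."""
    c, s = cmath.cos(theta), cmath.sin(theta)
    m = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    m[i][i], m[i][j], m[j][i], m[j][j] = c, -s, s, c
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def R33(t1: complex, t2: complex, t3: complex):
    """R_{3x3} = R_1(theta_1) R_2(theta_2) R_3(theta_3) as in Eq. (r33)."""
    return matmul(matmul(rot(t1, 0), rot(t2, 1)), rot(t3, 2))
```

Because $\cos^2\theta+\sin^2\theta=1$ holds for complex arguments, $R R^T=1$ is exact, while $R$ is not unitary once the angles are complex; e.g. the $(2,2)$ entry of `R33(2j, 0, 0)` has magnitude $\cosh 2\approx3.76$.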
The larger the absolute value of Im$\theta_j$, the larger $Y_D$ becomes. Thus, to maintain perturbativity, we restrict the range to $-2<$Im$\theta_j<2$. These are our assumptions on the model parameters. Consequently, for each CLFV process, such as $\mu\to3e$, we show 18 plots in the ($M_N$, Im$\theta_j$)-parameter space ($3~(\delta)\times3~({\rm Im}\theta)\times2~({\rm NH,~IH})=18$).\ Constraints on the Neutrinophilic Higgs+Seesaw Model from Charged Lepton Flavor Violating Processes --------------------------------------------------------------------------------------------------- The CLFV processes experimentally searched for are $e_\alpha \to e_\beta \gamma,~e_\alpha \to 3e_\beta,~ \mu N \to e N,~Z \to \bar{e}_\alpha e_\beta,~h \to \bar{e}_\alpha e_\beta$. For each process, experiments set an upper limit on the branching ratio (or conversion rate), which constrains the model parameter space. At present, the strongest constraint in the entire parameter space comes from the upper limit on the $\mu\to e\gamma$ branching ratio, $Br(\mu\to e\gamma)<4.2\times10^{-13}$ [@meg]. Therefore, in the study of the current experimental constraints, we can concentrate on the $\mu\to e\gamma$ process while neglecting the bounds from other CLFV processes [@babar]-[@conversionAu]. The constraint on the $(M_{N}, {\rm Im}\theta_j)$-parameter space from the bound $Br(\mu\to e\gamma)<4.2\times10^{-13}$ is displayed by the blue solid line in every figure, for both NH and IH, for $m_{H^\pm}=0.3$ TeV, $v_2=1~(2)\times10^{-6}\times\sqrt{\frac{M_N}{{\rm TeV}}}$ TeV in NH (IH), and for the benchmark values of the Dirac phase $\delta$. Additionally, the dashed blue line shows the constraint when $v_2$ is multiplied by $1/3$, so that $Y_D$ is uniformly multiplied by 3. We observe that the constraint tends to be weaker for smaller $M_N$ and $|$Im$\theta_j|$. This is because $Y_D$ is proportional to $\sqrt{M_N}$ and $R_{3\times3}$ (see Eq.
(\[yukawa\])), and so $Br(\mu\to e\gamma)$ is suppressed for small $M_N$ and $|$Im$\theta_j|$.\ Prediction on Charged Lepton Flavor Violating Processes ------------------------------------------------------- ### $ \mu \to 3e$ Among the $e_\alpha \to e_\beta \bar{e}_\beta e_\beta$ processes, the future sensitivity for the $\mu\to 3e$ decay reaches $Br(\mu\to 3e)=10^{-16}$ [@Blondel:2013ia], and so there is a good chance that this mode will be detected even when the model satisfies the current experimental bound on $Br(\mu\to e\gamma)$. Therefore, we show in figure \[figpreeN\] (Normal Hierarchy) and figure \[figpreeI\] (Inverted Hierarchy) the prediction for $Br(\mu\to 3e)$, along with the value of $Br(\mu\to e\gamma)$. In figure \[figpreeN\], the blue solid line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ for NH and $v_2$ in Eq. (\[v2-nh\]), and the region on the left of the blue solid line is excluded by the search for $\mu\to e\gamma$. The green solid line corresponds to $Br(\mu\to 3e)=10^{-16}$, the future sensitivity. Therefore, in the region between the blue solid line and the green solid line, the $\mu\to 3e$ process can be detected in the future. Figure \[figpreeI\] is the corresponding figure for IH and $v_2$ in Eq. (\[v2-ih\]). In the same figures, the blue and green dashed lines are the contours of $Br(\mu\to e\gamma)=4.2\times10^{-13}$ and $Br(\mu\to 3e)=10^{-16}$ in the case when $v_2$ is multiplied by $1/3$ and thus $Y_D$ is uniformly multiplied by 3 according to Eq. (\[yukawa\]). Since the dipole and non-dipole terms $A_D,A_{ND}$ are proportional to $Y_D^2$ whereas the box-induced term $B$ is proportional to $Y_D^4$, reducing $v_2$ affects $Br(\mu\to 3e)$ and $Br(\mu\to e\gamma)$ differently. However, such an effect is not clearly seen in the figures, as the region between the blue and green dashed lines has a similar size to that between the blue and green solid lines. [c]{} ![Prediction for $Br(\mu\to3e)$, along with the values of $Br(\mu\to e\gamma)$.
The neutrino mass hierarchy is Normal Hierarchy, and we fix $m_{H^\pm}=0.3$ TeV. We take $\delta=144^\circ,~221^\circ~{\rm and}~357^\circ$ in the first, second and third rows. In the first column, we vary Im$\theta_1\neq0$ while fixing Im$\theta_2$=Im$\theta_3=0$. In the second column, we vary Im$\theta_2\neq0$ while fixing Im$\theta_1$=Im$\theta_3=0$. In the third column, we vary Im$\theta_3\neq0$ while fixing Im$\theta_1$=Im$\theta_2=0$. The blue solid line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ for $v_2$ in Eq. (\[v2-nh\]), and the region on the left of the blue solid line is excluded by the search for $Br(\mu\to e\gamma)$. The green solid line corresponds to $Br(\mu\to3e)=10^{-16}$, the future sensitivity, for $v_2$ in Eq. (\[v2-nh\]). The blue dashed line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ and the green dashed line corresponds to $Br(\mu\to3e)=10^{-16}$ in the case when $v_2$ is multiplied by $1/3$ and thus $Y_D$ is uniformly multiplied by 3. []{data-label="figpreeN"}](eN11.eps) ![[]{data-label="figpreeN"}](eN12.eps) ![[]{data-label="figpreeN"}](eN13.eps) \ \ ![[]{data-label="figpreeN"}](eN21.eps) ![[]{data-label="figpreeN"}](eN22.eps) ![[]{data-label="figpreeN"}](eN23.eps) \ \ ![[]{data-label="figpreeN"}](eN31.eps) ![[]{data-label="figpreeN"}](eN32.eps) ![[]{data-label="figpreeN"}](eN33.eps) [c]{} ![Same as figure \[figpreeN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]).[]{data-label="figpreeI"}](eI11.eps) ![[]{data-label="figpreeI"}](eI12.eps) ![[]{data-label="figpreeI"}](eI13.eps) \ \ ![[]{data-label="figpreeI"}](eI21.eps) ![[]{data-label="figpreeI"}](eI22.eps) ![[]{data-label="figpreeI"}](eI23.eps) \ \ ![[]{data-label="figpreeI"}](eI31.eps) ![[]{data-label="figpreeI"}](eI32.eps) ![[]{data-label="figpreeI"}](eI33.eps) ### $\mu$-$e$ Conversions The processes whose sensitivity will be improved in the future are the $\mu +{\rm Al}\to e+{\rm Al}$ and $\mu +{\rm Ti}\to e+{\rm Ti}$ processes.
The future sensitivity for $CR(\mu +{\rm Al}\to e+{\rm Al})$ is $2\times10^{-17}$ [@Kuno:2013mha], and that for $CR(\mu +{\rm Ti}\to e+{\rm Ti})$ is $10^{-18}$ [@Bertuzzo:2015ada]. Therefore, we study whether the $\mu +{\rm Al}\to e+{\rm Al}$ and $\mu +{\rm Ti}\to e+{\rm Ti}$ processes can be detected in the future. In the numerical calculation of the conversion rates, we employ the values of $Z_{eff}$, $F_p$ and $\Gamma_{\rm capture}$ in Ref. [@conversion]. We comment that a peculiar property of the conversion rates $CR(\mu N\to e N)$ is that they vanish if $M_N=m_{H^\pm}$, because $A_{ND}=A_D$ at $\frac{M_{N_i}^2}{M_{H^\pm}^2}=1$. Therefore, the plots of $CR(\mu +{\rm Al}\to e+{\rm Al})$ and $CR(\mu +{\rm Ti}\to e+{\rm Ti})$ show a behavior different from the other processes around the region $M_N\simeq m_{H^\pm}=0.3$ TeV. However, this region is excluded by the $\mu\to e\gamma$ search, and so such behavior is unimportant. In figure \[figprecalN\], the solid orange line corresponds to $CR(\mu +{\rm Al}\to e+{\rm Al})=2\times10^{-17}$, the future sensitivity, for NH and $v_2$ in Eq. (\[v2-nh\]). So, in the region between the solid blue line and the solid orange line (we neglect the orange line near $M_N=0.3$ TeV), the $\mu +{\rm Al}\to e+{\rm Al}$ process can be detected in the future. Figure \[figprecalI\] is the corresponding plot for IH and $v_2$ in Eq. (\[v2-ih\]). In the same figures, the dashed orange line corresponds to $CR(\mu +{\rm Al}\to e+{\rm Al})=2\times10^{-17}$ when $v_2$ is multiplied by $1/3$, and the $\mu +{\rm Al}\to e+{\rm Al}$ process can be detected in the region between the dashed blue line and the dashed orange line for this $v_2$. Since the dipole and non-dipole operators $A_D,A_{ND}$ are both proportional to $Y_D^2$, $Br(\mu\to e\gamma)$ and $CR(\mu +{\rm Al}\to e+{\rm Al})$ both scale as $1/v_2^4$. Hence, the relative location of the contours of $Br(\mu\to e\gamma)$ and $CR(\mu +{\rm Al}\to e+{\rm Al})$ does not depend on $v_2$.
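The $v_2$-scaling argument can be made explicit as a worked example. Schematically, with $Y_D\propto 1/v_2$,

```latex
A_D,\ A_{ND}\ \propto\ Y_D^2\ \propto\ v_2^{-2}
\quad\Longrightarrow\quad
Br(\mu\to e\gamma)\ \propto\ v_2^{-4},
\qquad
CR(\mu N\to eN)\ \propto\ v_2^{-4},
```

so replacing $v_2\to v_2/3$ multiplies both rates by $3^4=81$ and leaves their ratio, and hence the relative location of the two contours, unchanged. By contrast, the box term obeys $B\propto Y_D^4\propto v_2^{-4}$, so its contribution to $Br(\mu\to 3e)$ scales as $v_2^{-8}$ (a factor $3^8=6561$ under the same replacement), which is why $Br(\mu\to 3e)$ and $Br(\mu\to e\gamma)$ respond differently to a change of $v_2$.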
In figure \[figprectiN\], the solid purple line corresponds to $CR(\mu +{\rm Ti}\to e+{\rm Ti})=10^{-18}$, the future sensitivity, for NH and $v_2$ in Eq. (\[v2-nh\]). So, in the region between the solid blue line and the solid purple line (we neglect the purple line near $M_N=0.3$ TeV), the $\mu +{\rm Ti}\to e+{\rm Ti}$ process can be detected in the future. Figure \[figprectiI\] is the corresponding plot for IH and $v_2$ in Eq. (\[v2-ih\]). In the same figures, the dashed purple line corresponds to $CR(\mu +{\rm Ti}\to e+{\rm Ti})=10^{-18}$ when $v_2$ is multiplied by $1/3$, and the $\mu +{\rm Ti}\to e+{\rm Ti}$ process can be detected in the region between the dashed blue line and the dashed purple line for this $v_2$. Just as for Al, the relative location of the contours of $Br(\mu\to e\gamma)$ and $CR(\mu +{\rm Ti}\to e+{\rm Ti})$ does not depend on $v_2$. [c]{} ![Prediction for $CR(\mu +{\rm Al}\to e+{\rm Al})$, along with the values of $Br(\mu\to e\gamma)$. The neutrino mass hierarchy is Normal Hierarchy, and we fix $m_{H^\pm}=0.3$ TeV. We take $\delta=144^\circ,~221^\circ~{\rm and}~357^\circ$ in the first, second and third rows. In the first column, we vary Im$\theta_1\neq0$ while fixing Im$\theta_2$=Im$\theta_3=0$. In the second column, we vary Im$\theta_2\neq0$ while fixing Im$\theta_1$=Im$\theta_3=0$. In the third column, we vary Im$\theta_3\neq0$ while fixing Im$\theta_1$=Im$\theta_2=0$. The solid blue line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ for $v_2$ in Eq. (\[v2-nh\]), and the region on the left of the solid blue line is excluded by the search for $Br(\mu\to e\gamma)$. The solid orange line corresponds to $CR(\mu +{\rm Al}\to e+{\rm Al})=2\times10^{-17}$, the future sensitivity, for $v_2$ in Eq. (\[v2-nh\]).
The dashed blue line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ and the dashed orange line corresponds to $CR(\mu +{\rm Al}\to e+{\rm Al})=2\times10^{-17}$ when $v_2$ is multiplied by $1/3$.[]{data-label="figprecalN"}](calN11.eps) ![[]{data-label="figprecalN"}](calN12.eps) ![[]{data-label="figprecalN"}](calN13.eps) \ \ ![[]{data-label="figprecalN"}](calN21.eps) ![[]{data-label="figprecalN"}](calN22.eps) ![[]{data-label="figprecalN"}](calN23.eps) \ \ ![[]{data-label="figprecalN"}](calN31.eps) ![[]{data-label="figprecalN"}](calN32.eps) ![[]{data-label="figprecalN"}](calN33.eps) [c]{} ![ Same as figure \[figprecalN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]).
[]{data-label="figprecalI"}](calI11.eps) ![[]{data-label="figprecalI"}](calI12.eps) ![[]{data-label="figprecalI"}](calI13.eps) \ \ ![[]{data-label="figprecalI"}](calI21.eps) ![[]{data-label="figprecalI"}](calI22.eps) ![[]{data-label="figprecalI"}](calI23.eps) \ \ ![[]{data-label="figprecalI"}](calI31.eps) ![[]{data-label="figprecalI"}](calI32.eps) ![[]{data-label="figprecalI"}](calI33.eps) [c]{} ![Same as figure \[figprecalN\] except that the prediction for $CR(\mu +{\rm Ti}\to e+{\rm Ti})$ is presented by the purple lines, for Normal Hierarchy. The solid purple line corresponds to $CR(\mu +{\rm Ti}\to e+{\rm Ti})=10^{-18}$ for $v_2$ in Eq. (\[v2-nh\]), and the dashed purple line corresponds to $CR(\mu +{\rm Ti}\to e+{\rm Ti})=10^{-18}$ when $v_2$ is multiplied by $1/3$.
[]{data-label="figprectiN"}](ctiN11.eps) ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN12.eps) ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN13.eps) \ \ ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN21.eps) ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN22.eps) ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN23.eps) \ \ ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN31.eps) ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN32.eps) ![Same caption as the first panel.[]{data-label="figprectiN"}](ctiN33.eps) [c]{} ![Same as figure \[figprectiN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]).[]{data-label="figprectiI"}](ctiI11.eps) ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI12.eps) ![Same as figure \[figprectiN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq.
(\[v2-ih\]).[]{data-label="figprectiI"}](ctiI13.eps) \ \ ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI21.eps) ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI22.eps) ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI23.eps) \ \ ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI31.eps) ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI32.eps) ![Same caption as the first panel.[]{data-label="figprectiI"}](ctiI33.eps)

### $Z \to \bar{e}_\alpha e_\beta$

There are three modes: $Z\to e\mu$, $Z\to e\tau$ and $Z\to \mu\tau$. In figure \[figprezN\], the solid green, orange and red lines correspond to the contours of $Br(Z\to e\mu)=10^{-16}$, $Br(Z\to e\tau)=10^{-16}$ and $Br(Z\to \mu\tau)=10^{-16}$, respectively, for NH and $v_2$ in Eq. (\[v2-nh\]). In the same figure, the dashed green, orange and red lines correspond to the contours of $Br(Z\to e\mu)=10^{-16}$, $Br(Z\to e\tau)=10^{-16}$ and $Br(Z\to \mu\tau)=10^{-16}$, respectively, when $v_2$ is multiplied by $1/3$. Figure \[figprezI\] is the corresponding figure for IH and $v_2$ in Eq. (\[v2-ih\]). Since the coefficients of the dipole and non-dipole operators, $A_D$ and $A_{ND}$, are both proportional to $Y_D^2$, the branching ratios $Br(\mu\to e\gamma)$ and $Br(Z \to \bar{e}_\alpha e_\beta)$ both scale as $1/v_2^4$. Hence, the relative locations of the contours of $Br(\mu\to e\gamma)$ and of $Br(Z\to e\mu)$, $Br(Z\to e\tau)$ and $Br(Z\to \mu\tau)$ do not depend on $v_2$.
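Because both branching ratios carry the same $1/v_2^4$ factor, rescaling $v_2$ moves every contour by a common factor and leaves their relative positions unchanged. A minimal numerical sketch of this statement (the prefactors `C_MEG` and `C_Z` are illustrative placeholders, not values from the model or the figures):

```python
# Illustrative check: two observables that both scale as 1/v2^4
# keep a v2-independent ratio, so their contours shift together.
C_MEG = 1.0e-13   # hypothetical prefactor for Br(mu -> e gamma)
C_Z = 3.0e-17     # hypothetical prefactor for Br(Z -> e mu)

def br_meg(v2):
    # Br(mu -> e gamma) ~ |Y_D|^4 ~ 1/v2^4
    return C_MEG / v2**4

def br_z(v2):
    # Br(Z -> e mu) carries the same Y_D^2 in the amplitude
    return C_Z / v2**4

for v2 in (1.0, 1.0 / 3.0):
    print(f"v2 = {v2:.3f}: Br(Z)/Br(mu->e gamma) = {br_z(v2) / br_meg(v2):.3e}")

# Multiplying v2 by 1/3 enhances both branching ratios by the common
# factor 3^4 = 81, which is why the dashed contours are displaced
# relative to the solid ones but not relative to each other.
```

This is why the dashed and solid contour families in the figures are simply shifted copies of one another.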
We observe that in all cases the $Z\to e\mu$, $Z\to e\tau$ and $Z\to \mu\tau$ decays can proceed at rates of about $10^{-16}$ even when the model satisfies the current experimental bound on $Br(\mu \to e\gamma)$. Unfortunately, a rate of $10^{-16}$ is well below the future sensitivity of the high-luminosity $Z$-factory proposed in Ref. [@Abada:2014cca]. [c]{} ![Prediction for $Br(Z\to e\mu)$, $Br(Z\to e\tau)$ and $Br(Z\to \mu\tau)$, along with the values of $Br(\mu\to e\gamma)$. The neutrino mass hierarchy is Normal Hierarchy, and we fix $m_{H^\pm}=0.3$ TeV. We take $\delta=144^\circ,~221^\circ~{\rm and}~357^\circ$ in the first, second and third rows. In the first column, we vary Im$\theta_1\neq0$ while fixing Im$\theta_2$=Im$\theta_3=0$. In the second column, we vary Im$\theta_2\neq0$ while fixing Im$\theta_1$=Im$\theta_3=0$. In the third column, we vary Im$\theta_3\neq0$ while fixing Im$\theta_1$=Im$\theta_2=0$. The solid blue line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ for $v_2$ in Eq. (\[v2-nh\]), and the region on the left of the solid blue line is excluded by the search for $Br(\mu\to e\gamma)$. The solid green, orange and red lines correspond to the contours of $Br(Z\to e\mu)=10^{-16}$, $Br(Z\to e\tau)=10^{-16}$ and $Br(Z\to \mu\tau)=10^{-16}$, respectively, for $v_2$ in Eq. (\[v2-nh\]). The dashed blue line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ and the dashed green, orange and red lines correspond to the contours of $Br(Z\to e\mu)=10^{-16}$, $Br(Z\to e\tau)=10^{-16}$ and $Br(Z\to \mu\tau)=10^{-16}$, respectively, when $v_2$ is multiplied by $1/3$. []{data-label="figprezN"}](zN11.eps) ![Same caption as the first panel.[]{data-label="figprezN"}](zN12.eps) ![Same caption as the first panel.[]{data-label="figprezN"}](zN13.eps) \ \ ![Same caption as the first panel.[]{data-label="figprezN"}](zN21.eps) ![Same caption as the first panel.[]{data-label="figprezN"}](zN22.eps) ![Same caption as the first panel.[]{data-label="figprezN"}](zN23.eps) \ \ ![Same caption as the first panel.[]{data-label="figprezN"}](zN31.eps) ![Same caption as the first panel.[]{data-label="figprezN"}](zN32.eps) ![Same caption as the first panel.[]{data-label="figprezN"}](zN33.eps) [c]{} ![Same as figure \[figprezN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]).[]{data-label="figprezI"}](zI11.eps) ![Same caption as the first panel.[]{data-label="figprezI"}](zI12.eps) ![Same caption as the first panel.[]{data-label="figprezI"}](zI13.eps) \ \ ![Same caption as the first panel.[]{data-label="figprezI"}](zI21.eps) ![Same caption as the first panel.[]{data-label="figprezI"}](zI22.eps) ![Same caption as the first panel.[]{data-label="figprezI"}](zI23.eps) \ \ ![Same caption as the first panel.[]{data-label="figprezI"}](zI31.eps) ![Same caption as the first panel.[]{data-label="figprezI"}](zI32.eps) ![Same as figure \[figprezN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq.
(\[v2-ih\]).[]{data-label="figprezI"}](zI33.eps)

### $h \to e \tau$ and $h \to \mu \tau$

Among the $h \to \bar{e}_\alpha e_\beta$ ($\alpha\neq\beta$) decay modes, the diagrams for $h\to e\tau$ and $h\to \mu\tau$ involve the large $\tau$ Yukawa coupling, so these modes have much larger branching ratios than $h\to e\mu$. We therefore concentrate on the former two. $Br(h \to e\tau)$ and $Br(h \to \mu\tau)$ involve one unknown coupling constant, $\lambda_3$. We present our prediction assuming $\lambda_3=1$; since the prediction scales with $\lambda_3^2$, it is straightforward to obtain the corresponding results for other values of $\lambda_3$. In figure \[figprehN\], the solid green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-12}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-11}$, respectively, for NH and $v_2$ in Eq. (\[v2-nh\]). In the same figure, the dashed green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-12}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-11}$, respectively, when $v_2$ is multiplied by $1/3$. Since $Br(\mu\to e\gamma)$ and $Br(h \to \bar{e}_\alpha e_\beta)$ ($\alpha\neq\beta$) both scale as $1/v_2^4$, the relative location of their contours does not depend on $v_2$. Figure \[figprehI\] is the corresponding figure for IH and $v_2$ in Eq. (\[v2-ih\]). Here, the green lines correspond to $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and the orange lines correspond to $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$. We observe that for NH we can hope for detection of the $h\to e \tau$ decay at a rate $Br(h\to e\tau)/Br(h\to \tau\tau)\sim10^{-12}$ and of the $h\to \mu \tau$ decay at a rate $Br(h\to \mu\tau)/Br(h\to \tau\tau)\sim10^{-11}$ even when the model satisfies the current experimental bound on $Br(\mu \to e\gamma)$. If IH is the correct mass hierarchy, both $Br(h\to e\tau)$ and $Br(h\to \mu\tau)$ decrease by roughly a factor of $10$.
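The $\lambda_3^2$ scaling stated above makes it easy to translate the $\lambda_3=1$ predictions to other couplings; a small hedged helper (the reference branching ratio below is a placeholder, not a value read off the figures):

```python
# Rescale a Higgs LFV prediction computed at lambda3 = 1 to another
# value of lambda3, using the Br ~ lambda3^2 scaling quoted in the text.
def rescale_br(br_at_lambda3_one, lambda3):
    return br_at_lambda3_one * lambda3**2

# Hypothetical reference value for Br(h -> mu tau)/Br(h -> tau tau)
# at lambda3 = 1, chosen only to illustrate the rescaling:
br_ref = 1.0e-11
print(rescale_br(br_ref, 0.5))  # lambda3 = 0.5 reduces the prediction by a factor of 4
```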
Unfortunately, the predicted rate is too small to explain the hint of $h\to \mu \tau$ decay reported by CMS [@Khachatryan:2015kon]. [c]{} ![ Prediction for $Br(h\to e\tau)$ and $Br(h\to \mu\tau)$, along with the values of $Br(\mu\to e\gamma)$. The neutrino mass hierarchy is Normal Hierarchy, and we fix $m_{H^\pm}=0.3$ TeV. We take $\delta=144^\circ,~221^\circ~{\rm and}~357^\circ$ in the first, second and third rows. In the first column, we vary Im$\theta_1\neq0$ while fixing Im$\theta_2$=Im$\theta_3=0$. In the second column, we vary Im$\theta_2\neq0$ while fixing Im$\theta_1$=Im$\theta_3=0$. In the third column, we vary Im$\theta_3\neq0$ while fixing Im$\theta_1$=Im$\theta_2=0$. The solid blue line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ for $v_2$ in Eq. (\[v2-nh\]), and the region on the left of the solid blue line is excluded by the search for $Br(\mu\to e\gamma)$. The solid green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-12}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-11}$, respectively, for $v_2$ in Eq. (\[v2-nh\]). The dashed blue line corresponds to $Br(\mu\to e\gamma)=4.2\times10^{-13}$ and the dashed green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-12}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-11}$, respectively, when $v_2$ is multiplied by $1/3$.[]{data-label="figprehN"}](hN11.eps) ![Same caption as the first panel.[]{data-label="figprehN"}](hN12.eps) ![Same caption as the first panel.[]{data-label="figprehN"}](hN13.eps) \ \ ![Same caption as the first panel.[]{data-label="figprehN"}](hN21.eps) ![Same caption as the first panel.[]{data-label="figprehN"}](hN22.eps) ![Same caption as the first panel.[]{data-label="figprehN"}](hN23.eps) \ \ ![Same caption as the first panel.[]{data-label="figprehN"}](hN31.eps) ![Same caption as the first panel.[]{data-label="figprehN"}](hN32.eps) ![Same caption as the first panel.[]{data-label="figprehN"}](hN33.eps) [c]{} ![Same as figure \[figprehN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]) and that the solid green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, and the dashed green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, when $v_2$ is multiplied by $1/3$.[]{data-label="figprehI"}](hI11.eps) ![Same caption as the first panel.[]{data-label="figprehI"}](hI12.eps) ![Same caption as the first panel.[]{data-label="figprehI"}](hI13.eps) \ \ ![Same caption as the first panel.[]{data-label="figprehI"}](hI21.eps) ![Same caption as the first panel.[]{data-label="figprehI"}](hI22.eps) ![Same as figure \[figprehN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq.
(\[v2-ih\]) and that the solid green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, and the dashed green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, when $v_2$ is multiplied by $1/3$.[]{data-label="figprehI"}](hI23.eps) \ \ ![Same as figure \[figprehN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]) and that the solid green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, and the dashed green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, when $v_2$ is multiplied by $1/3$.[]{data-label="figprehI"}](hI31.eps) ![Same as figure \[figprehN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. (\[v2-ih\]) and that the solid green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, and the dashed green and orange lines correspond to the contours of $Br(h\to e\tau)/Br(h\to \tau\tau)=10^{-13}$ and $Br(h\to \mu\tau)/Br(h\to \tau\tau)=10^{-12}$, respectively, when $v_2$ is multiplied by $1/3$.[]{data-label="figprehI"}](hI32.eps) ![Same as figure \[figprehN\] except that the neutrino mass hierarchy is Inverted Hierarchy and $v_2$ is given in Eq. 
(\[v2-ih\]); the contour values and line styles are as in the preceding panels of this figure.[]{data-label="figprehI"}](hI33.eps)

Summary
=======

We have investigated the neutrinophilic Higgs + seesaw model, in which the right-handed neutrinos couple only to an extra Higgs field that develops a tiny VEV and also possess Majorana masses, so that the low-scale seesaw is realized naturally. We have concentrated on the CLFV processes induced by loop diagrams of the charged scalar and the heavy neutrinos. First, we studied the current constraint on the model's parameter space from the search for $\mu\to e\gamma$. Second, we predicted the branching ratios of the other CLFV processes ($\mu\to3e$, $\mu+{\rm Al}\to e+{\rm Al}$, $\mu+{\rm Ti}\to e+{\rm Ti}$, $Z\to e\mu$, $Z\to e\tau$, $Z\to \mu\tau$, $h\to e\tau$, $h\to\mu\tau$) and discussed whether they can be detected in the future. An important finding is that, given the projected experimental sensitivities, the $\mu\to3e$, $\mu+{\rm Al}\to e+{\rm Al}$ and $\mu+{\rm Ti}\to e+{\rm Ti}$ processes can be detected in a wide region of the parameter space, even when the model satisfies the current stringent bound on the $\mu\to e\gamma$ branching ratio.

Acknowledgment {#acknowledgment .unnumbered}
==============

This work is partially supported by Scientific Grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan, Nos. 17K05415, 18H04590 and 19H051061 (NH), and No. 19K147101 (TY).
---
abstract: 'Music highlights are valuable contents for music services. Most existing methods have focused on low-level signal features. We propose a method for extracting highlights using high-level features from convolutional recurrent attention networks (CRAN). CRAN utilizes convolution and recurrent layers for sequential learning, with an attention mechanism. The attention allows CRAN to capture the snippets that are significant for distinguishing between genres, and these snippets thus serve as a high-level feature. CRAN was evaluated on over 32,000 tracks popular in Korea over a two-month period. Experimental results show that our method outperforms three baseline methods in both quantitative and qualitative evaluations. We also analyze the effects of attention and sequence information on performance.'
address: |
    ^1^Clova AI Research and ^2^Clova Music, NAVER Corp., Korea\
    ^3^Hong Kong University of Science and Technology, China
bibliography:
- 'mhe.bib'
title: Automatic Music Highlight Extraction using Convolutional Recurrent Attention Networks
---

Introduction
============

Identifying music highlights is an important task for online music services. For example, most online music services offer the first minute of each track as a free preview. However, if the highlight of each track can be identified, it is much better to play the highlight as the preview. Users can then quickly browse tracks by listening to the highlights and select their favorites. Highlights can also contribute to music recommendation [@cai2007scalable; @su2010music; @celma2010music]: using highlights, users can efficiently check discovery-based playlists containing unknown or newly released tracks. Most existing methods have focused on low-level signal features, including pitch and loudness obtained via MFCC and FFT [@lu2003automated; @xu2009music]. These approaches are therefore limited in extracting snippets that reflect high-level properties of a track, such as genre and theme.
Although extraction by human experts guarantees high-quality results, it basically does not scale. In this paper, we assume that high-level information such as genre contributes to extracting highlights, and thus propose a new deep learning-based technique for extracting music highlights. Our approach, convolutional recurrent attention-based highlight extraction (CRAN), uses both mel-spectrogram features and high-level acoustic features generated by an attention model [@xu2015show]. First, CRAN finds highlight candidates by focusing on the core regions of different genres. This is achieved by setting track genres as the output of CRAN and learning to attend to the parts that are significant for characterizing the genres. Then, the highlights are determined by combining the energy of the mel-spectrogram with the attention scores. The genre classification loss is backpropagated, and the weights, including those of the attention layer, are updated in an end-to-end manner. In addition, CRAN is trained in an unsupervised way with respect to finding highlights, because it does not use ground-truth data of highlight regions for training. We evaluate CRAN on 32,000 popular tracks served from December 2016 to January 2017 through NAVER Music, a famous Korean online music service. The evaluation dataset consists of songs of various genres, including K-pop and world music. For the experiments, we extract a highlighted 30-second clip per track using CRAN, and conduct a qualitative evaluation on a Likert scale (1 to 5) as well as quantitative verification using ground-truth data generated by human experts. The results show that CRAN's highlights outperform three baselines: the first one minute, an energy-based method, and an attention model with no recurrent layer (CAN). CRAN also outperforms CAN and models with no attention with respect to genre classification.
Furthermore, we analyze the relationship between the attention and traditional low-level signals of tracks to show the attention's role in identifying highlights.

Music Data Description
======================

We select 32,083 songs with 10 genres that were popular in NAVER Music from December 2016 to January 2017, i.e., over two months. The detailed data are summarized in Table \[table1\]. Note that some tracks belong to more than one genre, so the summation of tracks per genre is larger than the number of tracks in the data. The data are separated into training, validation, and test sets. Considering a real-world service scenario, we separate the data based on the ranking of each track, as shown in Table \[table2\]. We use two ranking criteria: popularity and release date. For quantitative evaluation, we extracted a ground-truth dataset with the highlights of 300 tracks marked by eight human experts, as explained in Section 4.1. The experts marked the times at which they believe the highlight parts start and stop by listening to the tracks.

  Genre      \# songs   Ratio   Genre     \# songs   Ratio
  ---------- ---------- ------- --------- ---------- -------
  Dance      5,634      14.9    Jazz      1,649      4.4
  Ballad     8,224      21.9    R&B       3,619      9.6
  Teuroteu   315        0.8     Indie     3,268      8.7
  Hiphop     4,373      11.6    Classic   891        2.3
  Rock       7,135      19.0    Elec      2,511      6.7
  Total      37,619     100

  : \[table1\]Constitution of tracks per genre

  Data         Ratio(%)   Rank range(%)      \# of data
  ------------ ---------- ------------------ ------------
  Training     80         20 - 100           25,667
  Val / Test   10 / 10    10 - 20 / 0 - 10   3,208

  : \[table2\]Data separation for experiments

We convert *mp3* files to mel-spectrograms, which are two-dimensional matrices whose rows and columns correspond to mel-frequency bins and time slots, respectively. Each mel-spectrogram is generated from a time sequence sampled from an *mp3* file at a sample rate of 8372 Hz using librosa [@mcfee2015librosa]. The sample rate was set to twice the frequency of C8 in equal temperament (12-TET).
The number of mel bins is 128 and the FFT window size is 1024, which makes a single time slot of a mel-spectrogram about 61 milliseconds. The input representation $\mathbf{x}$ is generated as follows:

1. [$PT(\mathbf{x}) \ge 240s$: use the first 240 seconds of $\mathbf{x}$]{}

2. [$PT(\mathbf{x}) < 240s$: fill in the missing part with the last 240-$PT(\mathbf{x})$ seconds of the track]{}

where $PT(\mathbf{x})$ denotes the playing time of $\mathbf{x}$. Therefore, we obtain a 128 $\times$ 4,000 matrix from each track.

Attention-Based Highlight Extraction
====================================

Convolutional Recurrent Attention Networks
------------------------------------------

CNNs have been applied in many music pattern recognition approaches [@schluter2014improved; @ullrich2014boundary; @choi2016convolutional]. A low-level feature such as the mel-spectrogram can be abstracted into a high-level feature by repeated convolution operations, and then used for calculating the genre probabilities in the output layer. The attention layer aims to find which regions of the learned features play a significant role in distinguishing between genres. The attention results can later be used to identify track highlights. Instead of 2D convolution [@choi2016convolutional], we use 1D convolution, defining each mel bin as a channel, to reduce training time without losing accuracy.

Specifically, given the mel-spectrogram of a track $\mathbf{x}$, an intermediate feature $\mathbf{u}$ is generated through the convolution and pooling operations: $$\begin{aligned}
\mathbf{u} = Concatenate(Maxpooling^{n}(Conv^{k}(\mathbf{x})))
\end{aligned}$$ where $n$ and $k$ denote the numbers of pooling and convolution layers. We use the exponential linear unit (elu) as the non-linear function [@clevert2015fast]. After that, $\mathbf{u}$ is separated into a sequence of $T$ time slot vectors, $\mathbf{U}=\{ {\mathbf{u}^{(t)}}\} _{t = 1}^T$, which are fed into a bidirectional LSTM [@hochreiter1997long; @graves2012long].
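As a concrete illustration of the fixed-length input construction above, the following minimal numpy sketch (our own, not the authors' released code) crops or pads a mel-spectrogram to the 128 $\times$ 4,000 target; padding repeats the track's own tail, following the second rule above.

```python
import numpy as np

N_MELS, N_SLOTS = 128, 4000  # ~61 ms per slot, i.e., 240 seconds in total

def to_fixed_length(mel: np.ndarray) -> np.ndarray:
    """Crop or pad a (N_MELS, T) mel-spectrogram to exactly N_SLOTS slots.

    Tracks longer than 240 s keep their first 240 s; shorter tracks are
    padded with a copy of their own last (240 - PT) seconds.
    """
    if mel.shape[1] >= N_SLOTS:
        return mel[:, :N_SLOTS]
    out = mel
    # The loop only matters for very short tracks, where one copy of the
    # tail is not enough to reach 240 s (a corner case the paper does not
    # spell out; repeating the tail again is our assumption).
    while out.shape[1] < N_SLOTS:
        deficit = N_SLOTS - out.shape[1]
        out = np.concatenate([out, out[:, -deficit:]], axis=1)
    return out
```

The mel-spectrogram itself would be produced beforehand, e.g. with librosa's mel-spectrogram routine at the sample rate and FFT settings described above.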
Then, we obtain a set of $T$ similarity vectors, $\mathbf{V}$, from the $\tanh$ values of ${\mathbf{u}^{(t)}}$ and of a vector transformed from the output of the LSTM, $\mathbf{u'}$: $$\begin{aligned}
\mathbf{u'} = BiLSTM(\mathbf{U})\\
\mathbf{V} = \{ {\mathbf{v}^{(t)}}\} _{t = 1}^T= g(\mathbf{U}) \otimes f(\mathbf{u}')
\label{eqn1}\\
g(\mathbf{U}) = \tanh ({}^{TS}W\mathbf{U}) \\
f(\mathbf{u}') = Re(\tanh ({}^{FC}W\mathbf{u}'), T)\end{aligned}$$ where $\otimes$ denotes element-wise multiplication and $Re(\mathbf{x}, T)$ is a function that makes $T$ duplicates of $\mathbf{x}$. ${}^{TS}W$ and ${}^{FC}W$ are the weight matrices of the time-separated connection for attention and of the fully connected layer (FC1) applied to the output of the LSTM, as shown in Fig. \[fig1\].

CRAN uses the soft attention approach [@nam2016dual]. The attention score of $\{ {\mathbf{u}^{(t)}}\}$ is the softmax value of $\{ {\mathbf{v}^{(t)}}\}$, computed using a two-layer network: $$\begin{aligned}
{\alpha _i} = Softmax \{ \tanh ({}^AW{\mathbf{v}^{(i)}})\}
\end{aligned}$$ where ${}^AW$ is the weight matrix of the connection between the similarity vectors and each node of the attention score layer. Then, $\mathbf{z}$ is calculated by the attention score-weighted summation of the similarity vectors over all time slots: $$\begin{aligned}
\mathbf{z} = P\sum\nolimits_{t = 1}^T {{\alpha _t}{v^{(t)}}}\end{aligned}$$ where $P$ is a matrix for dimensionality compatibility. The context vector $\mathbf{m}$ is then obtained by element-wise multiplication between the $\tanh$ values of $\mathbf{z}$ and of the FC vector: $$\begin{aligned}
\mathbf{m} = \tanh (\mathbf{z}) \otimes \tanh ({}^{FC}W\mathbf{u'})\end{aligned}$$ Finally, the probability of a genre $y$ is defined as the softmax function of $\mathbf{m}$. The loss function is the categorical cross-entropy [@deng2006cross].
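To make the attention computation concrete, here is a minimal numpy sketch of the similarity, attention-score, and context-vector steps above. The toy dimensions and the randomly initialized matrices (`W_ts`, `W_fc`, `W_a`, `P`) are illustrative stand-ins for the learned parameters, not the paper's actual sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 16                     # toy number of time slots and feature size

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

U   = rng.normal(size=(T, d))    # time-slot features u^(t)
u_p = rng.normal(size=d)         # BiLSTM output u'
W_ts = rng.normal(size=(d, d))   # ^TS W: time-separated connection
W_fc = rng.normal(size=(d, d))   # ^FC W: fully connected layer FC1
W_a  = rng.normal(size=d)        # ^A W: attention scoring weights
P    = rng.normal(size=(d, d))   # dimensionality-compatibility matrix

# Similarity vectors: V = g(U) (x) f(u'), with g, f as tanh projections
V = np.tanh(U @ W_ts.T) * np.tanh(W_fc @ u_p)

# Attention scores: alpha_i = softmax(tanh(^A W v^(i)))
alpha = softmax(np.tanh(V @ W_a))

# Attended summary: z = P * sum_t alpha_t v^(t)
z = P @ (alpha @ V)

# Context vector: m = tanh(z) (x) tanh(^FC W u')
m = np.tanh(z) * np.tanh(W_fc @ u_p)
```

The genre probabilities would then be a softmax over a linear map of `m`, trained with categorical cross-entropy; the attention vector `alpha` is what the next subsection reuses for highlight extraction.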
Extracting Track Highlights
---------------------------

We use the mel-spectrogram and the attention scores of a track together for highlight extraction. The highlight score of each frame is computed by summing the attention scores and the mean energies: $$\begin{aligned}
{{\tilde e}^{(n)}} = \gamma {\alpha _n} + \frac{{(1 - \gamma )}}{{128}}\sum\nolimits_{i = 1}^{128} {e_i^{(n)}} \\
{H^n} = \beta \sum\nolimits_{s = 0}^{S - 1} {{{\tilde e}^{(n + s)}}}  + (1 - \beta )\left( {\Delta {e^{(n)}} + {\Delta ^2}{e^{(n)}}} \right)\end{aligned}$$ where $e_i^{(n)}$ and $S$ denote the energy of the $i$-th mel channel in the $n$-th time frame and the duration of a highlight, respectively. $\beta$ and $\gamma$ are arbitrary constants in (0, 1). $\Delta {e^{(n)}}$ and ${\Delta ^2}{e^{(n)}}$ denote the differences of ${e^{(n)}}$ and of $\Delta {e^{(n)}}$, and they enable the model to prefer regions of rapid energy increase.

Experimental Results
====================

Parameter Setup and Evaluation Methodology
------------------------------------------

The hyperparameters of CRAN are summarized in Table \[table3\]. We compare the highlights extracted by CRAN to those generated by a method summing the energy of the mel-spectrogram, the first one-minute snippet (F1M), and a convolutional attention model without a recurrent layer (CAN). In addition, CRAN and CAN are compared to models without attention for genre classification, called CRN and CNN, respectively.
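The highlight-scoring rule defined in the previous subsection can be sketched as follows (our own illustration; the discrete difference operator and window handling are our assumptions, and the highlight is taken as the $S$-frame window starting at the arg-max of $H^n$):

```python
import numpy as np

def highlight_start(mel: np.ndarray, attn: np.ndarray, S: int,
                    beta: float = 0.5, gamma: float = 0.1) -> int:
    """Return the start frame n maximizing the highlight score H^n.

    mel  : (128, N) mel-spectrogram energies e_i^(n)
    attn : (N,) per-frame attention scores alpha_n
    S    : highlight duration in frames
    """
    e = mel.mean(axis=0)                      # (1/128) * sum_i e_i^(n)
    e_tilde = gamma * attn + (1 - gamma) * e  # attention + mean energy
    d1 = np.gradient(e)                       # delta e^(n), one discrete choice
    d2 = np.gradient(d1)                      # delta^2 e^(n)
    scores = [beta * e_tilde[n:n + S].sum() + (1 - beta) * (d1[n] + d2[n])
              for n in range(len(e) - S + 1)]
    return int(np.argmax(scores))
```

With the default $\beta=0.5$ and $\gamma=0.1$ from Table \[table3\], the windowed energy term dominates and the difference terms break ties toward windows that open on a rapid energy rise.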
  -------------------------------------------------------------
  - Convolution & pooling layers: 2 & 1, 4 pairs
  - \# and size of filters: 64 & \[3, 3, 3, 3\]
  - Pooling method and size: max & \[2, 2, 2, 2\]
  - \# and node size of LSTM layers: 2 & 512
  - Dropout (recurrent / fully connected): 0.2 / 0.5
  - Number and node size of FC layers: 2, \[500, 300\]
  - Optimizer: Adam [@kingma2014adam] (LR: 0.005, decay: 0.01)
  - $\beta$, $\gamma$: 0.5, 0.1
  -------------------------------------------------------------

  : \[table3\]Parameter setup of CRAN

We define two metrics for the evaluation. One is the time overlap between the ground-truth and extracted highlights; the other is the recall of the extracted highlights. Given a track $\mathbf{x}$ and an extracted highlight $H$, the two metrics are defined as follows: $$\begin{aligned}
O(\mathbf{x},H) = PT(GT(\mathbf{x}) \cap H)\\
Recall(\mathbf{x},H) = \left\{ {\begin{array}{*{20}{c}}
{1,\,if\,O(\mathbf{x},H) > \,0.5 \times PT(H)}\\
{0,\,otherwise\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,}
\end{array}} \right.\end{aligned}$$ where $PT(\mathbf{x})$ and $GT(\mathbf{x})$ denote the playing time and the ground-truth highlight of $\mathbf{x}$, respectively. In addition, five human experts rated the highlights extracted by each model on a scale of \[1, 5\] as the qualitative evaluation.

  Models                 Overlap (s)          Recall      Qual
  ---------------------- -------------------- ----------- -----------
  First 1 minute (F1M)   6.96$\pm$10.70       0.258       1.793
  Mel-spectrogram        19.76$\pm$10.5       0.796       4.256
  CAN                    21.47$\pm$9.84       0.857       4.256
  CRAN                   **21.63$\pm$9.78**   **0.860**   **4.268**

  : \[table4\]Comparison of quantitative performance

Quantitative and Qualitative Evaluations
----------------------------------------

Table \[table4\] presents the results. CRAN yields the best accuracy in both the qualitative and quantitative evaluations.
This indicates that the high-level features improve the quality of the extracted music highlights. Interestingly, F1M leads to very poor performance even though its playing time is twice as long. This suggests that the conventional preview needs to be improved by automatic highlight extraction to enhance user experience.

Table \[table5\] presents the results per genre with respect to overlap and recall; the values denote the overlapped time. Overall, CRAN yields slightly better performance than CAN and outperforms the mel energy-based method and F1M. This indicates that the attention scores are helpful for improving highlight quality in most genres. Interestingly, all models provide relatively low performance on the hiphop and indie genres, resulting from their rap-oriented or informal composition.

  Genre         Size   F1M     Mel         CAN         CRAN
  ------------- ------ ------- ----------- ----------- -----------
  Dance         57     6.56    20.72       22.37       **22.40**
  Ballad        113    3.41    22.14       23.37       **23.75**
  Teuroteu      5      18.8    20.0        21.20       **22.34**
  Hiphop        42     13.33   14.52       15.79       **15.81**
  Rock          27     7.15    19.78       22.11       **22.40**
  Jazz          6      10.0    20.33       **20.83**   **20.83**
  R&B           53     7.56    18.89       **21.77**   21.58
  Indie Music   12     13.25   **18.75**   18.08       18.08
  Classical     5      0.0     20.0        25.50       **25.75**
  Electronic    9      7.22    17.33       17.11       **18.67**

  : \[table5\]Comparison of mean overlapped time per genre

  Recall@3     CNN     CAN     CRN     CRAN
  ------------ ------- ------- ------- -----------
  Popularity   0.804   0.898   0.858   **0.918**
  NewRelease   0.802   0.831   0.791   **0.871**

  : \[table6\]Comparison of genre classification performance

Genre Classification and Hyperparameter Effects
-----------------------------------------------

We investigate the effects of the attention mechanism on genre learning and classification performance. Recall@3 was used as the evaluation metric, considering the ambiguity and similarity between genres [@panagakis2008music; @silla2007automatic].
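Recall@k counts a track as correct when any of its true genres appears among the top-k predictions. A small sketch of this metric follows (our own plausible formulation, since tracks may carry several genre labels and the paper does not spell out the multi-label handling):

```python
import numpy as np

def recall_at_k(probs: np.ndarray, labels, k: int = 3) -> float:
    """probs: (N, G) predicted genre probabilities; labels: iterable of
    sets of true genre indices. A track counts as a hit when its true
    genre set intersects the k highest-probability predictions."""
    top_k = np.argsort(probs, axis=1)[:, -k:]
    hits = [bool(set(row) & set(true)) for row, true in zip(top_k, labels)]
    return float(np.mean(hits))
```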
Table \[table6\] depicts the classification performance of each model. As shown in Table \[table6\], the attention mechanism considerably improves the performance, by at least 0.05 on the two test datasets. In addition, CRAN provides better accuracy than CAN, indicating that sequential learning is useful for classifying genres. Fig. \[fig4\] shows the classification performance for different model types and parameters with respect to time and accuracy. From Figs. \[fig4\](a) and (b), the use of both sequential modeling and the attention mechanism prevents overfitting, as seen by comparing the loss of CRAN to that of the other models. It is interesting that the number and hidden node size of the recurrent layers contribute little to improving the loss of the model, as seen in Figs. \[fig4\](c) and (d). The use of attention does not require much training time, while the use of recurrent layers slightly increases the model size, as shown in Fig. \[fig4\](d).

Attention Analysis
------------------

Fig. \[fig5\] presents the distribution of the mean and the variance of the attention scores derived from CRAN per genre. As shown in Fig. \[fig5\], the time slots with large attention scores vary by genre. In particular, ballad, rock, and R&B tracks show similar attention patterns. The hiphop and classical genres show a relatively low standard deviation of attention scores due to their characteristics [@gall2005music]. This result indicates that the attention learned by CRAN captures the properties of a genre. Fig. \[fig7\] presents the correlation coefficient between the attention scores and the energy of the mel-spectrogram for each genre. We find that regions with higher energy in the latter part of a track are likely to be a highlight. In addition, in classical music, high-energy regions receive larger attention scores across the entire time range, compared to other genres.
We infer that high-level features can play a complementary role in extracting information from tracks, considering the different patterns between attention scores and low-level signals.

Concluding Remarks
==================

We have demonstrated a new music highlight extraction method that uses high-level acoustic information as well as low-level signal features, based on convolutional recurrent attention networks (CRAN) trained in an unsupervised manner with respect to highlights. We evaluated CRAN on 32,083 tracks with 10 genres. Quantitative and qualitative evaluations show that CRAN outperforms the baselines. The results also indicate that the attention scores generated by CRAN play an important role in extracting highlights. As future work, CRAN-based highlights will be applied to the Clova Music service on the AI platform of NAVER and LINE.
---
abstract: 'Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately and rely on hand-crafted visual features from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of a multi-scale input layer, a U-shape convolutional network, a side-output layer, and a multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn a rich hierarchical representation, while the side-output layer acts as an early classifier that produces companion local prediction maps for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. To further improve the segmentation performance, we also introduce a polar transformation, which provides a representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation results on the ORIGA dataset. Simultaneously, the proposed method also obtains satisfactory glaucoma screening performance with the calculated CDR values on both the ORIGA and SCES datasets.'
author:
- 'Huazhu Fu, Jun Cheng, Yanwu Xu, Damon Wing Kee Wong, Jiang Liu, and Xiaochun Cao [^1] [^2] [^3] [^4] [^5]'
bibliography:
- 'IEEEabrv.bib'
- 'Deep\_CDR.bib'
title: 'Joint Optic Disc and Cup Segmentation Based on Multi-label Deep Network and Polar Transformation'
---

Deep learning, optic disc segmentation, optic cup segmentation, glaucoma screening, cup to disc ratio.

Introduction
============

Glaucoma is the second leading cause of blindness worldwide (second only to cataracts), as well as the foremost cause of irreversible blindness [@Tham2014]. Since vision loss from glaucoma cannot be reversed, early screening and detection methods are essential to preserve vision and life quality. One major glaucoma screening technique is optic nerve head (ONH) assessment, which employs a binary classification to identify glaucomatous and healthy subjects [@Garway-Heath352]. However, manual assessment by trained clinicians is time-consuming and costly, and thus not suitable for population screening.

![Structure of the optic nerve head. The region enclosed by the green dotted circle is the optic disc (OD); the central bright zone enclosed by the blue dotted circle is the optic cup (OC); and the region between them is the neuroretinal rim. The vertical cup to disc ratio (CDR) is calculated by the ratio of the vertical cup diameter (VCD) to the vertical disc diameter (VDD). PPA: Peripapillary Atrophy.[]{data-label="img-cover"}](figure/cover_img){width="1\linewidth"}

For large-scale screening, automatic ONH assessment methods are needed. Several clinical measurements have been proposed, such as the vertical cup to disc ratio (CDR) [@Jonas2000], the rim to disc area ratio (RDAR), and the disc diameter [@HANCOXOD199959]. Among them, CDR is well accepted and commonly used by clinicians.
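Given binary OD and OC masks, the vertical CDR defined above can be computed directly. A minimal numpy sketch follows (our own illustration, not the paper's code; it assumes filled, roughly upright elliptical masks):

```python
import numpy as np

def vertical_cdr(od_mask: np.ndarray, oc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary (H, W) masks.

    The vertical diameter of a filled region is taken as its largest
    per-column pixel count; CDR = VCD / VDD.
    """
    vdd = int(od_mask.sum(axis=0).max())   # vertical disc diameter (VDD)
    vcd = int(oc_mask.sum(axis=0).max())   # vertical cup diameter (VCD)
    return vcd / vdd if vdd > 0 else 0.0
```

In a screening pipeline, this value would be thresholded or fed to a classifier, since a larger CDR suggests a higher risk of glaucoma.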
In a color fundus image, the optic disc (OD) appears as a bright yellowish elliptical region and can be divided into two distinct zones: a central bright zone, the optic cup (OC), and a peripheral region, the neuroretinal rim, as shown in Fig. \[img-cover\]. The CDR is calculated as the ratio of the vertical cup diameter (VCD) to the vertical disc diameter (VDD). In general, a larger CDR suggests a higher risk of glaucoma, and vice versa. Accurate segmentation of the OD and OC is essential for CDR measurement. Some methods automatically measure the disc and cup from 3-D optical coherence tomography (OCT) [@Lee2010; @WuMenglin15; @Fu2015TBME; @Fu2017TMI]. However, OCT is not easily available due to its high cost, so the fundus image is still the modality referred to by most clinicians. A number of works have been proposed to segment the OD and/or OC from the fundus image [@Joshi2011; @Cheng2013; @Zheng2013miccai; @Almazroa2015]. The main segmentation techniques include color and contrast thresholding, boundary detection, and region segmentation methods [@Almazroa2015]. In these methods, the pixels or patches of fundus images are classified as background, disc, or cup regions by a classifier learned on various visual features. However, most existing methods are based on hand-crafted features (e.g., RGB color, texture, Gabor filters, and gradients), which lack sufficiently discriminative representations and are easily affected by pathological regions and low contrast quality. In addition, most methods segment the OD and OC separately, i.e., segmenting the OD first, followed by the OC, without considering their mutual relation. In this paper, we consider the OD and OC together, and provide a one-stage framework based on deep learning techniques. Deep learning techniques have recently been demonstrated to yield highly discriminative representations that have aided many computer vision tasks.
For example, Convolutional Neural Networks (CNNs) have brought substantial performance gains in image classification [@Krizhevsky2012] and segmentation [@Long2017_FCN]. For retinal images, Gulshan *et al.* demonstrated that a deep learning system can obtain high sensitivity and specificity for detecting referable diabetic retinopathy [@Gulshan2016]. In fundus vessel segmentation, deep learning systems [@Fu2016ISBI; @Fu2016; @Maninis2016] also achieve state-of-the-art performance. These successes have motivated our investigation of deep learning for disc and cup segmentation from fundus images. In this paper, we address OD and OC segmentation as a multi-label task and solve it using a novel end-to-end deep network. The main contributions of our work include: 1. We propose a fully automatic method for joint OD and OC segmentation using a multi-label deep network, named M-Net. Our M-Net is an end-to-end deep learning system, which contains a multi-scale U-shape convolutional network with side-output layers to learn discriminative representations and produce segmentation probability maps. 2. For joint OD and OC segmentation, a multi-label loss function based on the Dice coefficient is proposed, which deals well with the multi-label and imbalanced data of pixel-wise segmentation for fundus images. 3. Moreover, a polar transformation is utilized in our method to transfer the fundus image into a polar coordinate system, which introduces the advantages of spatial constraint, equivalent augmentation, and balanced cup proportion, and improves the segmentation performance. 4. We evaluate the effectiveness and generalization capability of the proposed M-Net on the ORIGA dataset. Our M-Net achieves state-of-the-art segmentation performance, with average overlapping errors of $0.07$ and $0.23$ for OD and OC segmentation, respectively. 5. Furthermore, the CDR is calculated based on the segmented OD and OC for glaucoma screening.
Our proposed method obtains the highest performance, with areas under the curve (AUC) of $0.85$ and $0.90$ on the ORIGA and SCES datasets, respectively. The remainder of this paper is organized as follows. We begin by reviewing techniques related to OD/OC segmentation in Section \[sec-related\]. The details of our system and its components are presented in Section \[sec-method\]. To verify the efficacy of our method, extensive experiments are conducted in Section \[sec\_exp\], and we conclude with final remarks in Section \[sec\_conclusion\]. Related Works {#sec-related} ============= **Optic Disc Segmentation:** The OD is the location where ganglion cell axons exit the eye to form the optic nerve, through which visual information from the photo-receptors is transmitted to the brain. Early work proposed template-based methods to obtain the OD boundary. For example, Lowell *et al.* employed an active contour model [@Lowell2004] to detect the contour based on image gradient. In [@Aquino2010; @Lu2011], circular-transformation techniques are employed to obtain the OD boundary. In [@Joshi2011], local texture features around each point of interest in a multidimensional feature space are utilized to provide robustness against variations in the OD region. Recently, pixel classification based methods have been proposed to transform the boundary detection problem into a pixel classification task, obtaining satisfactory performance. Cheng *et al.* [@Cheng2013] utilize a superpixel classifier to segment the OD and OC, exploiting various hand-crafted visual features at the superpixel level to enhance detection accuracy. In [@Abra2007], disparity values extracted from stereo image pairs are introduced to distinguish the OD from the background. However, reliance on hand-crafted features makes these methods susceptible to low quality images and pathological regions. **Optic Cup Segmentation:** The OC is restricted to the region inside the OD.
Segmenting the OC from fundus images is a more challenging task due to its low contrast boundary. In [@Wong2008], an automatic OC segmentation algorithm based on a variational level set is proposed. Later, blood vessel kinks were found to be useful for OC segmentation [@WongCup2009], and a similar concept, named vessel bend, is utilized in [@Joshi2011]. The main challenge in detecting kinks or vessel bends is that detection is often affected by natural vessel bending that does not lie on the OC boundary. Moreover, pixel classification based methods similar to those for OD segmentation [@Cheng2013] have also been introduced to OC segmentation. Various hand-crafted visual features (e.g., center surround statistics, color histogram, and low-rank superpixel representation) are employed in [@Xu2011; @Cheng2013; @Xu2014] to represent the pixel/superpixel for OC segmentation. A common limitation of these algorithms is that they rely heavily on hand-crafted visual features, which are mainly based on the contrast between the neuroretinal rim and the cup. ![image](figure/Framework){width="1\linewidth"} **Joint OD and OC Segmentation:** Most existing methods focus only on single-region segmentation (i.e., OC or OD). In particular, for cup segmentation, the OD boundary can provide useful prior information, e.g., shape and structure constraints [@Xu2012b]. The works in [@Joshi2011; @Cheng2013] deal with the OD and OC in two separate stages with different features. Zheng *et al.* integrated OD and OC segmentation within a graph-cut framework [@Zheng2013miccai]. However, they consider the OD and OC as mutually exclusive labels, which means any pixel in the fundus image can belong to only one label (i.e., background, OD, or OC). Moreover, the method [@Zheng2013miccai] only employs color features within a Gaussian Mixture Model to decide the posterior probability of each pixel, which makes it unsuitable for fundus images with low contrast.
In [@Sevastopolsky2017], a modified U-Net deep network is introduced to segment the OD and OC. However, it still performs OD and OC segmentation sequentially. In [@ZILLY201728], an ensemble learning method is proposed to extract the OC and OD based on a CNN architecture. An entropy sampling technique is used to select informative points, and then a graph cut algorithm is employed to obtain the final segmentation result. However, this multiple-step deep system limits its effectiveness in the training phase. Proposed Method {#sec-method} =============== Fig. \[img-framework\] illustrates the overall flowchart of our OD and OC segmentation method, which contains the M-Net deep network and the fundus image polar transformation. In our method, we first localize the disc center by using an existing automatic disc detection method [@XU20072063], and then transfer the original fundus image into the polar coordinate system based on the detected disc center. The transferred image is then fed into our M-Net, which generates multi-label probability maps for the OD and OC regions. Finally, the inverse polar transformation recovers the segmentation map back to Cartesian coordinates. M-Net Architecture ------------------ Our M-Net is an end-to-end multi-label deep network, which consists of four main parts. The first is a multi-scale layer used to construct an image pyramid input and achieve multi-level receptive field fusion. The second is a U-shape convolutional network, which is employed as the main body structure to learn a rich hierarchical representation. The third part is the side-output layer, which works on the early convolutional layers to support deep layer supervision. Finally, a multi-label loss function is proposed to enable joint OD and OC segmentation. ### U-shape Convolutional Network In our paper, we modify the U-shape convolutional network (U-Net) in [@Ronneberger2015] as the main body of our deep architecture.
U-Net is an efficient fully convolutional neural network for biomedical image segmentation. Similar to the original U-Net architecture, our method consists of an encoder path (left side) and a decoder path (right side). Each encoder stage performs convolution with a filter bank to produce a set of encoder feature maps, using the element-wise rectified-linear (ReLU) activation function. The decoder path also utilizes convolution layers to output decoder feature maps. Skip connections transfer the corresponding feature maps from the encoder path and concatenate them with the up-sampled decoder feature maps. Finally, the high dimensional feature representation at the output of the final decoder layer is fed to a trainable multi-label classifier. In our method, the final classifier utilizes a $1 \times 1$ convolutional layer with *Sigmoid* activation as the pixel-wise classifier to produce the probability map. For multi-label segmentation, the output is a $K$ channel probability map, where $K$ is the class number ($K=2$ for OD and OC in our work). The predicted label at each pixel corresponds to the class with maximum probability. ### Multi-scale Input Layer The multi-scale input, or image pyramid, has been demonstrated to improve the quality of segmentation effectively. Different from other works, which feed multi-scale images to multi-stream networks separately and combine the final output maps in the last layer [@Li2016TIP; @Liu2017], our M-Net employs average pooling layers to downsample the image naturally and construct a multi-scale input in the encoder path. Our multi-scale input layer has the following advantages: 1) integrating multi-scale inputs into the decoder layers to avoid a large growth of parameters; 2) increasing the network width of the decoder path.
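The multi-scale input construction can be made concrete with a short sketch. The snippet below is a hypothetical NumPy reimplementation (not the authors' Keras code) of building the image pyramid with repeated 2×2 average pooling, as the average pooling layers in the encoder path do:

```python
import numpy as np

def avg_pool2x2(img):
    """2x2 average pooling with stride 2 (NumPy analogue of an average pooling layer)."""
    h, w, c = img.shape
    # Crop to even dimensions, then average over non-overlapping 2x2 blocks.
    return img[:h - h % 2, :w - w % 2, :].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def multiscale_inputs(img, levels=4):
    """Build the image pyramid fed to successive encoder stages."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(avg_pool2x2(pyramid[-1]))
    return pyramid
```

For a $400\times400$ polar input and four pyramid levels (an assumed depth), this yields inputs of size 400, 200, 100, and 50, so each scale can be concatenated with the matching encoder feature maps without a large growth of parameters.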
### Side-output Layer In our M-Net, we also introduce the side-output layer, which acts as a classifier that produces a companion local output map for early layers [@Lee2015]. Let $\mathbf{W}$ denote the parameters of all the standard convolutional layers, and suppose there are $M$ side-output layers in the network, whose weights are denoted as $\mathbf{w}=(\mathbf{w}^{(1)},...,\mathbf{w}^{(M)})$. The objective function of the side-output layers is given as: $$\mathcal{L}_{s}(\mathbf{W}, \mathbf{w}) = \sum^M_{m=1} \alpha _m L^{(m)}_s(\mathbf{W}, \mathbf{w}^{(m)}), \label{Eq_CNN_loss}$$ where $\alpha _m$ is the fusion weight of each side-output loss ($\alpha _m = 0.25$ in our paper), $M$ is the number of side-outputs, and $L^{(m)}_s (,)$ denotes the multi-label loss of the $m$-th side-output layer. To directly utilize the side-output prediction maps, we employ an average layer to combine all side-output maps into the final prediction map. The main advantages of the side-output layer are as follows. First, the side-output layer back-propagates the side-output loss, together with the final layer loss, to the early layers in the decoder path, which relieves the gradient vanishing problem and helps train the early layers. It can be treated as a special bridge link between the loss and the early layers. Second, multi-scale fusion has been demonstrated to achieve high performance, and the side-output layer supervises the output map of each scale to produce better results. ### Multi-label Loss Function {#sec-loss} In our work, we formulate OD and OC segmentation as a multi-label problem. Existing segmentation methods usually follow the multi-class setting, which assigns each instance to one unique label among multiple classes. By contrast, a multi-label method learns an independent binary classifier for each class, and assigns each instance multiple binary labels.
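The difference between the two settings can be illustrated with a toy sketch. The helper functions below are hypothetical (not from the paper's code) and convert a disc mask and a cup mask into a multi-class label map versus the $K=2$ multi-label encoding used here, where a cup pixel also carries the disc label:

```python
import numpy as np

def multiclass_labels(disc_mask, cup_mask):
    """Multi-class setting: one exclusive label per pixel (0=background, 1=disc, 2=cup)."""
    labels = np.zeros(np.asarray(disc_mask).shape, dtype=int)
    labels[np.asarray(disc_mask, bool)] = 1
    labels[np.asarray(cup_mask, bool)] = 2   # cup overrides disc: labels are exclusive
    return labels

def multilabel_labels(disc_mask, cup_mask):
    """Multi-label setting: K=2 independent binary maps; the disc overlays the cup."""
    cup = np.asarray(cup_mask, bool)
    disc = np.asarray(disc_mask, bool) | cup  # every cup pixel is also a disc pixel
    return np.stack([disc, cup], axis=0).astype(int)
```

Under the multi-class encoding a cup pixel loses its disc membership, whereas the multi-label encoding keeps both labels, which is what the loss in the next subsection exploits.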
Especially for OD and OC segmentation, the disc region overlays the cup pixels, which means a pixel marked as cup is also labeled as disc. Moreover, in glaucoma cases, the disc pixels excluding the cup region form a thin ring, which makes the disc label extremely imbalanced relative to the background label under the multi-class setting. Thus, the multi-label method, which considers OD and OC as two independent binary classifiers, is more suitable for addressing these issues. In our method, we propose a multi-label loss function based on the Dice coefficient. The Dice coefficient is a measure of overlap widely used to assess segmentation performance when the ground truth is available [@Crum2006]. Our multi-label loss function $L_s$ is defined as: $$L_s = 1 - \sum_{k}^{K} \dfrac{2 w_k \sum_{i}^{N} p_{(k,i)} g_{(k,i)}}{\sum_{i}^{N} p_{(k,i)}^2 + \sum_{i}^{N} g_{(k,i)}^2} , \label{Eq_loss}$$ where $N$ is the number of pixels, $p_{(k,i)} \in [0, 1]$ and $g_{(k,i)} \in \{0, 1\} $ denote the predicted probability and the binary ground truth label for class $k$, respectively, $K$ is the number of classes, and $w_k$ are class weights with $\sum_k w_k =1$. Our multi-label loss function in Eq. (\[Eq\_loss\]) reduces to the traditional Dice coefficient loss by setting $K=1$. In our method, we set $K=2$ for OD and OC segmentation. Note that the Dice loss function measures the foreground mask overlap ratio, and can deal with the imbalance between foreground (i.e., OD or OC) and background pixels. Under our multi-label setting, a pixel can be labeled as OD or/and OC independently, so the imbalance issue does not exist between OD and OC. $w_k$ in Eq. (\[Eq\_loss\]) is a trade-off weight controlling the contributions of OD and OC. For glaucoma screening, both the OD and OC are important, thus we set $w_k = 0.5$.
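The loss of Eq. (\[Eq\_loss\]) is compact to implement. The following is an illustrative NumPy version (the paper's system is built on Keras; this sketch only mirrors the formula), with predictions and ground truth flattened to shape $(K, N)$ and $w_k = 0.5$:

```python
import numpy as np

def multilabel_dice_loss(p, g, w=(0.5, 0.5)):
    """Multi-label Dice loss: L_s = 1 - sum_k 2 w_k <p_k, g_k> / (|p_k|^2 + |g_k|^2).
    p: predicted probabilities in [0, 1], g: binary ground truth, both of shape (K, N)."""
    p, g = np.asarray(p, dtype=float), np.asarray(g, dtype=float)
    loss = 1.0
    for pk, gk, wk in zip(p, g, w):
        loss -= 2.0 * wk * (pk * gk).sum() / ((pk ** 2).sum() + (gk ** 2).sum())
    return loss
```

A perfect binary prediction gives a loss of $0$, while predicting all zeros against a non-empty ground truth gives a loss of $1$.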
Our multi-label loss function $L_s$ is differentiable, yielding the gradient: $$\begin{aligned} \dfrac{\partial L_s}{\partial p_{(k,i)} }= \sum_{k}^{K} 2 w_k \left[ - \dfrac{ g_{(k,i)} }{ \sum_{i}^{N} p_{(k,i)}^2 + \sum_{i}^{N} g_{(k,i)}^2 } \right. \nonumber \\ \left. + \dfrac{ 2 p_{(k,i)} \sum_{i}^{N} p_{(k,i)} g_{(k,i)} }{(\sum_{i}^{N} p_{(k,i)}^2 + \sum_{i}^{N} g_{(k,i)}^2)^2} \right] .\end{aligned}$$ This loss is efficiently integrated into back-propagation via standard stochastic gradient descent. Polar Transformation for Fundus Image ------------------------------------- ![Illustration of the mapping from the Cartesian coordinate system (A) to the polar coordinate system (C) using the polar transformation. The point $p(u,v)$ in Cartesian coordinates corresponds to the point $p'(\theta, r)$ in polar coordinates. (B) and (D) are the corresponding ground truth, where yellow, red, and black regions denote the optic cup, optic disc, and background, respectively.[]{data-label="img-polar"}](figure/palor_flow){width="1\linewidth"} In our method, we introduce a polar transformation to improve the OD and OC segmentation performance. The pixel-wise polar transformation transfers the original fundus image to the polar coordinate system. Let $p(u,v)$ denote a point on the fundus image plane, where the origin is set at the disc center $O(u_o, v_o)$ and $(u,v)$ are the Cartesian coordinates, as shown in Fig. \[img-polar\] (A). The corresponding point in the polar coordinate system is $p'(\theta, r)$, as shown in Fig. \[img-polar\] (C), where $r$ and $\theta$ are the radius and directional angle of the original point $p$, respectively. The relation between the polar and Cartesian coordinates is as follows: $$\left\{ {\begin{array}{{l}} u = r \cos \theta \\ v = r \sin \theta \end{array}} \right. \Leftrightarrow \left\{ {\begin{array}{{l}} r=\sqrt{u^2 + v^2} \\ \theta = \tan ^{-1} (v/u) \end{array}} . \right.
\label{Eq_pt}$$ The height and width of the transferred polar image are the transformation radius $R$ and the discretized angular range $2\pi$, respectively. The polar transformation provides a pixel-wise representation of the original image in the polar coordinate system, which has the following properties:\ **1) Spatial Constraint:** In the original fundus image, a useful geometric constraint is that the OC should lie within the OD region, as shown in Fig. \[img-polar\] (B). This radial relationship is difficult to enforce in the original Cartesian coordinates. By contrast, our polar transformation converts this radial relationship into a spatial one, where the cup, disc, and background regions appear in an ordered layer structure, as shown in Fig. \[img-polar\] (D). This layer-like spatial structure is convenient to use; in particular, layer-based segmentation methods [@Lang2013; @Dufour2013] can be employed as post-processing.\ **2) Equivalent Augmentation:** Since the polar transformation is a pixel-wise mapping, data augmentation on the original fundus image is equivalent to augmentation in polar coordinates. For example, moving the expansion center $O(u_o, v_o)$ is equivalent to drift cropping transformations in polar coordinates. Using different transformation radii $R$ is the same as augmenting with various scaling factors. Thus the data augmentation for deep learning can be done during the polar transformation with various parameters.\ **3) Balancing Cup Proportion:** In the original fundus image, the distribution of OC/background pixels is heavily biased. Even in the cropped ROI, the cup region still accounts for a low proportion. Using Fig. \[img-polar\] (B) as an example, the cup region occupies only about $4\%$. This extremely imbalanced proportion easily leads to bias and overfitting when training the deep model. Our polar transformation flattens the image around the OD center, which enlarges the cup region by interpolation and increases the OC proportion. As shown in Fig.
\[img-polar\] (D), the ratio of the cup region increases to $23.4\%$ of the ROI, which is more balanced than in the original fundus image. The balanced regions help avoid overfitting during model training and further improve the segmentation performance. Note that the method in [@Chisako2009] also utilizes the polar transformation, to detect the cup outline based on a depth map estimated from stereo retinal fundus image pairs. Our work differs significantly from [@Chisako2009]. 1) The motivations are different. The polar transformation in [@Chisako2009] aims at finding the strongest depth edge in the radial direction as initial candidate points of the cup border. In our work, we use the polar transformation to obtain a spatial constraint and to augment the cup/disc region proportion. 2) The methods are different. The method in [@Chisako2009] detects the OD and OC boundaries sequentially, and the polar transformation is used only for OC segmentation. Our method segments the OD and OC regions jointly, and considers their mutual relation in polar coordinates. Experiments {#sec_exp} =========== Implementation -------------- Our M-Net is implemented in Python with Keras and a Tensorflow backend. During training, we employ stochastic gradient descent (SGD) to optimize the deep model. We use a gradually decreasing learning rate starting from $0.0001$ and a momentum of $0.9$. The transformation radius $R$ is set to $R=400$, and the directional angles are discretized into $400$ distinct bins, so the size of the transferred polar image is $400\times400$. The output of our M-Net is a 2-channel posterior probability map for OD and OC, where each pixel value represents the probability. A fixed threshold of $0.5$ is employed to get a binary mask from the probability map. As in previous works [@Cheng2013; @Cheng2017SVM], the largest connected region in the OD/OC mask is selected, and ellipse fitting is utilized to generate the final segmentation result.
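With these settings ($R = 400$, $400$ angular bins), the pixel-wise polar transformation can be sketched as follows. This is an illustrative NumPy implementation using inverse mapping with nearest-neighbor sampling, treating image rows as the $u$ axis; it is an assumption-laden sketch, not the authors' released code:

```python
import numpy as np

def polar_transform(img, center, R=400, n_theta=400):
    """Map img of shape (H, W) or (H, W, C) into polar coordinates around `center`.
    Output pixel (r, t) samples img at u = u_o + r*cos(theta), v = v_o + r*sin(theta),
    i.e. the forward relations of the transform, with nearest-neighbor interpolation."""
    u_o, v_o = center
    r = np.arange(R, dtype=float)[:, None]               # one radius per output row
    theta = 2.0 * np.pi * np.arange(n_theta) / n_theta   # one angular bin per column
    u = np.clip(np.rint(u_o + r * np.cos(theta)).astype(int), 0, img.shape[0] - 1)
    v = np.clip(np.rint(v_o + r * np.sin(theta)).astype(int), 0, img.shape[1] - 1)
    return img[u, v]
```

The inverse transformation that maps the segmentation result back to Cartesian coordinates follows the same relations with the roles of the two coordinate systems exchanged.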
Segmentation Experiments ------------------------ We first evaluate the OD and OC segmentation performance. We employ the ORIGA dataset [@Zhang2010a] containing 650 fundus images with 168 glaucomatous eyes and 482 normal eyes. The 650 images with manual ground truth boundaries are divided into 325 training images (including 73 glaucoma cases) and 325 testing images (including 95 glaucoma cases), the same split as in [@Xu2014; @Cheng2017BOE]. To evaluate the segmentation performance, we use the overlapping error ($E$) and balanced accuracy ($A$) as the evaluation metrics for the OD, OC, and rim regions: $$E = 1 - \dfrac{Area(S \bigcap G)}{Area(S \bigcup G)}, \; A = \dfrac{1}{2} (Sen + Spe),$$ with $$Sen = \dfrac{TP}{TP+FN}, \; Spe = \dfrac{TN}{TN+FP},$$ where $S$ and $G$ denote the segmented mask and the manual ground truth, respectively. $TP$ and $TN$ denote the number of true positives and true negatives, respectively, and $FP$ and $FN$ denote the number of false positives and false negatives, respectively. Moreover, we follow the clinical convention and compute the vertical cup to disc ratio (CDR), an important indicator for glaucoma screening. When the CDR is greater than a threshold, the eye is classified as glaucomatous; otherwise, as healthy. Thus an evaluation metric, named the absolute CDR error $\delta_E$, is defined as: $\delta _E = | CDR_S - CDR_G |,$ where $CDR_G$ denotes the manual CDR from a trained clinician, and $CDR_S$ is the CDR calculated from the segmented result.
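These metrics are straightforward to compute from binary masks; a minimal NumPy sketch (hypothetical helper functions, not from the paper's code) is:

```python
import numpy as np

def overlap_error(S, G):
    """E = 1 - Area(S intersect G) / Area(S union G) for binary masks."""
    S, G = np.asarray(S, bool), np.asarray(G, bool)
    return 1.0 - (S & G).sum() / (S | G).sum()

def balanced_accuracy(S, G):
    """A = (Sen + Spe) / 2 from the pixel-wise confusion counts."""
    S, G = np.asarray(S, bool), np.asarray(G, bool)
    tp, tn = (S & G).sum(), (~S & ~G).sum()
    fp, fn = (S & ~G).sum(), (~S & G).sum()
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def cdr_error(cdr_s, cdr_g):
    """Absolute CDR error: delta_E = |CDR_S - CDR_G|."""
    return abs(cdr_s - cdr_g)
```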
\

  Method                     $E_{disc}$   $A_{disc}$   $E_{cup}$   $A_{cup}$   $E_{rim}$   $A_{rim}$   $\delta _E$
  -------------------------- ------------ ------------ ----------- ----------- ----------- ----------- -------------
  R-Bend [@Joshi2011]        0.129        -            0.395       -           -           -           0.154
  ASM [@Yin2011]             0.148        -            0.313       -           -           -           0.107
  Superpixel [@Cheng2013]    0.102        0.964        0.264       0.918       0.299       0.905       0.077
  LRR [@Xu2014]              -            -            0.244       -           -           -           0.078
  QDSVM [@Cheng2017SVM]      0.110        -            -           -           -           -           -
  U-Net [@Ronneberger2015]   0.115        0.959        0.287       0.901       0.303       0.921       0.102
  Joint U-Net                0.108        0.961        0.285       0.913       0.325       0.903       0.083
  Our M-Net                  0.083        0.972        0.256       0.914       0.265       0.921       0.078
  Joint U-Net + PT           0.072        0.979        0.251       0.914       0.250       0.935       0.074
  Our M-Net + PT             **0.071**    **0.983**    **0.230**   **0.930**   **0.233**   **0.941**   **0.071**

\ \[Tab\_Seg\_score\] We compare our M-Net with several state-of-the-art OD/OC segmentation approaches: the relevant-vessel bends (R-Bend) method in [@Joshi2011], the active shape model (ASM) method in [@Yin2011], the superpixel-based classification (Superpixel) method in [@Cheng2013], the quadratic divergence regularized SVM (QDSVM) method in [@Cheng2017SVM], and the low-rank superpixel representation (LRR) method in [@Xu2014]. Additionally, we compare with the deep learning method U-Net [@Ronneberger2015]. We report two results of U-Net: the original U-Net segmenting the OC and OD separately, and U-Net with our multi-label loss function (Joint U-Net) segmenting the OC and OD jointly. We also provide segmentation results with/without the polar transformation (PT). The performances are shown in Table \[Tab\_Seg\_score\]. R-Bend [@Joshi2011] provides a parameterization technique based on vessel bends, and ASM [@Yin2011] employs a circular Hough transform initialization to segment the OD and OC regions. These two bottom-up methods extract the OD and OC regions separately, and do not perform well on the ORIGA dataset. The Superpixel method in [@Cheng2013] utilizes superpixel classification to detect the OD and OC boundaries.
It obtains a better performance than the other two bottom-up methods [@Joshi2011; @Yin2011]. The LRR [@Xu2014] and QDSVM [@Cheng2017SVM] methods obtain good results. However, they focus only on either OD or OC segmentation, and thus cannot calculate the CDR for glaucoma screening. Joint U-Net with our multi-label loss utilizes the mutual relation of the OD and OC, and obtains a better performance than the traditional U-Net [@Ronneberger2015]. Our M-Net with multi-scale input and side-output layers achieves a higher score than the single-scale network and the superpixel method [@Cheng2013]. This demonstrates that the multi-scale input and side-output layers are useful for guiding early layer training. One major advantage of the polar transformation, a contribution of our work, is that it augments the proportion of the cup region and makes the areas of the disc/cup and background more balanced. The balanced regions help avoid overfitting during model training and further improve the segmentation performance. From Table \[Tab\_Seg\_score\], the polar transformation reduces the $E_{cup}$ score by about $0.03$ for Joint U-Net and $0.02$ for M-Net. Note that the performance of Joint U-Net with PT is slightly better than that of M-Net without PT. This suggests that the gain from the polar transformation may be larger than that from the multi-scale input and side-output layers. Finally, our M-Net with PT achieves the best performance, outperforming the other state-of-the-art methods. ![image](figure/Exp_CDRSeg){width="1\linewidth"} Fig. \[img-exp-seg\] shows visual examples of the segmentation results, where the first two rows are normal eyes and the remaining rows are glaucoma cases. For the superpixel method [@Cheng2013], the segmented OC is smaller than the ground truth in glaucoma cases, which may cause an under-estimated CDR.
The deep learning methods (e.g., Joint U-Net and M-Net) obtain a more accurate cup boundary, but easily generate a larger OD. By contrast, our M-Net with PT can effectively and accurately segment the OD and OC regions. The last row in Fig. \[img-exp-seg\] shows a challenging case for segmentation, where the image is blurred and has low contrast for identifying the OC boundary. For this case, all the methods fail to produce an accurate OC segmentation. This issue could potentially be addressed in future work through the use of a more powerful network or additional image enhancement pre-processing. Glaucoma Screening ------------------ We also evaluate the proposed method on glaucoma screening by using the calculated CDR value. Two datasets are used: the ORIGA dataset and the Singapore Chinese Eye Study (SCES) dataset. For the ORIGA dataset, we employ 325 images for training and the rest for testing, the same split as in the segmentation experiment. The SCES dataset consists of 1676 images, of which 46 ($\sim 3\%$) are glaucoma cases. Since the SCES dataset provides only clinical diagnoses, it is used only to assess the diagnostic performance of our system. We use all 650 images in the ORIGA dataset for training and all 1676 images of SCES for testing. We report the Receiver Operating Characteristic (ROC) curve and the area under the ROC curve (AUC) as the overall measure of diagnostic strength. The performances for glaucoma screening based on CDR are shown in Fig. \[exp\_auc\]. From the glaucoma screening results, we have the following observations: 1) The non-deep learning method, superpixel [@Cheng2013], produces a competitive performance ($AUC=0.814$) on the ORIGA dataset, which is better than M-Net ($AUC=0.8014$). But its performance is lower than the others on the SCES dataset. 2) Joint U-Net with PT obtains higher scores than superpixel [@Cheng2013] and U-Net on both the ORIGA and SCES datasets.
3) Our M-Net with PT achieves the best performances on the ORIGA dataset ($AUC=0.8508$) and the SCES dataset ($AUC=0.8997$). In particular, our M-Net with PT improves the AUC by more than $5 \%$ over M-Net without PT, which demonstrates the effectiveness of the polar transformation for glaucoma screening. 4) Our method also outperforms other deep learning based diagnostic methods. For example, the deep learning method in [@Chen2015MICCAI] provides a glaucoma screening system using deep visual features, which obtained $AUC = 0.838$ and $AUC = 0.898$ on the ORIGA and SCES datasets, respectively. However, it cannot provide the CDR value as a clinical explanation. Our result of M-Net with PT is comparable to that of the deep system [@Chen2015MICCAI]. 5) Finally, all the deep learning based methods perform better on the SCES dataset than on the ORIGA dataset. One possible reason is that the training set of ORIGA contains only 325 images; more training data improves the representation capability of deep learning. Discussion ---------- ### Running Time The entire training phase of our method takes about 5 hours on a single NVIDIA Titan X GPU (100 iterations). However, the training phase can be done offline. In online testing, it takes only $0.5 s$ to generate the final segmentation map for one fundus image, which is faster than the existing methods, e.g., the superpixel method [@Cheng2013] takes $10 s$, the ASM method [@Yin2011] takes $4 s$, the R-Bend method [@Joshi2011] takes $4 s$, and sequential segmentation of the OD and OC using the original U-Net [@Ronneberger2015] takes $1 s$. ### Repeatability Experiment

  Data          Glaucoma (n=39)   Normal (n=1481)   All (n=1520)
  ------------- ----------------- ----------------- --------------
  Coefficient   0.8833            0.8262            0.8357
  $p$-value     $< 0.0001$        $< 0.0001$        $< 0.0001$

  : Correlation Coefficients of the Repeatability Experiment.
\ \[Tab\_repeat\] ![Scatter plot of the CDR correspondence on the repeatability dataset.[]{data-label="img-repeat"}](figure/Repeat){width="0.9\linewidth"} In this experiment, we evaluate the repeatability of the proposed method. We collect a repeatability dataset with two corresponding sets (A and B) consisting of 1520 fundus image pairs. For each pair, one image is selected from the SCES dataset, and the other is a different image from the same visit. We run our proposed method on these two sets, and calculate the correlation coefficients of the CDR values. Table \[Tab\_repeat\] reports the repeatability test result, and the scatter plot of the CDR correspondence is shown in Fig. \[img-repeat\]. As can be seen, our method achieves a $p$-value of $p< 0.0001$ and shows good repeatability. ### Clinical Measurement Our M-Net method segments the whole OD and OC regions, which can be used to calculate other clinical measurements. In this experiment, we evaluate the rim to disc area ratio (RDAR) [@Jonas362] defined as: ${Area(Disc - Cup)}/{Area(Disc)}$. The comparison of CDR and RDAR is shown in Table \[Tab\_RDAR\], where our M-Net with PT obtains the best screening performance based on the RDAR value on both datasets, consistent with the experiment based on CDR. Moreover, the CDR measurement shows a better screening performance than RDAR. A possible reason is that the rim is calculated by subtracting the cup region from the disc region, and thus contains the errors of both the disc and cup segmentations, so the rim error is larger than the cup error. This is also observed in Table \[Tab\_Seg\_score\], where the rim error ($E_{rim} = 0.233$) is larger than the cup error ($E_{cup} = 0.230$) for our M-Net with PT. Moreover, since the central retinal vessel trunk is usually located in the nasal optic disc sector [@Jonas12322002], it makes the automatic delineation of the optic disc boundary in the horizontal direction difficult.
Thus, the vertical disc and cup diameters may be measured with higher accuracy than the horizontal ones.

  ------------------ -------- -------- -------- --------
  Method             $CDR$    $RDAR$   $CDR$    $RDAR$
  Our M-Net          0.8019   0.7981   0.8397   0.8290
  Joint U-Net + PT   0.8152   0.7921   0.8612   0.8003
  Our M-Net + PT     0.8508   0.8425   0.8998   0.8488
  ------------------ -------- -------- -------- --------

  : AUC performance of the different clinical measurements on the ORIGA (first two columns) and SCES (last two columns) datasets. (CDR: vertical cup to disc ratio. RDAR: rim to disc area ratio)

\ \[Tab\_RDAR\] Conclusion {#sec_conclusion} ========== In this paper, we have developed a deep learning architecture, named M-Net, which solves OD and OC segmentation jointly in a one-stage multi-label framework. The proposed M-Net employs the U-shape convolutional network as the body structure. The multi-scale layer constructs an image pyramid to feed multi-level inputs, while the side-output layers act as early classifiers producing companion local prediction maps for early scale layers. A multi-label loss function has been proposed to guarantee that the final output segments the OD and OC together. To further improve the segmentation result, we also introduced a polar transformation to transfer the original fundus image to the polar coordinate system. We have demonstrated that our system produces state-of-the-art segmentation results on the ORIGA dataset. The proposed method also obtained satisfactory glaucoma screening performance using the calculated CDR on both the ORIGA and SCES datasets. The implementation details are available at <http://hzfu.github.io/proj_glaucoma_fundus.html>. [^1]: H. Fu and D. W. K. Wong are with the Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632 (e-mail: [email protected], [email protected]). [^2]: J.
Cheng is with Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632, and also with the Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, Zhejiang 315201, China (e-mail: [email protected]) [^3]: Y. Xu is with Guangzhou Shiyuan Electronics Co., Ltd. (CVTE), Guangzhou 510670, China (e-mail: [email protected]). [^4]: J. Liu is with the Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, Zhejiang 315201, China (e-mail: [email protected]). [^5]: X. Cao is with the State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China (e-mail: [email protected])
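The two clinical measurements used for screening above (vertical CDR and RDAR) can be computed directly from binary disc and cup segmentation masks. The sketch below is an illustration only — the toy mask construction and the use of NumPy are assumptions, not the paper's implementation:

```python
import numpy as np

def cdr_and_rdar(disc, cup):
    """Vertical cup-to-disc ratio and rim-to-disc area ratio from 2-D
    boolean segmentation masks (the cup is assumed to lie inside the disc)."""
    rows_d = np.where(disc.any(axis=1))[0]        # rows touched by the disc
    rows_c = np.where(cup.any(axis=1))[0]         # rows touched by the cup
    cdr = (rows_c.max() - rows_c.min() + 1) / (rows_d.max() - rows_d.min() + 1)
    # RDAR = Area(Disc - Cup) / Area(Disc)
    rdar = (disc.sum() - cup.sum()) / disc.sum()
    return float(cdr), float(rdar)

# toy masks: concentric squares standing in for disc and cup
disc = np.zeros((100, 100), bool); disc[10:90, 10:90] = True
cup = np.zeros((100, 100), bool);  cup[30:70, 30:70] = True
print(cdr_and_rdar(disc, cup))  # → (0.5, 0.75)
```

Note that the vertical extent is measured along the row axis, matching the paper's observation that the vertical diameters are the more reliably delineated ones.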
--- abstract: 'The recently developed weakly nonlinear theory of dynamic fracture predicts $1/r$ corrections to the standard asymptotic linear elastic $1/\sqrt{r}$ displacement-gradients, where $r$ is measured from the tip of a tensile crack. We show that the $1/r$ singularity does not automatically conform with the notion of autonomy (autonomy means that any crack tip nonlinear solution is uniquely determined by the surrounding linear elastic $1/\sqrt{r}$ fields) and that it does not automatically satisfy the resultant Newton’s equation in the crack parallel direction. We show that these two properties are interrelated and that by requiring that the resultant Newton’s equation is satisfied, autonomy of the $1/r$ singular solution is retained. We further show that the resultant linear momentum carried by the $1/r$ singular fields vanishes identically. Our results, which reveal the physical and mathematical nature of the new solution, are in favorable agreement with recent near tip measurements.' author: - Eran Bouchbinder title: Autonomy and Singularity in Dynamic Fracture --- A weakly nonlinear theory of dynamic fracture, which extends the standard theory of fracture [@98Fre; @99Bro], was recently developed [@08BLF; @09BLF]. This theory was shown to be in excellent agreement with groundbreaking experimental measurements of the deformation near the tip of rapid cracks [@08LBF; @08BLF; @09BLF; @10LBSF]. Furthermore, it may be relevant to understanding currently poorly understood crack tip instabilities [@09Bouchbinder]. In this Rapid Communication we derive a series of theoretical results that further elucidate the physical and mathematical nature of the theory. The standard approach to dynamic fracture - linear elastic fracture mechanics (LEFM) [@98Fre; @99Bro] - assumes that linear elasticity dominates the deformation fields outside a small region near the tip of a crack. 
Its major prediction is that crack tips concentrate large deformation-gradients and stresses which are characterized by a universal $1/\sqrt{r}$ singular behavior. Many results in this theoretical framework are derived from the latter property [@98Fre; @99Bro]. The basic physical idea underlying the weakly nonlinear theory of dynamic fracture is that linear elasticity breaks down when elastic nonlinearities intervene near the tip of a crack. This is a physically intuitive idea since atoms/molecules are expected to sample reversible (i.e. elastic) anharmonic parts of the interaction potential before their separation is large enough to induce irreversible deformation (e.g. damage, plasticity and eventually fracture). To mathematically formulate this idea, we consider the following expansion of the displacement field ${{\bm{u}}}$ [@08BLF; @09BLF] $$\begin{aligned} \label{expansion} {{\bm{u}}}(r,\theta;v) &\simeq& \epsilon \tilde{{{\bm{u}}}}^{(1)}(r,\theta;v)+\epsilon^2 \tilde{{{\bm{u}}}}^{(2)}(r,\theta;v)+ {{\mathcal{O}}}(\epsilon^3)\nonumber\\ &\equiv& {{\bm{u}}}^{(1)}(r,\theta;v) + {{\bm{u}}}^{(2)}(r,\theta;v)+ {{\mathcal{O}}}(\epsilon^3)\ ,\end{aligned}$$ where $\epsilon$ quantifies the magnitude of the displacement-gradients [@08BLF], and $(r,\theta)$ is a polar coordinate system located at a crack’s tip and moving with it at a speed $v$ in the $\theta\!=\!0$ direction. The first order term in $\epsilon$ corresponds to linear elasticity, which is actually only a first term in a more general expansion, while the second order term corresponds to the leading nonlinearity that intervenes when the deformation is large enough. As higher order nonlinearities are neglected in Eq. (\[expansion\]), the theory based on it is termed “weakly nonlinear theory of dynamic fracture” [@08BLF; @09BLF; @08BL]. The expansion in Eq.
(\[expansion\]) can be substituted in a general elastic strain energy functional $U({{\bm{F}}})$, where $F_{ij}\!=\!\delta_{ij}+{\partial}_j u_i$, from which the first Piola-Kirchhoff stress tensor ${{\bm{s}}}$ can be derived as $$\label{1st_PK} {{\bm{s}}} = \frac{{\partial}U}{{\partial}{{\bm{F}}}} \ .$$ ${{\bm{s}}}$ quantifies forces in the deformed configuration per unit area in the reference configuration [@Holzapfel]. Then, the momentum balance equations and the crack faces traction-free boundary conditions are obtained order by order in $\epsilon$ [@08BLF; @09BLF; @08BL]. ${{\bm{u}}}^{(1)}(r,\theta;v)$ in Eq. (\[expansion\]) satisfies the first order problem, which is a standard LEFM one [@98Fre; @99Bro]. The near crack tip (asymptotic) fields for steady state propagation under Mode I (opening) symmetry are given by [@98Fre; @99Bro] $$\begin{aligned} u_x^{(1)}(r, \theta;v)&=&\frac{K_I \sqrt{r}}{4\mu\sqrt{2\pi}}\Omega_x(\theta;v),\nonumber\\ \label{firstO} u_y^{(1)}(r,\theta;v)&=&\frac{K_I\sqrt{r}}{4\mu\sqrt{2\pi}}\Omega_y(\theta;v).\end{aligned}$$ Here $K_I$ is the Mode I “stress intensity factor” which cannot be determined by the asymptotic analysis, but rather by the [*global*]{} crack problem. ${{\bm{\Omega}}}(\theta;v)$ is a known universal function [@98Fre; @99Bro; @08BLF] and $\mu$ is the shear modulus. $x$ corresponds to the propagation direction, $\theta\!=\!0$, and $y$ corresponds to the direction in which the tensile loadings are applied, $\theta \!=\! \pm \pi/2$. The displacement fields in Eq. (\[firstO\]) give rise to the famous $1/\sqrt{r}$ displacement-gradients and stress singularity [@98Fre; @99Bro]. ${{\bm{u}}}^{(2)}\!(r,\theta;v)$ in Eq. (\[expansion\]) satisfies the second order problem, which was explicitly derived in [@08BLF; @09BLF].
It has the following form $$\begin{aligned} \label{solution} u_x^{(2)}(r,\theta;v)\!&=&\!\frac{K_I^2}{32\pi\mu^2}\Big[A\log{r}+\frac{A}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_d^2} \right)}\nonumber\\ +\,B\alpha_s\log{r}\!\!&+&\!\!\frac{B \alpha_s}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_s^2} \right)}+\Upsilon_x(\theta;v)\Big],\nonumber\\ u_y^{(2)}(r,\theta;v)\!&=&\!\frac{K_I^2}{32\pi\mu^2}\Big[-A\alpha_d\theta_d-B\theta_s+\Upsilon_y(\theta;v)\Big], \nonumber\\ \tan{\theta_{d,s}}&=&\alpha_{d,s}\tan{\theta},\quad\alpha^2_{d,s}\equiv1-v^2/c_{d,s}^2 \ ,\end{aligned}$$ where $c_{d,s}$ are the dilatational and shear wave speeds, respectively. ${{\bm{\Upsilon}}}(\theta;v)$ is given in the form $$\Upsilon_x(\theta;v) \!=\! \sum_{n}\!\!c_n(v) \cos(n\theta),~~\Upsilon_y(\theta;v) \!=\! \sum_{n}\!\!d_n(v) \sin(n\theta),$$ where the coefficients $\{c_n(v) ,d_n(v)\}$ can be easily obtained by solving a set of linear algebraic equations [@08BLF; @09BLF]. The coefficients $A$ and $B$ are related by the traction-free boundary conditions on the crack faces [@08BLF; @09BLF] through $$A= \frac{2\mu B \alpha_s -(\lambda+2\mu){\partial}_\theta \!\Upsilon_y(\pi;v)-\mu\,\kappa(v)}{\lambda - (\lambda+2\mu)\alpha_d^2} \ . \label{A_B_bc}$$ Here $$\begin{aligned} \label{kappa} \kappa(v) = -\frac{16\alpha_d^2 v^4 \left(\lambda + \mu \right)}{\,\mu\, c_s^4 \left[4\alpha_s\alpha_d-(1+\alpha_s^2)^2 \right]^2}\end{aligned}$$ and $\lambda$ is the second Lamé coefficient [@98Fre; @99Bro]. The displacement fields in Eq. (\[solution\]) contain $\log{(r)}$ terms and give rise to $1/r$ singular displacement-gradients. Both of these features were directly verified by near tip measurements [@08LBF; @08BLF]. The solution in (\[solution\])-(\[A\_B\_bc\]) satisfies the second order momentum balance equations and the traction-free boundary conditions on the crack faces. 
Therefore, one may reach the conclusion that the second order solution contains a parameter $B$ that is not uniquely determined by the stress intensity factor $K_I$. If true, this result has profound theoretical implications as it suggests that the concept of the autonomy of the near crack tip nonlinear region [@98Fre; @99Bro] is not always valid. The basic idea behind this concept is that the mechanical state within the near tip nonlinear zone, which is surrounded by the LEFM fields of Eqs. (\[firstO\]), is uniquely determined by the value of $K_I$ and is otherwise independent of the applied loadings and the geometric configuration in a given problem. This implies, for example, that systems with the same $K_I$, but with different applied loadings and geometric configurations, will be in the same mechanical state within the near tip nonlinear zone. Autonomy is a central concept in fracture mechanics [@98Fre; @99Bro]. In [@08BLF] values of $B$ were directly extracted from the experimental data, without addressing the question of whether they can in fact be theoretically determined. In [@09BLF] it was shown that in the quasi-static limit, $v \!\to\! 0$, $B$ can be theoretically determined by $K_I$ and hence the autonomy of the near tip nonlinear region is retained. This may appear as a puzzling result, because Eq. (\[solution\]), with Eq. (\[A\_B\_bc\]), satisfies the asymptotic second order boundary-value problem for [*all*]{} $B$’s - what, then, is the missing physical ingredient that is not contained within the asymptotic boundary-value problem? To answer this question, which was only partially addressed in [@09BLF], consider the net force per unit sample thickness ${{\bm{f}}}$ acting on a line of radius $r$ encircling a crack’s tip $$\begin{aligned} \label{f} f_i \equiv \int_{-\pi}^{\pi} s_{ij} n_j r d\theta \ ,\end{aligned}$$ where ${{\bm{n}}}$ is an outward unit normal on the circle. The Mode I symmetry immediately implies $f_y\!=\!0$.
Moreover, as no force is acting on the crack tip in the crack propagation direction (x), and focusing first on the quasi-static limit ($v \!\to\! 0$) in which material inertia does not play a role, we must have $f_x\!=\!0$. In the framework of quasi-static LEFM, one can show that a $1/r$ contribution to ${{\bm{s}}}$ results in $f_x\!\ne\!0$ [@74Rice]. Therefore, $f_x\!=\!0$ is satisfied within quasi-static LEFM if and only if the $1/r$ singularity is discarded altogether, i.e. its prefactor is set equal to zero [@74Rice]. The $1/r$ singularity generates an unbalanced (spurious) force in the crack parallel direction (where no boundary conditions are imposed), even though it satisfies the asymptotic boundary-value problem. Hence, in the framework of LEFM this singularity is unphysical. This conclusion is in sharp contrast to the corresponding situation in the quasi-static weakly nonlinear theory [@09BLF]. In this case, a $1/r$ singular contribution to ${{\bm{s}}}$ also does not automatically lead to $f_x\!=\!0$, but the latter can be recovered without discarding the whole solution by properly choosing $B$ as a function of $K_I$ in Eqs. (\[solution\]) and (\[A\_B\_bc\]), retaining autonomy [@09BLF]. Therefore, the special property of the $1/r$ singularity discussed above is precisely the missing physical ingredient, which is not contained within the asymptotic boundary-value problem, that ensures that autonomy is not violated. We thus see that in contrast to LEFM, the $1/r$ singularity in quasi-static weakly nonlinear fracture mechanics is a physically sound solution that does not violate any physical principle and conforms with the concept of autonomy. The discussion above was restricted to the quasi-static limit. The generalization to the fully dynamic case, $v\!>\!0$, is somewhat more subtle because material inertia can play a role.
To understand this, we write down the resultant Newton’s equation for the material enclosed within a circle of radius $r$ around the tip. It is a balance between the force ${{\bm{f}}}$ in Eq. (\[f\]) and the time rate of change of linear momentum $\dot{{{\bm{p}}}}$ (both per unit sample thickness) $$\begin{aligned} \label{f_dynamic} \!\!\!\!\!\!f_i \equiv \int_{-\pi}^{\pi}\!\! s_{ij} n_j r d\theta = v^2 \rho \int_0^r \!\! r' dr' \int_{-\pi}^{\pi} {\partial}_{xx} u_i d\theta \equiv \dot{p}_i,\end{aligned}$$ where the steady state relation ${\partial}_t\!=\!-v{\partial}_x$ was used and $r'$ is a dummy integration variable. We expect Eq. (\[f\_dynamic\]) to provide the necessary condition for determining $B$ in the dynamic case, though it is clear that ${{\bm{f}}}\!\ne\!0$ is possible if $\dot{{{\bm{p}}}} \ne 0$, without violating any physical law. In order to demonstrate the latter possibility, consider the asymptotic first order solution given in Eq. (\[firstO\]), which can be used to derive the standard first order (linear elastic) stress tensor ${{\bm{s}}}^{(1)}$ [@98Fre]. Using ${{\bm{u}}}^{(1)}$ and ${{\bm{s}}}^{(1)}$ in Eq. (\[f\_dynamic\]), ${{\bm{f}}}^{(1)}$ and $\dot{{{\bm{p}}}}^{(1)}$ can be calculated. Recall that Mode I symmetry implies that $f_y\!=\!\dot p_y\!=\!0$, so all the discussion to follow focuses on the crack parallel direction $x$. In Fig. \[f\_x\] we plot $r^{-1/2}f_x^{(1)}/K_I$ vs. $v/c_s$. The figure shows that $f_x^{(1)}\!\ne\! 0$ for $v\!>\!0$ (except for an isolated point). Moreover, a direct calculation shows that $f_x^{(1)}\!=\!\dot p_x^{(1)}$. Therefore, the standard LEFM $1/\sqrt{r}$ singularity automatically satisfies Eq. (\[f\_dynamic\]), though $f_x^{(1)}\! \ne\! 0$. The situation is different when the $1/r$ singularity of the weakly nonlinear theory is considered. Equation (\[f\_dynamic\]) was shown above to be satisfied automatically to first order in $\epsilon$, when the fields in Eq. (\[firstO\]) are used. Consider now Eq.
(\[f\_dynamic\]) to second order in $\epsilon$. The second order stress tensor ${{\bm{s}}}^{(2)}$ has the following scaling property $$s^{(2)} \sim {\partial}u^{(1)} {\partial}u^{(1)} + {\partial}u^{(2)} \sim r^{-1} \ ,$$ where the tensorial notation was omitted for simplicity. This implies that $f_x^{(2)}$ is a constant independent of the radius $r$. On the other hand, we have $${\partial}_{xx}u^{(2)} \sim r^{-2} \ ,$$ which implies that $\dot p_x^{(2)}$ depends on $r$ (in fact it diverges logarithmically). Therefore, the only way in which Eq. (\[f\_dynamic\]) can be satisfied to second order is by having $\dot p_x^{(2)}\!=\!0$, which implies that $$\begin{aligned} \label{dynamic_condition} f^{(2)}_x\equiv\int_{-\pi}^{\pi} s^{(2)}_{xj} n_j r d\theta = 0 \ .\end{aligned}$$ Therefore, the $1/r$ singular fields carry no net linear momentum and $f_x^{(2)}\!=\!0$ is the condition that determines $B$, precisely as in the quasi-static limit. The latter prediction, i.e. that $\dot p_x^{(2)}\!=\!0$, can be directly checked using the explicit second order solution in Eq. (\[solution\]).
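Before turning to the explicit calculation, the vanishing of the angular integral can be checked numerically for the dilatational logarithmic part of $u_x^{(2)}$ in Eq. (\[solution\]). The following sketch (SymPy/SciPy assumed) takes $A\!=\!1$, $r\!=\!1$ and $v\!=\!0.5\,c_d$ — arbitrary choices, since the prefactor and radius drop out of the statement:

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

x, y = sp.symbols('x y', real=True)
v, cd = 0.5, 1.0   # crack speed and dilatational wave speed (arbitrary, v < cd)

# dilatational log terms of u_x^(2) with A = 1; note sin^2(theta) = y^2/(x^2+y^2)
u = sp.log(sp.sqrt(x**2 + y**2)) \
    + sp.Rational(1, 2)*sp.log(1 - v**2*y**2/(cd**2*(x**2 + y**2)))
uxx = sp.lambdify((x, y), sp.diff(u, x, 2), 'numpy')

r = 1.0
integral, _ = quad(lambda th: uxx(r*np.cos(th), r*np.sin(th)), -np.pi, np.pi)
print(abs(integral) < 1e-8)  # → True: these terms contribute no net momentum
```

The shear ($B$) terms vanish by the same computation with $c_d$ replaced by $c_s$ and an overall factor $\alpha_s$.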
In order to calculate $\dot p_x^{(2)}$, we evaluate the different contributions to ${\partial}_{xx} u_x^{(2)}$ $$\begin{aligned} &&{\partial}_{xx} \Upsilon_x(\theta;v) = \sum_{n} - n ~c_n(v) \sin\theta \frac{n \cos(n\theta) \sin\theta + 2\cos\theta \sin(n \theta)}{r^2} \ ,\nonumber\\ &&{\partial}_{xx} \left[A\log{r}+\frac{A}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_d^2} \right)}+B\alpha_s\log{r}+\frac{B \alpha_s}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_s^2} \right)}\right] =\nonumber\\ && -\frac{2A c_d^2 \left[v^2+ \left(2c_d^2-v^2 \right)\cos(2\theta) \right]}{\left[2c_d^2 - v^2 + v^2 \cos(2\theta) \right]^2 r^2} -\frac{2B c_s^2 \sqrt{1-v^2/c_s^2} \left[v^2+ \left(2c_s^2-v^2 \right)\cos(2\theta) \right]}{\left[2c_s^2 - v^2 + v^2\cos(2\theta) \right]^2 r^2} \ .\end{aligned}$$ Hence, calculating analytically the angular integrals over the above expressions, which sum up to $\int_{-\pi}^{\pi} {\partial}_{xx} u_x^{(2)} d\theta$, we obtain $$\begin{aligned} \!\!\!&&\int_{-\pi}^{\pi} {\partial}_{xx} \left[A\log{r}+\frac{A}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_d^2} \right)}+B\alpha_s\log{r}+\frac{B \alpha_s}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_s^2} \right)}+ \Upsilon_x(\theta;v) \right] d\theta=\\ \!\!\!&&\left[-\frac{A \sin(2\theta)}{\left[1+\alpha_d^2 + \frac{v^2}{c_d^2}\cos(2\theta) \right] r^2} -\frac{B \alpha_s \sin(2\theta)}{\left[1 + \alpha_s^2+ \frac{v^2}{c_s^2}\cos(2\theta) \right] r^2}+ \sum_{n} n c_n(v) \frac{\sin[(n-2)\theta]-2\sin(n\theta)+\sin[(n+2)\theta]}{4r^2}\right]_{-\pi}^{\pi}\!\!\!=\!0 \nonumber \ .\end{aligned}$$ which explicitly verifies that the $1/r$ singular fields carry no net linear momentum, $\dot p_x^{(2)}\!=\!0$. We reiterate that surprisingly, even at finite crack velocities ($v\!>\!0$) inertia does not play a role in determining $B$ and in retaining autonomy, and the condition to be satisfied remains as in the quasi-static limit, i.e. Eq. (\[dynamic\_condition\]). As was stressed several times above, Eq. 
(\[dynamic\_condition\]) is not satisfied for any $B$, but rather determines it. Therefore, in order to calculate $B$, we substitute ${{\bm{s}}}^{(2)}$ (which is obtained from expanding Eq. (\[1st\_PK\]) in orders of $\epsilon$, cf. [@08BLF; @09BLF]) in the integrand of (\[dynamic\_condition\]) and look for the value of $B$ that makes the integral vanish. This cannot be done analytically, but is easily achieved numerically. To test the theory, we compare its predictions to the direct near-tip deformation measurements of [@08LBF]. To that end, we focus on $v\!=\!0.53c_s$ and use $\{c_n(0.53c_s) ,d_n(0.53c_s)\}$, $\lambda \!=\! 2\mu$, $\mu \!=\! 32.5$kPa as reported in [@08BLF] for the material used in [@08LBF]. Using these numbers in Eqs. (\[firstO\])-(\[kappa\]) to obtain ${{\bm{s}}}^{(2)}$ and then calculating numerically the integral in Eq. (\[dynamic\_condition\]), we obtain $B \simeq 18.5$. This value cannot be directly compared to the value obtained by a fitting procedure in Fig. 1(b) of [@08BLF] because in the latter case the asymptotic LEFM fields of Eqs. (\[firstO\]) had to be supplemented with a subleading term (corresponding to the “T-stress” [@08LBF; @08BLF]) and hence an additional parameter was involved. Instead, we focus on a smaller region near the tip, where the subleading term that we do not consider here is less significant, and use the stress intensity factor of Fig. 1(b) in [@08BLF], $K_I \!=\! 1250$Pa$\sqrt{m}$. Recall that, as required by autonomy, the stress intensity factor is the only parameter that is needed as an input to the asymptotic near tip theory. In Fig. \[Strain\_yy\] we plot ${\partial}_y u_y(r,0)$ of the weakly nonlinear theory, cf. Eq. (\[expansion\]), together with the experimental data of [@08LBF; @08BLF]. The agreement between the theory and the experimental data is remarkable, supporting the theoretical results derived above.
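The numerical determination of $B$ can be sketched as follows. Since ${{\bm{s}}}^{(2)}$ is affine in $B$ for fixed $K_I$ and $v$, $f_x^{(2)}(B)$ has a single root, which a bracketing solver finds immediately. The integrand below is a *mock* stand-in with the same affine-in-$B$ structure (the true one follows from substituting Eqs. (\[solution\])-(\[kappa\]) into ${{\bm{s}}}^{(2)}$); its root is placed at $18.5$ purely to mirror the value quoted above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Mock crack-parallel traction s^(2)_{xj} n_j on the circle: affine in B,
# with an arbitrary angular modulation and the root placed at B = 18.5.
def s2_xn(theta, B):
    return (B - 18.5) * (1.0 + 0.3*np.cos(2.0*theta))

def f_x2(B):
    """Angular integral of Eq. (dynamic_condition) for a trial value of B."""
    val, _ = quad(lambda th: s2_xn(th, B), -np.pi, np.pi)
    return val

B_star = brentq(f_x2, -100.0, 100.0)   # the B for which f_x^(2) vanishes
print(round(B_star, 3))  # → 18.5
```

With the physical integrand in place of the mock one, the same two lines of `quad` plus `brentq` reproduce the procedure described in the text.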
In summary, in this Rapid Communication we theoretically explored some properties of the $1/r$ singularity in the framework of the recently developed weakly nonlinear theory of dynamic fracture. It was shown that the theory is consistent with the notion of the autonomy of the nonlinear near-tip region for any crack tip velocity $v$, extending the quasi-static results of [@09BLF]. In addition, it was shown that no net linear momentum is carried by the $1/r$ singular fields. As only Mode I symmetry was considered here, a direction for future investigation is the development of a weakly nonlinear theory for more general fracture conditions. [99]{} L. B. Freund, [*Dynamic Fracture Mechanics*]{}, (Cambridge University Press, Cambridge, 1998). K. B. Broberg, [*Cracks and Fracture*]{}, (Academic Press, 1999). E. Bouchbinder, A. Livne and J. Fineberg, Phys. Rev. Lett. [**101**]{}, 264302 (2008). E. Bouchbinder, A. Livne and J. Fineberg, J. Mech. Phys. Solids [**57**]{}, 1568 (2009). A. Livne, E. Bouchbinder and J. Fineberg, Phys. Rev. Lett. [**101**]{}, 264301 (2008). A. Livne, E. Bouchbinder, I. Svetlizky and J. Fineberg, Science [**327**]{}, 1359 (2010). E. Bouchbinder, Phys. Rev. Lett. [**103**]{}, 164301 (2009). E. Bouchbinder and T. S. Lo, Phys. Rev. E [**78**]{}, 056105 (2008). G. A. Holzapfel, [*Nonlinear Solid Mechanics*]{}, (Wiley, Chichester, 2000). J. R. Rice, J. Mech. Phys. Solids [**22**]{}, 17 (1974).
--- abstract: 'A common explanation for the observed X-ray emission of A-type stars is the presence of a hidden late-type companion. While this hypothesis can be shown to be correct in some cases, there is also evidence suggesting that low-mass companions cannot be the proper cause for the observed X-ray activity in all cases. Babel & Montmerle (1997) presented a theoretical framework to explain the X-ray emission of magnetic Ap/Bp stars, focusing on the A0p star IQ Aur. We test whether this theoretical model is capable of explaining the observed X-ray emission. We present observations of 13 A-type stars that have been associated with X-ray emission detected by ROSAT. To determine the mean longitudinal magnetic field strength we measured the circular polarization in the wings of the Balmer lines using FORS 1. Although the emission of those objects with magnetic fields fits the prediction of the Babel & Montmerle model, not all X-ray detections are related to the presence of a magnetic field. Additionally, the strengths of the magnetic fields do not correlate with the X-ray luminosity and thus the magnetically confined wind shock model cannot explain the X-ray emission from all investigated stars.' author: - 'C.Schröder' - 'S.Hubrig' - 'J.H.M.M.Schmitt' title: 'Magnetic fields in X-ray emitting A-type stars' --- Observations ============ The observations have been carried out on August 28th 2006 with FORS 1 at the VLT Kueyen. This multi-mode instrument is equipped with polarization analyzing optics comprising super-achromatic half-wave and quarter-wave phase retarder plates, and a Wollaston prism with a beam divergence of 22 in standard resolution mode. The grism 600B and the grism 1200B were used, which cover all H Balmer lines from H$\beta$ to the Balmer jump. Most stars have been observed with the grism 600B at a spectral resolution of 2000.
Since we had only one observing night, we decided to observe only the two most promising targets with the grism 1200B at a resolving power of R$\sim$4000 and a blue limit at 3885Å. A more detailed description of this technique was given by Hubrig et al. (2004a, 2004b). Results ====== Out of 13 stars, seven are likely weakly magnetic. Magnetic fields in HD147084, HD148898 and HD159312 are detected at a 3$\sigma$ level (see Fig. 1), while for HD174240 and HD224392 they are detected at a 2$\sigma$ level. For HD186219 and HD217186 the detection has been achieved just below the 2$\sigma$ level. The measurements for the five stars HD163336, HD172555, HD186219, HD217186 and HD224361 yielded no detection, but a close inspection revealed Zeeman features in several lines in the Stokes V spectra. These stars are therefore promising targets for further observations. Only HD159217 showed no sign of a magnetic field. We found no correlation between the X-ray luminosity and the measured magnetic field strength. We have to note that because of the strong dependence of the longitudinal field on the rotational aspect, its usefulness for characterizing actual field strength distributions is rather limited (Hubrig et al. 2007). This can be overcome by additional future observations to sample various rotation phases. On the other hand, those stars with a detected magnetic field possess X-ray emission which fits the predicted values from the model by Babel & Montmerle. Babel, J. & Montmerle, T. 1997, [A&A]{}, [323]{}, [121]{} Hubrig, S., Kurtz, D.W., Bagnulo, S., et al. 2004a, [A&A]{}, [415]{}, [661]{} Hubrig, S., Szeifert, T., Schöller, M. et al. 2004b, [A&A]{}, [415]{}, [685]{} Hubrig, S., North, P., Schöller, M. & Mathys, G. 2007, [Astronomische Nachrichten]{}, [328]{}, [475]{}
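The measurement technique described above rests on the weak-field relation $V/I = -g_{\mathrm{eff}}\,C_z\,\lambda^2\,(1/I)\,(dI/d\lambda)\,\langle B_z\rangle$ with $C_z \approx 4.67\times10^{-13}\,$Å$^{-1}$ G$^{-1}$ (cf. Hubrig et al. 2004a), so $\langle B_z\rangle$ follows from a least-squares slope. The sketch below recovers an injected field from a synthetic Balmer-line profile; every profile parameter is invented and serves only to illustrate the regression:

```python
import numpy as np

# Weak-field relation: V/I = -g_eff * Cz * lam^2 * (1/I) dI/dlam * <Bz>
Cz = 4.67e-13          # e/(4 pi m_e c^2) in Angstrom^-1 Gauss^-1
geff = 1.0             # effective Lande factor (assumed)
Bz_true = 500.0        # injected field in Gauss

lam = np.linspace(4800.0, 4920.0, 600)                 # region around H-beta
I = 1.0 - 0.6*np.exp(-0.5*((lam - 4861.3)/8.0)**2)     # toy Balmer profile
xreg = -geff*Cz*lam**2*np.gradient(I, lam)/I
V_over_I = xreg*Bz_true + np.random.default_rng(0).normal(0.0, 1e-6, lam.size)

# least-squares slope through the origin gives <Bz>
Bz = np.sum(xreg*V_over_I)/np.sum(xreg*xreg)
print(f"<Bz> = {Bz:.0f} G")
```

In practice the regression is performed over many Balmer lines simultaneously, and the uncertainty of the slope supplies the quoted $\sigma$ levels.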
--- abstract: 'Let ${\mathbf{K}}$ be an algebraically closed field. The Cremona group ${\operatorname{Cr}_{2}(\mathbf{K})}$ is the group of birational transformations of the projective plane ${\mathbb{P}}^2_{{\mathbf{K}}}$. We carry out an overall study of centralizers of elements of infinite order in ${\operatorname{Cr}_{2}(\mathbf{K})}$ which leads to a classification of embeddings of ${\mathbf{Z}}^2$ into ${\operatorname{Cr}_{2}(\mathbf{K})}$, as well as a classification of maximal non-torsion abelian subgroups of ${\operatorname{Cr}_{2}(\mathbf{K})}$.' author: - ShengYuan Zhao bibliography: - 'biblicentralizers.bib' nocite: '\nocite{}' title: Centralizers of elements of infinite order in plane Cremona groups --- Introduction ============ Let ${\mathbf{K}}$ be an algebraically closed field. The *plane Cremona group* ${\operatorname{Cr}_{2}(\mathbf{K})}$ is the group of birational transformations of the projective plane ${\mathbb{P}}^2_{{\mathbf{K}}}$. It is isomorphic to the group of ${\mathbf{K}}$-algebra automorphisms of ${\mathbf{K}}(X_1,X_2)$, the function field of ${\mathbb{P}}^2_{{\mathbf{K}}}$. Using a system of homogeneous coordinates $[x_0;x_1;x_2]$, a birational transformation $f\in{\operatorname{Cr}_{2}(\mathbf{K})}$ can be written as $$[x_0:x_1:x_2]\dashrightarrow [f_0(x_0,x_1,x_2):f_1(x_0,x_1,x_2):f_2(x_0,x_1,x_2)]$$ where $f_0,f_1,f_2$ are homogeneous polynomials of the same degree without common factor. This degree does not depend on the system of homogeneous coordinates. We call it the *degree* of $f$ and denote it by $deg(f)$. Geometrically it is the degree of the pull-back by $f$ of a general projective line. Birational transformations of degree $1$ are homographies and form ${\operatorname{Aut}(\mathbb{P}_{\mathbf{K}}^2)}= {\operatorname{PGL}_{3}(\mathbf{K})}$, the group of automorphisms of the projective plane. #### Four types of elements. Following the work of M.H. Gizatullin, S. Cantat, J. Diller and C. 
Favre, we can classify an element $f\in{\operatorname{Cr}_{2}(\mathbf{K})}$ into exactly one of the four following types according to the growth of the sequence $(deg(f^n))_{n\in {\mathbf{N}}}$ (The standard reference [@DF01] is written for ${\mathbf{K}}={\mathbf{C}}$ but it is known that the same proof works over an algebraically closed field ${\mathbf{K}}$ of characteristic different from $2$ and $3$. The only problem with characteristics $2$ and $3$ is that the important ingredient [@Giz80] does not deal with quasi-elliptic fibrations. This minor issue has been clarified in [@CanDol12] and [@CGL19] so that the following classification holds for arbitrary characteristic.): 1. The sequence $(deg(f^n))_{n\in {\mathbf{N}}}$ is bounded, $f$ is birationally conjugate to an automorphism of a rational surface $X$ and a positive iterate of $f$ lies in the connected component of the identity of the automorphism group $\operatorname{Aut}(X)$. We call $f$ an *elliptic* element. 2. The sequence $(deg(f^n))_{n\in {\mathbf{N}}}$ grows linearly, $f$ preserves a unique pencil of rational curves and $f$ is not conjugate to an automorphism of any rational surface. We call $f$ a *[Jonquières ]{}twist*. 3. The sequence $(deg(f^n))_{n\in {\mathbf{N}}}$ grows quadratically, $f$ is conjugate to an automorphism of a rational surface preserving a unique elliptic fibration. We call $f$ a *Halphen twist*. 4. The sequence $(deg(f^n))_{n\in {\mathbf{N}}}$ grows exponentially and $f$ is called *loxodromic*. #### The [Jonquières ]{}group Fix an affine chart of ${\mathbb{P}}^2$ with coordinates $(x,y)$. 
*The [Jonquières ]{}group* ${\operatorname{Jonq}(\mathbf{K})}$ is the subgroup of the Cremona group of all transformations of the form $$(x,y)\dashrightarrow \left ( \frac{ax+b}{cx+d},\frac{A(x)y+B(x)}{C(x)y+D(x)} \right ),\quad \begin{pmatrix} a&b\\c&d \end{pmatrix}\in {\operatorname{PGL}_{2}(\mathbf{K})},\quad \begin{pmatrix} A&B\\C&D \end{pmatrix}\in \operatorname{PGL}_2({\mathbf{K}}(x)).$$ In other words, ${\operatorname{Jonq}(\mathbf{K})}$ is the group of all birational transformations of ${\mathbb{P}}^1\times{\mathbb{P}}^1$ permuting the fibres of the projection onto the first factor; it is isomorphic to the semi-direct product ${\operatorname{PGL}_{2}(\mathbf{K})}\ltimes \operatorname{PGL}_2({\mathbf{K}}(x))$. A different choice of the affine chart yields a conjugation by an element of ${\operatorname{PGL}_{3}(\mathbf{K})}$. More generally a conjugation by an element of the Cremona group yields a group preserving a pencil of rational curves; conversely any two such groups are conjugate in ${\operatorname{Cr}_{2}(\mathbf{K})}$. Elements of ${\operatorname{Jonq}(\mathbf{K})}$ are either elliptic or [Jonquières ]{}twists. We denote by ${\operatorname{Jonq}_0(\mathbf{K})}$ the normal subgroup of ${\operatorname{Jonq}(\mathbf{K})}$ that preserves fibrewise the rational fibration, i.e. the subgroup of those transformations of the form $(x,y)\dashrightarrow \left ( x,\frac{A(x)y+B(x)}{C(x)y+D(x)} \right )$; it is isomorphic to $\operatorname{PGL}_2({\mathbf{K}}(x))$. A [Jonquières ]{}twist of the [Jonquières ]{}group will be called a *base-wandering [Jonquières ]{}twist* if its action on the base of the rational fibration is of infinite order. If ${\mathbf{K}}={\overline{\mathbf{F}_p}}$ is the algebraic closure of a finite field, then ${\mathbf{K}},{\mathbf{K}}^*$ and ${\operatorname{PGL}_{2}(\mathbf{K})}$ are all torsion groups. Thus, if ${\mathbf{K}}={\overline{\mathbf{F}_p}}$ then base-wandering [Jonquières ]{}twists do not exist.
Whenever ${\operatorname{char}(\mathbf{K})}=0$, or ${\operatorname{char}(\mathbf{K})}=p>0$ and ${\mathbf{K}}\neq{\overline{\mathbf{F}_p}}$, there exist base-wandering [Jonquières ]{}twists. The group of automorphisms of a Hirzebruch surface will be systematically considered as a subgroup of the [Jonquières ]{}group in the following way: $$\operatorname{Aut}({\mathbb{F}}_n)=\left\{(x,y)\dashrightarrow \left(\frac{ax+b}{cx+d},\frac{y+t_0+t_1x+\cdots+t_nx^n}{(cx+d)^n}\right)\vert \begin{pmatrix} a&b\\c&d\end{pmatrix}\in \operatorname{GL}_{2}({\mathbf{K}}), t_0,\cdots,t_n\in{\mathbf{K}}\right\}.$$ #### Main results. \[virtuallyabelian\] Let $f\in{\operatorname{Cr}_{2}(\mathbf{K})}$ be an element of infinite order. If the centralizer of $f$ is not virtually abelian, then $f$ is an elliptic element and a power of $f$ is conjugate to an automorphism of ${\mathbb{A}}^2$ of the form $(x,y)\mapsto (x,y+1)$ or $(x,y)\mapsto (x,\beta y)$ with $\beta\in{\mathbf{K}}^*$. \[zzthm\] Let $\Gamma$ be a subgroup of ${\operatorname{Cr}_{2}(\mathbf{K})}$ which is isomorphic to ${\mathbf{Z}}^2$. Then $\Gamma$ has a pair of generators $(f,g)$ such that one of the following (mutually exclusive) situations happens up to conjugation in ${\operatorname{Cr}_{2}(\mathbf{K})}$: 1. $f,g$ are elliptic elements and $\Gamma\subset \operatorname{Aut}(X)$ where $X$ is a rational surface; 2. $f,g$ are Halphen twists which preserve the same elliptic fibration on a rational surface $X$, and $\Gamma\subset \operatorname{Aut}(X)$; 3. one or both of $f,g$ are [Jonquières ]{}twists, and there exist $m,n\in{\mathbf{N}}^*$ such that the finite index subgroup of $\Gamma$ generated by $f^m$ and $g^n$ is contained in a $1$-dimensional torus over ${\mathbf{K}}(x)$ in ${\operatorname{Jonq}_0(\mathbf{K})}=\operatorname{PGL}_2({\mathbf{K}}(x))$; 4. $f$ is a base-wandering [Jonquières ]{}twist and $g$ is elliptic.
In some affine chart, we can write $f,g$ in one of the following forms: - $g$ is $(x,y)\mapsto(\alpha x,\beta y)$ and $f$ is $(x,y)\dashrightarrow(\eta(x),yR(x^k))$ where $\alpha,\beta\in{\mathbf{K}}^*, \alpha^k=1, R\in {\mathbf{K}}(x),\eta\in {\operatorname{PGL}_{2}(\mathbf{K})}, \eta(\alpha x)=\alpha \eta(x)$ and $\eta$ is of infinite order; - (only when ${\operatorname{char}(\mathbf{K})}=0$) $g$ is $(x,y)\mapsto(\alpha x,y+1)$ and $f$ is $(x,y)\dashrightarrow(\eta(x),y+R(x))$ where $\alpha\in{\mathbf{K}}^*, R\in {\mathbf{K}}(x), R(\alpha x)=R(x), \eta\in {\operatorname{PGL}_{2}(\mathbf{K})}, \eta(\alpha x)=\alpha \eta(x)$ and $\eta$ is of infinite order. When ${\mathbf{K}}$ is the algebraic closure of a finite field, the above list can be shortened since there are no elliptic elements of infinite order and no base-wandering [Jonquières ]{}twists. \[degreefunction\] From Theorem \[zzthm\] it is easy to see that (we will give a proof), when $\Gamma$ is isomorphic to ${\mathbf{Z}}^2$, the degree function $deg:\Gamma\rightarrow {\mathbf{N}}$ is governed by the word length function with respect to some generators in the following sense. In the first case of the above theorem it is bounded. In the second case it is up to a bounded term a positive definite quadratic form over ${\mathbf{Z}}^2$. In the third case, if $f$ is elliptic then $deg$ is up to a bounded term $f^i\circ g^j\mapsto c \vert j\vert$ for some $c\in{\mathbf{Q}}_+$; otherwise we can choose two generators $f_0,g_0$ of $\Gamma\cap {\operatorname{Jonq}_0(\mathbf{K})}$ such that $deg$ restricted to $\Gamma\cap {\operatorname{Jonq}_0(\mathbf{K})}$ is up to a bounded term $f_0^i\circ g_0^j\mapsto c_1 \vert i\vert+c_2 \vert j\vert$ for some $c_1,c_2\in{\mathbf{Q}}_+$. In the fourth case the degree function is up to a bounded term $f^i\circ g^j\mapsto c \vert i\vert$ for some $c\in{\mathbf{Q}}_+$.
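For contrast with the bounded, linear and quadratic growth rates listed above, the exponential (loxodromic) case excluded from the abelian situations is easy to observe directly. A small sketch (SymPy assumed) iterates the Hénon map $h:(x,y)\mapsto(y,y^2-x)$, a loxodromic element of $\operatorname{Aut}({\mathbb{A}}^2)\subset{\operatorname{Cr}_{2}(\mathbf{K})}$, using the affine total degree, which for Hénon maps agrees with the Cremona degree:

```python
import sympy as sp

x, y = sp.symbols('x y')

# h:(x,y) -> (y, y^2 - x): a Henon map, hence a loxodromic Cremona element
fx, fy = y, y**2 - x                     # components of h^1
degs = []
for n in range(5):
    degs.append(max(sp.Poly(fx, x, y).total_degree(),
                    sp.Poly(fy, x, y).total_degree()))
    fx, fy = fy, sp.expand(fy**2 - fx)   # h^(n+1) = h o h^n
print(degs)  # → [2, 4, 8, 16, 32]
```

The sequence $deg(h^n)=2^n$ grows exponentially, so no power of $h$ can sit in any of the abelian configurations of Theorem \[zzthm\].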
Note that if $f$ and $g$ are two [Jonquières ]{}twists of ${\operatorname{Jonq}(\mathbf{K})}$ that do not necessarily commute, then the degree of $f^i\circ g^j$ is always dominated by $deg(f)\vert i\vert+deg(g)\vert j\vert$ (see Lemma 5.7 [@BC16]). A direct corollary of Theorem \[zzthm\] is: Let $G\subset {\operatorname{Cr}_{2}(\mathbf{K})}$ be a subgroup isomorphic to ${\mathbf{Z}}^2$. If $G$ is not an elliptic subgroup then there exists a non-trivial element of $G$ which preserves each member of a pencil of rational or elliptic curves. Theorem \[zzthm\] is based on several known results. The main new feature is the fourth case. We reformulate this special case as a corollary (see Theorem \[mainthm\] for a more precise reformulation): \[nozz\] Let $G\subset {\operatorname{Jonq}(\mathbf{K})}$ be a subgroup isomorphic to ${\mathbf{Z}}^2$. Suppose that the action of $G$ on the base of the rational fibration is faithful. Then $G$ is an elliptic subgroup. A *maximal abelian subgroup* is an abelian subgroup which is not strictly contained in any other abelian subgroup. Over the field of complex numbers, finite abelian subgroups of ${\operatorname{Cr}_{2}(\mathbf{C})}$ have been classified in [@Bla07]. We will use Theorem \[zzthm\] to classify maximal abelian subgroups of ${\operatorname{Cr}_{2}(\mathbf{K})}$ which contain at least one element of infinite order, see Theorem \[abmaxthm\]. #### Previously known results. Let us begin with the group of polynomial automorphisms of the affine plane $\operatorname{Aut}({\mathbb{A}}^2)$. It can be seen as a subgroup of ${\operatorname{Cr}_{2}(\mathbf{K})}$. It is the amalgamated product of the group of affine automorphisms with the so-called *elementary group* $$\operatorname{El}({\mathbf{K}})=\{(x,y)\mapsto (\alpha x +\beta, \gamma y+P(x))\vert \alpha,\beta,\gamma\in{\mathbf{K}}, \alpha\gamma\neq 0, P\in{\mathbf{K}}[x]\}.$$ Let ${\mathbf{K}}$ be the field of complex numbers. S. Friedland and J.
Milnor showed in [@FM89] that an element of $\operatorname{Aut}({\mathbf{C}}^2)$ is either conjugate to an element of $\operatorname{El}({\mathbf{C}})$ or to a generalized Hénon map, i.e. a composition $f_1\circ \cdots \circ f_n$ where the $f_i$ are Hénon maps of the form $(x,y)\mapsto(y,P_i(y)-\delta_i x)$ with $\delta_i\in{\mathbf{C}}^*$, $P_i\in{\mathbf{C}}[y]$, $deg(P_i)\geq 2$. S. Lamy and C. Bisi showed in [@Lam01] and [@Bis04] that the centralizer in $\operatorname{Aut}({\mathbf{C}}^2)$ of a generalized Hénon map is finite by cyclic, and that of an element of $\operatorname{El}({\mathbf{C}})$ is uncountable (see also [@Bis08] for partial extensions to higher dimension). Note that, when viewed as elements of ${\operatorname{Cr}_{2}(\mathbf{C})}$, a generalized Hénon map is loxodromic and an element of $\operatorname{El}({\mathbf{C}})$ is elliptic. As regards the Cremona group, centralizers of loxodromic elements are known to be finite by cyclic (S. Cantat [@Can11], J. Blanc-S. Cantat [@BC16]). Centralizers of Halphen twists are virtually abelian of rank at most $8$ (M.K. Gizatullin [@Giz80], S. Cantat [@Can11]). When ${\mathbf{K}}$ is the field of complex numbers, centralizers of elliptic elements of infinite order are completely described by J. Blanc-J. Déserti in [@BD15] and centralizers of [Jonquières ]{}twists in ${\operatorname{Jonq}_0(\mathbf{K})}$ are completely described by D. Cerveau-J. Déserti in [@CD12]. Centralizers of base-wandering [Jonquières ]{}twists are also studied in [@CD12] but they were not fully understood; for example, the results in loc. cit. are not sufficient for classifying pairs of [Jonquières ]{}twists generating a copy of ${\mathbf{Z}}^2$. Thus, in order to obtain a classification of embeddings of ${\mathbf{Z}}^2$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$, we need a detailed study of centralizers of base-wandering [Jonquières ]{}twists, which is the main task of this article.
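The dichotomy above can be seen directly on degree growth. The following SymPy sketch is an illustration only (the choices $P(y)=y^2$, $\delta=1$ for the Hénon map and the particular element of $\operatorname{El}$ are arbitrary samples): it checks that a Hénon map has exponential degree growth $deg(h^n)=deg(P)^n$, as expected for a loxodromic element, while an element of the elementary group has bounded degree, as expected for an elliptic one:

```python
import sympy as sp

x, y = sp.symbols('x y')

def compose(F, G):
    # (F o G)(x, y) for polynomial maps given as pairs of expressions
    return tuple(sp.expand(c.subs({x: G[0], y: G[1]}, simultaneous=True))
                 for c in F)

def degree(F):
    return max(sp.total_degree(c, x, y) for c in F)

# A sample Henon map (x, y) -> (y, P(y) - delta*x) with P(y) = y^2, delta = 1
h = (y, y**2 - x)
# A sample element of the elementary group El
e = (2*x + 1, 3*y + x**2)

h_degs, e_degs = [], []
hn, en = (x, y), (x, y)
for _ in range(4):
    hn, en = compose(hn, h), compose(en, e)
    h_degs.append(degree(hn))
    e_degs.append(degree(en))
```

One finds `h_degs == [2, 4, 8, 16]` (dynamical degree $2$) and `e_degs == [2, 2, 2, 2]` (bounded degree).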
Regarding the elements of finite order and their centralizers in ${\operatorname{Cr}_{2}(\mathbf{K})}$, the problem is of a rather different flavour and we refer the readers to [@Bla07], [@DI09], [@Ser10], [@Ure] and the references therein. There is a topology on ${\operatorname{Cr}_{2}(\mathbf{K})}$, called the Zariski topology, which was introduced by M. Demazure and J-P. Serre in [@Dem70] and [@Ser10]. Note that the Zariski topology does not make ${\operatorname{Cr}_{2}(\mathbf{K})}$ an infinite dimensional algebraic group (cf. [@BF13]). With respect to the Zariski topology, the centralizer of any element of ${\operatorname{Cr}_{2}(\mathbf{K})}$ is closed (J-P. Serre [@Ser10]). When ${\mathbf{K}}$ is a local field, J. Blanc and J-P. Furter construct in [@BF13] a Euclidean topology on ${\operatorname{Cr}_{2}(\mathbf{K})}$ which, when restricted to ${\operatorname{PGL}_{3}(\mathbf{K})}$, coincides with the Euclidean topology of ${\operatorname{PGL}_{3}(\mathbf{K})}$; centralizers are also closed with respect to the Euclidean topology. In particular the intersection of the centralizer of an element of ${\operatorname{Cr}_{2}(\mathbf{K})}$ with an algebraic subgroup $G$ of ${\operatorname{Cr}_{2}(\mathbf{K})}$ is a closed subgroup of $G$, with respect to the Zariski topology of $G$ (and with respect to the Euclidean topology when the latter is present). #### Comparison with other results. S. Smale asked in the 1960s whether, in the group of diffeomorphisms of a compact manifold, the centralizer of a generic diffeomorphism consists only of its iterates. There has been a lot of work on this question, see for example [@BCW09] for an affirmative answer in the $C^1$ case. Similar phenomena also appear in the group of germs of $1$-dimensional holomorphic diffeomorphisms at $0\in{\mathbf{C}}$ ([@Eca81]). See the introduction of [@CD12] for more references in this direction.
With regard to ${\operatorname{Cr}_{2}(\mathbf{K})}$, it is known that loxodromic elements form a Zariski dense subset of ${\operatorname{Cr}_{2}(\mathbf{K})}$ (cf. [@Xie15], [@BedDil05]) and that their centralizers coincide, up to finite index, with the cyclic group formed by their iterates (cf. [@BC16]). Centralizers of general [Jonquières ]{}twists are also finite by cyclic (Remark \[exampledeserti\]). One may compare our classification of ${\mathbf{Z}}^2$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ to the following two theorems, where the situations are more rigid. The first can be seen as a continuous counterpart and is proved by F. Enriques [@Enriques] and M. Demazure [@Dem70]; the second can be seen as a torsion counterpart and is proved by A. Beauville [@Bea07]: 1. *If ${\mathbf{K}}^{*r}$ embeds as an algebraic subgroup into ${\operatorname{Cr}_{2}(\mathbf{K})}$, then $r\leq 2$; if $r=2$ then the embedding is conjugate to an embedding into the group of diagonal matrices $\Delta$ in $\operatorname{PGL}_3({\mathbf{K}})$.* 2. *If $p\geq 5$ is a prime number different from the characteristic of ${\mathbf{K}}$ and if $({\mathbf{Z}}/p{\mathbf{Z}})^{r}$ embeds into ${\operatorname{Cr}_{2}(\mathbf{K})}$, then $r\leq 2$; if $r=2$ then the embedding is conjugate to an embedding into the group of diagonal matrices $\Delta$ in $\operatorname{PGL}_3({\mathbf{K}})$.* The classification of ${\mathbf{Z}}^2$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ is a very natural special case of the study of finitely generated subgroups of ${\operatorname{Cr}_{2}(\mathbf{K})}$; and information on centralizers can be useful for studying homomorphisms from other groups into ${\operatorname{Cr}_{2}(\mathbf{K})}$, see for example [@Des06]. We refer the reader to the surveys [@Fav10], [@Can18] for representations of finitely generated groups into ${\operatorname{Cr}_{2}(\mathbf{K})}$ and [@CX18] for general results in higher dimension. #### Acknowledgement.
I would like to address my warmest thanks to my supervisor Serge Cantat for initiating me into Cremona groups, for numerous discussions, for his constant support and for encouraging me to write this paper. I would also like to thank JunYi Xie for helpful discussions on related topics ranging from proof details to general background. Elements which are not base-wandering [Jonquières ]{}twists =========================================================== This section contains a quick review of some scattered results about centralizers from [@Can11], [@BD15], [@CD12], [@BC16]. Some of the proofs are reproduced, on the one hand because the original proofs were written over ${\mathbf{C}}$, and on the other hand because we will need some by-products of the proofs. Loxodromic elements ------------------- \[loxothm\] Let $f\in {\operatorname{Cr}_{2}(\mathbf{K})}$ be a loxodromic element. The infinite cyclic group generated by $f$ is a finite index subgroup of the centralizer of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$. We provide a proof which is simpler than the one in [@BC16]. The Cremona group ${\operatorname{Cr}_{2}(\mathbf{K})}$ acts faithfully by isometries on an infinite dimensional hyperbolic space $\mathbb{H}$ and the action of a loxodromic element is loxodromic in the sense of hyperbolic geometry (see [@Can11], [@Can18]). In particular there is an $f$-invariant geodesic $Ax(f)$ on which $f$ acts by translation and the translation length is $\log(\lim_{n\rightarrow \infty}deg(f^n)^{1/n})$. The centralizer $\operatorname{Cent}(f)$ preserves $Ax(f)$ and by considering translation lengths we get a morphism $\phi:\operatorname{Cent}(f)\rightarrow {\mathbf{R}}$. We claim that the image of $\phi$ is discrete, and thus cyclic. Let us see first how the conclusion follows from the claim. Let $x\in \mathbb{H}$ be a point which corresponds to an ample class and let $y$ be an arbitrary point on $Ax(f)$.
Since the kernel $\operatorname{Ker}(\phi)$ fixes $Ax(f)$ pointwise, for any element $g$ of $\operatorname{Ker}(\phi)$ the distance $d(x,g(x))$ is bounded by $2d(x,y)$. This implies that $\operatorname{Ker}(\phi)$ is a subgroup of ${\operatorname{Cr}_{2}(\mathbf{K})}$ of bounded degree. If $\operatorname{Ker}(\phi)$ were infinite then its Zariski closure $G$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ would be an algebraic subgroup of strictly positive dimension contained, after conjugation, in the automorphism group of a rational surface. As $\operatorname{Cent}(f)$ is Zariski closed, the elements of $G$ commute with $f$. The orbits of a one-parameter subgroup of $G$ would form an $f$-invariant pencil of curves. This contradicts the fact that $f$ is loxodromic. Consequently $\operatorname{Ker}(\phi)$ is finite and hence $\operatorname{Cent}(f)$ is finite by cyclic. Now let us prove the claim that the image of $\phi$ is discrete. This follows directly from a spectral gap property for translation lengths of loxodromic elements proved in [@BC16]. We give here an easier direct proof found with S. Cantat. Suppose by contradiction that there is a sequence $(g_n)_n$ of distinct elements of $\operatorname{Cent}(f)$ whose translation lengths on $Ax(f)$ tend to $0$ when $n$ goes to infinity. Without loss of generality, we can suppose the existence of a point $y$ on $Ax(f)$ and a real number $\epsilon>0$ such that $\forall n, d(y,g_n(y))<\epsilon$. Let $x\in\mathbb{H}$ be an element which corresponds to an ample class. Then it follows that $$\forall n, d(x,g_n(x))\leq d(x,y)+d(y,g_n(y))+d(g_n(y),g_n(x))<2d(x,y)+\epsilon=:d,$$ i.e. the sequence $(g_n)_n$ is of bounded degree $d$. Elements of degree less than $d$ of the Cremona group form a quasi-projective variety $\operatorname{Cr}_2^d({\mathbf{K}})$. 
JunYi Xie proved in [@Xie15] that for any $0<\lambda<\log(d)$, the loxodromic elements of $\operatorname{Cr}_2^d({\mathbf{K}})$ whose translation lengths are greater than $\lambda$ form a Zariski open dense subset of $\operatorname{Cr}_2^d({\mathbf{K}})$. Thus the $g_n$ give rise to a strictly ascending chain of Zariski open subsets of $\operatorname{Cr}_2^d({\mathbf{K}})$, contradicting the noetherian property of the Zariski topology. This finishes the proof. Note that [@Xie15] is also used to prove the spectral gap property in [@BC16]. Halphen twists -------------- We only recall here the final arguments of the proofs. \[halphenthm\] Let $f\in {\operatorname{Cr}_{2}(\mathbf{K})}$ be a Halphen twist. The centralizer $\operatorname{Cent}(f)$ of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ contains a finite index abelian subgroup of rank less than or equal to $8$. Being a Halphen twist, the birational transformation $f$ is up to conjugation an automorphism of a rational surface and preserves a relatively minimal elliptic fibration. This $f$-invariant fibration is unique. As a consequence $\operatorname{Cent}(f)$ acts by automorphisms preserving this fibration. It is proved in [@Giz80] (see [@CGL19] for a clarification in characteristics $2$ and $3$) that the automorphism group of a rational minimal elliptic surface has a finite index abelian subgroup of rank at most $8$. Elliptic elements of infinite order ----------------------------------- In this section we reproduce a part of [@BD15]; we follow the original proofs (for ${\operatorname{char}(\mathbf{K})}=0$) in loc. cit. and some extra details are added in case ${\operatorname{char}(\mathbf{K})}>0$. We omit the proof of the following key proposition, which is based on a $G$-Mori program for rational surfaces due to J. Manin [@Man67] and V. Iskovskih [@Isk79]. \[GMori\] Let $S$ be a smooth rational surface over ${\mathbf{K}}$.
Let $f\in \operatorname{Aut}(S)$ be an automorphism of infinite order whose action on $\operatorname{Pic}(S)$ is of finite order. Then there exists a birational morphism $S\rightarrow X$, where $X$ is a Hirzebruch surface ${\mathbb{F}}_n$ ($n\neq 1$) or the projective plane ${\mathbb{P}}^2$, which conjugates $f$ to an automorphism of $X$. \[ellipticnormalform\] Let $f\in {\operatorname{Cr}_{2}(\mathbf{K})}$ be an elliptic element of infinite order. Then $f$ is conjugate to an automorphism of ${\mathbb{P}}^2$. Furthermore there exists an affine chart with affine coordinates $(x,y)$ on which $f$ acts as an automorphism of one of the following forms: 1. $(x,y)\mapsto (\alpha x,\beta y)$ where $\alpha,\beta \in {\mathbf{K}}^*$ are such that the kernel of the group homomorphism ${\mathbf{Z}}^2\rightarrow {\mathbf{K}}^*,(i,j)\mapsto\alpha^i\beta^j$ is generated by $(k,0)$ for some $k\in{\mathbf{Z}}$; 2. $(x,y)\mapsto (\alpha x,y+1)$ where $\alpha\in {\mathbf{K}}^*$ and $\alpha$ is of infinite order if ${\operatorname{char}(\mathbf{K})}>0$. If ${\mathbf{K}}={\overline{\mathbf{F}_p}}$ then every elliptic element is of finite order. As a byproduct of the proof of Proposition \[ellipticnormalform\], we will get the following: \[jonqzelliptic\] Let $f$ be an automorphism of a Hirzebruch surface which preserves the rational fibration fibre by fibre (we do not assume that $f$ is of infinite order). Then there exists an affine chart on which $f$ acts as an automorphism of one of the following forms: 1. $(x,y)\mapsto (x,\beta y)$ where $\beta \in {\mathbf{K}}^*$; 2. $(x,y)\mapsto (x,y+1)$. Here $x$ is the coordinate on the base of the rational fibration. Proposition \[GMori\] says that $f$ is conjugate to an automorphism of ${\mathbb{P}}^2$ or of a Hirzebruch surface. Let us first consider the case when $f\in\operatorname{Aut}({\mathbb{P}}^2)={\operatorname{PGL}_{3}(\mathbf{K})}$.
By putting the corresponding matrix in Jordan normal form, we can find an affine chart on which $f$ is, up to conjugation, of one of the following forms: 1) $(x,y)\mapsto (\alpha x,\beta y)$; 2) $(x,y)\mapsto (\alpha x,y+1)$; 3) $(x,y)\mapsto (x+y,y+1)$. If ${\operatorname{char}(\mathbf{K})}>0$ then $f$ cannot be of the third form since it would be of finite order; if ${\operatorname{char}(\mathbf{K})}=0$ then in the third case $f$ is conjugate by $[x:y:z]\dashrightarrow [xz-\frac{1}{2}y(y-z):yz:z^2]$ to $(x,y)\mapsto (x,y+1)$. We now show that in the first case $\alpha,\beta$ can be chosen to satisfy the condition in the proposition. Let $\phi:(x,y)\mapsto (\alpha x,\beta y)$ be a diagonal automorphism; we denote by $\Delta(\phi)$ the kernel of the group morphism ${\mathbf{Z}}^2\rightarrow {\mathbf{K}}^*,(i,j)\mapsto \alpha^i\beta^j$. For $M=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in {\operatorname{GL}_{2}(\mathbf{Z})}$, we denote by $M(\phi)$ the diagonal automorphism $(x,y)\mapsto (\alpha^a\beta^bx,\alpha^c\beta^dy)$, i.e. the conjugate of $\phi$ by the monomial map $(x,y)\dashrightarrow(x^ay^b,x^cy^d)$. We have the relation $\Delta(M(\phi))=(M^{\intercal})^{-1}(\Delta(\phi))$. This implies that up to conjugation by a monomial map we can suppose that our elliptic element $f$ satisfies $\Delta(f)=\langle(k_1,0),(0,k_1k_2)\rangle$ where $k_1,k_2\in {\mathbf{Z}}$. Since $f$ is of infinite order, $k_1k_2$ must be $0$. If $f\in\operatorname{Aut}({\mathbb{F}}_0)=\operatorname{Aut}({\mathbb{P}}^1\times{\mathbb{P}}^1)$, then we reduce to the case of ${\mathbb{P}}^2$ by blowing up a fixed point and contracting the strict transforms of the two rulings passing through the point. If $f\in\operatorname{Aut}({\mathbb{F}}_n)$ for $n\geq 2$ and if $f$ has a fixed point which is not on the exceptional section, then we can reduce to ${\mathbb{F}}_{n-1}$ by making an elementary transformation at the fixed point.
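The monomial conjugation used above can be checked symbolically. The following SymPy sketch is only a sanity check of the identity $m\circ\phi=M(\phi)\circ m$, where $m$ is the monomial map attached to $M$; the matrix $M=\begin{pmatrix}2&1\\1&1\end{pmatrix}$ is an arbitrary sample element of $\operatorname{GL}_2(\mathbf{Z})$:

```python
import sympy as sp

x, y, alpha, beta = sp.symbols('x y alpha beta', positive=True)

a, b, c, d = 2, 1, 1, 1   # sample matrix M in GL_2(Z), det(M) = 1

phi = (alpha * x, beta * y)            # diagonal automorphism phi
m = (x**a * y**b, x**c * y**d)         # monomial map attached to M
M_phi = (alpha**a * beta**b * x,
         alpha**c * beta**d * y)       # claimed conjugate M(phi)

def compose(F, G):
    # (F o G)(x, y)
    return tuple(sp.simplify(cc.subs({x: G[0], y: G[1]}, simultaneous=True))
                 for cc in F)

lhs = compose(m, phi)      # m o phi
rhs = compose(M_phi, m)    # M(phi) o m, should agree with lhs
```

Both sides equal $(\alpha^2\beta\, x^2y,\ \alpha\beta\, xy)$, so $M(\phi)=m\circ\phi\circ m^{-1}$ as asserted.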
Suppose now that $f\in\operatorname{Aut}({\mathbb{F}}_n), n\geq 2$ and its fixed points are all on the exceptional section. By removing the exceptional section and an invariant fibre of the rational fibration, we get an open subset isomorphic to ${\mathbb{A}}^2$ on which $f$ can be written as: $(x,y)\mapsto(\alpha x,\beta y+Q(x))$ or $(x,y)\mapsto(x+1,\beta y+Q(x))$ where $\alpha,\beta\in {\mathbf{K}}^*$ and $Q$ is a polynomial of degree $\leq n$. In the first case, the fact that there is no extra fixed point on the fibre $x=0$ implies $\beta=1$ and $Q(0)\neq 0$. The action on the fibre at infinity can be obtained by a change of variables $(x',y')=(1/x,y/x^n)$, so the fact that there is no extra fixed point on it implies $\beta=\alpha^n$ and $\deg(Q)=n$. This forces $\alpha$ to be a primitive $r$-th root of unity for some $r\in{\mathbf{N}}$. Conjugating $f$ by $(x,y)\mapsto(x,y+\gamma x^d)$, we replace $Q(x)$ with $Q(x)+\gamma(\alpha^d-1)x^d$. This allows us to eliminate the term $x^d$ of $Q$ unless $\alpha^d=1$. So we can assume that $f$ is of the form $(x,y)\mapsto (\alpha x, y+\tilde{Q}(x^r))$ where $\alpha^r=1$ and $\tilde{Q}\in{\mathbf{K}}[x]$. Then $f$ is conjugate to $(x,y)\mapsto (\alpha x,y+1)$ by $(x,y)\dashrightarrow(x,y/\tilde{Q}(x^r))$. Remark that this case does not happen in positive characteristic because an automorphism of this form would be of finite order. Note that in this paragraph we did not use the fact that $f$ is of infinite order, so that Proposition \[jonqzelliptic\] is proved. Suppose now we are in the second case. There is no extra fixed point if and only if $\beta=1$ and $\deg (Q)=n$. If ${\operatorname{char}(\mathbf{K})}>0$ and if $\beta=1$, then $f$ would be of finite order. Therefore we can assume ${\operatorname{char}(\mathbf{K})}=0$. In that case, we can decrease the degree of $Q$ by conjugating $f$ by a well chosen birational transformation of the form $(x,y)\dashrightarrow(x,y+\gamma x^{n+1})$ with $\gamma\in {\mathbf{K}}^*$. 
By induction, we finally arrive at the form $(x,y)\mapsto (x+1,y)$. Once we have the above normal forms, explicit calculations can be done: \[ellipticthm\] Let $f\in{\operatorname{Cr}_{2}(\mathbf{K})}$ be an elliptic element of infinite order. 1. If $f$ is of the form $(x,y)\mapsto(\alpha x,\beta y)$ where $\alpha,\beta \in {\mathbf{K}}^*$ are such that the kernel of the group homomorphism ${\mathbf{Z}}^2\rightarrow {\mathbf{K}}^*,(i,j)\mapsto\alpha^i\beta^j$ is generated by $(k,0)$ for some $k\in{\mathbf{Z}}$, then the centralizer of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ is $$\operatorname{Cent}(f)=\{(x,y)\dashrightarrow(\eta(x),yR(x^k))\vert R\in {\mathbf{K}}(x),\eta\in {\operatorname{PGL}_{2}(\mathbf{K})}, \eta(\alpha x)=\alpha \eta(x)\}.$$ 2. If ${\operatorname{char}(\mathbf{K})}=0$ and if $f$ is of the form $(x,y)\mapsto(\alpha x,y+1)$, then the centralizer of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ is $$\operatorname{Cent}(f)=\{(x,y)\dashrightarrow(\eta(x),y+R(x))\vert \eta\in {\operatorname{PGL}_{2}(\mathbf{K})}, \eta(\alpha x)=\alpha \eta(x), R\in {\mathbf{K}}(x), R(\alpha x)=R(x)\}.$$ If $\alpha$ is not a root of unity then $R$ must be constant and $\eta(x)=\beta x$ for some $\beta\in{\mathbf{K}}^*$. 3.
If ${\operatorname{char}(\mathbf{K})}=p>0$ and if $f$ is of the form $(x,y)\mapsto(\alpha x,y+1)$ (where $\alpha$ must be of infinite order), then the centralizer of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$ is $$\operatorname{Cent}(f)=\{(x,y)\dashrightarrow(R(y)x,y+t)\vert t\in{\mathbf{K}}, R(y)=S(y)S(y-1)\cdots S(y-p+1), S\in {\mathbf{K}}(y)\}.$$ $$\{\eta\in{\operatorname{PGL}_{2}(\mathbf{K})}\vert\eta(\alpha x)=\alpha \eta(x)\}=\begin{cases}{\operatorname{PGL}_{2}(\mathbf{K})}\quad \text{if} \quad \alpha=1\\ \{x\mapsto\gamma x^{\pm 1}\vert \gamma\in{\mathbf{K}}^*\}\quad \text{if} \quad \alpha=-1\\ \{x\mapsto\gamma x\vert \gamma\in{\mathbf{K}}^*\}\quad \text{if} \quad \alpha\neq\pm 1\\ \end{cases}$$ *First case.* We first treat the case where $f$ is of the form $(x,y)\mapsto(\alpha x,\beta y)$. Let $(x,y)\dashrightarrow(\frac{P_1(x,y)}{Q_1(x,y)},\frac{P_2(x,y)}{Q_2(x,y)})$ be an element of $\operatorname{Cent}(f)$; here $P_1,P_2,Q_1,Q_2\in{\mathbf{K}}[x,y]$. The commutation relation gives us $$\frac{P_1(\alpha x,\beta y)}{Q_1(\alpha x,\beta y)}=\frac{\alpha P_1(x,y)}{Q_1(x,y)},\quad \frac{P_2(\alpha x,\beta y)}{Q_2(\alpha x,\beta y)}=\frac{\beta P_2(x,y)}{Q_2(x,y)}$$ which imply that $P_1,P_2,Q_1,Q_2$ are eigenvectors of the ${\mathbf{K}}$-linear automorphism ${\mathbf{K}}[x,y]\rightarrow{\mathbf{K}}[x,y], g(x,y)\mapsto g(\alpha x,\beta y)$. Therefore each one of $P_1,P_2,Q_1,Q_2$ is a product of a monomial in $x,y$ with a polynomial in ${\mathbf{K}}[x^k]$. Then we must have $\frac{P_1(x,y)}{Q_1(x,y)}=xR_1(x^k)$ and $\frac{P_2(x,y)}{Q_2(x,y)}=yR_2(x^k)$ for some $R_1,R_2\in{\mathbf{K}}(x)$. The first component $\frac{P_1(x,y)}{Q_1(x,y)}$ depends only on $x$; for the map to be birational, it must therefore define an element of ${\operatorname{PGL}_{2}(\mathbf{K})}$. The conclusion in this case follows. *Second case.* We now treat the case where ${\operatorname{char}(\mathbf{K})}=0$ and where $f$ is of the form $(x,y)\mapsto(\alpha x,y+1)$.
Let $(x,y)\dashrightarrow(\frac{P_1(x,y)}{Q_1(x,y)},\frac{P_2(x,y)}{Q_2(x,y)})$ be an element of $\operatorname{Cent}(f)$. We have $$\frac{P_1(\alpha x,y+1)}{Q_1(\alpha x,y+1)}=\frac{\alpha P_1(x,y)}{Q_1(x,y)}\quad \frac{P_2(\alpha x,y+1)}{Q_2(\alpha x,y+1)}=\frac{P_2(x,y)}{Q_2(x,y)}+1.\label{eq:ellipticsecondcase}$$ The first equation implies that $P_1,Q_1$ are eigenvectors of the ${\mathbf{K}}$-linear automorphism ${\mathbf{K}}[x,y]\rightarrow{\mathbf{K}}[x,y], g(x,y)\mapsto g(\alpha x,y+1)$. We view an element of ${\mathbf{K}}[x,y]$ as a polynomial in $x$ with coefficients in ${\mathbf{K}}[y]$. Since the only eigenvector of the ${\mathbf{K}}$-linear automorphism ${\mathbf{K}}[y]\rightarrow{\mathbf{K}}[y], g(y)\mapsto g(y+1)$ is $1$ (this is not true if ${\operatorname{char}(\mathbf{K})}>0$), we deduce that $P_1,Q_1$ depend only on $x$. Thus, $\frac{P_1(x,y)}{Q_1(x,y)}$ is an element $\eta$ of ${\operatorname{PGL}_{2}(\mathbf{K})}$. Differentiating $\psi=\frac{P_2}{Q_2}$, we get $$\frac{\partial\psi}{\partial y}(\alpha x,y+1)=\frac{\partial \psi}{\partial y}(x,y),\quad \frac{\partial\psi}{\partial x}(\alpha x,y+1)=\alpha^{-1}\frac{\partial \psi}{\partial x}(x,y).$$ As before, this means that $\frac{\partial\psi}{\partial y},\frac{\partial\psi}{\partial x}$ depend only on $x$ (not true if ${\operatorname{char}(\mathbf{K})}>0$). Hence, we can write $\psi$ as $ay+B(x)$ with $a\in{\mathbf{K}}^*$ and $B\in{\mathbf{K}}(x)$. Then equation \[eq:ellipticsecondcase\] implies $B(\alpha x)=B(x)+1-a$, which further implies that $x\frac{\partial B}{\partial x}(x)$ is invariant under $x\mapsto \alpha x$. If $\alpha$ is of infinite order, then $\frac{\partial B}{\partial x}(x)=\frac{c}{x}$ for some constant $c\in{\mathbf{K}}$. This is only possible if $c=0$. So $B$ is constant and $a=1$ in this case. If $\alpha$ is a primitive $k$-th root of unity, then $(\eta(x),ay+B(x))$ commutes with $f^k:(x,y)\mapsto(x,y+k)$. This yields $a=1$ and $B(\alpha x)=B(x)$.
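Before passing to the third case, here is a quick symbolic sanity check of the second-case description; the sample values $\alpha=-1$, $\eta(x)=3x$ and an even rational function $R$ are arbitrary illustrative choices. An element $(\eta(x),y+R(x))$ with $\eta(\alpha x)=\alpha\eta(x)$ and $R(\alpha x)=R(x)$ does commute with $(x,y)\mapsto(\alpha x,y+1)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
alpha = -1                      # a primitive 2nd root of unity

f = (alpha * x, y + 1)          # normal form (x, y) -> (alpha x, y + 1)

eta = 3 * x                     # satisfies eta(alpha x) = alpha eta(x)
R = x**2 / (x**2 + 1)           # even rational, so R(alpha x) = R(x)
g = (eta, y + R)                # candidate element of Cent(f)

def compose(F, G):
    # (F o G)(x, y)
    return tuple(sp.cancel(c.subs({x: G[0], y: G[1]}, simultaneous=True))
                 for c in F)

lhs = compose(g, f)             # g o f
rhs = compose(f, g)             # f o g, should coincide with lhs
```

Both compositions equal $(-3x,\ y+1+\frac{x^2}{x^2+1})$, confirming the commutation on this sample.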
*Third case.* We finally treat the case where ${\operatorname{char}(\mathbf{K})}=p>0$ and where $f$ is of the form $(x,y)\mapsto(\alpha x,y+1)$ with $\alpha$ of infinite order. Let $g\in\operatorname{Cent}(f)$. Then $g$ commutes with $f^p:(x,y)\mapsto(\alpha^p x,y)$ which is in the form of case 1 (the roles of $x,y$ are exchanged). Thus, we know that $g$ can be written as $(A(y)x,\eta(y))$ where $\eta\in{\operatorname{PGL}_{2}(\mathbf{K})}$ and $A\in{\mathbf{K}}(y)$. Then $f\circ g=g\circ f$ implies that $\eta$ is $y\mapsto y+t$ for some $t\in{\mathbf{K}}$ and that $A(y+1)=A(y)$. The last equation implies $A(y)=S(y)S(y-1)\cdots S(y-p+1)$ for some $S\in{\mathbf{K}}(y)$. For later use, we determine when an element of the centralizers appearing in Theorem \[ellipticthm\] is elliptic. Though we will use some of the material of Section \[algebraicallystablemaps\] in the proofs, we find it more natural to state these facts here. \[telescopic\] Let $f:(x,y)\dashrightarrow(\eta(x),yR(x)), \eta\in{\operatorname{PGL}_{2}(\mathbf{K})}, R\in{\mathbf{K}}(x)$ be an elliptic element. Then 1. either $R\in{\mathbf{K}}$, 2. or $R(x)=\frac{rS(x)}{S(\eta(x))}$ with $r\in{\mathbf{K}}^*$ and $S\in{\mathbf{K}}(x)\backslash {\mathbf{K}}$. If $\eta$ is the identity, then we see easily, by looking at the degree growth, that $f$ is elliptic if and only if $R$ is constant. From now on assume that $\eta$ is not the identity. We claim that $f$ is conjugate by an element of ${\operatorname{Jonq}_0(\mathbf{K})}$ to an automorphism of a Hirzebruch surface. By Corollary \[torsionellipticjonq\], the claim fails if and only if $\eta$ is of finite order $d$ and $f^d$ is a [Jonquières ]{}involution (see Corollary \[ordertwoboy\] for the terminology). However if $\eta$ is of finite order $d$ then $f^d$ is of the form $(x,y)\dashrightarrow(x,y\tilde{R}(x))$ with $\tilde{R}(x)=R(x)\cdots R(\eta^{d-1}(x))$, which is never a [Jonquières ]{}involution. This proves the claim.
By Theorem \[algstabjonq\], the conjugation which turns $f$ into an automorphism of a Hirzebruch surface is a sequence of elementary transformations. After conjugation, $f$ preserves the strict transforms of the two sections $\{y=0\}$ and $\{y=\infty\}$. Therefore there exists $g\in {\operatorname{Jonq}_0(\mathbf{K})}$ of the form $(x,y)\dashrightarrow(x,yS(x)), S\in{\mathbf{K}}(x)$ such that $g\circ f\circ g^{-1}$ is $(x,y)\dashrightarrow(\eta(x),r y)$ with $r\in {\mathbf{K}}^*$. Hence $f$ is $(x,y)\dashrightarrow\left(\eta(x),y\frac{rS(x)}{S(\eta(x))}\right)$. In the above lemma $S$ may not be unique. If $\eta$ has finite order and $T\in{\mathbf{K}}(x)$ is such that $T(x)=T(\eta(x))$, then $\frac{S(x)}{S(\eta(x))}=\frac{T(x)S(x)}{T(\eta(x))S(\eta(x))}$. \[Risapolynomial\] Let $f:(x,y)\dashrightarrow(\eta(x),y+R(x)), \eta\in{\operatorname{PGL}_{2}(\mathbf{K})}, R\in{\mathbf{K}}(x)$ be an elliptic element. Then 1. either $\eta$ has finite order, 2. or for a coordinate $x'$ such that $\eta$ is $x'\mapsto x'+1$ or $x'\mapsto \nu x'$ with $\nu\in{\mathbf{K}}^*$, $R$ is a polynomial in $x'$. It is clear that, if $\eta$ has finite order then the degree of $f^n$ is bounded for all $n\in{\mathbf{Z}}$. Assume that $\eta$ has infinite order; then for some coordinate $x'$, $\eta$ can be written as $\eta':x'\mapsto \nu x'+u$ with $\nu\in{\mathbf{K}}^*,u\in{\mathbf{K}}$. In coordinates $(x',y)$, write the transformation $f$ as $(x',y)\dashrightarrow(\eta'(x'),y+R'(x'))$ where $R'(x')=\frac{P(x')}{Q(x')}$ with $P,Q\in {\mathbf{K}}[x']$. For $n\in{\mathbf{N}}^*$, the iterate $f^n$ is $$(x',y)\dashrightarrow\left(\eta'^{n}(x'),y+\frac{P(x')}{Q(x')}+\cdots+\frac{P(\eta'^{n-1}(x'))}{Q(\eta'^{n-1}(x'))}\right).$$ If $Q\notin {\mathbf{K}}$, then the number of distinct irreducible factors of the polynomials $Q(x'),\cdots,Q(\eta'^{n-1}(x'))$ would go to infinity when $n$ tends to infinity, which would imply that the degrees of the $f^n$ are not bounded. Therefore for $f$ to be elliptic, $R'$ must be a polynomial.
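The conjugation appearing in the proof of Lemma \[telescopic\] can be verified symbolically. In the following SymPy sketch the choices $\eta(x)=2x$, $S(x)=x+1$ are arbitrary samples; the computation checks that, for $f:(x,y)\dashrightarrow\left(\eta(x),y\frac{rS(x)}{S(\eta(x))}\right)$ and $g:(x,y)\dashrightarrow(x,yS(x))$, the conjugate $g\circ f\circ g^{-1}$ is $(x,y)\dashrightarrow(\eta(x),ry)$:

```python
import sympy as sp

x, y, r = sp.symbols('x y r')

eta = 2 * x                 # sample eta of infinite order
S = x + 1                   # sample S in K(x) \ K

# f = (eta(x), y * r S(x)/S(eta(x))), the telescopic form
f = (eta, y * r * S / S.subs(x, eta))
g = (x, y * S)              # the conjugating map (x, y S(x))
g_inv = (x, y / S)          # its inverse

def compose(F, G):
    # (F o G)(x, y)
    return tuple(sp.cancel(c.subs({x: G[0], y: G[1]}, simultaneous=True))
                 for c in F)

conj = compose(g, compose(f, g_inv))    # g o f o g^{-1}
```

The factors $S$ telescope and `conj` reduces to $(2x,\ ry)$, i.e. the automorphism $(\eta(x),ry)$ of a Hirzebruch surface.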
[Jonquières ]{}twists with trivial action on the base {#jonqone} ----------------------------------------------------- We follow [@CD12] in this section. Let $f\in {\operatorname{Jonq}(\mathbf{K})}$ be a [Jonquières ]{}twist. Let $\operatorname{Cent}(f)$ be the centralizer of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$. Then $\operatorname{Cent}(f)\subset {\operatorname{Jonq}(\mathbf{K})}$. The rational fibration preserved by a [Jonquières ]{}twist $f$ is unique; it is thus also preserved by $\operatorname{Cent}(f)$. Let us consider centralizers of [Jonquières ]{}twists in ${\operatorname{Jonq}_0(\mathbf{K})}=\operatorname{PGL}_2({\mathbf{K}}(x))$ which is a linear algebraic group over the function field ${\mathbf{K}}(x)$. Let $f\in{\operatorname{Jonq}_0(\mathbf{K})}$ and let $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in \operatorname{GL}_2({\mathbf{K}}(x))$ be a matrix representing $f$, where $A,B,C,D\in {\mathbf{K}}[x]$. We introduce the function $\Delta:=\frac{\operatorname{Tr}^2}{\det}$ which is well defined on $\operatorname{PGL}_2({\mathbf{K}}(x))$ and is invariant under conjugation. This invariant $\Delta$ indicates the degree growth: The rational function $\Delta(f)$ is constant if and only if $f$ is an elliptic element. Let $t_1,t_2$ be the two eigenvalues of the matrix $M$, which are elements of the algebraic closure of ${\mathbf{K}}(x)$. The invariant $\Delta(f)$ equals $t_1/t_2+t_2/t_1+2$. Since ${\mathbf{K}}$ is algebraically closed, $\Delta(f)\in{\mathbf{K}}$ if and only if $t_1/t_2\in{\mathbf{K}}$. If $t_1=t_2$, then by conjugating $M$ to a triangular matrix we can write $f$ in the form $(x,y)\dashrightarrow (x,y+a(x))$ with $a\in{\mathbf{K}}(x)$ and it follows that $f$ is an elliptic element. Suppose now that $t_1\neq t_2$. Let $\zeta:C\rightarrow {\mathbb{P}}^1$ be the curve corresponding to the finite field extension ${\mathbf{K}}(x)\hookrightarrow {\mathbf{K}}(x)(t_1)$; here $\zeta$ is the identity map on ${\mathbb{P}}^1$ if $t_1,t_2\in{\mathbf{K}}(x)$.
The birational transformation $f$ induces a birational transformation $f_C$ on $C\times{\mathbb{P}}^1$ by base change. The induced map $f_C$ is of the form $(x,y)\dashrightarrow(x,(t_1/t_2)y)$ where $t_1/t_2$ is viewed as a function on $C$. The degree growth of $f_C$, which is the same as that of $f$, is linear if and only if $t_1/t_2$ is not a constant, i.e. if and only if $\Delta(f)$ is not a constant. From now on we suppose that $f$ is a [Jonquières ]{}twist so that $\Delta(f)\notin {\mathbf{K}}$. We still denote by $t_1,t_2$ the two eigenvalues of $M$ as in the above proof; we know that $t_1\neq t_2$. We first study the centralizer $\operatorname{Cent}_0(f)$ of $f$ in ${\operatorname{Jonq}_0(\mathbf{K})}=\operatorname{PGL}_2({\mathbf{K}}(x))$. Let $L$ be the finite extension of ${\mathbf{K}}(x)$ over which $M$ is diagonalisable; it is ${\mathbf{K}}(x)$ itself or a quadratic extension of ${\mathbf{K}}(x)$, depending on whether or not $t_1,t_2$ are in ${\mathbf{K}}(x)$. The centralizer $\operatorname{Cent}_0^L(f)$ of $f$ in $\operatorname{PGL}_2(L)$ is isomorphic to the multiplicative group $L^*$. So $\operatorname{Cent}_0(f)$, being contained in $\operatorname{Cent}_0^L(f)$ and containing all the iterates of $f$, must be a $1$-dimensional torus over ${\mathbf{K}}(x)$. It is split if $L={\mathbf{K}}(x)$, i.e. if $t_1,t_2\in{\mathbf{K}}(x)$. If $L={\mathbf{K}}(x)$, then up to conjugation $f$ can be written as $(x,y)\dashrightarrow (x,b(x)y)$ with $b\in{\mathbf{K}}(x)^*$ and $\operatorname{Cent}_0(f)=\{(x,y)\dashrightarrow(x,\gamma(x)y)\vert \gamma\in{\mathbf{K}}(x)^*\}$. If $L$ is a quadratic extension of ${\mathbf{K}}(x)$ and if ${\operatorname{char}(\mathbf{K})}\neq 2$, we can put $f$ in a simpler form and write $\operatorname{Cent}_0(f)$ explicitly as follows. We may assume that the matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ has entry $C=1$, after conjugation by $\begin{pmatrix}C&0\\0&1\end{pmatrix}$.
Once we have $C=1$, a conjugation by $\begin{pmatrix}2&D-A\\0&2\end{pmatrix}$ allows us to put $M$ in the form $\begin{pmatrix}A&B\\1&A\end{pmatrix}$ with $A,B\in{\mathbf{K}}[x]$. Therefore $\operatorname{Cent}_0(f)$ is $\{Id,(x,y)\dashrightarrow(x,\frac{\gamma(x)y+B(x)}{y+\gamma(x)})\vert \gamma\in{\mathbf{K}}(x)\}$, as the (${\mathbf{K}}(x)$-points of the) latter algebraic group is easily seen to commute with $f$. Note that $B$ is not a square in ${\mathbf{K}}(x)$ because $M$ is not diagonalisable over ${\mathbf{K}}(x)$, and that the transformation $f:(x,y)\dashrightarrow (x,\frac{A(x)y+B(x)}{y+A(x)})$ fixes pointwise the hyperelliptic curve defined by $y^2=B(x)$. Now we look at the whole centralizer of $f$. For $\eta\in{\operatorname{PGL}_{2}(\mathbf{K})}$ and $f\in{\operatorname{Jonq}_0(\mathbf{K})}$ represented by a matrix $\begin{pmatrix}A(x)&B(x)\\C(x)&D(x)\end{pmatrix}$, we denote by $f_{\eta}$ the element of ${\operatorname{Jonq}_0(\mathbf{K})}$ represented by $\begin{pmatrix}A(\eta(x))&B(\eta(x))\\C(\eta(x))&D(\eta(x))\end{pmatrix}$. Let $f\in{\operatorname{Jonq}_0(\mathbf{K})}$ be a [Jonquières ]{}twist and $g:(x,y)\dashrightarrow (\eta(x),\frac{a(x)y+b(x)}{c(x)y+d(x)})$ be an element of ${\operatorname{Jonq}(\mathbf{K})}$. Writing down the commutation equation, we see that $g$ commutes with $f$ if and only if $f$ is conjugate to $f_{\eta}$ in $\operatorname{PGL}_2({\mathbf{K}}(x))$ by the transformation represented by $\begin{pmatrix}a(x)&b(x)\\c(x)&d(x)\end{pmatrix}$. We thus have $\Delta(f)(x)=\Delta(f_{\eta})(x)=\Delta(f)(\eta(x))$. Recall that $\Delta(f)\in{\mathbf{K}}(x)$ is not in ${\mathbf{K}}$. As a consequence the group $$\{\eta\in{\operatorname{PGL}_{2}(\mathbf{K})}, \Delta(f)(x)=\Delta(f)(\eta(x))\}$$ is a finite subgroup of ${\operatorname{PGL}_{2}(\mathbf{K})}$. We then obtain: \[jonqzthm\] Let $f\in{\operatorname{Jonq}_0(\mathbf{K})}$ be a [Jonquières ]{}twist preserving the rational fibration fibre by fibre.
Let $\operatorname{Cent}(f)$ be the centralizer of $f$ in ${\operatorname{Cr}_{2}(\mathbf{K})}$. Then $\operatorname{Cent}(f)\subset{\operatorname{Jonq}(\mathbf{K})}$ and $\operatorname{Cent}_0(f)=\operatorname{Cent}(f)\cap{\operatorname{Jonq}_0(\mathbf{K})}$ is a finite index normal subgroup of $\operatorname{Cent}(f)$. The group $\operatorname{Cent}_0(f)$ has the structure of a $1$-dimensional torus over ${\mathbf{K}}(x)$. In particular $\operatorname{Cent}(f)$ is virtually abelian. In [@CD12], the authors give an explicit description of the quotient $\operatorname{Cent}(f)/\operatorname{Cent}_0(f)$ when ${\operatorname{char}(\mathbf{K})}=0$. #### Finite action on the base. If $f\in{\operatorname{Jonq}(\mathbf{K})}$ is a [Jonquières ]{}twist which has a finite action on the base, then $f^k\in{\operatorname{Jonq}_0(\mathbf{K})}$ for some $k\in{\mathbf{N}}$. As $\operatorname{Cent}(f)\subset\operatorname{Cent}(f^k)$, we can use Theorem \[jonqzthm\] to describe $\operatorname{Cent}(f)$: If $f\in{\operatorname{Jonq}(\mathbf{K})}$ is a [Jonquières ]{}twist which has a finite action on the base, then $\operatorname{Cent}(f)$ is virtually contained in a $1$-dimensional torus over ${\mathbf{K}}(x)$. In particular $\operatorname{Cent}(f)$ is virtually abelian. We content ourselves with this coarse description of $\operatorname{Cent}(f)$ because it causes only a finite index ambiguity as regards the embeddings of ${\mathbf{Z}}^2$ into ${\operatorname{Cr}_{2}(\mathbf{K})}$. We give an example to show what we expect $\operatorname{Cent}(f)$ to look like: let $f$ be $(x,y)\dashrightarrow (a(x),R(x)y)$ where $R\in{\mathbf{K}}(x)$ and $a\in{\operatorname{PGL}_{2}(\mathbf{K})}$ is of order $k<+\infty$. Then all maps of the form $(x,y)\dashrightarrow (x,S(x)S(a(x))\cdots S(a^{k-1}(x))y)$ with $S\in{\mathbf{K}}(x)$ commute with $f$, because the product $S(x)S(a(x))\cdots S(a^{k-1}(x))$ is invariant under $x\mapsto a(x)$. Base-wandering [Jonquières ]{}twists {#gnljonqtwists} ==================================== We introduce some notation.
For a Hirzebruch surface $X$, let us denote by $\pi$ the projection of $X$ onto ${\mathbb{P}}^1$, i.e. the rational fibration. When $X={\mathbb{P}}^1\times{\mathbb{P}}^1$, $\pi$ is the projection onto the first factor. For $x\in {\mathbb{P}}^1$, we denote by $F_x$ the fibre $\pi^{-1}(x)$. If $f$ is a birational transformation of a Hirzebruch surface $X$ which preserves the rational fibration, we denote by $\overline{f}\in{\operatorname{PGL}_{2}(\mathbf{K})}$ the induced action of $f$ on the base ${\mathbb{P}}^1$ and we will consider $f$ as an element of ${\operatorname{Jonq}(\mathbf{K})}$. Assume now that $f$ is a [Jonquières ]{}twist such that $\overline{f}\in {\operatorname{PGL}_{2}(\mathbf{K})}$ is of infinite order; we will call such an $f$ a *base-wandering [Jonquières ]{}twist*. We have an exact sequence: $$\{1\}\rightarrow \operatorname{Cent}_0(f)\rightarrow \operatorname{Cent}(f) \rightarrow \operatorname{Cent}_b(f)\rightarrow \{1\} \label{eq:centjonq}$$ where $\operatorname{Cent}_0(f)=\operatorname{Cent}(f)\cap {\operatorname{Jonq}_0(\mathbf{K})}$ and $\operatorname{Cent}_b(f)\subset \operatorname{Cent}(\overline{f})\subset {\operatorname{PGL}_{2}(\mathbf{K})}$. The action $\overline{f}$ on the base is conjugate to $x\mapsto \alpha x$ with $\alpha\in{\mathbf{K}}^*$ of infinite order or to $x\mapsto x+1$. The latter case is only possible if ${\operatorname{char}(\mathbf{K})}=0$. Thus $\operatorname{Cent}_b(f)$ is a subgroup of $\{x\mapsto \gamma x,\gamma\in{\mathbf{K}}^*\}$ or of $\{x\mapsto x+\gamma,\gamma\in{\mathbf{K}}\}$. In both cases $\operatorname{Cent}_b(f)$ is abelian. We first remark: \[centzelliptic\] All elements of $\operatorname{Cent}_0(f)$ are elliptic. By Theorem \[jonqzthm\], a [Jonquières ]{}twist in ${\operatorname{Jonq}_0(\mathbf{K})}$ can not have a base-wandering [Jonquières ]{}twist in its centralizer.
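The restriction on the additive case can be checked directly: if $\overline{f}$ is $x\mapsto x+1$, then $$\overline{f}^{\,n}(x)=x+n \quad \text{for all } n\in{\mathbf{N}},$$ so in characteristic $p>0$ we get $\overline{f}^{\,p}(x)=x+p=x$ and $\overline{f}$ has finite order $p$; hence $x\mapsto x+1$ can be the action of a base-wandering twist only when ${\operatorname{char}(\mathbf{K})}=0$.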
The rest of the article will essentially be occupied by the proof of the following theorem: \[mainthm\] Let $f\in{\operatorname{Jonq}(\mathbf{K})}$ be a base-wandering [Jonquières ]{}twist. The exact sequence $$\{1\}\rightarrow \operatorname{Cent}_0(f)\rightarrow \operatorname{Cent}(f) \rightarrow \operatorname{Cent}_b(f)\rightarrow \{1\}$$ satisfies - $\operatorname{Cent}_0(f)=\operatorname{Cent}(f)\bigcap{\operatorname{Jonq}_0(\mathbf{K})}$, if not trivial, is $\{(x,y)\mapsto (x,ty),t\in{\mathbf{K}}^*\}$, $\{(x,y)\mapsto (x,y+t),t\in{\mathbf{K}}\}$, $\langle (x,y)\mapsto (x,-y)\rangle$ or $\langle \text{a {Jonqui\`eres }involution}\rangle$; - $\operatorname{Cent}_b(f)\subset {\operatorname{PGL}_{2}(\mathbf{K})}$ is isomorphic to the product of a finite cyclic group with ${\mathbf{Z}}$. The infinite cyclic subgroup generated by $\overline{f}$ has finite index in $\operatorname{Cent}_b(f)$. The theorem is a consequence of Proposition \[centzprop\], Corollary \[centpersjonq\] and Proposition \[centnonpersjonq\]. \[basewanderingvirtuallyabelian\] The centralizer of a base-wandering [Jonquières ]{}twist is virtually abelian. This results directly from the fact that $\operatorname{Cent}_b(f)$ is virtually the cyclic group generated by $\overline{f}$. Theorem \[mainthm\] is optimal in the sense that $\operatorname{Cent}_b(f)$ can be ${\mathbf{Z}}$ (Remark \[exampledeserti\]) or a product of ${\mathbf{Z}}$ with a non trivial finite cyclic group (Example \[examplecentb\]) and $\operatorname{Cent}_0(f)$ can be trivial, isomorphic to ${\mathbf{K}}$, ${\mathbf{K}}^*$ or ${\mathbf{Z}}/2{\mathbf{Z}}$ (Section \[sectioncentz\]). \[exampledeserti\] A general base-wandering [Jonquières ]{}twist can not be written as $(\eta(x),yR(x^k))$ or $(\eta(x),y+R(x))$. So the centralizer of a general [Jonquières ]{}twist $f$ differs from the infinite cyclic group $\left\langle f\right\rangle$ only by some finite groups. 
For example, for a generic choice of $\alpha,\beta\in {\mathbf{K}}^*$, the centralizer of $f_{\alpha,\beta}:(x,y)\dashrightarrow(\alpha x,\frac{\beta y+x}{y+1})$ is $\left\langle f_{\alpha,\beta}\right\rangle$; this is shown by J. Déserti in [@Des08]. Algebraically stable maps {#algebraicallystablemaps} ------------------------- If $f$ is a birational transformation of a smooth algebraic surface $X$ over ${\mathbf{K}}$, we denote by $\operatorname{Ind}(f)$ the set of indeterminacy points of $f$. We say that $f$ is *algebraically stable* if there is no curve $V$ on $X$ such that the strict transform $f^k(V)\subset \operatorname{Ind}(f)$ for some integer $k\geq 0$. There always exists a birational morphism $\hat{X}\rightarrow X$ which lifts $f$ to an algebraically stable birational transformation of $\hat{X}$ ([@DF01] Theorem 0.1). The following theorem says that for $f\in{\operatorname{Jonq}(\mathbf{K})}$, we can get a more precise algebraically stable model: \[algstabjonq\] Let $f$ be a birational transformation of a ruled surface $X$ that preserves the rational fibration. Then there is a rational ruled surface $\hat{X}$ and a birational map $\varphi:X\dashrightarrow\hat{X}$ such that - the only singular fibres of $\hat{X}$ are of the form $D_0+D_1$ where $D_0,D_1$ are $(-1)$-curves, i.e. $\hat{X}$ is a conic bundle; - $f_{\hat{X}}=\varphi\circ f\circ\varphi^{-1}$ is an algebraically stable birational transformation of $\hat{X}$ and it preserves the rational fibration of $\hat{X}$ which is induced by that of $X$; - $f_{\hat{X}}$ sends singular fibres isomorphically to singular fibres and all indeterminacy points of $f_{\hat{X}}$ and its iterates are located on regular fibres; - $\varphi$ is a sequence of elementary transformations and blow-ups. Let $z\in X$ be an indeterminacy point of $f$. Let $X\xleftarrow{u}Y\xrightarrow{v}X$ be a minimal resolution of the indeterminacy point $z$, i.e.
$u,v$ are birational maps which are regular around the fibre over $\pi(z)$, $u^{-1}$ is a series of $n$ blow-ups at $z$ or at its infinitely near points, and $n$ is minimal among possible integers. \[jonqindlem\] The total transform by $u^{-1}$ in $Y$ of $F_{\pi(z)}$, the fibre containing $z$, is a chain of $(n+1)$ rational curves $C_0+C_1+\cdots+C_n$: $C_0$ is the strict transform of $F_{\pi(z)}$, $C_0^2=C_n^2=-1$, $C_i^2=-2$ for $0<i<n$ and $C_i\cdot C_{i+1}=1$ for $0\leq i <n$. Let us write $u:Y\rightarrow X$ as $Y=Y_n\xrightarrow{u_n}Y_{n-1}\cdots\xrightarrow{u_2}Y_1\xrightarrow{u_1}Y_0=X$ where each $u_i$ is a single contraction of a $(-1)$-curve and $C_i$ is (the strict transform of) the contracted $(-1)$-curve. By an abuse of notation, we will use $C_i$ to denote all strict transforms of the $(-1)$-curve contracted by $u_i$. The connectedness of the fibres and the preservation of the fibration imply that for each $i$, the map $f\circ u_1\circ \cdots \circ u_{i}$ has at most one indeterminacy point on a fibre. To prove the lemma, it suffices to show that the indeterminacy point of $f\circ u_1\circ \cdots \circ u_{i}$, which by construction lies in $C_i$, is not the intersection point of $C_i$ with $C_{i-1}$. Suppose by contradiction that $C_{i+1}$ is obtained by blowing up the intersection point of $C_i$ with $C_{i-1}$. Then for $j>i$, the self-intersection of $C_i$ on $Y_j$ is less than or equal to $-2$. Let us write $v:Y\rightarrow X$ as $Y=Y_n\xrightarrow{v_n}Y_{n-1}\cdots\xrightarrow{v_2}Y_1\xrightarrow{v_1}Y_0=X$ where each $v_i$ is a single contraction of a $(-1)$-curve. Since $C_i$ is contracted by $v$, there must exist an integer $k$ such that $v_{k+1}\circ\cdots\circ v_n(C_i)$ is the $(-1)$-curve on $Y_{k}$ contracted by $v_{k}$. This is possible only if the $C_j,j>i$ are all contracted by $v_{k}\circ\cdots\circ v_n$. But by the minimality of the integer $n$, $C_n$ can not be contracted by $v$.
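The numerical data in Lemma \[jonqindlem\] come from the standard behaviour of self-intersections under blow-ups: if $\sigma$ blows up a smooth point of a curve $C$ with exceptional divisor $E$, the strict transform is $\tilde{C}=\sigma^*C-E$, so that $$\tilde{C}^2=(\sigma^*C-E)^2=C^2-2\,\sigma^*C\cdot E+E^2=C^2-1.$$ Each blow-up $u_{i+1}^{-1}$ is centred at a point of the last curve $C_i$ of the chain, dropping its self-intersection from $-1$ to $-2$, while the new exceptional curve $C_{i+1}$ appears with self-intersection $-1$.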
Our proof is inspired by the proof of Theorem 0.1 of [@DF01]. Let $p_1,\cdots,p_k\in X$ be the indeterminacy points of $f$. By Lemma \[jonqindlem\], for $1\leq i\leq k$ the minimal resolution of $f$ at $p_i$ can be written as $$X=X_{i0}\xleftarrow{u_{i1}}X_{i1}\xleftarrow{u_{i2}}\cdots\xleftarrow{u_{in_i}} X_{in_i}=Y_{in_i}\xrightarrow{v_{in_i}}\cdots\xrightarrow{v_{i2}}Y_{i1}\xrightarrow{v_{i1}}Y_{i0}=X$$ where $u_{i1},\cdots,u_{in_i},v_{i1},\cdots,v_{in_i}$ are single contractions of $(-1)$-curves and $X_{in_i}$ has one singular fibre which is a chain of rational curves $C_{i0}+\cdots+C_{in_i}$. Let us write the global minimal resolution of indeterminacy of $f$, keeping in mind the rational fibration: $$\begin{tikzcd} X=X_0 \arrow{r}{f_0} \arrow{d}{\pi} & X_1 \arrow{r}{f_1} \arrow{d}{\pi} & \cdots \arrow{r}{f_{n-1}} &X_{n} \arrow{r}{f_n} \arrow{d}{\pi} & \cdots \arrow{r}{f_{2n-2}} & X_{2n-1} \arrow{r}{f_{2n-1}} \arrow{d}{\pi} & X_{2n}=X \arrow{d}{\pi} \\ {\mathbb{P}}^1 \arrow{r}{\overline{f_0}} & {\mathbb{P}}^1 \arrow{r}{\overline{f_1}} & \cdots \arrow{r}{\overline{f_{n-1}}} &{\mathbb{P}}^1 \arrow{r}{\overline{f_n}} & \cdots \arrow{r}{\overline{f_{2n-2}}} & {\mathbb{P}}^1 \arrow{r}{\overline{f_{2n-1}}} & {\mathbb{P}}^1 \end{tikzcd}$$ where $n=n_1+\cdots+n_k$ and - $f_0,\cdots,f_{n-1}$ are blow-ups which correspond to the inverses of $u_{11},\cdots,u_{1n_1},\cdots,u_{k1},\cdots,u_{kn_k}$; - $f_n,\cdots,f_{2n-1}$ are blow-downs which correspond to $v_{11},\cdots,v_{1n_1},\cdots,v_{k1},\cdots,v_{kn_k}$; - $X_n$ has $k$ singular fibres which are chains of rational curves $C_{i0}+\cdots+C_{in_i}, 1\leq i \leq k$; - the slightly abusive notation $\pi$ is self-explanatory and we will also denote by $C_{il}$ its strict transforms (when it remains a curve) on the surfaces $X_j$. On $X_0=X_{2n}$, it is possible that $C_{i'0}=C_{in_i}$ for $1\leq i,i'\leq k$.
[Figure: the successive blow-ups producing the chains of rational curves $C_{i0}+C_{i1}+\cdots+C_{in_i}$ in the singular fibres.]

For any $j\in {\mathbf{N}}$, we let $X_j=X_{j \mod 2n}$ and $f_j=f_{j \mod 2n}$. If $f_j$ blows up a point $r_j\in X_j$, then we denote by $V_{j+1}$ the exceptional curve on $X_{j+1}$. If $f_j$ contracts a curve $W_j\subset X_j$, then we denote by $s_{j+1}$ the point $f_j(W_j)\in X_{j+1}$. For each $V_j$ (resp. $W_j$), there is an $i$ such that $V_j$ (resp. $W_j$) is among $C_{i0},\cdots,C_{in_i}$. Suppose that $f$ is not algebraically stable on $X$. Then there exist integers $1\leq M <N$ such that $f_M$ contracts $W_M$ and $$f_{N-1}\circ \cdots \circ f_M(W_M)=r_N\in \operatorname{Ind}(f_N).$$ We can assume that $n\leq N \leq 2n-1$ and that the length $(N-M)$ is minimal. Observe first that the minimality of the length implies that for all $M\leq j<N-1$, the point $t_{j+1}:=f_j\circ\cdots\circ f_M(W_M)=f_j\circ\cdots\circ f_{M+1}(s_{M+1})$ is neither an indeterminacy point nor a point on a curve contracted by $f_{j+1}$. Secondly we assert that for all $M\leq j<N-1$, $t_{j+1}$ is not on the singular fibres of $X_{j+1}$. Indeed if some $t_{j+1}$ were on a singular fibre of $X_{j+1}$, then the sequence of points $t_{j+1},t_{j+2},\cdots$ would meet a contracted curve before meeting the first indeterminacy point $r_N$ (see the figure), which contradicts our first observation. The second observation further implies that for $M\leq j <N-1$ such that $j+2n<N-1$, the points $t_{j+1},t_{j+2n+1}$ are not on the same fibre of $X_{j+1}=X_{j+2n+1}$, because otherwise there would exist $j<j'<j+2n+1$ such that $j'\equiv M \pmod{2n}$ and $t_{j'}$ would be on the singular fibre containing $W_M$. Since $f_{N-1}$ maps isomorphically the fibre of $X_{N-1}$ containing $t_{N-1}$ (which is regular by the above observation) to the fibre of $X_N$ containing $r_N$, the fibre containing $r_N$ is just one rational curve.
As $f_N$ is a blow-up, the fibre of $X_{N+1}$ containing $V_{N+1}$ is the union of two $(-1)$-curves, let us say $C_{k0}$ and $C_{k1}=V_{N+1}$. Then the fibre of $X_N$ containing $r_N$ is just $C_{k0}$. Similarly the singular fibre of $X_M$ containing $W_M$ is $C_{mn_m}+C_{m(n_m-1)}$ for some $1\leq m\leq k$. *First case.* Suppose that $m=k$ and $n_k=1$. Let $a\in{\mathbf{N}}$ be the minimal integer such that $M+2an>N$. Then for $N<j\leq M+2an$, the surface $X_j$ has a singular fibre $C_{k0}+C_{k1}$ and the maps $f_N,\cdots,f_{M+2an-1}$ are all regular on $C_{k0}+C_{k1}$. Now we blow up $t_{M+1},\cdots,t_{N-1},r_N$. For $j_1\equiv j_2 \pmod{2n}$, we showed that $t_{j_1},t_{j_2}$ are not on the same fibre of $X_{j_1}=X_{j_2}$. This means that these blow-ups only give rise to singular fibres which are unions of two $(-1)$-curves. We denote by $\hat{X}_j$ the modified surfaces, and by $\hat{f}_j$ the induced maps. Then every $\hat{X}_j$ has singular fibres of the form $C_{k0}+C_{k1}$ and every $\hat{f}_j$ is regular around these singular fibres. Let $\hat{f}=\hat{f}_{2n-1}\circ \cdots \circ \hat{f}_0$. The number of indeterminacy points of $\hat{f}$ (it was $k$ for $f$) has decreased by one. Note that $\hat{f}$ exchanges the two components $C_{k0}$ and $C_{k1}$. This fact will be used in the proof of Corollary \[torsionellipticjonq\]. *Second case.* Suppose that $m=k$ and $n_k>1$, or simply that $m\neq k$. We blow up $r_N$ and contract the strict transform of the initial fibre containing $r_N$, which is $C_{k0}$, obtaining a new surface $\hat{X}_N$ whose corresponding fibre is now the single rational curve $C_{k1}$. We perform elementary transformations at $t_{N-1},\cdots,t_{M+1}$, i.e. we blow up $X_j$ at $t_j$ and contract the strict transform of the initial fibre, replacing $X_j$ with $\hat{X}_j$.
This process has no ambiguity: if $j_1\equiv j_2 \pmod{2n}$, we showed that $t_{j_1},t_{j_2}$ are not on the same fibre of $X_{j_1}=X_{j_2}$, so the corresponding elementary transformations do not interfere with each other. Let us denote by $\hat{f}_M,\cdots,\hat{f}_N$ the maps induced by $f_M,\cdots,f_N$. We now analyse the effects of $\hat{f}_M,\cdots,\hat{f}_N$. First look at $f_N$: it lifts to a regular isomorphism after blowing up $r_N$. Thus $\hat{f}_N$ is the blow-up at the point $e_N$ of $\hat{X}_N$ to which $C_{k0}$ is contracted. After this step, the map going from $X_{N-1}$ to $\hat{X}_N$ induced by $f_{N-1}$ is as follows: it contracts the fibre containing $t_{N-1}$ to $e_N$ and blows up $t_{N-1}$. Then we make elementary transformations at $t_{N-1},\cdots,t_{M+1}$ in turn. The maps $\hat{f}_{N-1},\cdots,\hat{f}_{M+1}$ are all regular on the modified fibres, thus they are still single blow-ups or single blow-downs. The behaviour of $\hat{f}_{M}$ differs from the previous ones: it does not contract $C_{m(n_m-1)}$ any more, but contracts $C_{mn_m}$. The hypothesis $m\neq k$ (or $m=k$, $n_k>1$) prevents $C_{k0}\subset X_{N+1}$ from going back into the fibre of $X_{M+2na}=X_M$ containing $W_M$ without being contracted. More precisely this implies the existence of $N'>N$ such that - $X_{N+1},\cdots,X_{N'}$ all contain $C_{k0}$ and $C_{k1}$; - $f_{N+1},\cdots,f_{N'-1}$ are regular on $C_{k0}$ and $f_{N'}$ contracts $C_{k0}$; - if $a\in {\mathbf{N}}$ is the minimal integer such that $M+2na>N$, then $N'<M+2na$. On the surfaces $X_{N+1},\cdots,X_{N'}$, $C_{k0}$ is always a $(-1)$-curve; we contract all these $C_{k0}$ and obtain new surfaces $\hat{X}_{N+1},\cdots,\hat{X}_{N'}$. The second and the third properties listed above mean that the new induced maps $\hat{f}_{N},\cdots,\hat{f}_{N'}$ are all single blow-ups, single blow-downs or simply isomorphisms.
In summary we get a commutative diagram: $$\begin{tikzcd} \hat{X}_0 \arrow{r}{\hat{f}_0} \arrow{d}{} & \hat{X}_1 \arrow{r}{\hat{f}_1} \arrow{d}{} & \cdots \arrow{r}{\hat{f}_{n-1}} &\hat{X}_{n} \arrow{r}{\hat{f}_n} \arrow{d}{} & \cdots \arrow{r}{\hat{f}_{2n-2}} & \hat{X}_{2n-1} \arrow{r}{\hat{f}_{2n-1}} \arrow{d}{} & \hat{X}_{2n}=\hat{X}_0 \arrow{d}{} \\ X_0 \arrow{r}{f_0} & X_1 \arrow{r}{f_1} & \cdots \arrow{r}{f_{n-1}} &X_{n} \arrow{r}{f_n} & \cdots \arrow{r}{f_{2n-2}} & X_{2n-1} \arrow{r}{f_{2n-1}} & X_{2n}=X_0\end{tikzcd}$$ where the vertical arrows are compositions of elementary transformations and blow-ups. Let us remark that: - the first vertical arrow $\hat{X}_0\dashrightarrow X_0$ is a composition of elementary transformations; - the blow-ups or the contractions of the $\hat{f}_j$ only concern the $k$ singular fibres and the exceptional curves are always among $C_{10},\cdots,C_{1n_1},\cdots,C_{k0},\cdots,C_{kn_k}$; - the curve $C_{k0}$ no longer appears. We then renumber: $C_{k1},\cdots,C_{kn_k}$ become $C_{k0},\cdots,C_{k(n_k-1)}$. Let $\hat{f}=\hat{f}_{2n-1}\circ \cdots \circ \hat{f}_0$. We repeat the above process. Either we are in the first case and $k$ decreases, or we are in the second case and the total number of $C_{10},\cdots,C_{1n_1},\cdots,C_{k0},\cdots,C_{kn_k}$ decreases. As a consequence, after finitely many steps, either we get an algebraically stable map $\hat{f}$, or we get rid of all the $C_{10},\cdots,C_{1n_1},\cdots,C_{k0},\cdots,C_{kn_k}$. In the latter case $\hat{f}$ is a regular automorphism, thus automatically algebraically stable. Theorem \[algstabjonq\] also gives a geometric complement to the study of elements of finite order of ${\operatorname{Jonq}(\mathbf{K})}$ in [@Bla11] Section 3.
In particular the proof of Theorem \[algstabjonq\] implies the following corollary (which is already known, see for example [@Bla11]), one special case of which will be used in the next section: \[torsionellipticjonq\] Let $f\in{\operatorname{Jonq}(\mathbf{K})}$ be an elliptic element. If $f$ is not conjugate to an automorphism of a Hirzebruch surface, then it is conjugate to an automorphism of a conic bundle and the order of $f$ is $2k$ for some $k\in{\mathbf{N}}^*$. Moreover $f^k$ is in ${\operatorname{Jonq}_0(\mathbf{K})}$ and exchanges the two components of some singular fibres of the conic bundle. We see by Theorem \[ellipticnormalform\] that an elliptic element of infinite order is always conjugate to an automorphism of a Hirzebruch surface. Hence our hypothesis implies immediately that $f$ is of finite order. We can assume that $f$ is an algebraically stable map on a conic bundle $X$ which satisfies the conditions of Theorem \[algstabjonq\]. We claim that $f$ is an automorphism of $X$. Suppose by contradiction that $p$ is an indeterminacy point of $f$. It must lie on a regular fibre $F$ of $X$. The fact that $f$ is of finite order and the algebraic stability of $f$ imply that $f^{-1}$ has an indeterminacy point on $F$ different from $p$. But then $f$ can not be of finite order, a contradiction. Since by hypothesis $X$ is not a Hirzebruch surface, it must have some singular fibres. By the proof of Theorem \[algstabjonq\] (see the *First case* in the proof), for each singular fibre there exists an iterate of $f$ which exchanges the two components of that fibre. Since there are finitely many singular fibres, we can find an integer $k>0$ such that $f^k$ is in ${\operatorname{Jonq}_0(\mathbf{K})}$ and exchanges the two components of at least one singular fibre. If we consider $f^k$ as an element of $\operatorname{PGL}_2({\mathbf{K}}(x))$, it is not diagonalizable over ${\mathbf{K}}(x)$.
As we have seen in Section \[jonqone\], the map $f^k$, being non diagonalizable, fixes pointwise a hyperelliptic curve whose projection onto ${\mathbb{P}}^1$ is induced by the rational fibration. The map $f^{2k}$ does not exchange the components of the singular fibres, so it is conjugate to an automorphism of a Hirzebruch surface and is diagonalizable over ${\mathbf{K}}(x)$. A diagonalizable map does not fix pointwise such a hyperelliptic curve unless it is trivial. Hence $f^{2k}=\operatorname{Id}$. See [@Bla11] Section 3, especially Proposition 3.3 and Lemma 3.9, for more information on such elliptic elements of finite order; see also [@DI09]. We will use a special case of the above corollary: \[ordertwoboy\] Let $f\in{\operatorname{Jonq}_0(\mathbf{K})}$ be an elliptic element which is not conjugate to an automorphism of a Hirzebruch surface. Then $f$ is of order $2$ and is conjugate to an automorphism of a conic bundle on which it fixes pointwise a hyperelliptic curve whose projection onto the base ${\mathbb{P}}^1$ is a ramified double cover. In some affine chart $f$ can be written as $(x,y)\dashrightarrow (x,\frac{a(x)}{y})$ with $a\in{\mathbf{K}}[x]$. The hyperelliptic curve is given by the equation $y^2=a(x)$. Such involutions are well known and are called *[Jonquières ]{}involutions*, see [@BB00]. An element of the form $(x,y)\dashrightarrow (\eta(x),yR(x))$ or $(x,y)\dashrightarrow (\eta(x),y+R(x))$ with $\eta\in{\operatorname{PGL}_{2}(\mathbf{K})}$ and $R\in{\mathbf{K}}(x)$ is never a [Jonquières ]{}twist. Thus by Theorem \[ellipticthm\], a [Jonquières ]{}twist never commutes with an elliptic element of infinite order. We will need an abelian elliptic group version of Theorem \[algstabjonq\]: \[ellipticsubgroupconjugation\] Let $G\subset {\operatorname{Jonq}(\mathbf{K})}$ be a finitely generated abelian elliptic subgroup without [Jonquières ]{}involutions. We can conjugate $G$ to a group of automorphisms of a Hirzebruch surface.
The conjugation is a sequence of elementary transformations. Let $f_1,\cdots,f_d\in G$ be a finite set of generators of $G$. We apply Theorem \[algstabjonq\] to $f_1$, then to $f_2$, etc. Note that by the proof of Theorem \[algstabjonq\], the elementary transformations of the conjugation are made at the indeterminacy points of the $f_i$. However $G$ is an abelian group, so that if $p$ is an indeterminacy point of $f_i$ and $g$ is another element of $G$, then either $g$ fixes $p$ or $p$ is an indeterminacy point of $g$ too. Therefore after applying Theorem \[algstabjonq\] to $f_{i+1}$, the previous ones $f_1,\cdots,f_i$ remain automorphisms. The group $\operatorname{Cent}_0(f)$ {#sectioncentz} ------------------------------------ Let $f$ be a base-wandering [Jonquières ]{}twist. In [@CD12], it is proved by explicit calculations, in the case where ${\mathbf{K}}={\mathbf{C}}$, that $\operatorname{Cent}_0(f)$ is isomorphic to ${\mathbf{C}}^*$, ${\mathbf{C}}^*\rtimes{\mathbf{Z}}/2{\mathbf{Z}}$, ${\mathbf{C}}$ or a finite group (this is not optimal). Their arguments do not work directly when ${\operatorname{char}(\mathbf{K})}>0$. With a more precise description of elements of ${\operatorname{Jonq}_0(\mathbf{K})}$, we simplify their arguments and improve their results. Let $g\in\operatorname{Cent}_0(f)$ be non trivial. Then either $g$ is conjugate to an automorphism of a Hirzebruch surface or $g$ is a [Jonquières ]{}involution as in Corollary \[ordertwoboy\]. In the first case, by Proposition \[jonqzelliptic\], we can write $g$ as $(x,y)\mapsto (x,\beta y)$ or $(x,y)\mapsto (x,y+1)$. Suppose that there exists a non trivial $g\in\operatorname{Cent}_0(f)$ that can be written as $(x,y)\mapsto (x,\beta y)$ with $\beta\in{\mathbf{K}}^*$.
Either $f$ is of the form $(a(x),R(x)y^{-1})$ and $\operatorname{Cent}_0(f)$ is an order two group generated by the involution $(x,y)\mapsto(x,-y)$, or $f$ is of the form $(a(x),R(x)y)$ and $\operatorname{Cent}_0(f)$ is $\{(x,y)\mapsto (x,\gamma y),\gamma\in{\mathbf{K}}^*\}$. The map $g$ preserves $\{y=0\}$ and $\{y=\infty\}$ and these two curves are the only $g$-invariant sections. Thus $f$ permutes these two sections and is necessarily of the form $(x,y)\dashrightarrow(a(x),R(x)y^{\pm 1})$ where $R\in{\mathbf{K}}(x)$ and $a\in{\operatorname{PGL}_{2}(\mathbf{K})}$ is of infinite order. If $f$ is $(a(x),R(x)y^{-1})$, then $\beta=-1$. For the discussion which follows, it is harmless to replace $f$ by $f^2$, so we can assume that $f$ is $(a(x),R(x)y)$. The only $f$-invariant sections are $\{y=0\}$ and $\{y=\infty\}$. Indeed an invariant section $s$ satisfies $$s(a^n(x))=R(x)\cdots R(a^{n-1}(x))s(x) \quad \forall n\in{\mathbf{N}}.$$ If $s$ were neither $\{y=0\}$ nor $\{y=\infty\}$, then the two sides of the equation would be rational functions, and by comparing the degrees (of numerators and denominators) we would get a contradiction because $R$ is not constant. Thus, an element of $\operatorname{Cent}_0(f)$ permutes the two $f$-invariant sections and is of the form $(x,A(x)y)$ or $(x,\frac{A(x)}{y})$ with $A\in{\mathbf{K}}(x)$. In the first case the commutation relation implies $A(a(x))=A(x)$, which further implies that $A$ is a constant. In the second case the commutation relation gives $A(a(x))^{-1}R(x)^2A(x)=1$, which further implies that $(a(x),R(x)^2 y)$ is conjugate by $(x,A(x)y)$ to an elliptic element $(a(x), y)$. This is not possible because the map $f':(x,y)\dashrightarrow(a(x),R(x)^2 y)$ is a [Jonquières ]{}twist. Indeed the iterates $f^n,f'^n$ are respectively $$(a^n(x), R(x)\cdots R(a^{n-1}(x))y)\quad \text{and} \quad (a^n (x), (R(x)\cdots R(a^{n-1}(x)))^2 y)$$ and they have the same degree growth.
Conversely, all elements of the form $(x,y)\mapsto (x,\beta y)$ with $\beta\in{\mathbf{K}}^*$ commute with $f:(x,y)\dashrightarrow(a(x),R(x)y)$, and we have already observed that $(x,y)\mapsto(x,-y)$ is the only non trivial element of ${\operatorname{Jonq}_0(\mathbf{K})}$ which commutes with $(a(x),R(x)y^{-1})$. Suppose that there exists a non trivial $g\in\operatorname{Cent}_0(f)$ that can be written as $(x,y)\mapsto (x,y+1)$. Then $f$ is of the form $(a(x),y+S(x))$ with $S\in{\mathbf{K}}(x)$ and $\operatorname{Cent}_0(f)$ is $\{(x,y+\gamma),\gamma\in{\mathbf{K}}\}$. The section $\{y=\infty\}$ is the only $g$-invariant section. Thus $f$ preserves this section and is of the form $(x,y)\dashrightarrow(a(x),R(x)y+S(x))$ where $R,S\in{\mathbf{K}}(x)$ and $a\in{\operatorname{PGL}_{2}(\mathbf{K})}$ is of infinite order. Writing down the relation $f\circ g=g\circ f$, we see that $R=1$. Thus $f$ is $(a(x),y+S(x))$ where $S$ belongs to ${\mathbf{K}}(x)$ but not to ${\mathbf{K}}[x]$, since $f$ is a [Jonquières ]{}twist. The only $f$-invariant section is $\{y=\infty\}$. Indeed an invariant section $s$ satisfies $$s(a^n(x))=s(x)+S(x)+\cdots +S(a^{n-1}(x)) \quad \forall n\in{\mathbf{N}}.$$ If $s$ were not $\{y=\infty\}$, then the two sides of the equation would be rational functions. The degree of the right-hand side grows linearly in $n$ while the degree of the left-hand side does not depend on $n$, a contradiction. Thus, an element of $\operatorname{Cent}_0(f)$ fixes $\{y=\infty\}$ and is of the form $(x,A(x)y+B(x))$ with $A,B\in{\mathbf{K}}(x)$. Writing down the commutation relation, we get $$A(x)y+B(x)+S(x)=A(a(x))y+A(a(x))S(x)+B(a(x)).$$ The fact that $a$ is of infinite order implies that $A$ is a constant. Then the equation reduces to $$B(x)+(1-A)S(x)-B(a(x))=0.$$ If $A\neq 1$, then $f:(x,y)\dashrightarrow(a(x),y+S(x))$ would be conjugate by $(x,y+\frac{B(x)}{1-A})$ to the elliptic element $(a(x),y)$. Therefore $A=1$ and $B$ is a constant.
Conversely, we see that all elements of the form $(x,y)\mapsto (x,y+\beta)$ with $\beta\in{\mathbf{K}}$ commute with $f:(x,y)\dashrightarrow(a(x),y+S(x))$. Assume that no non-trivial element of $\operatorname{Cent}_0(f)$ is conjugate to an automorphism of a Hirzebruch surface and that $\operatorname{Cent}_0(f)$ has a non-trivial element $g$. Then $g$ is a [Jonquières ]{}involution and is the only non-trivial element of $\operatorname{Cent}_0(f)$. By Lemma \[centzelliptic\], $g$ is an elliptic element. By Corollary \[ordertwoboy\], $g$ acts on a conic bundle $X$ and fixes pointwise a hyperelliptic curve $C$. The map $f$ induces an action on $C$, equivariant with respect to the ramified double cover. The action of $f$ on $C$ is infinite; this is possible only if the action of $f$ on the base is, up to conjugation, $x\mapsto \alpha x$ and if $C$ is a rational curve whose projection on the base ${\mathbb{P}}^1$ is ramified over $x=0,x=\infty$. Then the only singular fibres of $X$ are over $x=0,x=\infty$. If $f$ had an indeterminacy point on these two fibres, then it would be a fixed point of $g$ because $g$ commutes with $f$. But the only fixed point of $g$ on a singular fibre is the intersection point of the two components, which cannot be an indeterminacy point by Lemma \[jonqindlem\]. Therefore the [Jonquières ]{}twist $f$ must have an indeterminacy point over a point whose orbit in the base is infinite. This implies that the indeterminacy points of all the iterates of $f$ form an infinite set. As $g$ commutes with all the iterates of $f$, it fixes an infinite number of these indeterminacy points. Thus, the hyperelliptic curve $C$ associated to $g$ is the Zariski closure of these indeterminacy points and is uniquely determined by $f$. However $C$ determines $g$ too, by Corollary \[ordertwoboy\] (see [@Bla11] for more general results). Therefore $g$ is uniquely determined by $f$ and is the only non trivial element of $\operatorname{Cent}_0(f)$.
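For orientation, a standard example (illustrative only, not part of the argument above) of a [Jonquières ]{}involution fixing a hyperelliptic curve pointwise: take a squarefree polynomial $p\in{\mathbf{K}}[x]$ and set

```latex
g \colon (x,y) \dashrightarrow \Bigl(x,\ \frac{p(x)}{y}\Bigr), \qquad g^2 = \operatorname{id}.
% g preserves each fibre x = const, and (x,y) is fixed iff y = p(x)/y, i.e. on
C \colon\ y^2 = p(x),
% a double cover of the base P^1 ramified over the zeroes of p
% (a hyperelliptic curve of genus at least 2 as soon as deg p >= 5).
```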
Putting together the three previous lemmas, we obtain the following improvement of [@CD12]: \[centzprop\] Let $f$ be a base-wandering [Jonquières ]{}twist. If $\operatorname{Cent}_0(f)$ is not trivial, then it is $\{(x,y)\mapsto (x,ty),t\in{\mathbf{K}}^*\}$, $\{(x,y)\mapsto (x,y+t),t\in{\mathbf{K}}\}$, $\langle (x,y)\mapsto (x,-y)\rangle$ or $\langle \text{a {Jonqui\`eres }involution}\rangle$.

Persistent indeterminacy points
-------------------------------

### general facts

Let $f$ be a birational transformation of a surface $X$. An indeterminacy point $x\in X$ of $f$ will be called *persistent* if 1) for every $i>0$, $f^{-i}$ is regular at $x$; and 2) there are infinitely many curves contracted onto $x$ by the iterates $f^{-n}$, $n\in {\mathbf{N}}$. This notion of persistence and the following idea first appeared in an unpublished version of [@Can11]; it was also applied to some particular examples in [@Des08]. \[orbitargument\] Let $f$ be an algebraically stable birational transformation of a surface $X$. Suppose that there exists at least one persistent indeterminacy point with an infinite backward orbit. Let $n$ denote the number of such indeterminacy points. Then the centralizer $\operatorname{Cent}(f)$ of $f$ admits a morphism $\varphi:\operatorname{Cent}(f)\rightarrow {\mathscr{S}}_n$ to the symmetric group on $n$ letters satisfying the following property: for any $g\in\operatorname{Ker}(\varphi)$, there exists $l\in{\mathbf{Z}}$ such that $g\circ f^l$ preserves fibre by fibre a pencil of rational curves. The algebraic stability of $f$ will be used throughout the proof; we will not recall it each time. Denote by $p_1,\cdots,p_n$ the persistent indeterminacy points of $f$. Let $g$ be a birational transformation of $X$ which commutes with $f$. Fix an index $1\leq n_0\leq n$. Since $\{f^{-i}(p_{n_0}),i>0\}$ is infinite, there exists $k_0>0$ such that $g$ is regular at $f^{-k}(p_{n_0})$ for all $k\geq k_0$.
For infinitely many $j>0$, $f^{-j}$ contracts a curve onto $p_{n_0}$; denote these curves by $C_{n_0}^j$. There exists $k_1>0$ such that $g$ does not contract $C_{n_0}^k$ for all $k\geq k_1$. We deduce, from the above observations and the fact that $f$ and $g$ commute, that for $k\geq k_0$ the point $g(f^{-k}(p_{n_0}))$ is an indeterminacy point of some $f^m$ with $0<m\leq k_0+k_1$. Then there exists $0\leq m_0< m$ such that

- for $0\leq i\leq m_0$, $f^i$ is regular at $g(f^{-k}(p_{n_0}))$;

- $f^{m_0}(g(f^{-k}(p_{n_0})))=g(f^{m_0-k}(p_{n_0}))$ is an indeterminacy point of $f$.

By looking at $g(f^{-k}(p_{n_0}))$ and $C_{n_0}^{k'}$ for infinitely many $k,k'$, we see that the above indeterminacy point does not depend on $k$ and is persistent with an infinite backward orbit. So it is $p_{\sigma_g(n_0)}$ for some $1\leq \sigma_g(n_0)\leq n$. This gives us a well defined map $\sigma_g:\{1,\cdots,n\}\rightarrow \{1,\cdots,n\}$. Now let $g,h$ be two elements of $\operatorname{Cent}(f)$. Then by considering a sufficiently large $k$ for which $g$ is regular at $f^{-k}(p_{n_0})$ and $h$ is regular at $g(f^{-k}(p_{n_0}))$, we see that $\sigma_h\circ\sigma_g=\sigma_{h\circ g}$. By taking $h=g^{-1}$ we see that $\sigma_g$ is bijective. We thus obtain a group homomorphism $\varphi$ from $\operatorname{Cent}(f)$ to the symmetric group ${\mathscr{S}}_n$ which sends $g$ to $\sigma_g$. Assume that $n_0$ is a fixed point of $\sigma_g$; this holds in particular when $g\in \operatorname{Ker}(\varphi)$. We keep the previous notation. Since $g(f^{-k}(p_{n_0}))$ is an indeterminacy point of $f^m$ whose forward orbit meets $p_{n_0}$, for an appropriate choice of $l\leq k$ we have $$g\circ f^l(f^{-k}(p_{n_0}))=f^{-k}(p_{n_0})$$ for all $k\geq k_0$. This further implies $$g\circ f^l(C_{n_0}^{k'})=C_{n_0}^{k'}$$ for all sufficiently large $k'$. We conclude by Lemma \[invhypersurfaces\] below. The proof of the following lemma in [@Can10] is written over ${\mathbf{C}}$ for rational self-maps.
It is observed in [@Xie15] that the same proof works in all characteristics for birational transformations. \[invhypersurfaces\] A birational transformation of a smooth algebraic surface which preserves infinitely many curves preserves each member of a pencil of curves.

### persistent indeterminacy points for [Jonquières ]{}twists

We examine the notion of persistence in the [Jonquières ]{}group and give a complement to Theorem \[algstabjonq\]: \[persjonq\] Let $f$ be a [Jonquières ]{}twist acting algebraically stably on a conic bundle $X$ as in the statement of Theorem \[algstabjonq\]. Then an indeterminacy point $p$ of $f$ is persistent if and only if the orbit of $\pi(p)\in {\mathbb{P}}^1$ under $\overline{f}$ is infinite. In that case, every $f^{-i}, i\in {\mathbf{N}}^*$ contracts a curve onto $p$. If $\pi(p)$ has a finite orbit then $p$ certainly cannot be persistent. Let us assume that the orbit of $\pi(p)$ is infinite. Then $\overline{f}$ is conjugate to $x\mapsto \alpha x$ with $\alpha\in {\mathbf{K}}^*$ of infinite order, or to $x\mapsto x+1$ (only when ${\operatorname{char}(\mathbf{K})}=0$). By the algebraic stability of $f$, $f^{-i}$ is regular at $p$ for all $i>0$ and all the points $f^{-i}(p),i>0$ are on distinct fibres. Denote by $x_0,x_1$ the points $\pi(p),\overline{f}(\pi(p))$. By Theorem \[algstabjonq\], we know that the fibres $F_{x_0},F_{x_1}$ are not singular. Thus $f$ is regular on $F_{x_0}\backslash \{p\}$ and contracts it onto a point $q\in F_{x_1}$; $f^{-1}$ is regular on $F_{x_1}\backslash \{q\}$ and contracts it onto $p$. Now pick a point $x_n$ in the forward orbit of $x_0$ by $\overline{f}$ and consider the fibre $F_{x_n}$. The fibre $F_{x_n}$ cannot be contracted onto $q$ by $f^{-(n-1)}$ because of the algebraic stability of $f$. As a consequence it is contracted by $f^{-n}$ onto $p$. \[centpersjonq\] Let $f$ be a [Jonquières ]{}twist acting algebraically stably on a conic bundle $X$ as in the statement of Theorem \[algstabjonq\].
Suppose that the base action $\overline{f}\in{\operatorname{PGL}_{2}(\mathbf{K})}$ is of infinite order and there is an indeterminacy point of $f$ located on a fibre $F_x\subset X$ such that $\overline{f}(x)\neq x$.

1. If $\overline{f}$ is of the form $x\mapsto x+1$ then $\operatorname{Cent}_b(f)$ is isomorphic to ${\mathbf{Z}}$;

2. if $\overline{f}$ is of the form $x\mapsto \alpha x$ then $\operatorname{Cent}_b(f)$ is isomorphic to the product of ${\mathbf{Z}}$ with a finite cyclic group.

Note that the first case does not occur when ${\operatorname{char}(\mathbf{K})}\neq 0$. Proposition \[persjonq\] shows that the birational transformation $f$ satisfies the hypothesis of Proposition \[orbitargument\]. Let $n$ denote the number of persistent indeterminacy points of $f$ with infinite backward orbits. Let $g\in\operatorname{Cent}(f)$. Proposition \[orbitargument\] says that $g^{n!}\circ f^l$ preserves every member of a pencil of rational curves for some $l\in {\mathbf{Z}}$. The proof of Proposition \[orbitargument\] shows that certain members of this pencil of rational curves are fibres of the initial rational fibration on $X$, so this pencil of rational curves is the initial rational fibration. This means $\overline{g}^{n!}\circ \overline{f}^l=\operatorname{Id}\in {\operatorname{PGL}_{2}(\mathbf{K})}$. When ${\operatorname{char}(\mathbf{K})}=0$ and $\overline{f}$ is $x\mapsto x+1$, its centralizer in ${\operatorname{PGL}_{2}(\mathbf{K})}$ is isomorphic to the additive group ${\mathbf{K}}$, and this group is torsion free. Thus, $\operatorname{Cent}_b(f)$ is contained in an infinite cyclic group in which $\langle\overline{f}\rangle$ is of index $\leq n!$. The conclusion follows in this case. When $\overline{f}$ is $x\mapsto \alpha x$ with $\alpha$ of infinite order, its centralizer in ${\operatorname{PGL}_{2}(\mathbf{K})}$ is isomorphic to the multiplicative group ${\mathbf{K}}^*$.
The difference is that in this case it is possible that $\overline{g}$ is of finite order $\leq n!$. Thus, we may have an additional finite cyclic factor of $\operatorname{Cent}_b(f)$.

Local analysis around a fibre {#localanalysis}
-----------------------------

Now we need to study the case where there are no persistent indeterminacy points. In this section we will work in the following setting:

- Let $f$ be a base-wandering [Jonquières ]{}twist. We can suppose that $\overline{f}$ is $x\mapsto \alpha x$ or $x\mapsto x+1$.

- Up to taking an algebraically stable model as in Theorem \[algstabjonq\], we can suppose that $f$ is a birational transformation of a conic bundle $X$ which satisfies the properties in Theorem \[algstabjonq\].

- We assume that the only indeterminacy points of $f$ are on the fibres $F_0,F_{\infty}$.

Without loss of generality, let us suppose that $f$ has an indeterminacy point $p$ on the fibre $F_{\infty}$. By algebraic stability $f^{-1}$ has an indeterminacy point $q\neq p$ on $F_{\infty}$. If $x\in{\mathbb{P}}^1$ is neither $0$ nor $\infty$, then the orbit of $x$ under $\overline{f}$ is infinite and the fibre $F_x$ is regular. As $f$ has an indeterminacy point on $F_{\infty}$, the fibre $F_{\infty}$ is also regular. If $F_{0}$ is singular, then it is the union of two $(-1)$-curves and $f$ exchanges the two components. Since the aim of this section is to prove that $\operatorname{Cent}_b(f)$ is finite-by-cyclic, it is not harmful to replace $f$ with $f^2$, so that the two components of $F_{0}$ are no longer exchanged and we can assume that $F_{0}$ is regular. Thus, we can suppose that

- the surface $X$ is a Hirzebruch surface.

If $\overline{f}$ is $x\mapsto \alpha x$, then $\operatorname{Cent}_b(f)$ is contained in $\{(x\mapsto \gamma x), \gamma \in{\mathbf{K}}^*\}$ and all elements of $\operatorname{Cent}_b(f)$ fix $0$ and $\infty$. Similarly if $\overline{f}$ is $x\mapsto x+1$ then all elements of $\operatorname{Cent}_b(f)$ fix $\infty$.
Thus $F_0$ or $F_{\infty}$ is $\operatorname{Cent}(f)$-invariant (under total transforms); we will study the (semi-)local behaviour of the elements of $\operatorname{Cent}(f)$ around such an invariant fibre.

### An infinite chain {#infinitechain}

We blow up $X$ at $p,q$, the indeterminacy points of $f,f^{-1}$, obtaining a new surface $X_1$. The fibre of $X_1$ over $\infty$ is a chain of three rational curves $C_{-1}+C_0+C_1$ where $C_1$ (resp. $C_{-1}$) is the exceptional curve corresponding to $p$ (resp. $q$) and $C_0$ is the strict transform of $F_{\infty}\subset X$. Now $f$ induces a birational transformation $f_1$ of $X_1$. As in Lemma \[jonqindlem\], we know that $f_1$ (resp. $f_1^{-1}$) has an indeterminacy point $p_2$ (resp. $q_2$) on $C_1$ (resp. $C_{-1}$) which is disjoint from $C_0$. We then blow up $p_2,q_2$ and repeat the process. We obtain:

- for every $n\in {\mathbf{N}}$, a surface $X_n$ on which $f$ induces a birational transformation $f_n$;

- the fibre of $X_n$ over $\infty$ is a chain of rational curves $C_{-n},\cdots,C_0,\cdots,C_n$;

- $f_n$ (resp. $f_n^{-1}$) has an indeterminacy point $p_{n+1}$ (resp. $q_{n+1}$) on $C_n$ (resp. $C_{-n}$) disjoint from $C_{n-1}$ (resp. $C_{-(n-1)}$).

Let $g$ be a birational transformation of $X$ which commutes with $f$. We already observed that $F_{\infty}$ is an invariant fibre of $g$. If $g$ is regular on $F_{\infty}$, then the commutativity implies that $g$ preserves the set $\{p,q\}$. Suppose that $g$ is not regular on $F_{\infty}$. Then $g$ (resp. $g^{-1}$) has an indeterminacy point $p'$ (resp. $q'$) on $F_{\infty}$. Replacing $g$ by $g^{-1}$ or $f$ by $f^{-1}$, we can suppose that $p'\neq q$. Then for every point $x\in F_{\infty}$ such that $x\neq p, p'$, we have that $g(q)=g(f(x))=f(g(x))$ is a point, and thus equals $q$. This further implies $q=q'$. Then we apply the same argument to $g,f^{-1}$, obtaining $p=p'$.
In summary, $g$ is either regular on $F_{\infty}$ and preserves $\{p,q\}$, or the set of indeterminacy points of $g, g^{-1}$ on $F_{\infty}$ is exactly $\{p,q\}$. We lift $g$ to a birational transformation of $X_n$. By repeating the above arguments, we deduce that for all $n\in{\mathbf{N}}$ the two indeterminacy points of $f_n,f_n^{-1}$ on the fibre $F_{\infty}\subset X_n$ coincide with those of $g_n,g_n^{-1}$ if the latter exist. This means that, for a given $C_i$ and for sufficiently large $n$, the rational curve $C_i$ is a component of the fibre of $X_n$ and $g_n$ maps it to another component $C_j$ of the fibre. In other words $g$ acts on the infinite chain of rational curves $\sum_{n\in {\mathbf{Z}}}C_n$. The dual graph of this infinite chain of rational curves is a chain of vertices indexed by ${\mathbf{Z}}$. The action of $f$ on the dual graph is a non trivial translation. The automorphism group of the dual graph is isomorphic to ${\mathbf{Z}}\rtimes{\mathbf{Z}}/2{\mathbf{Z}}$. The automorphisms which commute with a non trivial translation are exactly the translations, which form the subgroup ${\mathbf{Z}}$. The above considerations can be summarized as follows: \[chainaction\] There is a group homomorphism $\Phi:\operatorname{Cent}(f)\rightarrow {\mathbf{Z}}$ such that $g(C_n)=C_{\Phi(g)+n}$ for $g\in\operatorname{Cent}(f)$. An element $g\in\operatorname{Cent}(f)$ is in the kernel of $\Phi$ if and only if $g(C_n)=C_n$ for every $n\in{\mathbf{Z}}$. In other words an element $g$ of the kernel of $\Phi$ is regular on the fibre $F_{\infty}$ and fixes the indeterminacy points of $f,f^{-1}$ on this fibre. \[nootherind\] Let $g$ be an element of $\operatorname{Cent}(f)$. Let $x\in {\mathbb{P}}^1$ be a point not fixed by $\overline{f}$. Then $g$ cannot have any indeterminacy point on the fibre $F_x$ over $x$. By our hypothesis $f$ is regular on all fibres $F_{x_n}$, where $\{x_n,n\in{\mathbf{Z}}\}$ denotes the orbit of $x$ under $\overline{f}$.
If $g$ had an indeterminacy point $p$ on $F_x$, then $f(p),f^2(p),\cdots$ would give an infinite number of indeterminacy points of $g$. \[cyclicprop1\] Suppose that $\overline{f}$ is conjugate to $x\mapsto x+1$ (in particular ${\operatorname{char}(\mathbf{K})}=0$). Let $g\in \operatorname{Cent}(f)$ be in the kernel of $\Phi:\operatorname{Cent}(f)\rightarrow {\mathbf{Z}}$. Then $g$ is an automorphism of $X$. Furthermore $g$ preserves the rational fibration fibre by fibre. Lemma \[chainaction\] says that $g$ does not have any indeterminacy point on the fibre $F_{\infty}$. Lemma \[nootherind\] says that $g$ does not have any indeterminacy point elsewhere either. Thus, $g$ is an automorphism. Since $\overline{g}$ commutes with $\overline{f}:x\mapsto x+1$, $\overline{g}$ is $x\mapsto x+v$ for some $v\in{\mathbf{K}}$. Suppose by contradiction that $v\neq 0$. Then $g$ is an elliptic element of infinite order and $f\in\operatorname{Cent}(g)$. We can apply Theorem \[ellipticthm\] to $g,f$ and put them in normal form. As $f$ is a [Jonquières ]{}twist, the rational fibration preserved simultaneously by $f$ and $g$ is unique and it must be the rational fibration appearing in the normal form. But Theorem \[ellipticthm\] forbids $\overline{f}$ and $\overline{g}$ from both being non-trivial translations. When $\overline{f}$ is of the form $x\mapsto \alpha x$, there are two special fibres $F_0,F_{\infty}$ and the above easy argument does not work.

### Formal considerations along a fibre

In the rest of this section we will assume that $\overline{f}$ is $x\mapsto \alpha x$. There are two invariant fibres $F_{\infty}$ and $F_0$ in this case. We assume that $f$ has an indeterminacy point on $F_0$. The idea of what we do in the sequel is as follows. Let us look at the case where ${\mathbf{K}}={\mathbf{C}}$.
The indeterminacy point $q\in F_0$ of $f^{-1}$ is a fixed point of $f$, at which the differential of $f$ has two eigenvalues $0$ and $\alpha$; the fibre direction is superattracting and in the transverse direction $f$ is just $x\mapsto \alpha x$. Therefore there is a local invariant manifold at $q$ for $f$, which is a local holomorphic section of the rational fibration. Likewise, there is a local invariant manifold at $p\in F_0$, the indeterminacy point of $f$. These two local holomorphic sections allow us to conjugate $f$ locally holomorphically to $(\alpha x,a(x) y)$ where $a$ is a germ of holomorphic function. The structure of [Jonquières ]{}maps is nice enough to allow us to apply this geometric idea over any field in an elementary way. We just need to work with formal series instead of polynomials. From now on we fix $f:(x,y)\dashrightarrow (\alpha x, \frac{A(x)y+B(x)}{C(x)y+D(x)})$ where $\alpha\in{\mathbf{K}}^*$ is of infinite order and $A,B,C,D\in {\mathbf{K}}[x]$. Without loss of generality, we suppose that 1) the point $(0,0)$ (resp. $(0,\infty)$) is an indeterminacy point of $f$ (resp. $f^{-1}$); 2) one of $A,B,C,D$ is not a multiple of $x$. This implies $$B(0)=C(0)=D(0)=0, \ A(0)\neq 0. \label{eq:fatzero}$$ We will consider $A,B,C,D$ as elements of the ring of formal series ${\mathbf{K}}\llbracket x\rrbracket$. We will also view $f$ as an element of the formal [Jonquières ]{}group $\operatorname{PGL}_2({\mathbf{K}}{(\!(}x{)\!)})\rtimes {\mathbf{K}}^*$ whose elements are formal expressions of the form $(\mu x, \frac{a(x)y+b(x)}{c(x)y+d(x)})$ where $\mu\in{\mathbf{K}}^*$ and $a,b,c,d$ belong to ${\mathbf{K}}{(\!(}x{)\!)}$, the fraction field of ${\mathbf{K}}\llbracket x\rrbracket$.

#### Normal form.

We want to conjugate $f$ to a formal expression of the form $(\alpha x,\beta(x) y),\beta \in{\mathbf{K}}{(\!(}x{)\!)}$, by some formal expression $(x,\frac{E(x)y+F(x)}{G(x)y+H(x)})$ with $E,F,G,H\in {\mathbf{K}}\llbracket x\rrbracket$.
This amounts to saying that we are looking for $E,F,G,H\in {\mathbf{K}}\llbracket x\rrbracket$ such that $EH-FG\neq 0$ and $$\begin{pmatrix}E(\alpha x)&F(\alpha x)\\G(\alpha x)&H(\alpha x)\end{pmatrix}^{-1} \begin{pmatrix}A(x)&B(x)\\C(x)&D(x)\end{pmatrix} \begin{pmatrix}E(x)&F(x)\\G(x)&H(x)\end{pmatrix}$$ is a diagonal matrix. By writing out the explicit expressions of the upper-right entry and the lower-left entry of this matrix product, we obtain two equations to solve: $$\begin{aligned} F(x)H(\alpha x)A(x)+H(x)H(\alpha x)B(x)-F(x)F(\alpha x)C(x)-H(x)F(\alpha x)D(x)=0 \label{eq:efghone}\\ -E(x)G(\alpha x)A(x)-G(x)G(\alpha x)B(x)+E(x)E(\alpha x)C(x)+G(x)E(\alpha x)D(x)=0 \label{eq:efghtwo}\end{aligned}$$ We will use lowercase letters to denote the coefficients of the formal series, e.g. $E(x)=\sum_{i\in {\mathbf{N}}}e_ix^i$. Let us first look at the constant terms of these two equations; they give $$-e_0g_0a_0-g_0^2b_0+e_0^2c_0+e_0g_0d_0=0=f_0h_0a_0+h_0^2b_0-f_0^2c_0-f_0h_0d_0.$$ Since $b_0=c_0=d_0=0$ and $a_0\neq 0$, we must have $e_0g_0=f_0h_0=0$. We can choose $f_0=g_0=0$ and $e_0=h_0=1$; this guarantees in particular that our solution will satisfy $EH-FG\neq 0$. Remark that the two equations involve respectively only $F,H$ and $E,G$, and they have exactly the same form. So it suffices to show the existence of $E,G$ satisfying the second equation. The constant term is done; let us look at the $x$ term. This leads to a linear equation in $e_1,g_1$ with coefficients involving $a_0,b_0,c_0,d_0,e_0,g_0$ and $\alpha$. Therefore there exists at least one solution for $e_1,g_1$. Then we turn to the next term and get a linear equation in $e_2,g_2$, and so on. Hence, we can find $E,F,G,H$ which satisfy the desired properties.
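The term-by-term procedure above is a triangular linear system: each new coefficient is determined by a linear equation with an invertible leading coefficient. As a purely illustrative sketch (the function names are hypothetical and not part of the text), the fragment below carries out the analogous scalar recursion $\xi(\gamma x)\omega(x)=\frac{\omega_0}{\sigma_0}\xi(x)\sigma(x)$, which appears later in this section, for truncated series over $\mathbf{Q}$:

```python
from fractions import Fraction

def solve_xi(omega, sigma, gamma, N):
    """Term-by-term solution of xi(gamma*x)*omega(x) = (omega_0/sigma_0)*xi(x)*sigma(x).

    omega, sigma: truncated power series (lists of N Fractions), omega[0], sigma[0] != 0.
    gamma: a rational number that is not a root of unity (over Q, |gamma| != 1 suffices).
    Returns the first N coefficients of xi, normalized by xi_0 = 1.
    """
    c = omega[0] / sigma[0]
    xi = [Fraction(1)]
    for i in range(1, N):
        # x^i coefficient: sum_{j<=i} gamma^j xi_j omega_{i-j} = c * sum_{j<=i} xi_j sigma_{i-j}.
        # The unknown xi_i appears as gamma^i*xi_i*omega_0 on the left and as
        # c*xi_i*sigma_0 = omega_0*xi_i on the right, so its coefficient is
        # omega_0*(gamma^i - 1), invertible since gamma is not a root of unity.
        known = sum(gamma**j * xi[j] * omega[i - j] for j in range(i)) \
              - c * sum(xi[j] * sigma[i - j] for j in range(i))
        xi.append(-known / (omega[0] * (gamma**i - 1)))
    return xi

def series_mul(a, b, N):
    """Product of two truncated power series, up to order N."""
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(N)]
```

Over an arbitrary field the same recursion works verbatim whenever $\gamma^i\neq 1$ for all $i\geq 1$; the solvability of the pair of equations for $E,G$ (and $F,H$) follows the same pattern.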
To sum up, we have: \[EFGHlem\] There exist $E,F,G,H\in {\mathbf{K}}\llbracket x\rrbracket$ such that:

- $E(0)=H(0)=1$ and $F(0)=G(0)=0$, in particular $\begin{pmatrix}E&F\\G&H\end{pmatrix}\in \operatorname{PGL}_2({\mathbf{K}}{(\!(}x{)\!)})$;

- $(x,\frac{E(x)y+F(x)}{G(x)y+H(x)})$ conjugates $f$ to $(\alpha x, \beta(x)y)$ for some $\beta\in{\mathbf{K}}{(\!(}x{)\!)}$.

#### Projective line over ${\mathbf{K}}{(\!(}x{)\!)}$.

We call an element of ${\mathbb{P}}^1({\mathbf{K}}{(\!(}x{)\!)})={\mathbf{K}}{(\!(}x{)\!)}\bigcup\{\infty\}$ a formal section. We say that a formal section $\theta(x)$ passes through the origin if $\theta(0)=0$. An element $u=(\mu x, \frac{a(x)y+b(x)}{c(x)y+d(x)})$ of the formal [Jonquières ]{}group $\operatorname{PGL}_2({\mathbf{K}}{(\!(}x{)\!)})\rtimes {\mathbf{K}}^*$ acts on ${\mathbb{P}}^1({\mathbf{K}}{(\!(}x{)\!)})$ in the following way: $$\begin{aligned} \theta(x) &\mapsto u\cdot\theta(x)=\begin{cases}\infty \quad \text{if} \ c(\mu^{-1}x)\theta(\mu^{-1}x)+d(\mu^{-1}x)=0 \\ \frac{a(\mu^{-1}x)\theta(\mu^{-1}x)+b(\mu^{-1}x)}{c(\mu^{-1}x)\theta(\mu^{-1}x)+d(\mu^{-1}x)}\quad \text{otherwise}\end{cases},\\ \infty &\mapsto \begin{cases}\infty \quad \text{if} \ c=0\\ \frac{a(\mu^{-1}x)}{c(\mu^{-1}x)} \quad \text{if} \ c\neq 0\end{cases}. \end{aligned}$$ Geometrically this says that a formal section of the rational fibration is sent to another one by a formal [Jonquières ]{}transformation. Remark that this action on ${\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$ is not an action by automorphisms of the ${\mathbf{K}}{(\!(}x{)\!)}$-algebraic variety ${\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$. In scheme-theoretic language, we have a commutative diagram: $$\begin{tikzcd} {\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}} \arrow{r}{\theta\mapsto u\cdot \theta} \arrow{d}{} & {\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}\arrow{d}{}\\ \operatorname{Spec}({\mathbf{K}}{(\!(}x{)\!)}) \arrow{r}{\mu x\mapsfrom x}& \operatorname{Spec}({\mathbf{K}}{(\!(}x{)\!)}).
\end{tikzcd}$$ Thus, we have a group homomorphism from $\operatorname{PGL}_2({\mathbf{K}}{(\!(}x{)\!)})\rtimes {\mathbf{K}}^*$ to the group of such twisted automorphisms of ${\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$. Now let $g\in\operatorname{Cent}(f)$ be an element in the kernel of $\Phi$. Recall (see Lemma \[chainaction\]) that $g$ is regular on the fibre $F_0$ and fixes $(0,0),(0,\infty)$. We showed that $f$ is conjugate by $\begin{pmatrix}E&F\\G&H\end{pmatrix}$ to a formal expression $\hat{f}$ of the form $(\alpha x,\beta(x)y)$. We conjugate $g$ by $\begin{pmatrix}E&F\\G&H\end{pmatrix}$ too, to get a formal expression $\hat{g}$. Then $\hat{g}$ commutes with $\hat{f}$. Recall that, by Lemma \[EFGHlem\], we get $\begin{pmatrix}1&0\\0&1\end{pmatrix}$ when we evaluate the formal expression $\begin{pmatrix}E&F\\G&H\end{pmatrix}$ at $x=0$. Together with the fact that $g\in\operatorname{Ker}(\Phi)$, this implies that we get $y\mapsto \delta_0 y$ for some $\delta_0\in{\mathbf{K}}^*$ when we evaluate $\hat{g}$ at $x=0$. Let us consider the actions of $\hat{f},\hat{g}$ on ${\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$ as described above. Since $\hat{f}$ is in diagonal form, it fixes the points $0$ and $\infty$ of ${\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$. If $\theta\in{\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$ satisfies $\theta(0)=0$ and $\hat{f}\cdot\theta(x)=\theta(x)$, then $\theta=0$. Indeed, the equation $\hat{f}\cdot\theta(x)=\theta(x)$ reads $\beta(\alpha^{-1}x)\theta(\alpha^{-1}x)=\theta(x)$, i.e. $\theta(\alpha x)^{-1}\beta(x)\theta(x)=1$. Suppose by contradiction that $\theta$ is not $0$. Then we can write $\theta(x)$ as $x^r\tilde{\theta}(x)$ where $r>0$ and $\tilde{\theta}(0)\neq 0$. Hence we have $\tilde{\theta}(\alpha x)^{-1}\beta(x)\tilde{\theta}(x)=\alpha^r$. This implies that $\hat{f}$ is conjugate by $(x,\tilde{\theta}(x)y)$ to $(\alpha x, \alpha^r y)$.
Since $\tilde{\theta}(0)\neq 0$ and $\begin{pmatrix}E(0)&F(0)\\G(0)&H(0)\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$, this implies that the initial [Jonquières ]{}twist $f$ is regular on the fibre $F_0$, a contradiction. Since $\hat{g}$ is $y\mapsto \delta_0 y$ at $x=0$, it sends the formal section $0\in{\mathbb{P}}^1({\mathbf{K}}{(\!(}x{)\!)})$ to another formal section passing through the origin. The fact that $\hat{f}$ and $\hat{g}$ commute and the fact that $0$ is the only fixed formal section of $\hat{f}$ which passes through the origin imply that $\hat{g}$ fixes $0\in {\mathbb{P}}^1_{{\mathbf{K}}{(\!(}x{)\!)}}$. Likewise $\hat{g}$ fixes $\infty$ too. Therefore $\hat{g}$ can be written as $(\gamma x,\delta(x)y)$ where $\gamma\in{\mathbf{K}}^*$ and $\delta\in{\mathbf{K}}{(\!(}x{)\!)}$ satisfies $\delta(0)=\delta_0\neq 0$.

#### Normal forms for a pair.

Let us assume for the moment that $\gamma$ is not a root of unity; we are going to prove that this is impossible. Under this hypothesis, we want to conjugate $\hat{g}=(\gamma x,\delta(x)y)$ to $(\gamma x, \delta(0)y)$ by $h=(x,\xi(x)y)$ for some $\xi\in {\mathbf{K}}\llbracket x\rrbracket$. Remark that the conjugate of $\hat{f}$ by $h$ will still be in diagonal form. We write $\delta=\frac{\omega}{\sigma}$ where $\omega,\sigma\in {\mathbf{K}}\llbracket x\rrbracket$ satisfy $\omega(0)\neq 0,\sigma(0)\neq 0$ and $\frac{\omega(0)}{\sigma(0)}=\delta(0)$. We will write $\xi$ as $\sum_{i\in{\mathbf{N}}}\xi_ix^i$, and likewise for $\sigma,\omega$. After conjugation by $h=(x,\xi(x)y)$, $\hat{g}$ becomes $$\tilde{g}=h\circ \hat{g}\circ h^{-1}=(\gamma x,\frac{\xi(\gamma x)}{\xi(x)}\frac{\omega(x)}{\sigma(x)}y).$$ Therefore the equation we want to solve is $$\xi(\gamma x)\omega(x)=\frac{\omega_0}{\sigma_0}\xi(x)\sigma(x).$$ The constant terms of the two sides are automatically equal; let us just choose $\xi_0=1$.
Comparing the other terms, we obtain $$\begin{aligned} &\xi_0\omega_1+\gamma \xi_1 \omega_0=\frac{\omega_0}{\sigma_0}(\xi_0\sigma_1+\xi_1\sigma_0) \\ &\xi_0\omega_2+\gamma \xi_1\omega_1+\gamma^2\xi_2\omega_0=\frac{\omega_0}{\sigma_0}(\xi_0\sigma_2+\xi_1\sigma_1+\xi_2\sigma_0)\\ &\cdots \end{aligned}$$ which are equivalent to $$\begin{aligned} &(\gamma-1)\omega_0\xi_1=\frac{\omega_0}{\sigma_0}\xi_0\sigma_1-\xi_0\omega_1\\ &(\gamma^2-1)\omega_0\xi_2=\frac{\omega_0}{\sigma_0}(\xi_0\sigma_2+\xi_1\sigma_1)-\xi_0\omega_2-\gamma \xi_1\omega_1\\ &\cdots.\end{aligned}$$ For the $i$-th term, we have a linear equation in which the coefficient of $\xi_i$ is $(\gamma^i-1)\omega_0$. Since $\omega_0\neq 0$ and we have supposed that $\gamma$ is not a root of unity, the above equations always have solutions. In summary, we have the following intermediate lemma (we will derive a contradiction from it, so its hypothesis is in fact never satisfied): Suppose that $g\in\operatorname{Ker}(\Phi)$ and the action of $g$ on the base is of infinite order. Then we can conjugate $f$ and $g$ simultaneously, by an element of $\operatorname{PGL}_2({\mathbf{K}}{(\!(}x{)\!)})$ whose evaluation at $x=0$ is $\operatorname{Id}:y\mapsto y$, to $$\tilde{g}=(\gamma x,\delta y),\ \tilde{f}=(\alpha x,\beta(x) y)$$ where $\alpha,\gamma,\delta\in {\mathbf{K}}^*$, $\beta\in {\mathbf{K}}{(\!(}x{)\!)}^*$ and $\alpha,\gamma$ are of infinite order. Writing down the equation $\tilde{f}\circ\tilde{g}=\tilde{g}\circ\tilde{f}$, we get $\delta\beta(x)=\delta\beta(\gamma x)$. As $\delta\neq 0$, we get $\beta(x)=\beta(\gamma x)$. We write $\beta=\frac{\beta^{num}}{\beta^{den}}$ with $\beta^{num},\beta^{den}\in {\mathbf{K}}\llbracket x\rrbracket$ such that at least one of $\beta^{num}_0,\beta^{den}_0$ is not $0$.
The equation becomes $$\beta^{num}(x)\beta^{den}(\gamma x)=\beta^{den}(x)\beta^{num}(\gamma x).$$ By comparing the coefficients of the two sides, we get $$\forall k\in{\mathbf{N}},\ \sum_{i+j=k}{\beta^{num}_i\beta^{den}_j\gamma^j}=\sum_{i+j=k}{\beta^{den}_i\beta^{num}_j\gamma^j}.$$ Then by induction on $k$ we get from these equations:

1. either $\beta^{num}=0$ (when $\beta^{num}_0=0$), which is impossible;

2. or $\beta^{den}=0$ (when $\beta^{den}_0=0$), which is again impossible;

3. or $\beta^{num}=\kappa\beta^{den}$ for some $\kappa\in{\mathbf{K}}^*$ (when $\beta^{num}_0\beta^{den}_0\neq 0$).

In the last case $\tilde{f}=(\alpha x,\kappa y)$; this contradicts the fact that the original birational transformation $f$ has an indeterminacy point on the fibre $F_0$, because to get $\tilde{f}$ we only performed conjugations whose evaluation at $x=0$ is the identity $y\mapsto y$. Thus, we get \[kerphielliptic\] Suppose that $g\in\operatorname{Ker}(\Phi)$. Then $\overline{g}$ is of finite order and $g$ is an elliptic element of ${\operatorname{Cr}_{2}(\mathbf{K})}$. We have already shown that $\overline{g}$ cannot be of infinite order. Then an iterate $g^k$ is in ${\operatorname{Jonq}_0(\mathbf{K})}$ and $f\in\operatorname{Cent}(g^k)$. By Theorem \[jonqzthm\], an element which commutes with a [Jonquières ]{}twist in ${\operatorname{Jonq}_0(\mathbf{K})}$ cannot have an infinite action on the base. As $\overline{f}$ is of infinite order, $g^k$ must be elliptic. So $g$ must be elliptic.

### Another fibre

The base action $\overline{f}\in{\operatorname{PGL}_{2}(\mathbf{K})}$ is $x\mapsto \alpha x$; it has two fixed points $0$ and $\infty$. Recall that we are always under the hypothesis that the indeterminacy points of $f$ are on the fibres $F_0,F_{\infty}$. We have carried out the analysis around the fibre $F_0$, on which $f$ has an indeterminacy point. We will denote by $\Phi_0$ the homomorphism $\Phi$ considered before.
In case $f$ also has an indeterminacy point on $F_{\infty}$, we denote the corresponding homomorphism by $\Phi_{\infty}$. We are going to reduce the proof to a situation where the following lemma applies. \[alggpcycliclem\] The image of $\operatorname{Aut}(X)\bigcap\operatorname{Ker}(\Phi_0)\subset\operatorname{Cent}(f)$ in $\operatorname{Cent}_b(f)\subset{\operatorname{PGL}_{2}(\mathbf{K})}$ is a finite cyclic group. We recall first that the automorphism group of a Hirzebruch surface is an algebraic group (see [@Mar71]). An element of $\operatorname{Cent}(f)$ which is regular everywhere on $X$ must be in $\operatorname{Ker}(\Phi_0)$. Thus, $\operatorname{Aut}(X)\bigcap\operatorname{Ker}(\Phi_0)=\operatorname{Aut}(X)\bigcap\operatorname{Cent}(f)$ is an algebraic subgroup of $\operatorname{Aut}(X)$. An automorphism of a Hirzebruch surface always preserves the rational fibration, and there is a morphism of algebraic groups from $\operatorname{Aut}(X)$ to ${\operatorname{PGL}_{2}(\mathbf{K})}$ (see [@Mar71]). The image of $\operatorname{Aut}(X)\bigcap\operatorname{Ker}(\Phi_0)\subset\operatorname{Cent}(f)$ in $\operatorname{Cent}_b(f)\subset{\operatorname{PGL}_{2}(\mathbf{K})}$ is an algebraic subgroup $\Lambda$ of ${\operatorname{PGL}_{2}(\mathbf{K})}$. By Proposition \[kerphielliptic\], the elements of $\Lambda$ are all multiplications by roots of unity. If $\Lambda$ were infinite, then it would be equal to its Zariski closure in ${\operatorname{PGL}_{2}(\mathbf{K})}$ and would be isomorphic to the multiplicative group ${\mathbf{K}}^*$. But the existence of a base-wandering [Jonquières ]{}twist means that ${\mathbf{K}}^*$ contains elements of infinite order, for example $\alpha$. This contradicts the fact that $\Lambda$ is a torsion group. The conclusion follows.
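The cyclicity in the lemma also follows from a standard fact, recalled here for convenience (a classical argument, not specific to the text above):

```latex
% Any finite subgroup \Lambda of the multiplicative group K^* of a field is cyclic:
% let m be the exponent of \Lambda; every element satisfies x^m = 1, a polynomial
% equation with at most m roots in K, so |\Lambda| \le m. Since the exponent of a
% finite abelian group divides its order and here m \ge |\Lambda|, we get
% |\Lambda| = m, and a finite abelian group whose exponent equals its order is cyclic.
```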
We first look at the case where we have two homomorphisms $\Phi_0,\Phi_{\infty}$ to use: \[cyclicprop2\] If $f$ has an indeterminacy point on $F_{\infty}$, then $\operatorname{Ker}(\Phi_0)=\operatorname{Ker}(\Phi_{\infty})$ is a subgroup of $\operatorname{Aut}(X)$. The image of $\operatorname{Ker}(\Phi_0)$ in $\operatorname{Cent}_b(f)\subset{\operatorname{PGL}_{2}(\mathbf{K})}$ is a finite cyclic group. Let $g$ be an element of $\operatorname{Ker}(\Phi_0)$. By Proposition \[kerphielliptic\], $g$ is an elliptic element of ${\operatorname{Cr}_{2}(\mathbf{K})}$. If $\Phi_{\infty}(g)$ were not trivial, then $g$ would act by a non trivial translation on the corresponding infinite chain of rational curves and could not be conjugate to any automorphism. This means $g$ must belong to $\operatorname{Ker}(\Phi_{\infty})$ and consequently $g$ must be an automorphism of $H$. The second part of the statement follows from Lemma \[alggpcycliclem\]. When $f$ is regular on $F_{\infty}$, we may need to do a little more, but we get more precise information as well: \[cyclicprop3\] If $f$ has no indeterminacy points on $F_{\infty}$, then $\operatorname{Ker}(\Phi_0)$ is a finite cyclic group whose elements are automorphisms of ${\mathbb{P}}^1\times{\mathbb{P}}^1$ of the form $(x,y)\mapsto (\gamma x,y)$ with $\gamma$ a root of unity. Assume that $f$ is regular on $F_{\infty}$. Let $g\in\operatorname{Ker}(\Phi_0)$ be a non trivial element; it is regular on $F_0$. By Lemma \[nootherind\], an indeterminacy point of $g$ can only be located on $F_{\infty}$. Suppose that $g$ has an indeterminacy point $p$ on $F_{\infty}$. Then $g^{-1}$ also has an indeterminacy point $q$ on $F_{\infty}$. If $p\neq q$, then $g$ would act by translation on the corresponding infinite chain of rational curves. This means that $g$ would never be conjugate to an automorphism of some surface and contradicts Proposition \[kerphielliptic\] which asserts that $g$ is elliptic. Thus, we have $p=q$.
The facts that $f$ commutes with $g$ and that $f$ is regular on $F_{\infty}$ imply $f(p)=p$. We blow up the Hirzebruch surface $X$ at $p$ to get a new surface $X'$ and induced actions $f',g'$. The induced action $f'$ is still regular on the fibre $F_{\infty}'$ and preserves both of the two irreducible components. If $g'$ has an indeterminacy point on $F_{\infty}'$, then as before it coincides with the indeterminacy point of $g'^{-1}$ and must be fixed by $f'$. Then we can keep blowing up indeterminacy points of maps induced from $g$, or contracting $g$-invariant $(-1)$-curves in the fibre, without losing the regularity of the map induced by $f$. As $g$ is elliptic, we will eventually get a surface $\hat{X}$ with induced actions $\hat{f},\hat{g}$ which are all regular on the fibre over $\infty$. We can suppose that $\hat{X}$ is minimal among the surfaces with this property. In particular $\hat{g}$ is an automorphism of $\hat{X}$. Moreover, the proof of Theorem \[algstabjonq\] shows that $\hat{X}$ is a conic bundle and the only possible singular fibre is $\hat{F}_{\infty}$. We claim that $\hat{F}_{\infty}$ is in fact regular. Suppose by contradiction that $\hat{F}_{\infty}$ is singular. Then it is a chain of two $(-1)$-curves and $\hat{g}$ exchanges the two components. However, the conic bundle $\hat{X}$ is obtained from a Hirzebruch surface by a single blow-up, so it has a unique section of negative self-intersection which passes through only one of the two components of the singular fibre. As a consequence, the automorphism $\hat{g}$ can not exchange the two components, a contradiction. Thus, replacing $X$ by $\hat{X}$, we can suppose from the beginning that $g$ is an automorphism of the Hirzebruch surface $X$. Suppose by contradiction that $g$ preserves only finitely many sections of the rational fibration.
Since $f$ commutes with $g$, we can assume, after perhaps replacing $f$ by some of its iterates, that $f$ and $g$ preserve simultaneously a section of the rational fibration. Removing this section and the fibre $F_0$ from $H$, we get an open set isomorphic to ${\mathbb{A}}^2$ restricted to which $f$ writes as $(x',y')\mapsto (\alpha^{-1} x', A(x')y'+B(x'))$ where $A,B\in {\mathbf{K}}(x')$. The rational function $A$ must be a constant because $f$ acts as an automorphism on this affine open set. Likewise, the rational function $B$ must be a polynomial. But then $(deg(f^n))_{n\in{\mathbf{N}}}$ would be a bounded sequence. This contradicts the fact that $f$ is a [Jonquières ]{}twist. Hence, if $g\in \operatorname{Ker}(\Phi)$ is non-trivial then it necessarily preserves infinitely many sections. This forces $g$ to preserve each member of a pencil of rational curves on $X$ whose general members are sections (see Lemma \[invhypersurfaces\]). This is only possible if $X={\mathbb{P}}^1\times{\mathbb{P}}^1$ and $g$ acts as $(x,y)\mapsto (\gamma x,y)$ with $\gamma\in{\mathbf{K}}^*$; here the projection of ${\mathbb{P}}^1\times{\mathbb{P}}^1$ onto the first factor is the original rational fibration we were looking at. This allows us to conclude by Lemma \[alggpcycliclem\]. \[examplecentb\] Let $\mu$ be a $k$-th root of unity; then the pair $f:(x,y)\mapsto (\alpha x,\frac{(1+x^k)y+x^k}{(2+x^k)y+1+x^k}), g:(x,y)\mapsto (\mu x, y)$ satisfies the conditions in Proposition \[cyclicprop3\]. Now let $f$ be a base-wandering [Jonquières ]{}twist which satisfies the hypothesis made at the beginning of Section \[localanalysis\]; in particular $f$ is regular outside $F_0\bigcap F_{\infty}$ and $\overline{f}$ is $x\mapsto \alpha x$ or $x\mapsto x+1$. The image $\Phi_{\infty}(\operatorname{Cent}(f))$ is an infinite cyclic subgroup of ${\mathbf{Z}}$ and is isomorphic to ${\mathbf{Z}}$; it is generated by $\Phi_{\infty}(g)$ for some $g\in\operatorname{Cent}(f)$.
Then for any $h\in\operatorname{Cent}(f)$, there exists $k\in{\mathbf{Z}}$ such that $g^{-k}\circ h \in\operatorname{Ker}(\Phi_{\infty})$. Thus, $\overline{g}^{-k}\circ \overline{h}$ belongs to the image of $\operatorname{Ker}(\Phi_{\infty})$ in $\operatorname{Cent}_b(f)$. By Corollary \[cyclicprop1\], Proposition \[cyclicprop2\] and Proposition \[cyclicprop3\], the image of $\operatorname{Ker}(\Phi_{\infty})$ in $\operatorname{Cent}_b(f)$ is at worst finite cyclic. Note that $\operatorname{Cent}_b(f)$ is always abelian. Therefore we obtain the last piece of information to prove Theorem \[mainthm\]: \[centnonpersjonq\] Let $f$ be a base-wandering [Jonquières ]{}twist which satisfies the hypothesis made at the beginning of Section \[localanalysis\]. Let $g$ be an element of $\operatorname{Cent}(f)$ such that $\Phi_0(g)$ generates the image of $\Phi$. Then $\operatorname{Cent}_b(f)$ is the product of a finite cyclic group with the infinite cyclic group generated by $\overline{g}$. Proofs of the main results ========================== Centralizers of loxodromic elements are virtually cyclic by Theorem \[loxothm\] of Blanc-Cantat. It is proved in [@Giz80],[@Can11] that centralizers of Halphen twists are virtually abelian (see Theorem \[halphenthm\]). Centralizers of [Jonquières ]{}twists whose actions on the base are of finite order are contained in tori over the function field ${\mathbf{K}}(x)$, thus are abelian (see [@CD12] and Theorem \[jonqzthm\]). Our Theorem \[basewanderingvirtuallyabelian\] says that centralizers of base-wandering [Jonquières ]{}twists are virtually abelian. Centralizers of infinite order elliptic elements (due to [@BD15]) are described in Theorem \[ellipticthm\], from which we see directly that the only infinite order elliptic elements which admit non virtually abelian centralizers are those given here. The proof is a direct combination of Theorems \[loxothm\], \[halphenthm\], \[ellipticthm\], \[jonqzthm\] and \[mainthm\].
In the first case $\Gamma$ is an elliptic subgroup, so the degree function is bounded. In the second case, the two Halphen twists $f$ and $g$ are automorphisms of a rational surface $X$ preserving an elliptic fibration $X\rightarrow {\mathbb{P}}^1$. The elliptic fibration is induced by the linear system corresponding to $mK_X$ for some $m\in{\mathbf{N}}^*$. For $n\in{\mathbf{N}}$, the actions of $f^n$ and $g^n$ on $Pic(X)$ are respectively $$D\mapsto D-mn(D\cdot K_X)\Delta_i+\left( -\frac{m^2}{2}(D\cdot K_X)\cdot (n\Delta_i)^2+m(D\cdot (n\Delta_i)) \right) K_X, \quad i=1,2$$ where $(\cdot)$ denotes the intersection form and $\Delta_i\in Pic(X)$ satisfies $\Delta_i\cdot K_X=0$ (cf. [@Giz80], [@BD15]). Therefore the action of $f^i\circ g^j$ on $Pic(X)$ is $$\begin{aligned} &D\mapsto D-mi(D\cdot K_X)\Delta_1-mj(D\cdot K_X)\Delta_2+\lambda_{ij} K_X \quad \text{where}\\ & \lambda_{ij}=-\frac{m^2}{2}(D\cdot K_X)\cdot \left (i^2\Delta_1^2 + j^2\Delta_2^2\right) +mD\cdot (i\Delta_1+j\Delta_2)-ijm^2(D \cdot K_X)(\Delta_1\cdot \Delta_2).\end{aligned}$$ Let $\Lambda$ be an ample class on $X$. Then the degree of $f^i\circ g^j$ is up to a bounded term (cf. [@BD15] Section 5) $$\begin{aligned} \Lambda\cdot(f^i\circ g^j)^*\Lambda=\Lambda^2-\frac{m^2}{2}(\Lambda\cdot K_X)^2 \left (i^2\Delta_1^2 + j^2\Delta_2^2\right)-ijm^2(\Lambda \cdot K_X)^2(\Delta_1\cdot \Delta_2).\end{aligned}$$ Note that $\Delta_1^2$ and $\Delta_2^2$ are negative. Let us consider the third case. Firstly assume that $\Gamma\cap{\operatorname{Jonq}_0(\mathbf{K})}$ is contained in a split torus over ${\mathbf{K}}(x)$. Then up to conjugation we can find two generators $f_0:(x,y)\dashrightarrow (x,\frac{P(x)}{Q(x)}y),g_0:(x,y)\dashrightarrow (x,\frac{R(x)}{S(x)}y)$ of $\Gamma\cap{\operatorname{Jonq}_0(\mathbf{K})}$ such that $P,Q,R,S\in{\mathbf{K}}[x]$ do not have common factors. If $f_0$ is elliptic, then $Q=1$ and $P\in{\mathbf{K}}$, so the degree of $f_0^ig_0^j$ is $\vert j\vert(deg(R)+deg(S))+1$. 
If $f_0,g_0$ are both [Jonquières ]{}twists, then the degree of $f_0^ig_0^j$ is $\vert i\vert(deg(P)+deg(Q))+\vert j\vert(deg(R)+deg(S))+1$. Now assume that $\Gamma\cap{\operatorname{Jonq}_0(\mathbf{K})}$ is contained in a non-split torus over ${\mathbf{K}}(x)$. The torus becomes split over a quadratic extension $L$ of ${\mathbf{K}}(x)$. The field $L$ is the function field of a double cover of ${\mathbb{P}}^1$, and it also has a notion of degree. On ${\mathbf{K}}(x)$, the $L$-degree function is a multiple of the ${\mathbf{K}}(x)$-degree function. Therefore the arguments in the split case still work. In the fourth case the description of the degree function follows directly from the explicit expressions. \[abmaxthm\] Let $G\subset {\operatorname{Cr}_{2}(\mathbf{K})}$ be a maximal abelian subgroup which has at least one element of infinite order. Then up to conjugation one of the following possibilities holds: 1. $G$ is $\{(x,y)\mapsto (\alpha x,\beta y)\vert \alpha,\beta\in {\mathbf{K}}^*\}$, $\{(x,y)\mapsto (\alpha x,y+v)\vert \alpha\in {\mathbf{K}}^*,v\in {\mathbf{K}}\}$ or $\{(x,y)\mapsto (x+u,y+v)\vert u,v\in {\mathbf{K}}\}$; 2. $G$ is the product of $\{(x,y)\mapsto (x,\beta y)\vert \beta\in {\mathbf{K}}^*\}$ with an infinite torsion group $G_1$. Each element of $G_1$ is of the form $$(x,y)\dashrightarrow \left(\eta(x),y\frac{S(x)}{S(\eta(x))}\right) \ \text{with}\ \eta\in{\operatorname{PGL}_{2}(\mathbf{K})}, S\in{\mathbf{K}}(x)$$ and the morphism from $G_1$ to ${\operatorname{PGL}_{2}(\mathbf{K})}$ embeds $G_1$ as a subgroup of the group of roots of unity of ${\mathbf{K}}$ or a subgroup of the additive group ${\mathbf{K}}$. All elements of $G$ are elliptic but $G$ is not conjugate to a group of automorphisms of any rational surface. 3. $G$ has a finite index subgroup contained in ${\operatorname{Jonq}_0(\mathbf{K})}=\operatorname{PGL}_2({\mathbf{K}}(x))$. 4. A finite index subgroup $G'$ of $G$ is a cyclic group generated by a base-wandering [Jonquières ]{}twist. 5.
A finite index subgroup $G'$ of $G$ is isomorphic to ${\mathbf{K}}^*\times {\mathbf{Z}}$ (resp. ${\mathbf{K}}\times {\mathbf{Z}}$) where the first factor is $\{(x,y)\mapsto (x,\beta y)\vert \beta\in{\mathbf{K}}^*\}$ (resp. $\{(x,y)\mapsto (x,y+v)\vert v\in{\mathbf{K}}\}$) and the second factor is generated by a base-wandering [Jonquières ]{}twist, as in the fourth case of Theorem \[zzthm\]; 6. A finite index subgroup $G'$ of $G$ is isomorphic to ${\mathbf{Z}}^s$ with $s\leq 8$ and $G'$ preserves fibrewise an elliptic fibration; 7. A finite index subgroup $G'$ of $G$ is a cyclic group generated by a loxodromic element. The existence of a type two maximal abelian group is less obvious than the others. We give here two examples. \[torsionexampleone\] Let $q\in{\mathbf{N}}^*$. Let $(\xi_n)_n$ be a sequence of elements of ${\mathbf{K}}^*$ such that $\xi_n$ is a primitive $q^n$-th root of unity and $\xi_n^q=\xi_{n-1}$. Let $(R_n)_n$ be a sequence of non-constant rational fractions. For $i\in{\mathbf{N}}$, put $$f_{i+1}:(x,y)\dashrightarrow \left(\xi_{i+1}x,yS_{i+1}(x)\right) \ \text{with}\ S_{i+1}(x)=\frac{R_i(x^{q^i})}{R_i(\xi_1x^{q^i})}\frac{R_{i-1}(x^{q^{i-1}})}{R_{i-1}(\xi_2x^{q^{i-1}})}\cdots\frac{R_1(x)}{R_1(\xi_ix)}.$$ We have $f_{i+1}^q=f_i$ for all $i\in{\mathbf{N}}^*$ so that the group $G_1$ generated by all the $f_i$ is an infinite torsion abelian group. Let $T_i(x)=R_i(x^{q^i})\cdots R_1(x^q)$. The conjugation by $(x,y)\dashrightarrow (x,yT_i(x))$ sends the group generated by $f_1,\cdots,f_i$ into the cyclic elliptic group $\{(x,y)\mapsto (\xi_i^jx,y)\vert j=0,1,\cdots,q^i-1\}$. However the degree of $f_i$ goes to infinity when $i$ tends to infinity, which implies that $G_1$ can not be conjugate to a group of automorphisms. The product of $G_1$ with $\{(x,y)\mapsto (x,\beta y)\vert \beta\in {\mathbf{K}}^*\}$ is a maximal abelian subgroup of ${\operatorname{Cr}_{2}(\mathbf{K})}$; the maximality follows directly from Theorem \[ellipticthm\].
\[torsionexampletwo\] We can give an additive version of Example \[torsionexampleone\]. Suppose that ${\operatorname{char}(\mathbf{K})}=p>0$. Let $(t_n)_n$ be a sequence of elements of ${\mathbf{K}}$ linearly independent over $\mathbf{F}_p$. Let $R\in{\mathbf{K}}(x)$ be a non-constant rational fraction. For $i\in{\mathbf{N}}$, put $$f_{i+1}:(x,y)\dashrightarrow \left(x+t_{i+1},yS_{i+1}(x)\right) \ \text{with}\ S_{i+1}(x)=\frac{\prod_{(a_1,\cdots,a_i)\in\mathbf{F}_p^i}R(x-\sum_{k=1}^i a_kt_k)}{\prod_{(a_1,\cdots,a_i)\in\mathbf{F}_p^i}R(x+t_{i+1}-\sum_{k=1}^i a_kt_k)}.$$ Let $G_1$ be the group generated by all the $f_i$. The product of $G_1$ with $\{(x,y)\mapsto (x,\beta y)\vert \beta\in {\mathbf{K}}^*\}$ is a maximal abelian subgroup of ${\operatorname{Cr}_{2}(\mathbf{K})}$. Let $G$ be a maximal abelian subgroup of ${\operatorname{Cr}_{2}(\mathbf{K})}$. Note that if $f$ is a non-trivial element of $G$, then $G$ is the maximal abelian subgroup of $\operatorname{Cent}(f)$. If $G$ contains a loxodromic element $f$, then $G$ is included in $\operatorname{Cent}(f)$ and is virtually the cyclic group generated by $f$ by Theorem \[loxothm\]; this corresponds to the last case of the above statement. If $G$ contains a Halphen twist, then by Theorem \[halphenthm\] it is virtually a free abelian group of rank $\leq 8$ which preserves fibrewise an elliptic fibration; this corresponds to the sixth case. Assume that $G$ contains a base-wandering [Jonquières ]{}twist $f$. Theorem \[mainthm\] says that $\operatorname{Cent}(f)$ is virtually isomorphic to ${\mathbf{K}}^*\times {\mathbf{Z}}$, ${\mathbf{K}}\times {\mathbf{Z}}$ or ${\mathbf{Z}}$. Thus the same is true for $G$. This corresponds to the fourth and the fifth case. Assume that $G$ contains a non-base-wandering [Jonquières ]{}twist $f$. Theorem \[jonqzthm\] says that $\operatorname{Cent}(f)$ is virtually isomorphic to an abelian subgroup of $\operatorname{PGL}_2({\mathbf{K}}(x))$, so the same is true for $G$.
This is the third case. In the rest of the proof we assume that *$G$ contains only elliptic elements*. Note that $G$ is not necessarily an elliptic subgroup because it may not be finitely generated. Assume that ${\operatorname{char}(\mathbf{K})}=0$ and $G$ contains an element $f:(x,y)\mapsto (\alpha x,y+1)$ with $\alpha\in{\mathbf{K}}^*$. By Theorem \[ellipticthm\] we have $$\operatorname{Cent}(f)=\{(x,y)\dashrightarrow(\eta(x),y+R(x))\vert \eta\in {\operatorname{PGL}_{2}(\mathbf{K})}, \eta(\alpha x)=\alpha \eta(x), R\in {\mathbf{K}}(x), R(\alpha x)=R(x)\}.$$ If $\alpha$ has infinite order, then $G=\operatorname{Cent}(f)=\{(x,y)\dashrightarrow(\gamma x,y+v)\vert \gamma\in{\mathbf{K}}^*,v\in{\mathbf{K}}\}$ and we are in the first case. Assume at first that $G$ has an element $g$ with an infinite action on the base of the rational fibration $(x,y)\mapsto x$. If the action of $g$ on the base is conjugate to $x\mapsto \beta x$ with $\beta\in{\mathbf{K}}^*$, then up to conjugation in ${\operatorname{Jonq}(\mathbf{K})}$ we can suppose that $g$ is just our initial element $f:(x,y)\dashrightarrow (\alpha x,y+1)$ (see Proposition \[ellipticnormalform\]), so that $G$ is isomorphic to ${\mathbf{K}}^*\times {\mathbf{K}}$. If the action of $g$ on the base is conjugate to $x\mapsto x+1$, then by choosing an appropriate coordinate $x$, the two elements $f$ and $g$ are respectively $(x,y)\mapsto (x+1,y+R(x))$ and $(x,y)\mapsto (x,y+1)$ where $R$ is a polynomial by Lemma \[Risapolynomial\]. We can conjugate $g$ and $f$, simultaneously by $(x,y)\dashrightarrow (x,y+S(x))$ for some $S\in{\mathbf{K}}[x]$, to $(x,y)\mapsto (x+1,y)$ and $(x,y)\mapsto (x,y+1)$. Then we have $$G=\operatorname{Cent}(f)\bigcap \operatorname{Cent}(g)=\{(x,y)\mapsto (x+u,y+v)\vert u,v\in {\mathbf{K}}\}.$$ We are still under the hypothesis that ${\operatorname{char}(\mathbf{K})}=0$ and $G$ contains an element $f:(x,y)\mapsto (\alpha x,y+1)$ with $\alpha\in{\mathbf{K}}^*$.
Assume now that no element of $G$ has an infinite action on the base of the rational fibration $(x,y)\mapsto x$. Then the description of $\operatorname{Cent}(f)$ implies that $G$ is a subgroup of $$\{(x,y)\mapsto(\delta x, y+R(x))\vert \delta\in{\mathbf{K}}^*, R\in{\mathbf{K}}(x)\}.$$ Consider the projection $\pi:G\rightarrow {\operatorname{PGL}_{2}(\mathbf{K})}$ which records the action on the base. Denote by $G_0$ the kernel of $\pi$ and by $G_b$ the image of $\pi$. We identify $G_b$ with a subgroup of the multiplicative group of roots of unity of ${\mathbf{K}}$. We want to prove that $G_b$ is finite so that $G$ is virtually contained in ${\operatorname{Jonq}_0(\mathbf{K})}=\operatorname{PGL}_2({\mathbf{K}}(x))$. Assume that $G_b$ is an infinite subgroup of the group of roots of unity. We first claim that $G_0$ is isomorphic to ${\mathbf{K}}$. Let $h:(x,y)\mapsto (x,y+R(x)), R\in{\mathbf{K}}(x)$ be an element of $G_0$ and $g:(x,y)\mapsto (\beta x,y+S(x)), S\in{\mathbf{K}}(x)$ be an element of $G$. The commutation relation $h\circ g=g\circ h$ implies $R(x)=R(\beta x)$. Here $\beta$ can be any element of the infinite group $G_b$. This implies that $R$ is constant, which proves the claim. Let $H_{\gamma}$ be a finite subgroup of $G_b$; it is a cyclic group generated by $x\mapsto \gamma x$ for some $\gamma\in{\mathbf{K}}^*$. Let $g:(x,y)\mapsto (\gamma x,y+R(x))$ be an element of $G$ such that $\pi(g)$ is $x\mapsto \gamma x$. By Lemma \[Risapolynomial\] $R$ is a polynomial. We can conjugate $g$ by an element of the form $(x,y)\mapsto (x,y+P(x)), P\in{\mathbf{K}}[x]$ to $(x,y)\mapsto (\gamma x,y)$ and the polynomial $P$ is unique up to addition by a constant. In fact, the conjugation by $(x,y)\mapsto (x,y+P(x))$ sends the subgroup $\pi^{-1}(H_{\gamma})$ of $G$ into $\{(x,y)\mapsto (\delta x,y+t), t\in{\mathbf{K}}\}$ because any element $h$ of $\pi^{-1}(H_{\gamma})$ is equal to $g^n\circ g_0$ for some $n\in{\mathbf{Z}}$ and $g_0\in G_0$.
The uniqueness of $P$ implies that, if we take a finite subgroup $H_{\nu}$ which strictly contains $H_{\gamma}$, then the conjugation by $(x,y)\mapsto (x,y+P(x))$ still sends the subgroup $\pi^{-1}(H_{\nu})$ into $\{(x,y)\mapsto (\delta x,y+t), t\in{\mathbf{K}}\}$. This further implies that the conjugation by $(x,y)\mapsto (x,y+P(x))$ sends the whole group $G$ into $\{(x,y)\mapsto (\delta x,y+t), t\in{\mathbf{K}}\}$. Then by the maximality of $G$, it is isomorphic to ${\mathbf{K}}^*\times {\mathbf{K}}$ and we are in the first case of the statement. Note that we have made the hypothesis that $G_b$ is torsion, so here ${\mathbf{K}}$ must be the algebraic closure of a finite field. Assume that $G$ contains an element $f:(x,y)\mapsto(\alpha x,\beta y)$ where $\alpha,\beta\in{\mathbf{K}}^*$ and $\beta$ has infinite order. If $\alpha$ also has infinite order, then Theorem \[ellipticthm\] implies immediately that $G=\operatorname{Cent}(f)$ is isomorphic to ${\mathbf{K}}^*\times {\mathbf{K}}^*$ and we are in the first case. Assume that $\alpha$ has finite order but $G$ contains an element $f_1:(x,y)\dashrightarrow(\alpha_1 x, yR(x))$ where $R\in{\mathbf{K}}(x)$ and $\alpha_1\in{\mathbf{K}}^*$ has infinite order. By Corollary \[ellipticsubgroupconjugation\] the two elements $f$ and $f_1$ are simultaneously conjugate to $(x,y)\mapsto(\alpha x, \beta y)$ and $(x,y)\mapsto(\alpha_1 x,ry)$ with $r\in{\mathbf{K}}^*$. Thus, Theorem \[ellipticthm\], when applied respectively to $f$ and $f_1$, shows that $G=\operatorname{Cent}(f)\bigcap \operatorname{Cent}(f_1)$ is isomorphic to the diagonal group ${\mathbf{K}}^*\times{\mathbf{K}}^*$. Hence we are in the first case.
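The commutation computation used in the argument above (an element $h:(x,y)\mapsto(x,y+R(x))$ of the kernel $G_0$ commutes with $g:(x,y)\mapsto(\beta x,y+S(x))$ exactly when $R(x)=R(\beta x)$) can be checked with exact rational arithmetic; the following sketch uses hypothetical choices of $R$, $S$ and $\beta$ that are purely illustrative and not taken from the text.

```python
from fractions import Fraction as F

beta = F(2)                       # illustrative infinite-order base action x -> beta*x
R = lambda t: t**2 + 1            # hypothetical nonconstant R in K(x)
S = lambda t: F(1) / (t + 3)      # hypothetical S in K(x)

def h(p):                         # h : (x, y) -> (x, y + R(x)), element of G_0
    x, y = p
    return (x, y + R(x))

def g(p):                         # g : (x, y) -> (beta*x, y + S(x)), element of G
    x, y = p
    return (beta * x, y + S(x))

for x0 in (F(1), F(5, 7), F(-2, 3)):
    y0 = F(1, 2)
    gh, hg = g(h((x0, y0))), h(g((x0, y0)))
    assert gh[0] == hg[0]                          # base components always agree
    assert gh[1] - hg[1] == R(x0) - R(beta * x0)   # the obstruction to commuting
```

With $\beta$ of infinite order, $R(x)=R(\beta x)$ at the infinitely many points of an orbit forces $R$ to be constant, which is the step behind the claim $G_0\cong{\mathbf{K}}$.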
According to the classification of normal forms of elliptic elements of infinite order (see Proposition \[ellipticnormalform\]), the only remaining cases are the two following: 1) $G$ contains an element $f:(x,y)\mapsto(\alpha x,\beta y)$ where $\alpha\in{\mathbf{K}}^*$ has finite order and $\beta\in{\mathbf{K}}^*$ has infinite order but $G$ contains no element $(x,y)\dashrightarrow(\alpha_1 x, yR(x))$ with $\alpha_1$ of infinite order; 2) ${\operatorname{char}(\mathbf{K})}=p>0$ and $G$ contains an element $f:(x,y)\mapsto(x+1,\beta y)$ with $\beta\in{\mathbf{K}}^*$ of infinite order. In both cases $\operatorname{Cent}(f)$ is a subgroup of the [Jonquières ]{}group by Theorem \[ellipticthm\]. Denote by $\pi$ the projection of $G$ into ${\operatorname{PGL}_{2}(\mathbf{K})}$. If $\pi(G)$ is finite then we are in the third case of Theorem \[abmaxthm\]. So we assume that $\pi(G)$ is infinite. Then $\pi(G)$ is isomorphic to an infinite subgroup of the group of roots of unity or an infinite subgroup of ${\mathbf{K}}$, and it is an infinite torsion abelian group. We want to show that we are in the second case of Theorem \[abmaxthm\]. By Lemma \[telescopic\], each element of $G$ is of the form $(x,y)\dashrightarrow(\eta(x),y\frac{rS(x)}{S(\eta(x))})$ with $\eta\in{\operatorname{PGL}_{2}(\mathbf{K})},r\in{\mathbf{K}}^*,S\in{\mathbf{K}}(x)$. If $(x,y)\dashrightarrow(\eta(x),y\frac{rS(x)}{S(\eta(x))})$ is an element of $G$, then $(x,y)\dashrightarrow(\eta(x),y\frac{S(x)}{S(\eta(x))})$ is also an element of $G$ because it commutes with every other element. However, the latter has the same order in $G$ as $\eta$ in ${\operatorname{PGL}_{2}(\mathbf{K})}$. This means that $G$ has a subgroup isomorphic to $\pi(G)$, so that $G$ is isomorphic to the product of this subgroup with the kernel of $\pi$. To finish the proof, it suffices to show that the kernel of $\pi$ is $\{(x,y)\mapsto (x,\beta y)\vert \beta\in {\mathbf{K}}^*\}$.
This is because $(x,y)\mapsto (x,\beta y)$ are the only possible elliptic elements by Lemma \[telescopic\]. ShengYuan Zhao\ Institut de Recherche Mathématique de Rennes\ Université de Rennes 1\ 263 avenue du Général Leclerc, CS 74205\ F-35042 RENNES Cédex\ *e-mail:* `[email protected]`\
--- abstract: | The asymptotic analysis of a linear high-field Wigner-BGK equation is developed by a modified Chapman-Enskog procedure. By an expansion of the unknown Wigner function in powers of the Knudsen number $\epsilon$, evolution equations are derived for the terms of zeroth and first order in $\epsilon$. In particular, a quantum drift-diffusion equation is obtained for the position density, corrected by field-dependent terms of order $\epsilon$. Well-posedness and regularity of the approximate problems are established, and it is proved that the difference between exact and asymptotic solutions is of order $\epsilon ^2$, uniformly in time and for arbitrary initial data.\ \ Key words: Asymptotic analysis, quantum drift-diffusion model, Wigner equation, open quantum systems, singularly perturbed parabolic equations. author: - | \ [**Chiara Manzini**]{} and [**Giovanni Frosali**]{}\ Dipartimento di Matematica “G.Sansone"\ Università di Firenze - Via S.Marta 3\ I-50139 Firenze, Italy\ title: '**Rigorous drift-diffusion asymptotics of a high-field quantum transport equation**' --- Introduction ============ Quantum mechanics has recently proved to be an essential tool for modeling the new generation of nanodevices [@markringhsch]. However, the adoption of quantum models requires a delicate compromise with quantum statistics principles. Hamiltonian dynamics is described at the quantum level, either in terms of wave-functions (via Schrödinger-Poisson-systems), or of density-matrix operators (via von Neumann equation). For different reasons, neither formulation is suitable for simulations: precisely, the wave-function approach can not be extended to picture dissipative dynamics of open quantum systems, while the density matrix approach is not appropriate to describe finite position domains, due to its non-local character. For these reasons, it is instead convenient to employ (Wigner) quasi-distribution functions [@arnoldjuengel; @Wig32].
Nevertheless, a phase-space description of a multi-dimensional dynamics presents well-known computational drawbacks. On the other hand, quantum hydrodynamic models seem to be a promising tool both from the numerical and the analytical point of view [@juengel; @juengelpinnau1]. Similarly, in semi-classical semiconductor theory, the interest of modelers has shifted from the Boltzmann equation to hydrodynamic systems, which have been widely studied, both for physical validation and from an analytical and numerical point of view (cf. [@anileromano] and the references therein). A rigorous derivation of quantum hydrodynamic models from more fundamental ones, in either Schrödinger or Wigner formulation, is an open and analytically demanding problem [@DeMeRi05; @gasmar; @JueMa]. This is the motivation of the present paper.\ The preliminary step for passing from the kinetic picture to a macroscopic one consists in including dissipative mechanisms in the evolution model, for example, the interaction of the quantum system with the environment. In the weak coupling limit, a Markovian dynamics can still be adopted, and the description via an (operatorial) evolution equation in Lindblad form is considered quantum-physically correct [@Lindblad]. From this class of evolution equations, kinetic models of open systems can be derived via Wigner transform. In Section 2 we shall briefly review the most popular Wigner models of irreversible dynamics.\ In this paper, we consider the case of an open quantum system in a high-field regime, more precisely, of an electron ensemble subject to an external potential, whose effect is comparable with the interaction with the ion crystal. Including high-field effects has great relevance in semiconductor simulation. A macroscopic model of this evolution is expected to contain field-dependent transport parameters, which are typically deduced via fitting procedures.
We refer the reader to for an updated review of derivations of [semi-classical]{} high-field drift-diffusion models by diverse limit procedures: in particular, in [@DeJu], explicit field-dependent mobilities are obtained from an energy-transport model. On the contrary, in [@benab], a high-field drift-diffusion model with non-explicit field-dependent coefficients is derived, as the limit of a Spherical Harmonics Expansion of the semi-classical Boltzmann equation.\ We present a [rigorous]{} derivation of a Quantum Drift-Diffusion (QDD) equation with [explicit]{} field-dependent mobility and diffusion coefficient. We shall start from the Wigner equation with an additional linear BGK term, modeling the interaction with the environment, and then adapt the equation to the high-field case, by rescaling it in terms of the Knudsen number ${\epsilon}$. Thus, our contribution is the quantum counterpart of [@poupaud]. We recall that, in [@GardRi], the starting point is the Wigner-BGK equation as well, but collisions are considered to be the strongest mechanism during the evolution (moderately high-field regime), and the relaxation term is derived via a Chapman-Enskog procedure. In our case, the additional relaxation term is instead an $\mathcal{O}(\hbar^2)-$approximation of the Wigner-transformed relaxation term in operatorial formulation (cf. Section 2). Moreover, we perform an asymptotic expansion of the unknown Wigner function in terms of ${\epsilon}$, according to a [modified]{} Chapman-Enskog procedure introduced in [@banasiakmika95]. This method has been applied to many kinetic models and constitutes a valuable tool for a [rigorous]{} asymptotic derivation of macroscopic models (cf. Section 5). We substitute the Wigner unknown in the original evolution problem with its expansion of order ${\epsilon}^2$, and we get an approximated problem: in particular, an equation whose unknown is the electron position-density.
This equation is precisely the QDD equation corrected by the $\mathcal{O}(\hbar^2)$-Bohmian term of order ${\epsilon}$, and by field-dependent terms, of order ${\epsilon}$ as well. These terms contain the same field-dependent coefficients obtained in the semi-classical case [@DeJu; @poupaud].\ The well-posedness of the $\mathcal{O}({\epsilon}^2)$-approximated problem is discussed in Sections 7 and 8, and finally, in Section 9, we prove that the difference between the solutions of the original and of the approximated evolution problems is also of order ${\epsilon}^2$. In conclusion, with the present analysis we obtain a QDD equation with field-dependent mobility and diffusion coefficients and we prove rigorously that, up to a certain degree of accuracy, it constitutes a model of quantum transport in the high-field case. From the analytical point of view, this equation is a second-order parabolic PDE with [ non-homogeneous]{} coefficients. In particular, it belongs to the class of singularly perturbed equations; accordingly, the well-posedness result, together with the regularity estimates derived in Section 8, is complementary to the discussion in [@banasiakAAM] about the same class of equations with constant coefficients. A counterpart of our analysis is the well-posedness study of the quantum drift-diffusion equation, in the fourth-order formulation obtained via a “classical-equilibrium” approximation [@juengelpinnau1]. We remark that the asymptotic procedure used here presents analogies with the Chapman-Enskog one in kinetic theory; nevertheless, it is well-known that the latter does not deal with the “initial layer” problem, namely, the instants close to the initial one are excluded from the analysis, due to the rapid changes of the solution [@poupaud]. In the present approach, instead, the initial layer problem is solved at once.
Wigner-BGK equations ==================== Let us consider a quantum system with $d$ degrees of freedom, evolving under the effect of an external potential $V=V(x), \,x\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d$. The Wigner equation, whose unknown is the quasi-distribution function $w=w(x,v,t), (x,v)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}, \, t>0 $, provides a kinetic description of the evolution of the system. It reads $$\label{eq:Wig} \frac{\partial w}{\partial{t}} + v\cdot\nabla_xw - \Theta[V]w \;\: = \;\: 0,\quad (x,v)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}, \quad t>0 \,,$$ with the pseudo-differential operator $\Theta[V]$ defined by $$\begin{aligned} \label{eq:theta} (\Theta[V]w)(x,v,t) & = & \frac{i}{(2\pi)^d} \int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}\!\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}\delta V(x,\eta)w(x,v^{\prime},t)e^{i(v-v^{\prime})\cdot\eta}\,dv^{\prime}\,d\eta \nonumber\\ & = & \frac{i}{(2\pi)^{d/2}}\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}\delta V(x,\eta){\cal F} w(x,\eta,t)e^{iv\cdot\eta}\,d\eta\,,\end{aligned}$$ where $$\delta V(x,\eta) := \frac{1}{\hbar}\left[ V\left(x+\frac{\hbar\eta}{2{m}}\right) - V\left(x-\frac{\hbar\eta}{2{m}}\right)\right]$$ and ${\cal F} f (\eta)\equiv [{\cal F}_ {v\to \eta}f](\eta)$ denotes the Fourier transform of $f$ from $v$ to $\eta$. In the Fourier-transformed space ${{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d_{x}\times{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d_{\eta}$ the operator $\Theta[V]$ is the multiplication operator by the function $i\,\delta V$; in symbols, $$\label{eq:product} {\cal F}\left(\Theta[V]w\right)(x,\eta) = i\,\delta V(x,\eta){\cal F} w(x,\eta)\,.$$ Eq.  corresponds via Wigner-transform to the von Neumann equation describing the conservative dynamics of an [isolated]{} quantum system [@Wig32]. Successive modifications of the Wigner model have been proposed to picture an irreversible interaction of the system with the environment.
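As a sanity check on the definition of $\delta V$: for a quadratic potential, the symmetric difference is exactly linear in $\eta$, so $\Theta[V]$ reduces to the classical force term with no $\hbar$-dependent remainder (the well-known fact that Wigner dynamics in a harmonic potential is classical). A minimal sketch in exact arithmetic; the harmonic $V$ and the rational stand-ins for $\hbar$ and $m$ are illustrative assumptions, not values from the text:

```python
from fractions import Fraction as F

hbar, m = F(1, 137), F(2)        # illustrative rational stand-ins for the physical constants
V = lambda x: x**2 / 2           # harmonic potential, assumed purely for this check

def deltaV(x, eta):
    """(1/hbar) * [ V(x + hbar*eta/(2m)) - V(x - hbar*eta/(2m)) ]"""
    s = hbar * eta / (2 * m)
    return (V(x + s) - V(x - s)) / hbar

for x, eta in ((F(1), F(1)), (F(3, 4), F(-2)), (F(-5), F(7, 2))):
    # for a quadratic V the difference quotient is exactly eta * V'(x) / m:
    assert deltaV(x, eta) == x * eta / m
```

Since multiplication by $i\eta$ in Fourier space corresponds to $\nabla_v$, a $\delta V$ linear in $\eta$ means $\Theta[V]w=\frac{1}{m}\nabla V\cdot\nabla_v w$, recovering the classical Vlasov force term.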
In [@FroMaRi] a scattering term is derived by a weak-coupling limit; however, due to its non-locality, it is not suitable for simulations or for mathematical analysis. A second possibility is an additional diffusive term, as in the quantum counterpart of the Fokker-Planck (FP) equation of classical kinetic theory [@CaLe] (cf. [@CaErFro] for the latest derivation and [@ADM06; @ADM06b] for the latest well-posedness results). Unlike the Wigner equation with the scattering term, the quantum FP equation is the Wigner-transformed version of a Markovian master equation in Lindblad form, namely, it is the kinetic version of a quantum-physically correct model [@arnoldsparber]. The shapes of the drift-diffusion equations corresponding to the low-field, respectively high-field, scaling of the classical, respectively quantum, FP equations are presented in [@arnold_limit]. Another possibility is to insert a BGK operator, either linear or non-linear, as in [@bonilla], meaning that after a time $1/\nu$ the system relaxes to a prescribed state $w_{\mathrm{eq}}$; namely, $$\label{eq:WigBGK} \frac{\partial w}{\partial{t}} + v\cdot\nabla_xw - \Theta[V]w \;\: = \;\: - \nu(w-w_{\mathrm{eq}})\,,\quad (x,v)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}, \quad t>0.$$ In the recent literature [@DeMeRi05; @Gard94; @GardRi; @JueMa], various relaxation-time states $w_{\mathrm{eq}}$ have been proposed.\ The standard picture is that the system converges to a state of thermodynamical equilibrium with the surrounding environment at temperature $T$. The operator that identifies the statistical equilibrium state at (constant) temperature $T=1/k\beta\,$ ($k$ is the Boltzmann constant) is $ \mathrm{e}^{-\beta H}, $ $H$ being the energy operator associated with the system. The von Neumann equation modified by a relaxation-time term containing $\mathrm{e}^{-\beta H}$ is [in Lindblad form]{} [@arnoldrelax].
Accordingly, a Wigner-BGK model, being the Wigner-transformed version of that equation (i.e., containing the Wigner transform of $\mathrm{e}^{-\beta H}$ as relaxation-time state), formally belongs to the class of quantum-physically correct kinetic models. In his pioneering article [@Wig32], E. Wigner applies an expansion in terms of $\hbar$ to the Wigner function corresponding to the operator $\mathrm{e}^{-\beta H}$, and obtains the classical equilibrium distribution function on the phase space with corrections of even order in $\hbar$: $$\begin{gathered} \label{eq:wig_eq} w_{\mathrm{W}}(x,v):=\left(\frac{m}{2\pi\hbar }\right)^{d}\!\! e^{-\beta {\cal E}} \\ \times\left\{1+{\hbar^2}\frac{\beta^2}{24}\left[-\frac{3}{m} \Delta V+\frac{\beta}{{m}}|\nabla V|^2 + \beta \sum_{r,s=1}^dv_rv_s\frac{\partial^2 V}{\partial x_r\partial x_s}\right]+{\cal O}(\hbar^4)\right\}\,,\end{gathered}$$ where ${\cal E}(x,v):= mv^2/2+V(x)$ is the total energy of the system. Let us call $w_{\mathrm{eq}}$ its [local (in time and space)]{} version, defined by $$w_{\mathrm{eq}}(x,v,t)\:\;:=\:\; C(x,t)\ w_{\mathrm{W}}(x,v)\,,$$ with $C$ to be chosen. By assuming $$\label{eq:constraint} \int w_{\mathrm{eq}}(x,v,t)\,dv \:\;=\:\; \int w(x,v,t)\,dv \:\;=:\:\; n[w](x,t)\equiv n(x,t)\,,$$ and since, by direct computation, $$\int\!
w_{\mathrm{W}}(x,v)\,dv=\left(\frac{m}{2\pi\hbar^2\beta}\right)^{d/2}\!\!\!\!{e^{-\beta V}} \left\{1+\hbar^2\frac{\beta^2}{12{m}}\left[ - \Delta V+\frac{\beta}{2}|\nabla V|^2\right]+{\cal O}(\hbar^4)\right\},$$ the local Wigner thermal equilibrium function $w_{\mathrm{eq}}$ equals $$\begin{gathered} \label{eq:wig_eq_approx} w_{\mathrm{eq}}(x,v,t)=n(x,t)\left(\frac{\beta m}{2\pi}\right)^{d/2} e^{-\beta mv^2/2} \\ \times\left\{1+\hbar^2\frac{\beta^2}{24}\left[-\frac{1}{m}\Delta V+\beta \sum_{r,s=1}^{d}v_rv_s\frac{\partial^2 V}{\partial x_r\partial x_s}\right]+{\cal O}(\hbar^4)\right\}\,.\end{gathered}$$ One recognizes here the classical (normalized) Maxwellian $$\label{eq:Maxw} F(v):=\left(\frac{\beta m}{2\pi}\right)^{d/2}\!\!{e^{-{\beta m v^2}/{2}}}\,,$$ parametrized by the density $n$ and the constant temperature $1/k \beta$, with an additional correction term of order $\hbar^2$. We shall consider the expression as the ${\cal O}(\hbar^2)$-approximation of the Wigner function associated to the state which the quantum system approaches.\ An alternative strategy to identify the relaxation-time state is presented in [@DeRi]; it is the extension to the quantum case of Levermore’s procedure for classical kinetic equations ([@Levermore], cf. [@anileromano] for semi-classical equations). It consists in tackling a constrained minimization problem for the relative entropy of the quantum system under consideration, with respect to the environment. In the quantum case the procedure is performed at the operatorial level, due to the non-local definition of the entropy in terms of the operators describing the states of the quantum system. However, the constraints for the minimization procedure are considered at the kinetic level. Thus, the Wigner transform ${\cal W}$ is used intensively to pass from the operatorial formulation to the kinetic one, whenever the procedure requires it.
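As a side check (our own toy computation, with an arbitrarily chosen value of $V''$), the constraint is consistent with the local equilibrium expression: in $d=1$ the $\hbar^2$-bracket integrates to zero against the Maxwellian, since $\int v^2F\,dv=1/(\beta m)$, so the correction carries no extra density and $\int w_{\mathrm{eq}}\,dv=n$ up to ${\cal O}(\hbar^4)$.

```python
import numpy as np

beta, m, hbar = 1.0, 1.0, 0.1
d2V = 0.8                        # hypothetical value of V''(x) at a fixed point x (d = 1)
v = np.linspace(-12.0, 12.0, 4001)
dv = v[1] - v[0]
F = np.sqrt(beta * m / (2.0 * np.pi)) * np.exp(-beta * m * v**2 / 2.0)
# O(hbar^2) bracket in d = 1:  -(1/m) V''  +  beta v^2 V''
corr = hbar**2 * beta**2 / 24.0 * (-(1.0 / m) * d2V + beta * v**2 * d2V) * F

mass = np.sum(F) * dv            # ~ 1 : the Maxwellian is normalized
extra = np.sum(corr) * dv        # ~ 0 : the hbar^2 correction carries no density
print(mass, extra)
```

The cancellation is exact in the continuum, because the two bracket terms contribute $-V''/m$ and $+\beta V''\cdot 1/(\beta m)$ respectively.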
As a consequence, the expression of the entropy minimizer formally derived in [@DeMeRi05] is not explicit. Nevertheless, in [@DeMeRi05] it is formally proved that ${\cal W}\{\exp {\cal W}^{-1}f\}=\exp f + {\cal O}(\hbar^2)$ with $f$ defined on the phase-space. Accordingly, the (formal) minimizer reads $$\begin{gathered} \label{eq:wig_deg} w_A(x,v,t)\:\;:=\:\; e^{(A-\beta mv^2/2)} \\ \times \left\{1+{\hbar^2}\frac{\beta^2}{8}\left[+\frac{1}{m} \Delta A+\frac{\beta}{3{m}}|\nabla A|^2 +\frac{\beta}{3}\sum_{r,s=1}^dv_rv_s\frac{\partial^2 A}{\partial x_r\partial x_s}\right]+{\cal O}(\hbar^4)\right\}\end{gathered}$$ with $A=A(x,t)$ the Lagrange multiplier used for the constrained minimization procedure, i.e. $$\int w_{A}(x,v,t)\,dv \:\;=\:\; n(x,t)\,.$$ By comparing the expression with , it can be easily seen that they coincide if one identifies the Lagrange multiplier $A$ with $-\beta V$. In [@JueMa] it is indeed proved that $A=-\beta V + {\cal O}(\hbar^2)$ holds. \[remark:L2\] *It is crucial to recall that the correspondence via Wigner-transform of the operatorial and the kinetic formulations is merely formal, unless certain assumptions are imposed both on the Wigner functions and on the operators [@lions]. On this point depends the analytical difficulty of rigorously establishing the well-posedness of the derivation strategy in [@DeMeRi05]. For the same reason, the analysis of Wigner equations is set in the Hilbert space $L^2$, since the necessary condition for the rigorous correspondence is satisfied [@lions] (cf. [@ADM06b], e.g.).\ * As a consequence of the previous discussion, in the present article we shall adopt the Wigner-BGK equation, with the relaxation term on its right-hand side, as the model of the evolution of the open quantum system. In particular, we remark that we shall consider the operator on the right-hand side as an ${\cal O}(\hbar^2)$-approximation, in the kinetic framework, of the dissipative dynamics induced by the interaction with the environment.
The high-field Wigner-BGK equation {#formulation} ================================== Our aim is to describe an open quantum system subject to a strong external potential; in particular, the action of the potential is to be considered comparable with the interaction with the environment. In order to adapt the Wigner-BGK equation to this specific case, we rewrite it by using dimensionless variables and, for this purpose, we introduce the time-scales of the action of the external potential and of the interaction with the environment. Let us call $t_V$ the potential characteristic time and $t_C$ the mean free time between interactions of the system with the background. Then we introduce $x'=x/x_0$, $v'=v/v_0$, $t'=t/t_0$, with $x_0, v_0, t_0$ characteristic quantities, and we call $w'=w(x',v',t')$ the rescaled Wigner function (observe that the Wigner function itself need not be rescaled). Thus, we obtain $$\frac{x_0}{v_0t_0}\frac{\partial}{\partial{t}}w + v\cdot\nabla_xw - \frac{x_0}{v_0 t_V}\Theta[V]w \;\: = \;\: - \frac{x_0}{v_0 t_C} \nu (w-w_{\mathrm{eq}})\,,\quad t>0,\quad (x,v)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d},$$ where we have omitted the prime everywhere. If we introduce the relation $x_0=v_0 t_0$, we obtain $$\frac{\partial}{\partial{t}}w + v\cdot\nabla_xw - \frac{t_0}{t_V}\Theta[V]w \;\: = \;\: - \frac{t_0}{t_C} \nu \left( w-w_{\mathrm{eq}} \right) \,,\quad t>0,\quad (x,v)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}.$$ In the following we assume that the times $t_V$ and $t_C$ are comparable, in the sense $$\label{eq:scaling} \frac{t_V}{t_0} \approx \frac{t_C}{t_0} \approx \epsilon \,,$$ where $\epsilon:={l}/{x_0}$ is the Knudsen number, since $l:=v_0 t_C$ is the characteristic length corresponding to the classical mean free path. This corresponds to saying that the external potential and the interactions coexist during the evolution.
In particular, ${\epsilon}\approx 0$ corresponds to an evolution in which the effect of the interactions is dominant over the transport ($t_C <\!\!< t_0 $ or equivalently $l <\!\!< x_0$). However, in this regime the action of the external potential has the same strength, due to the assumption ($t_V <\!\!< t_0 $). In fact, the resulting equation is $$\label{eq:Wigeq_adim} {\epsilon}\frac{\partial w}{\partial{t}} + {\epsilon}v\cdot\nabla_xw - \Theta[V]\,w \;\: = \;\: - \nu \left(w-w_{\mathrm{eq}}\right) \,,\quad t>0,\quad (x,v)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}.$$ We recall that it is the quantum counterpart of the equation studied by F. Poupaud in [@poupaud].\ Now we put the equation in abstract form. As motivated in Remark \[remark:L2\], a suitable setting for problems in Wigner formulation is the Hilbert space $L^2({{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}}^{2d})$. However, in order to give a rigorous sense to the expression $$\label{eq:density} n(x)\:\;:=\:\;\int w(x,v)\,dv\,, \quad \forall x\in{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d,$$ which enters the equation via the definition of $w_{\mathrm{eq}}$, we introduce the subspace $X_k:=L^2({{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}}^{2d},(1+|v|^{2k})\,dx\,dv; {{\ensuremath{\mathrm{I}\!\mathrm{R}}}})$, with $k\in {{\ensuremath{\mathrm{I}\!\mathrm{N}}}}$, endowed with the norm $$\|w\|_{X_k}^2 = \int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}}\!\!\! |w(x,v)|^2 (1+|v|^{2k})\, dx \,dv\,.$$ Let us call $X_k^v$ the Hilbert space $L^2({{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}}^{d},(1+|v|^{2k})\, dv; {{\ensuremath{\mathrm{I}\!\mathrm{R}}}})$ and $H_k^m$ the Sobolev space $H^m_x \otimes X_k^v$.
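The role of the weight can be illustrated numerically (a toy $d=1$ check of ours, with an arbitrary test function): for $2k>d$ the map $w\mapsto\int w\,dv$ is controlled by the weighted norm via the Cauchy-Schwarz inequality, with constant $C(d,k)=\|(1+|v|^k)^{-1}\|_{L^2_v}$, which is finite precisely because $2k>d$.

```python
import numpy as np

k = 1                                         # d = 1, so 2k > d holds
v = np.linspace(-60.0, 60.0, 120001)
dv = v[1] - v[0]
w = np.cos(3.0 * v) / (1.0 + v**2)            # an arbitrary element of X_k^v
weight = 1.0 + np.abs(v)**k

lhs = abs(np.sum(w) * dv)                     # |integral of w dv|
C = np.sqrt(np.sum(weight**(-2)) * dv)        # C(1,1)^2 -> 2 as the range grows
rhs = C * np.sqrt(np.sum((w * weight)**2) * dv)
print(lhs, rhs)
```

With $k=0$ (no weight) the constant would diverge, which is why the unweighted $L^2$ space does not suffice to define the density.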
The weight $k$ has to be chosen according to the space dimension $d$: we call $d$-admissible $$k\in {{\ensuremath{\mathrm{I}\!\mathrm{N}}}}\quad \hbox{such that}\; 2k>d\,.$$ The definition is well-posed for all $w\in X_k$ with $d$-admissible $k$, since $$\left|\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{d}}\!\!w(x,v)\, dv\right| \leq C(d,k) \left(\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{d}}\!\!|w(x,v)|^2(1+|v|^k)^2\, dv\right)^{1/2}\!,\quad \forall\, x\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d$$ by the Hölder inequality (cf. [@MB]). We define the streaming operator $S$ by $${S}w = -v \cdotp \nabla_x w \;,\; D(S)=\left\{w \in { X}_k\,|\, S \,w \in { X}_k \right\}\,,$$ and the operators $$\label{eq:abstractoperators} {\mathcal A}w:=\Theta[V]w,\qquad{{\cal C}}w :=- (\nu \, w-{{\Omega\,}}w)\,, \quad \forall\, w\in X_k$$ with the operator $\Omega$ defined by $${{\Omega\,}}w (x,v):= \nu F(v) \left\{1+\hbar^2\frac{\beta^2}{24}\left[-\frac{1}{m}\Delta V +{\beta}\sum_{r,s=1}^{d}v_rv_s\frac{\partial^2 V}{\partial x_r\partial x_s}\right]\right\}\int\!w (x,v^{\prime})\,dv^{\prime}\,.$$ The function $F(v)$ is the normalized Maxwellian, given by . Observe that we substitute the function $w_{\mathrm{eq}}$ defined in  with the operator $\Omega w$, which differs from $w_{\mathrm{eq}}$ by terms of order $\hbar^4$. Let us call $F^{(2)}$ the $O(\hbar^2)$-coefficient in the above definition of $\Omega$ $$F^{(2)}(x,v)\equiv F^{(2)}[V](x,v)=\frac{\beta^2}{24}\left[-\frac{1}{m}\Delta V +{\beta}\sum_{r,s=1}^{d}v_rv_s\frac{\partial^2 V}{\partial x_r\partial x_s}\right]F(v)\,,$$ such that $$\Omega w (x,v)\equiv \nu\, n[w](x)\left[ F(v)+\hbar^2 F^{(2)}(x,v)\right] \,.$$ Observe that such an expression for $\Omega w$ can be seen as an $O(\hbar^2)$-correction to the classical product $n(x)F(v)$.\ In conclusion, we write Eq.  in the abstract form $$\left\{\!\!\!\!\!\!\!\!
\begin{array}{lcl} && {\epsilon}\,\displaystyle \frac{dw}{dt} = {\epsilon}\,{S} w + {\mathcal A}w +{{\cal C}}w ,\\[-2mm] \\ && \lim_{\,t \to 0^+} \|w(t)-w_0\|_{{ X}_k} = 0 \end{array} \right. \label{system}$$ where $w_0$ is the initial condition.\ In the next lemma we specify under which assumptions the abstract definition of the operator ${\mathcal A}+{{\cal C}}$ is well-posed. \[lemma:pseudo\] If $V\in H^{{k}}_x$ with $d$-admissible $k$ and $\Delta V \in L^{\infty}_x$, then the operator ${\mathcal A}+{{\cal C}}$ is well-defined from $X_k$ into itself, and is bounded by $$\| {\mathcal A}+{{\cal C}}\|_{{\cal B}({X}_k)}\leq C(d,k)\left[\|V\|_{H^{{k}}_x}+\nu \|\Delta V\|_{L^{\infty}_x}\|F\|_{{X}_{k+2}^v}+\nu\|F\|_{{X}_{k}^v}+\nu\right].$$ Moreover, ${\mathcal A}+{{\cal C}}$ is well-defined from $X_k^v$ into itself, and is bounded by $$\label{eq:pseudo_v} \| {\mathcal A}+{{\cal C}}\|_{{\cal B}({X}_k^v)}\leq C(d,k)\left(\|V\|_{H^{k}_x} +\nu |\Delta V(x)|\|F\|_{{X}_{k+2}^v}+ \nu\|F\|_{{X}_{k}^v}+ \nu \right)\,.\\$$ [[[**Proof**]{}]{}]{}. Here and in the following we denote by $C$ not necessarily equal constants.\ The arguments are similar to those in [@M], so we just give a sketch of the proof. First of all, by the product form of the pseudo-differential operator in Fourier variables (cf. ), for all $w\in {X}_k$, it holds $$\begin{aligned} \|\Theta[V]w\|_{X_k}^2&=& C\|\delta V{\cal F}w\|_{L^{2}_{x,\eta}}^2\! + C\left\|\sum_{i=1}^d\frac{\partial^k}{\partial{\eta_i}^k}\left(\delta V{\cal F}w\right)\right\|_{L^{2}_{x,\eta}}^2\\ &\leq& 2C\|V\|_{L^{\infty}_x}^2\|w\|_{L^{2}_{x,v}}^2\!\!\! + C\left\|\sum_{i=1}^d\frac{\partial^k}{\partial{\eta_i}^k}\left(\delta V{\cal F}w\right)\right\|_{L^{2}_{x,\eta}}^2,\end{aligned}$$ since $2k>d$ guarantees that $H^k_x \hookrightarrow L^{\infty}_x.$ Here, the constant $C$ is due to the Fourier transform.
Then, by applying the product-formula rule and using Sobolev embeddings for the functions $V$ and ${\cal F}w\in L^2_x\otimes H^k_{\eta}$, it follows that $\|\Theta[V]\|_{{\cal B}(X_k)}\leq C \|V\|_{H^{k}_x}$. Moreover, for all $w\in X_k$ with $2k>d$, $$\begin{aligned} &&\|{{\Omega\,}}w\|_{X_k}^2 \leq\nu \int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}}\!\!\!(1+|v|^{2k})\left(1+\frac{\beta^4\hbar^4}{24^2m^2}|\Delta V|^2(x)\right)|F(v)|^2 \left|\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{d}}\!\!w(x,v^{\prime})\, dv^{\prime}\right|^2\!dx\, dv \nonumber\\ &&+\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}}\!\!\!(1+|v|^{2k})\frac{\beta^6\hbar^4}{24^2m^4} \!\left|\sum_{r,s=1}^dv_rv_s\frac{\partial^2 V(x)}{\partial x_r\partial x_s} F(v)\right|^2\! \left|\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{d}}\!\!w(x,v^{\prime})\, dv^{\prime}\right|^2\!\!dx\, dv\nonumber \\ &&\leq\nu\,C(1+\|\Delta V\|_{L^{\infty}_x}^2) \|F\|_{X^v_k}^2 \|w\|_{X_k}^2 + \nu \,C \|\Delta V\|_{L^{\infty}_x}^2\|F\|_{X^v_{k+2}}^2\|w\|_{X_k}^2\,, \label{eq:stimaomega}\end{aligned}$$ since $F\in X_k\,, \,\forall\,k$. Then the estimate of $\| {{\cal C}}\|_{{\cal B}(X_k)}$ is straightforward. The estimate in $X_k^v$ can be proved analogously. ------------------------------------------------------------------------ \ We remark that the existence and uniqueness of a solution in $X_k$ of the initial value system (\[system\]) for any ${\epsilon}>0$ can be stated under the assumptions of Lemma \[lemma:pseudo\] by using arguments of semigroup theory, analogously to [@MB]. Well-posedness of the problem with $\mathbf{{\epsilon}=0}$ ========================================================== The aim of this paper is to perform an asymptotic analysis of the system , by using a Chapman-Enskog type procedure. The first step of the analysis is to solve Eq.  with ${\epsilon}=0$.
This corresponds to identifying the Wigner function that describes the state of the system when the interaction with the environment and the action of the potential are dominant with respect to the transport. We remark that the function $w_{\mathrm{eq}}$ defined by describes the state to which the system relaxes under the sole interaction with the environment.\ We consider the equation $({\mathcal A}+{{\cal C}}) w=0 $ in the space $X_k$: the variable $x$ can be considered as a parameter in the analysis, thus we shall study $({\mathcal A}+{{\cal C}}) w=0 $ in the space $X^v_k$ for any fixed $x\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d$. However, with an abuse of notation, we shall denote the operators by the same letters also when $x$ is fixed. We can state the following proposition. \[prop:kernelC\] If $V\in H^{\tilde{k}}_x$ with $\tilde{k}=\max\{2,k\},$ and $d$-admissible $k$, then for a fixed $x\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d$ $$\label{eq:char_ker} {\ker}({\mathcal A}+{{\cal C}}) \: = \:\{c M(v), c\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}\}\subset X_k^v,$$ with $$\label{eq:def_M} M(x,v):=\nu {\cal F}^{-1}\left\{\frac{{\cal F}F(\eta)}{\nu - i\delta V (x,\eta)} \left(1-\frac{\beta\hbar^2}{24m^2}\sum_{r,s=1}^d\eta_r\eta_s\frac{\partial^2 V(x)}{\partial x_r\partial x_s}\right)\right\}(x,v)\,,\;v\in{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d$$ for any fixed $x$. Moreover, for all $h\in X_k^v$, $ ({\mathcal A}+{{\cal C}}) w= h$ has a solution if and only if $$\label{eq:cond} \int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}\!\!\!h(v)\, dv\;\: = \;\:0\,.$$ [*It can be immediately deduced from the characterization that the solution of the equation $({\mathcal A}+{{\cal C}}) w=0 $ in $X_k$ is unique, up to a factor depending only on $x$.*]{}\ [[[**Proof**]{}]{}]{}.
By definition, $${\ker}({\mathcal A}+{{\cal C}}):=\{w\in X_k^v\,|\,({\nu}-\Theta[V])w={{\Omega\,}}w\}.$$ For all $h\in X_k^v$, the Fourier-transformed version of $({\nu}-\Theta[V])w=h$ reads $(\nu - i\delta V){\cal F} w= {\cal F} h$. Thus, $$\label{eq:inverseTheta} w(v)=({\nu}-\Theta[V])^{-1}h(v):={\cal F}^{-1}\left(\frac{{\cal F}h(\eta)}{\nu - i\delta V (\eta)}\right)(v)$$ is the unique solution; equivalently, the operator $(\nu-\Theta[V])$ is invertible in $X_k^v$ with bounded inverse, defined by . Precisely, $$\begin{aligned} \|w\|_{X_k^v}^2\! \!\!\!\!&=&\!\!\!\! \|({\nu}-\Theta[V])^{-1}h\|_{X_k^v}^2=C\!\!\int\!\! \frac{|{\cal F} h(\eta)|^2} {\nu^2 +|\delta V (\eta)|^2}\, d\eta + C \sum_{r=1}^d\int\! \left |\frac{\partial^k}{\partial{\eta_r^k}} \frac{{\cal F} h(\eta)}{\nu - i\delta V (\eta)}\right|^2\! d\eta\\[2mm] &\leq&\!\!\!\!\frac{C}{\nu^2}\|h\|_{L^{2}_{v}}^2+C\sum_{r=1}^d\int\! \left |\frac{\partial^k}{\partial{\eta_r^k}}\, \frac{{\cal F}h(\eta)(\nu+ i\delta V (\eta))}{\nu^2 +(\delta V)^2 (\eta)}\right|^2d\eta\,,\end{aligned}$$ then, by applying the product formula, it can be checked that, if $2k>d$, $$\label{eq:invTheta} \|({\nu}-\Theta[V])^{-1}\|_{{\cal B}(X_k^v)}\leq C(1+\|V\|_{H^{k}_x})\,.$$ Then, $$\begin{gathered} \label{eq:eq} w\in \ker({\mathcal A}+{{\cal C}})\, \Leftrightarrow \, w=({\nu}-\Theta[V])^{-1}{{\Omega\,}}w\,\\[2mm] \Leftrightarrow\, w = \nu n[w]({\nu}-\Theta[V])^{-1}(F+\hbar^2F^{(2)}[V])\,\Leftrightarrow\, w=n[w]M(v)\,,\end{gathered}$$ by definition of the operators $({\nu}-\Theta[V])^{-1}$ and ${{\Omega\,}}$. From this follows the characterization of $\ker({\mathcal A}+{{\cal C}})$, with the function $M$ defined by $$M(x,v):=\nu ({\nu}-\Theta[V])^{-1}(F(v)+\hbar^2F^{(2)}[V](x,v))\,\quad \forall\, v\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d,$$ with $x\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d$ fixed.
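As a purely illustrative sanity check (our own toy computation, not part of the paper), the explicit formula for $M$ can be evaluated numerically in $d=1$ for a quadratic potential $V(x)=ax^2/2$, for which $\delta V(x,\eta)=ax\eta/m$ and $V''=a$ exactly; one then finds $\int M\,dv=1$ and $\int vM\,dv=-V'(x)/(\nu m)$, in agreement with the moment computations carried out below. All constants are arbitrary model values.

```python
import numpy as np

beta, m, hbar, nu, a, x0 = 1.0, 1.0, 0.1, 2.0, 0.5, 1.2
eta = np.linspace(-40.0, 40.0, 4001)
deta = eta[1] - eta[0]
v = np.linspace(-12.0, 12.0, 801)
dv = v[1] - v[0]

FF = np.exp(-eta**2 / (2.0 * beta * m))                   # {\cal F}F, with FF(0) = 1
corr = 1.0 - beta * hbar**2 * eta**2 * a / (24.0 * m**2)  # O(hbar^2) factor (d = 1)
dV = a * x0 * eta / m                                     # deltaV, exact for quadratic V
Mhat = nu * FF * corr / (nu - 1j * dV)

# inverse Fourier transform: M(v) = (1/2pi) \int Mhat(eta) e^{i v eta} d eta
phase = np.exp(1j * np.outer(v, eta))
M = np.real(phase @ Mhat) * deta / (2.0 * np.pi)

m0 = np.sum(M) * dv        # should be  1
m1 = np.sum(v * M) * dv    # should be -a*x0/(nu*m) = -0.3 here
print(m0, m1)
```

The quadratures are accurate because the integrands are smooth and decay rapidly on the chosen grids.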
$F+\hbar^2F^{(2)}[V]\in X_k$ for all $k\in {{\ensuremath{\mathrm{I}\!\mathrm{N}}}}$, provided $\Delta V\in L^2_x$; then, due to the assumption on $V$ and to , $M\in X_k^v$ if $2k>d$. For all $h\in X_k$, solving $ ({\mathcal A}+{{\cal C}}) w= h$ is equivalent to $ ({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})w=-({\nu}-\Theta[V])^{-1}h$. Moreover, by the equivalence , $\ker({\mathcal A}+{{\cal C}})= \ker({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})$. Since $\ker({\mathcal A}+{{\cal C}})\neq \{0\}$, the operator ${\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}}$ is not injective. If the operator $({\nu}-\Theta[V])^{-1}{{\Omega\,}}$ is compact, then, by the Fredholm alternative, this is equivalent to $R({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})\neq X_k^v$. The equation $({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})w=M$ has indeed no solution, since $$\int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}\!\!\!({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})w(v)\, dv\;\: = \;\:0\,,\quad\forall\, w\in X_k^v\,,$$ (by the definition of the operator $({\nu}-\Theta[V])^{-1}{{\Omega\,}}$), while, instead, $\int M(v) dv=\int F(v) dv=1.$ Analogously, for all $u\in \ker({\mathcal A}+{{\cal C}}), ({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})w=u$ has no solution.\ In conclusion, if we show that $({\nu}-\Theta[V])^{-1}{{\Omega\,}}$ is a compact operator, then the Fredholm alternative yields that $({\cal I}-({\nu}-\Theta[V])^{-1}{{\Omega\,}})w=h$ has a solution iff $\int h(v)\, dv = 0$. Analogously to Lemma 1 in [@poupaud], a sequence of bounded finite-rank operators converging to $({\nu}-\Theta[V])^{-1}{{\Omega\,}}$ can be constructed by the Rellich-Kondrachov theorem. Thus the thesis follows. ------------------------------------------------------------------------ Finally, let us compute the first and the second moments of the function $M$: \[prop:M\_prop\] Let $V\in H^{k+2}_x$ with $d$-admissible $k$.
Then, the function $M$ defined by satisfies $$\begin{aligned} \label{eq:eq_bis} ({\cal A}+{\cal C})w=0 &\Leftrightarrow& w=n[w]M\quad \hbox{\rm with}\quad\int\! M(x,v)\,dv = 1 \,,\\ \label{eq:first_mom} \int\! vM(x,v)\,dv &=& -\frac{1}{\nu m}\nabla V(x)\,,\\ \label{eq:second_mom} \int\! v\otimes v M(x,v)\,dv &=&\frac{{\mathcal I}}{\beta m}+\frac{2}{\nu^2m^2}{\nabla V\otimes \nabla V} +\frac{\beta\hbar^2}{12m^2}{\nabla \otimes \nabla V}\,.\end{aligned}$$ [**Proof**]{}. follows by . Moreover, since $V\in H^{k+2}_x$ with $2k>d$, then the function $M$ $$M(x,v)=\nu{\cal F}^{-1}\left(\frac{{\cal F} (F+\hbar^2F^{(2)})}{\nu-i\delta V (x,\eta)}\right)(x,v)\,,$$ belongs to $X_{k+2}.$ By calculus rules in the Fourier space, since $F$ is smooth, it holds $$\int\! vM(x,v)\,dv \;\:=\;\: i\nu \left[ \nabla_{\eta}\left(\frac{{\cal F} (F+\hbar^2F^{(2)})}{\nu-i\delta V (x,\eta)}\right)\right](x,0).$$ By performing the derivative and then taking into account that $${\cal F}(F+\hbar^2F^{(2)})(x,0) ={\cal F} F(0)=1, \qquad \nabla_{\eta}{\cal F}(F+\hbar^2F^{(2)})(x,0) =0\,,$$ and that $(\nabla_{\eta}\delta V)(x,0)=\nabla_{x}V(x)/m$, one gets .\ Analogously, the second moments of $M$ are well defined, and, by calculus rules, it holds $$\begin{aligned} \int\!v_iv_jM(x,v)\, dv&=&-\nu\left[\frac{\partial^2}{\partial \eta_i\partial\eta_j}\frac{{\cal F} (F+\hbar^2F^{(2)})}{\nu-i\delta V}\right](x,0)\,,\quad \forall\, i,j=1,\ldots d\,,\end{aligned}$$ and $$\begin{aligned} -\frac{\partial^2}{\partial \eta_i\partial\eta_j}\left(\frac{{\cal F} F}{\nu-i\delta V}\right)(x,0)&=& -\frac{1}{\nu}\left(\frac{\partial^2{\cal F} F}{\partial \eta_i\partial\eta_j}\right)(x,0) + \frac{2}{\nu^3}\left(\frac{\partial\delta V }{\partial\eta_i}\frac{\partial\delta V }{\partial\eta_j}\right)(x,0)\\[1.5mm] &=&\frac{1}{\nu\beta m}+\frac{2}{\nu^3m^2}\frac{\partial V(x)}{\partial x_i}\frac{\partial V(x)}{\partial x_j}\,,\\[2mm] -\frac{\partial^2}{\partial \eta_i\partial\eta_j}\left(\frac{{\cal F} F^{(2)}}{\nu-i\delta
V}\right)(x,0) &=& -\frac{1}{\nu}\left(\frac{\partial^2{\cal F} F^{(2)}}{\partial \eta_i\partial\eta_j}\right)(x,0) \;\:=\;\:\frac{\beta}{12m^2\nu}\frac{\partial^2V(x) }{\partial x_i\partial x_j}\,.\end{aligned}$$ Thus the thesis follows. ------------------------------------------------------------------------ \[rem:moments\] *[ The state of the system under the effect of the interaction with the environment and of the strong potential is described by the function $n M$, with $M$ defined by . The fluid velocity associated with this state is nonzero and given by . In contrast, the velocity of the system in the state $w_{\mathrm eq}$ defined by (i.e., when it is subject to the sole influence of the environment), is $\int v\, w_{\mathrm eq}\, dv=0$, as expected since it is an equilibrium state. Moreover, the expression of the second moment tensor has to be compared with $$\int v\otimes v \,w_{\mathrm eq} \,dv=n\left(\frac{ {\mathcal I}}{\beta m}+\frac{\beta\hbar^2 }{12m^2} {\nabla \otimes \nabla V}\right).$$ They differ by the second summand in , which is due to the strong-field assumption (cf. [@poupaud]). ]{}\ * As a consequence of Proposition \[prop:kernelC\], the following subspace is well-defined $$\left(X_k\right)_M \::=\:\{ \alpha(x)M(x,v), \alpha\in L^2_x\}\subset X_k\,,$$ which coincides with $\ker (\cal{A}+\cal{C})$ when $\cal{A}+\cal{C}$ is considered as an operator on $X_k$. Accordingly, we can decompose the space $X_k$ as $$\label{eq:decomposition} X_k = \left(X_k\right)_M \oplus \left(X_k\right)^0$$ with $$\left(X_k\right)^0:=\left\{ w\in X_k \left| \int\!\! w(x,v)\,dv = 0 \right.\right\}\,,$$ and define the corresponding spectral projection $\cal P$ from $X_k$ into $\left(X_k\right)_M$, by $${\cal P}w := M \int_{{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d_v}\!\! w(x,v)\,dv\,, \nonumber$$ and ${\cal Q} := {\cal I} - {\cal P}$. The following corollary is a further preliminary result for our asymptotic procedure.
\[lemma1\] Let $V\in H^{k}_x$ with $d$-admissible $k$. Then, the operator ${\cal Q(A+C)Q} $ is an isomorphism of $(X_k)^0$ onto itself, with $$\label{eq:normQACQ} \| {\mathcal A}+{{\cal C}}\|_{{\cal B}({X}_k)}\leq C(d,k)\left(\|V\|_{H^{{k}}_x}+\nu\right).$$ If, in addition, $V\in H^{k+j}_x$ with $j>0$, then ${\cal Q(A+C)Q} $ is an isomorphism of $(H^j_k)^0$ onto itself, with $$\label{eq:normQACQH} \| {\mathcal A}+{{\cal C}}\|_{{\cal B}(H^j_k)}\leq C(d,k,j)\left(\|V\|_{H^{{k+j}}_x}+\nu\right).$$ [**Proof.**]{} The operator ${\cal Q(A+C)Q}$, when considered as an operator acting on $(X_k)^0$, reduces to $$\label{eq:qacq} {\cal Q(A+C)Q} u = \Theta[V]u - \nu u, \quad \forall \, u \in (X_k)^0.$$ Then the thesis follows from Lemma \[lemma:pseudo\], Prop. \[prop:kernelC\], and from the skew-symmetry of the pseudo-differential operator. The second statement and estimate can be proved analogously. ------------------------------------------------------------------------ \ The asymptotic expansion {#sec_cae} ======================== According to the decomposition of the space $X_k$, every function $w \in X_k$ can be written as $w={\cal P}w+{\cal Q}w$, with ${\cal P}w\in(X_k)_M$ and ${\cal Q}w\in(X_k)^0$. Let us call $ {\varphi}:= {\cal P}w $ and $\psi:={\cal Q}w $. Observe that, for all $w\in X_k, \int {\cal P}w(x,v)\, dv = n[w](x)$, while $\int {\cal Q}w(x,v)\, dv = 0$; that is, we separate the part of $w$ that contributes to the density $n[w]$ from the other one.
Precisely, it holds ${\cal P}w= n[w]M$, by definition.\ Applying formally the projection $\cal P$, respectively $\cal Q,$ to the Wigner-BGK equation (\[system\]) with unknown $w$, we obtain the following system of equations with unknown ${\varphi}$ and $\psi$ $$\left\{ \begin{array}{rcl} \displaystyle{\frac{{\partial}{\varphi}}{{\partial}t}} &=& {{\cal P}S{\cal P}}{\varphi}+ {{\cal P}S{\cal Q}}\psi \\[3mm] \displaystyle{\frac{{\partial}\psi}{{\partial}t}} &=& {{\cal Q}S{\cal P}}{\varphi}+ {{\cal Q}S{\cal Q}}\psi + \displaystyle \frac{1}{{\epsilon}} {\cal Q}({\cal A}+{\cal C}){\cal Q} \psi \end{array} \right. \label{sispro}$$ where we used $({\cal A}+{\cal C}){{\cal P}}{\varphi}=0$ and ${{{\cal P}}({\cal A}+{\cal C}) {{\cal Q}}} \psi=0$, together with the initial conditions $${\varphi}(0) = {\varphi}_0 = {{\cal P}}w_0\,, \qquad \psi(0)= \psi_0 = {{\cal Q}}w_0. \label{icpro}$$ System consists of an evolution problem with unknown functions ${\varphi}= n[w]M$ and $\psi$, and it is supplemented by the initial conditions . It is a reformulation of (\[system\]).\ Since we expect the solution $w$ to be subject to rapid changes for small times, we split the functions ${\varphi}$ and $\psi$ into the sums of the “bulk” parts ${\bar{\varphi}}$ and $\bar \psi$ and of the “initial layer” parts $\tilde {\varphi}$ and $\tilde \psi$, $${\varphi}(t) ={\bar \varphi}(t) + \tilde {\varphi}\left(\frac{t}{{\epsilon}}\right)\,,\qquad \psi(t)=\bar \psi(t)+ \tilde \psi\left(\frac{t}{{\epsilon}}\right). 
\nonumber$$ The bulk part $\bar{\varphi}$ is left [*unexpanded*]{} and the other parts are expanded in terms of ${\epsilon}$ as follows $$\begin{aligned} \label{expansion} {\tilde \varphi}({\tau}) &=& {\tilde \varphi}_0({\tau}) + {\epsilon}{\tilde \varphi}_1({\tau}) + {\epsilon}^2 {\tilde \varphi}_2({\tau}) +\ldots \nonumber\\[2mm] \bar \psi (t) &=& {\bar \psi}_0(t) + {\epsilon}{\bar \psi}_1(t)+ {\epsilon}^2 {\bar \psi}_2(t) + \ldots \\[2mm] {\tilde \psi}({\tau})&=& {\tilde \psi}_0({\tau}) + {\epsilon}{\tilde \psi}_1({\tau})+ {\epsilon}^2 {\tilde \psi}_2({\tau}) + \ldots ,\nonumber\end{aligned}$$ with ${\tau}={t}/{{\epsilon}}.$ Accordingly, Eqs.  for the bulk part terms of the expansion up to the order ${\epsilon}^2$ become $$\left\{ \begin{array}{rcl} \displaystyle{ \frac{{\partial}\bar {\varphi}}{{\partial}t}} &=& {{\cal P}S{\cal P}}\bar {\varphi}+ {{\cal P}S{\cal Q}}\bar\psi_0 + {\epsilon}{{\cal P}S{\cal Q}}\bar\psi_1 \\[2mm] 0&=&{\cal Q}({\cal A}+{\cal C}){\cal Q}\bar \psi_0\\[2mm] 0&=&{{\cal Q}S{\cal P}}\bar{\varphi}+{\cal Q}({\cal A}+{\cal C}){\cal Q} \bar\psi_1\, \end{array} \right. \label{sisprobis}$$ while the equations for the initial layer parts read $$\left\{ \begin{array}{rcl} \displaystyle{\frac{{\partial}\tilde {\varphi}_0}{{\partial}{\tau}}}&=& 0,\\[2mm] \displaystyle{\frac{{\partial}\tilde {\varphi}_1}{{\partial}{\tau}}}&=& {\cal P}S{\cal Q} \tilde \psi_0 ({\tau})\\[2mm] \displaystyle{\frac{{\partial}\tilde \psi_0}{{\partial}{\tau}}}&=& {\cal Q(A+C) Q} \tilde \psi_0({\tau})\\[2mm] \displaystyle{\frac{{\partial}\tilde \psi_1}{{\partial}{\tau}}}&=& {\cal Q(A+C) Q} \tilde \psi_1({\tau})+{\cal Q}S{\cal Q} \tilde \psi_0 ({\tau}) \end{array} \right. \label{initial}$$ and the initial conditions yield $$\left\{ \begin{array}{rcl} {\bar \varphi}(0)+ \tilde {\varphi}_0(0)+ {\epsilon}\tilde {\varphi}_1(0)&=&{\varphi}_0 \\ \bar \psi_0(0)+ \tilde \psi_0(0)&=&\psi_0 \\ \bar \psi_1(0) + \tilde \psi_1(0) &=&0 \, . \end{array} \right. 
\label{cisy}$$ System , together with -, is an $\mathcal{O}({\epsilon}^2)$-approximated version of with , once the expansion has been introduced. In fact, the equations in can be decoupled: by Corollary \[lemma1\], the operator ${\cal Q}({\cal A}+{\cal C}){\cal Q}$ is invertible in $(X_k)^0$, thus $$\begin{aligned} \label{eq:psibar0} {\bar \psi}_0 &\equiv& 0 \\ \label{eq:psibar1} {\bar \psi}_1 &=& -({\cal Q}({\cal A}+{\cal C}) {{\cal Q}})^{-1} {{\cal Q}S{\cal P}}{\bar \varphi}\,,\end{aligned}$$ which implies $$\label{diffu1} \displaystyle{\frac{{\partial}{\bar \varphi}}{{\partial}t}} = {{\cal P}S{\cal P}}{\bar \varphi}- {\epsilon}{{\cal P}S{\cal Q}}({\cal Q}({\cal A}+{\cal C}) {{\cal Q}})^{-1} {{\cal Q}S{\cal P}}{\bar \varphi}\,.$$ Thus, system reduces to the system -, with unknown functions ${\bar \varphi}(x,v,t)= n(x,t) \ M(x,v)$ and ${\bar \psi}_1$. The next section is dedicated to reformulating Eq.  as an equation with unknown $n$. The analysis of system , with unknown ${\tilde \varphi}$ and ${\tilde \psi}$ and initial conditions , is postponed to Section \[sec:initiallayer\]: it shall provide an appropriate initial condition for Eq. . Finally, in Sections 7 and 8 we shall establish a well-posedness result for the approximated problem. In our main theorem (cf. Thm. \[maintheorem\]), we shall prove that the solution $\varphi+\psi$ of equations indeed differs from $[{\bar \varphi}(t)+ {\tilde \varphi}_0({\tau}) + {\epsilon}{\tilde \varphi}_1({\tau})]+[{\bar \psi}_0(t) + {\epsilon}{\bar \psi}_1(t)+ {\tilde \psi}_0({\tau}) + {\epsilon}{\tilde \psi}_1({\tau}) ]$, which satisfies the approximated problem -, by a term of order ${\epsilon}^2$. The high-field quantum drift-diffusion equation {#strongfield} =============================================== The aim of the present section is the reformulation of the abstract equation as an equation with unknown $n$. \[lemma:ourQDD\] Let $V\in H^{k+2}_x$ with $d$-admissible $k$. Eq.
with unknown ${\bar{\varphi}}(x,v,t) = n(x,t) M(x,v)$ can be rewritten as an evolution equation with unknown $n(x,t)$ of the form $$\begin{aligned} \label{eq:ourQDD} \frac{\partial n}{\partial t} &-&\frac{1}{\nu m}\nabla \cdotp (n\nabla V) -\frac{{\epsilon}}{\nu\beta m} \nabla \cdotp \nabla n\nonumber\\ &-&\frac{{\epsilon}}{\nu^3m^2} \left [\nabla \cdotp (n (\nabla \otimes \nabla) V\nabla V) + \nabla \cdotp\nabla \cdotp (n{\nabla V \otimes \nabla V}) \right]\nonumber\\ &-&\frac{{\epsilon}\beta\hbar^2}{12\nu m^2}\nabla \cdotp \nabla \cdotp\left(n \nabla \otimes \nabla V \right)=0\end{aligned}$$ *[ The first line of consists of the terms of the classical DD equation. The second line is peculiar to the strong-field assumption, being a correction of order ${\epsilon}$, and consists of the additional term $$\frac{1}{\nu}\frac{\nabla V\otimes \nabla V}{\nu^2m^2}$$ in the pressure tensor, and of the term $$\frac{1}{\nu} \left(\frac{(\nabla\otimes\nabla) V\, \nabla V}{\nu^2m^2}\right)$$ in the drift term. Both terms are quadratic in the potential $V$. The second line can also be written as $$-\frac{{\epsilon}}{\nu^3m^2} \nabla \cdotp [\nabla V \otimes \nabla V \nabla n + n \left(2\nabla\otimes\nabla V\nabla V + \Delta V\nabla V\right)]\,.$$ This expression is the same as the one obtained in [@poupaud] from the semi-classical Boltzmann equation with high-field scaling. The last line is the quantum pressure term (cf. [@Gard94; @gasmar]). ]{}\ * The proof requires the following preliminary lemmata. \[lemma:D2\] Let $V\in H^{k+2}_x$ with $d$-admissible $k$. Then the equation $$\label{eqD2} ({\cal A}+{{\cal C}}) w= M\left(-v + \int\!\! v M\, dv \right)\,,$$ admits a unique solution $(D_2)_i \in (X_{k+1})^0\,, \forall\, i=1,\ldots,d$. Moreover, let ${\sf D}$ be the matrix defined by $${\sf D}_{ij} (x):= \int\!
v_i (D_2)_j(x,v)dv,$$ then $${\sf D}(x)=\frac{1}{\nu}\left(\frac{{\mathcal I}}{\beta m}+\frac{1}{\nu^2m^2}{\nabla V\otimes \nabla V} +\frac{\beta\hbar^2}{12m^2}\nabla\otimes\nabla V\right)(x)\,. \label{diftensor}$$ [**Proof**]{}. Since the right-hand side of Eq.  belongs to $(X_{k+1})^0$, it satisfies the compatibility condition and there exist $(D_2)_i \in (X_{k+1})^0\,, \forall i=1,\ldots,d$ satisfying . More explicitly, $D_2$ solves $$\label{eq:eqD2} (\Theta[V]-\nu) D_2(x,v) = -M(x,v)\left[ v+ \frac{\nabla V(x)}{\nu m} \right],$$ since $\int D_2(v)dv = 0$ and by . Multiplying the left-hand side of Eq.  by $v+\nabla V/(\nu m)$ and integrating over ${{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}$ we obtain $$\int\! \left(v+\frac{\nabla V(x)}{\nu m}\right)\otimes\left[ \Theta[V]-\nu \right ] D_2(x,v)\, dv = - \nu\int\!\! v_i (D_2)_j(x,v)\,dv\,,$$ by using the skew-symmetry and $(D_2)_i\in (X_{k+1})^0.$ Thus Eq.  gives $$\begin{aligned} \label{eq:matrix} \nu \, {\sf D}_{ij} (x) &=& \int\! \Big(v+\frac{\nabla V(x)}{\nu m}\Big)\otimes \Big(v+\frac{\nabla V(x)}{\nu m}\Big)M(x,v)\,dv\\ &=& \int\! v\otimes vM(x,v)\,dv- \frac{1}{\nu^2 m^2}\nabla V(x)\otimes\nabla V(x)\,.\nonumber\end{aligned}$$ From , the thesis follows. ------------------------------------------------------------------------ [*By considering the expression for the fluid velocity, we can recognize the classical definition of the pressure tensor in terms of $M$. This is to be expected, since the function $M$ is the solution of the evolution problem with ${\epsilon}=0$. Thus, the term with diffusion tensor ${\sf D}$ is what we expected to find as the first-order correction in ${\epsilon}$. By , it consists of the standard temperature and quantum pressure tensors, and of the additional tensor ${1}/({\nu^3 m^2})\nabla V(x)\otimes\nabla V(x)$, attributable to the strong-field assumption (cf. Remark \[rem:moments\]).* ]{}\ \[lemma:D1\] Let $V\in H^{k+2}_x$ with $d$-admissible $k$.
The following equation $$\label{eqD1} ({\cal A}+{{\cal C}})w=-v\cdotp \nabla_x M+M\int\!\! v \cdotp \nabla_xM\, dv\,,$$ admits a unique solution $D_1 \in (X_{k+1})^0\,.$ Moreover, the vector ${\sf W}$ defined by $${\sf W}(x):=\int v D_1(x,v)\,dv$$ can be calculated explicitly $$\label{eq:W} {\sf W}(x) = \frac{1}{\nu}\left(2\frac{\nabla\otimes \nabla V}{\nu^2 m^2}\nabla V(x) +\frac{\Delta V \nabla V}{{\nu^2 m^2}} +\frac{\beta\hbar^2}{12m^2} \nabla \cdotp \nabla\otimes \nabla V\right)(x) \,.$$ [**Proof**]{}. Under the regularity assumptions on $V$, $M\in H_{k+1}^1:= H^1_x\otimes X_{k+1}^v$ and the right-hand side of Eq.  belongs to $(X_{k+1})^0$, thus there exists $D_1\in (X_{k+1})^0$ solving $$(\Theta[V]-\nu) D_1(x,v) = -v\cdotp\nabla_x M(x,v)- M(x,v)\nabla \cdotp\frac{\nabla V(x)}{\nu m}\,,$$ which is equivalent to , since $\int D_1(v)\,dv = 0$. Multiplying by $v+{\nabla V}/{\nu m}$ and integrating over ${{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d}$ we obtain $$\begin{aligned} \label{eq:vD_2} \nu\int\!\! v D_1(x,v)\,dv &=& \int\!\! \left(v +\frac{\nabla V(x)}{\nu m}\right)\nabla_x \cdotp\left(v +\frac{\nabla V(x)}{\nu m}\right)M(x,v)\, dv \nonumber\\ \label{eq:vD_2bis} &=& \nabla_x\cdotp\int\!\!v \otimes v M(x,v)\, dv - \frac{1}{\nu^2m^2}\Delta V(x)\nabla V(x)\,.\end{aligned}$$ The thesis follows directly from and $$\nabla \cdotp (\nabla V\otimes\nabla V)= \Delta V\nabla V+\left(\nabla\otimes\nabla\right)V\nabla V\,.$$ ------------------------------------------------------------------------ [**Proof of Lemma 6.1**]{} First of all, let us write the explicit expression of the operators appearing in Eq. . Observe that $V\in H_x^{k+2}$ implies $M\in H_{k+1}^1$. By definition, $$\label{eq:PSP} {{\cal P}S{\cal P}}{\bar \varphi}= -M \int\!\! v\cdotp \nabla_x (n M)\, dv =-\ M\nabla_x \cdotp\left( n \int\!\! v M\, dv \right)\,, \nonumber$$ and $({\cal P}S{\cal Q } g)=({\cal P}Sg)- ({\cal P}S{\cal P} g)$, i.e., explicitly, $$\label{eq:psq} ({\cal P}S{\cal Q } g)=- M \left(\int\!\!
v\cdotp \nabla_x g\,dv - \int\!\!\nabla_x g\,dv\,\cdotp \int\!\! v\, M\,dv - \int\!\!g\,dv\nabla_x \cdotp \int\!\! v M dv \right)\,.$$ Moreover ${{\cal Q}S{\cal P}}{\bar \varphi}=( S {\cal P}- {{\cal P}S{\cal P}}){\bar \varphi}$, i.e., explicitly, $$\label{eq:QSP} {{\cal Q}S{\cal P}}{\bar \varphi}= n\left[-v\cdotp \nabla_x M+M\int\!\! v \cdotp \nabla_xM dv \right] +\nabla_x n\cdotp \left[ M\left(-v+ \int\!\! v M dv \right) \right] .$$ By Lemmata \[lemma:D2\] and \[lemma:D1\], $D_2(x,v)\equiv(D_2)_i(x,v)$ and $D_1(x,v)$ are solutions with $(D_2)_i, D_1\in\left(X_{k+1}\right)^0$ of Eqs.  and , respectively. Then, by some manipulations, $$\label{eq:standard} {{\cal P}S{\cal Q}}({{\cal Q}}({\cal A}+{{\cal C}}){{\cal Q}})^{-1} {{\cal Q}S{\cal P}}{\bar \varphi}= {\cal P}S (D_2 \cdotp\nabla n) + {\cal P}S (D_1 n)\,,$$ where the right-hand side can be written explicitly as $$\begin{aligned} {\cal P}S (D_2 \cdotp\nabla n) \;=& \! -M\int v\cdotp \nabla_x( D_2 \cdotp \nabla n)\,dv \!&=\;-M\nabla_x\cdotp \left[\left( \int\!\!v \otimes D_2 \,dv \right)\cdotp\nabla n\right]\,,\\ {\cal P}S (D_1 n) \:=&\! -M\int v\cdotp \nabla_x( D_1 n)\,dv\!&=\;-M \nabla_x\cdotp \left[n\int\!\!v D_1 \,dv\right]\,.\end{aligned}$$ Hence, after cancelling the common factor $M$, Eq. (\[diffu1\]) reads $$\frac{{\partial}n}{{\partial}t} = - \nabla_x \cdotp \left( n\int\!\! v M dv \right)+\epsilon \nabla_x \cdotp \left({\sf D}\cdotp\nabla n +n{\sf W}\right)\,,$$ and the thesis follows by using and . ------------------------------------------------------------------------ \ As a consequence of , Eq.  defining the other non-zero term of the bulk part expansion, $\bar \psi_1$, can be rewritten as $$\label{eq:J_1} \bar \psi_1(x,t)= - \left\{D_2 \cdotp \nabla n + D_1 n\right\}\,.$$ The explicit version of this expression shall be given in Eq. .
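The high-field QDD equation just derived lends itself to a quick numerical sanity check. The following sketch (one space dimension; the parameter values and the potential are illustrative assumptions, not taken from the model) advances the equation in the conservative form $\partial_t n = \partial_x J$, where the flux $J$ collects the drift, diffusion, high-field and quantum-pressure contributions; in this form the total mass $\int n\,dx$ is preserved by construction, mirroring the divergence structure of the equation.

```python
import numpy as np

# 1D sketch of the high-field QDD equation in conservative form dn/dt = dJ/dx.
# Parameters and the potential are illustrative assumptions, not from the paper.
nu, m, beta, hbar, eps = 1.0, 1.0, 1.0, 0.1, 0.05

N = 256
L = 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def ddx(f):
    # centred difference on a periodic grid
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

V = 0.3 * np.cos(x)                  # assumed smooth periodic potential
Vx, Vxx = ddx(V), ddx(ddx(V))

def flux(n):
    J = n * Vx / (nu * m)                                          # classical drift
    J += eps * ddx(n) / (nu * beta * m)                            # diffusion
    J += eps / (nu**3 * m**2) * (n * Vxx * Vx + ddx(n * Vx**2))    # high-field terms
    J += eps * beta * hbar**2 / (12.0 * nu * m**2) * ddx(n * Vxx)  # quantum pressure
    return J

n = 1.0 + 0.5 * np.cos(2.0 * x)      # initial density
mass0 = np.sum(n) * dx
dt = 1e-4
for _ in range(2000):                # explicit Euler time stepping
    n = n + dt * ddx(flux(n))

print(abs(np.sum(n) * dx - mass0))   # mass is conserved up to round-off
```

All transport terms enter $J$ with a plus sign because the equation is stated with minus signs on its left-hand side.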
Rigorous results: the initial layer part {#sec:initiallayer} ======================================== The aim of the present section is to prove existence and regularity of the solutions of Eqs. , together with the initial conditions (\[cisy\]). The first equation in yields $${\tilde \varphi}_0({\tau}) \equiv 0,$$ since we expect that $\lim_{\tau\to \infty} {\tilde \varphi}_0({\tau})=0.$ The equation for $\tilde{\psi}_0$ with the appropriate initial condition coming from -, is $$\left\{ \begin{array}{ccl} \displaystyle \frac{{\partial}{\tilde \psi}_0}{{\partial}\tau} &=& {\cal Q(A+C)Q} \tilde{\psi}_0\\ \, \rule{0mm}{5mm}\tilde{\psi}_0 (0) &=& \psi_0\,. \end{array} \right. \label{psitilde0}$$ We recall that the operator ${\cal Q(A+C)Q}$ on $(X_k)^0$ reduces to $${\cal Q(A+C)Q} w = \Theta[V]w - \nu w, \quad \forall \, w \in (X_k)^0\,,$$ (cf. ). Since the pseudo-differential operator acts as a multiplication in the Fourier variables (cf. ), it is more convenient to consider the equation for ${\cal F}\tilde{\psi}_0$, which reads $$\frac{\partial}{\partial{\tau}}{\cal F}\tilde{\psi}_0 (x,\eta, \tau)= (i\,\delta V(x,\eta)-\nu) {\cal F} \tilde{\psi}_0(x,\eta,\tau)\,.$$ Thus, we define, for all $w\in L^2({{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^6;{{\ensuremath{\mathrm{I}\!\mathrm{R}}}}),$ the semigroup $G(\tau)$ $$\begin{aligned} \label{eq:FourierG} G(\tau)w(x,v)&:=&{\cal F}^{-1}\left(e^{(i\,\delta V(x,\eta)-\nu)\tau} {\cal F} w(x,\eta)\right)\\ &=&e^{-\nu\tau}{\cal F}^{-1}\left(e^{i\,\delta V(x,\eta)\tau}{\cal F} w(x,\eta)\right)\,,\quad \forall\, \tau\geq 0\,.\nonumber\end{aligned}$$ The function ${\tilde \psi}_0(\tau)\equiv G(\tau)\psi_0$ formally satisfies system .
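Since $G(\tau)$ acts as a Fourier multiplier, it can be applied numerically with an FFT. The following one-dimensional sketch (the real symbol standing in for $\delta V$ is an arbitrary illustrative choice) makes the decay mechanism explicit: the oscillatory factor $e^{i\,\delta V\tau}$ is unimodular and the Fourier transform is unitary, so in the plain $L^2$ norm the decay is exactly $e^{-\nu\tau}$; the smaller rates $\nu_k<\nu$ appearing below only arise when $\eta$-derivatives, i.e. the $X_k$ norms, are involved.

```python
import numpy as np

# Applying G(tau)w = F^{-1}( exp((i*deltaV - nu)*tau) F w ) with an FFT.
# deltaV is an arbitrary real symbol chosen for illustration only.
nu, tau = 1.0, 0.7
N = 128
v = np.linspace(-5.0, 5.0, N, endpoint=False)
eta = 2.0 * np.pi * np.fft.fftfreq(N, d=v[1] - v[0])

deltaV = np.sin(eta)                         # assumed real-valued symbol

def G(tau, w):
    return np.fft.ifft(np.exp((1j * deltaV - nu) * tau) * np.fft.fft(w))

w = np.exp(-v**2)                            # sample datum
l2 = lambda f: np.sqrt(np.sum(np.abs(f)**2))

# |exp(i*deltaV*tau)| = 1 and the DFT is unitary up to scaling, so the
# discrete L^2 norm decays exactly like exp(-nu*tau):
ratio = l2(G(tau, w)) / l2(w)
print(ratio)          # exp(-0.7) ≈ 0.4966
```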
Moreover, \[lemma2\] If $w \in (X_k)^0$ and $V\in H_x^{k}$ with $d$-admissible $k$, then there exist $0<\nu_k<1$ and a constant $C(\|V\|_{H_x^k})>0$, such that $$\label{eq:decay} \|G(\tau)w\|_{X_k} \leq C (\|V\|_{H_x^{k}}) \,e^{-\nu_k\tau}\|w\|_{X_k}\,.$$ If, in addition, $w \in (H_k^j)^0$ and $V\in H_x^{k+j}$, then $$\label{eq:decayH} \|G(\tau)w\|_{H_k^j} \leq C (\|V\|_{H_x^{k+j}}) \,e^{-\nu_{k+j}\tau}\|w\|_{H_k^{j}}\,,$$ with appropriate $C (\|V\|_{H_x^{k+j}})>0,$ and $0<\nu_{k+j}<1.$ Eq.  defines a strongly continuous semigroup on $(X_k)^0$ (respectively on $(H_k^j)^0$). [[[**Proof**]{}]{}]{}. By definition we have $$\begin{aligned} \|G(\tau)w\|_{X_k}&\leq& C e^{-\nu\tau}\!\left(\|e^{i\,\delta V(x,\eta)\tau} {\cal F} w(x,\eta)\|_{L^2_{x,\eta}}+\|\nabla^k_{\eta}(e^{i\,\delta V(x,\eta)\tau} {\cal F} w(x,\eta))\|_{L^2_{x,\eta}}\right) \\ &\leq& C e^{-\nu\tau}\!\left(\|w\|_{L^2_{x,v}}+P_k(\tau \|V\|_{H_x^{k}})\|w\|_{X_k}\right)\\ &\leq& e^{-\nu_k\tau}\max_{\tau\geq 0}\{e^{-(\nu-\nu_k)\tau}P_k(\tau \|V\|_{H_x^{k}})\}\|w\|_{X_k}\,,\end{aligned}$$ where $0<\nu_k<\nu$ and $P_k$ is a polynomial of degree $k$. The estimate can be proved analogously. The last assertion follows immediately by applying the Hille–Yosida theorem, thanks to (respectively ). ------------------------------------------------------------------------ \ With Lemmata \[lemma1\] and \[lemma2\] we can prove the following proposition.
\[prop:estimates\_initial\] If $w_0\in H_{k+1}^1$ and $V\in H_x^{k+2}$, with $d$-admissible $k$, then all terms of the initial layer expansion are well-defined and satisfy the following estimates: $$\begin{aligned} \label{eq:tilde_psi_0} \|\tilde\psi_0(\tau)\|_{X_k} &\leq &{\rm M}_1e^{-\nu_{k}\tau}\|w_0\|_{X_{k}}\,,\\ \label{eq:tilde_zf_1} \|\tilde{\varphi}_1(\tau)\|_{X_k} &\leq &{\rm M}_2e^{-\nu_{k+2}\tau}\|w_0\|_{H^1_{k+1}}\,,\\ \label{eq:tilde_psi_1} \|\tilde\psi_1(\tau)\|_{X_k} &\leq& {\rm M}_3e^{-\nu_{k+2}\tau} \|w_0\|_{H^1_{k+1}}\,,\end{aligned}$$ for some constants ${\rm M}_1$, ${\rm M}_2$ and ${\rm M}_3$ (depending on norms of $V$). [**Proof**]{}. The unique solution of system is $$\tilde\psi_0({\tau}) = G({\tau})\psi_0\,,$$ and follows immediately from (\[eq:decay\]) since $\psi_0={\cal Q}w_0\in X_k$. Now we shall consider, among Eqs. , the following one: $$\frac{{\partial}\tilde {\varphi}_1}{{\partial}{\tau}}({\tau})= {{\cal P}S{\cal Q}}\tilde \psi_0 ({\tau})\,.$$ The right-hand side is well-defined by considering the definition of the operator ${\cal P}S{\cal Q}$ (cf. ), together with Lemma \[lemma2\], since $\psi_0 \in (H^1_{k+1})^0$ and $V\in H^{k+2}_x$. By integrating with respect to $\tau$ and considering $\lim_{\tau\to \infty} \tilde {\varphi}_1({\tau})=0$, we obtain $$\begin{aligned} \tilde{\varphi}_1(\tau) &=& -\int_{\tau}^{\infty}{{\cal P}S{\cal Q}}\tilde\psi_0(s)\,ds \\ &=&-\int_{\tau}^{\infty}{{\cal P}S{\cal Q}}[{\cal Q (A+C) Q}]^{-1}[{\cal Q(A+C)Q} ] G(s)\psi_0\,ds =\\ &=&-{{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}\int_{\tau}^{\infty}\!\!{\cal Q(A+C)Q} G(s)\psi_0\,ds.\end{aligned}$$ The last integral is well-defined since the integrand ${\cal Q(A+C)Q} G(s)\psi_0 $ is equal to $G(s){\cal Q(A+C)Q}\psi_0$, which is continuous in $H^1_{k+1}$ (by Lemma \[lemma2\]). Moreover ${{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1} \in {\cal L}(H^1_{k+1},X_k)$ (by Corollary \[lemma1\]), so it can be taken outside the integral.
Since $${\cal Q(A+C)Q} G(s)\psi_0 = \frac{{\partial}G(s)\psi_0}{{\partial}s}\,,$$ thanks to the exponential decay of $G$ in $H^1_{k+1}$ and the continuity of the operator ${{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}$, we obtain $$\label{phipre} \tilde{\varphi}_1(\tau) = {{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}G(\tau)\psi_0\,,$$ and, in particular, $$\tilde{\varphi}_1(0) = {{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}\psi_0,$$ which provides the initial datum. Then, (\[eq:tilde\_zf\_1\]) follows from the estimate $$\begin{aligned} \label{eq:tilde_zf_1proof} \|\tilde{\varphi}_1(\tau)\|_{X_k} &\leq & |\|{{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}\|| \; \|G(\tau)\psi_0\|_{H^1_{k+1}} \leq\\ &\leq&{\rm M}_2e^{-\nu_{k+2}\tau}\|\psi_0\|_{H^1_{k+1}}\,,\nonumber\end{aligned}$$ where $|\|\cdot\| |$ denotes the norm in ${\cal L}(H_{k+1}^1,X_k)$. Finally, we prove that the equation $$\frac{{\partial}\tilde \psi_1}{{\partial}{\tau}}({\tau})= {\cal Q(A+C)Q}\tilde \psi_1({\tau})+{\cal Q}S{\cal Q} \tilde \psi_0 ({\tau})$$ is classically solvable. The initial condition for $\tilde{\psi}_1$ can be obtained from Eqs.  , together with Eq.  for $\bar \psi_1$, $$\tilde \psi_1(0)= - \bar \psi_1(0) = [{\cal Q}({\cal A}+{\cal C}) {{\cal Q}}]^{-1} {{\cal Q}S{\cal P}}{\bar \varphi}(0)\,,$$ and by considering $${\bar \varphi}(0) = {\varphi}_0 - {\tilde \varphi}_0(0) - {\epsilon}{\tilde \varphi}_1(0) = {\varphi}_0 - {\epsilon}{{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1} \psi_0. \label{difiv}$$ Since $\tilde \psi_1(0)$ is by itself a correction of order ${\epsilon}$, we neglect the term of order ${\epsilon}$ in the expression for ${\bar \varphi}(0)$, which yields $$\tilde \psi_1(0)= [{\cal Q}({\cal A}+{\cal C}) {{\cal Q}}]^{-1} {{\cal Q}S{\cal P}}{\varphi}_0\,.$$ By Lemma \[lemma2\], $G$ is a semigroup on $(H^1_{k+1})^0$ and, thanks to the assumption on $w_0$, $\psi_0$ is in the domain of ${\cal Q(A+C)Q}$ when it is defined in $D(S)\cap(H^1_{k+1})^0$.
Therefore $\tilde\psi_0(\tau) = G(\tau)\psi_0$ is differentiable on $[0,\infty[$ in $X_{k+1}$ so that the inhomogeneous term ${{\cal Q}S{\cal Q}}\tilde\psi_0(\tau)$ is differentiable on $[0, \infty[$ in $X_k$. This, together with $\tilde\psi_1(0) = ({\cal Q(A+C)Q})^{-1}{{\cal Q}S{\cal P}}{\varphi}_0 \in D({\cal Q(A+C)Q})$, shows that $$\label{eq:tildepsidef} \tilde\psi_1(\tau) = G(\tau)\tilde \psi_1(0) + \int_{0}^{\tau}G(\tau - \sigma){{\cal Q}S{\cal Q}}G(\sigma)\psi_0d\sigma$$ is a classical solution. The estimate follows from $[{\cal Q(A+C)Q}]^{-1} {{\cal Q}S{\cal P}}\in {\cal L}(H^1_k,X_k)$ and from : $$\begin{aligned} \|\tilde\psi_1(\tau)\|_{X_{k}} &\leq& {\rm K}_1e^{-\nu_k\tau}\|{\varphi}_0\|_{X_{k}} + {\rm K}_2 e^{-\nu_k\tau}\int_{0}^{\tau}e^{(\nu_k-\nu_{k+2})\sigma}\|\psi_0\|_{H^1_{k+1}} d\sigma\\ &\leq& {\rm K}_1 e^{-\nu_k\tau}\|{\varphi}_0\|_{X_{k}} + {\rm K}_3e^{-\nu_{k+2}\tau} \|\psi_0\|_{H^1_{k+1}} \:\;\leq \:\; {\rm M}_3e^{-\nu_{k+2}\tau}\|w_0\|_{H^1_{k+1}}\,. \end{aligned}$$ ------------------------------------------------------------------------ \ In order to obtain an initial value for Eq.  with unknown $\bar\varphi=n\,M$, we consider again . Let us set $n_0(x) = \int w_0(x,v)\, dv $, so that ${\varphi}_0=n_0M$; then, dividing both sides of the expression by $M$ yields $$\label{cx0} n(x,0) = n_0(x) + {\epsilon}\int\!\! v \cdotp \nabla_x {\cal F}^{-1}\left(\frac{{\cal F}\psi_0}{i \delta V - \nu}\right)(x,v)\, dv\,,$$ by using the explicit expression of the operator $({\cal Q}({\cal A}+{\cal C}){\cal Q})^{-1}$ (cf. ). In the following we shall call $$\label{cx0b} n(x,0) = n_0(x) + {\epsilon}n_1(x)\quad \hbox{with} \quad n_1(x):=\int\!\! v \cdotp \nabla_x {\cal F}^{-1}\left(\frac{{\cal F}\psi_0}{i \delta V - \nu}\right)(x,v)\, dv\,.$$ The explicit expression for can be obtained analogously and reads $$\tilde{{\varphi}}_1(\tau) = - M\int\!\! v \cdotp \nabla_x {\cal F}^{-1}\left(\frac{{\cal F}G(\tau)\psi_0}{i \delta V - \nu}\right)\, dv\,. 
\nonumber$$ Well-posedness of the high-field QDD equation ============================================= In this section, we establish a well-posedness and regularity result for Eq. , with a given external potential $V$. The equation can be rewritten in divergence form as $$\label{eq:qdd1} \frac{{\partial}n}{{\partial}t} - {\cal D} n - {\cal G} n - {\cal E} n =0\,,$$ where we indicate $${\cal D} n = {{\epsilon}}\nabla \cdot ({\sf D} \nabla n)\,,\quad {\cal G} n = {{\epsilon}}\nabla \cdot ({\sf W}\,n)\,,\quad {\cal E} n = \nabla \cdot ( {\sf E}\,n)$$ with $$\begin{aligned} {\sf D}\equiv{\sf D}(x)&:=& \frac{1}{\nu}\left(\frac{{\mathcal I}}{\beta m}+\frac{1}{\nu^2m^2}{\nabla V\otimes \nabla V}+\frac{\beta\hbar^2}{12m^2}\nabla\otimes\nabla V\right)(x)\,,\\[2mm] {\sf W}\equiv{\sf W}(x) &:=& \frac{1}{\nu}\left(2\frac{\nabla\otimes \nabla V}{\nu^2 m^2}\nabla V +\frac{\Delta V \nabla V}{\nu^2 m^2} +\frac{\beta\hbar^2}{12m^2} \nabla \cdotp \nabla\otimes \nabla V\right)(x) \,,\\[2mm] {\sf E}\equiv {\sf E}(x)&:=& \frac{\nabla V(x)}{\nu m}\,,\quad \forall\,x\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d.\end{aligned}$$ \[ass:assumptiononV\] $V$ belongs to $H_x^{k+2}$ with $d$-admissible $k$ and it satisfies the following: $$\exists \; c>0 \quad\hbox{s.t.}~\,{\sf D}(x) \,y \cdotp y \:\;\ge\:\; c |y|^2\,,\quad \forall\,x,y\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d\,.$$ This implies that ${\cal D}$ is a uniformly elliptic differential operator. Thus we can state the following: \[prop:existence\] Let $V$ satisfy Assumption \[ass:assumptiononV\] and, in addition, $\nabla\Delta V\in W^{j-1,\infty}_x$ with $j\in {{\ensuremath{\mathrm{I}\!\mathrm{N}}}}$. Then the unique global solution $n=n(t)$ of Eq. 
with $n(0)\in L^2_x$ satisfies $n(t)\in H_x^{j}$ for $t>0$, and the following estimate $$\label{eq:ana} \| n(t) \|_{H_x^{j}} \leq M_j ({\epsilon}t)^{-j/2}\| n(0) \|_{L^2_x}$$ holds with $M_j>0$, for ${\epsilon},t\to 0^+\,.$ In the following, by $\nabla F\in L^2({{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^d_x)$ we mean $\nabla F\in (L^2_x)^d$. Moreover, we consider $0<{\epsilon}<1$ and the constants are independent of ${\epsilon}$, unless specified.\ [**Proof**]{}. By Assumption \[ass:assumptiononV\] on the potential $V$, the operator ${\cal D}$ defined on $D({\cal D})=H^2_x$ generates an analytic contraction semigroup $(T(t))_{t\geq 0}$ on $L^2_x$ (cf. Thm. VI.5.22 of [@EngelNagel]).\ Let us derive here a basic estimate we shall use repeatedly in the following. By Assumption \[ass:assumptiononV\], for all $u\in H^2_x$ $$\|\nabla\cdotp\nabla u\|_{L^2_x} \leq C \|{\sf D}\nabla\otimes\nabla u\|_{L^2_x}\leq \frac{C}{{\epsilon}} (\|{\cal D}u\|_{L^2_x} + \|\mathrm{div}{\sf D}\cdotp \nabla u\|_{L^2_x})\,,$$ where the second inequality follows from the definition of the operator ${\cal D}$. Moreover, by using that for all $u\in H^2_x$, $$\label{eq:GN} \|\nabla u\|_{L^2_x}\leq C_{\delta}\|u\|_{L^2_x} + \delta \|\nabla \cdotp\nabla u\|_{L^2_x}\,,\quad \forall\,\delta>0,$$ with $C_{\delta}>0$, it holds $$\|\mathrm{div}{\sf D}\cdotp \nabla u\|_{L^2_x}\leq \|\mathrm{div}{\sf D}\|_{L^{\infty}_x}(C_{\delta}\|u\|_{L^2_x} + \delta \|\nabla \cdotp\nabla u\|_{L^2_x})\,,\quad \forall\,\delta>0,$$ with $C_{\delta}>0$.
Thus, $$\|\nabla\cdotp\nabla u\|_{L^2_x} \leq \frac{c}{{\epsilon}(1-\delta \|\mathrm{div}{\sf D}\|_{L^{\infty}_x}/{\epsilon})} (\|\mathrm{div}{\sf D}\|_{L^{\infty}_x}C_{\delta}\|u\|_{L^2_x}+ \|{\cal D}u\|_{L^2_x})\,,$$ and, in conclusion, for an appropriate choice of $\delta>0$, there exists a constant $C>0$ such that $$\label{eq:stimabase} \|\nabla\cdotp\nabla u\|_{L^2_x} \leq \frac{C}{{\epsilon}} (\|u\|_{L^2_x}+ \|{\cal D}u\|_{L^2_x})\,.$$ The operator ${\cal G}$ can be written as ${\cal G}={\cal G}_1+{\cal G}_2$ with ${\cal G}_1 f:={\epsilon}{\sf W} \cdotp \nabla f$ defined on $H^1_x$, and ${\cal G}_2 f:={\epsilon}\nabla\cdotp {\sf W} f$, defined on $L^2_x$. The operator ${\cal G}_1$ is ${\cal D}$-bounded, i.e., for all $n\in D({\cal D})$, $$\begin{aligned} \label{eq:G1_prima} \|{\cal G}_1 n\|_{L^2_x}&\leq&{\epsilon}\| {\sf W} \|_{L^{\infty}_x}C (\|n\|_{L^2_x}+\|\nabla\cdotp\nabla n\|_{L^2_x})\\ &\leq& \|{\sf W}\|_{L^{\infty}_x} (C\|n\|_{L^2_x} +C{\|{\cal D}n\|_{L^2_x}})\nonumber\\ \label{eq:G1_terza} &\leq& b\|n\|_{L^2_x} + a{\|{\cal D}n\|_{L^2_x}}\,,\end{aligned}$$ by using and ${\epsilon}<1$. Moreover, the ${\cal D}$-bound $a_1$ defined by $$a_1:=\mathrm{inf}\,\{a\geq0\,|\, \exists \,b>0\;\hbox{s.t.~\eqref{eq:G1_terza} holds}\}$$ is zero, by substituting with . The operator ${\cal G}_2$ is bounded on $L^2_x$. The operator ${\cal E}$ can be written as ${\cal E}={\cal E}_1+{\cal E}_2$ with ${\cal E}_1 f:=E \cdotp \nabla f$, defined on $H^1_x$, ${\cal D}$-bounded with ${\cal D}$-bound $a_2=0$, since, for all $n\in D({\cal D})$, $$\begin{aligned} \|{\cal E}_1 n\|_{L^2_x}&\leq&\| E\|_{L^{\infty}_x}(C_{\delta}\|n\|_{L^2_x}+\delta\|\nabla\cdotp\nabla n\|_{L^2_x})\nonumber\\ &\leq& C\, \|E\|_{L^{\infty}_x}\left({C_{\delta}}\|n\|_{L^2_x}+\frac{\delta}{{\epsilon}} (\|n\|_{L^2_x}+\|{\cal D}n\|_{L^2_x})\right)\,,\quad\forall\, \delta>0\,,\label{eq:E1}\end{aligned}$$ by using and .
The operator ${\cal E}_2 f:= \nabla\cdotp E f$ is defined on $L^2_x$ and bounded.\ Thus, by Thm. III.2.10 of [@EngelNagel], $({\cal D}+{\cal G}+{\cal E},D({\cal D}))$ generates an analytic semigroup on $L^2_x$ that we shall indicate with $(S(t))_{t\geq 0}$. More precisely, it holds $$\label{eq:stimaanaliticity} \|({\cal D}+{\cal G}+{\cal E})^{\alpha}S(t)u\|_{L^2_x}\leq M_{\alpha} t^{-\alpha}\|u\|_{L^2_x}, \quad t\to 0^+\,,\;\forall\,\alpha \geq 0$$ with $M_{\alpha}$ independent of ${\epsilon}$, by employing Lemma III.2.6 of [@EngelNagel].\ In order to derive estimate , let us start from the following inequality $$\label{eq:stimastandard} {\epsilon}^{m/2}\|u\|_{H^m_x}\leq C\|({\cal D}+{\cal G}+{\cal E})^{m/2}u\|_{L^2_x} $$ which follows by arguments similar to . By combining with , we get $$\label{eq:secondastima} \|S(t) u\|_{H^j_x}\leq C{\epsilon}^{-j/2} \|({\cal D}+{\cal G}+{\cal E})^{j/2}S(t)u\|_{L^2_x}\leq C_j ({\epsilon}t)^{-j/2}\|u\|_{L^2_x}\,,$$ which holds for all $u\in L^2_x$ and for small $t$. ------------------------------------------------------------------------ \ The singularity with respect to $t$ in estimate can easily be removed: Let the assumptions of Prop.  hold. In addition, let $n(0)$ belong to $H^j_x$. Then the solution $n(t)$ belongs to $H^j_x$ for all $t>0$ and satisfies $$\label{eq:primastimabis} \|n(t)\|_{H^j_x}\leq C{\epsilon}^{-j/2} \|n(0)\|_{H^j_x}\,,$$ for ${\epsilon},t\to 0^+\,.$ [**Proof.**]{}\ The following inequality holds $$\label{eq:stimastandardbis} {\epsilon}^{m/2}\|u\|_{H^m_x}\leq C\|({\cal D}+{\cal G}+{\cal E})^{m/2}u\|_{L^2_x}\leq C \|u\|_{H^m_x}$$ for all $u\in D(({\cal D}+{\cal G}+{\cal E})^{m/2}),$ with $m\leq j$, and for ${\epsilon}\to 0^+$ (cf. ).
Then, in particular, $$\begin{gathered} \label{eq:primastima} \|S(t) u\|_{H^m_x}\leq C{\epsilon}^{-m/2} \|({\cal D}+{\cal G}+{\cal E})^{m/2}S(t)u\|_{L^2_x}\\ \leq C{\epsilon}^{-m/2}\|({\cal D}+{\cal G}+{\cal E})^{m/2}u\|_{L^2_x}\leq C{\epsilon}^{-m/2}\|u\|_{H^m_x}\,,\end{gathered}$$ which holds for all $u\in D(({\cal D}+{\cal G}+{\cal E})^{m/2}),$ for small $t$ and ${\epsilon}$: the first inequality sign corresponds to the first one in Eq. , the second inequality follows by exchanging $({\cal D}+{\cal G}+{\cal E})^{m/2}$ with $S(t)$ and the third one comes from Eq. . ------------------------------------------------------------------------ \ *Observe that in the low-field case, the QDD equation looks like Eq.  with $${\sf W}(x)=\left(\frac{\beta\hbar^2}{12\nu m^2} \nabla \cdotp \nabla\otimes \nabla V\right)(x)\,,\quad{\sf D}(x)= \frac{1}{\nu}\left(\frac{{\mathcal I}}{\beta m}+\frac{\beta\hbar^2}{12m^2}\nabla\otimes\nabla V\right)(x)\,.$$ In order to establish the well-posedness result and estimate for all $j\in {{\ensuremath{\mathrm{I}\!\mathrm{N}}}}$, the same assumptions of Prop. \[prop:existence\] are required on the potential $V$ and on the initial datum $n(0)$. This result is to be compared with the analysis in [@juengelpinnau1], where the fourth-order, non-linear equation obtained by the approximation $\nabla \log n= -\beta \nabla V + {\cal O}(\hbar^2)$ is tackled, cf. [@Gard94].\ * By strengthening the assumptions on the initial datum, we can remove the singular behaviour of the estimate  with respect to $t$ and ${\epsilon}$. \[cor:regularity\] Let $V$ satisfy Assumption \[ass:assumptiononV\] and $\nabla\Delta V\in W_x^{2j-1,\infty}$. Then the solution $n(t)$ of Eq.  with $n(0)\in D({\cal D}^j)$ satisfies, for ${\epsilon}, t \rightarrow 0^{+}$ $$\label{stiman1} \| n(t) \|_{H^j_x} \leq C\| n(0) \|_{H^{2j}_x} \,.$$ Moreover, the following refinement holds $$\label{stiman2} \| n(t) \|_{H^j_x} \leq C \| n(0) \|_{H^j_x}\,.$$ [**Proof**]{}.
We prove the thesis in the case $j=1$. For $j>1$ the thesis follows by an induction procedure similar to [@banasiakAAM]. Due to the regularity with respect to the variable $x$ of the solution $n(t),$ for $t>0$, we can find the evolution equation for $\nabla n$ by differentiating $$\nabla\left(\frac{\partial}{\partial t} n\right)=\frac{\partial}{\partial t}(\nabla n) = \nabla({\cal D}+{\cal G}+{\cal E})n=({\cal D}+{\cal G}+{\cal E})\nabla n - [({\cal D}+{\cal G}+{\cal E}),\nabla]n,$$ where we indicate with $[({\cal D}+{\cal G}+{\cal E}),\nabla]$ the commutator of the two operators. Since $$-[({\cal D}+{\cal G}+{\cal E}),\partial_k] = {\epsilon}\sum_{i,j} \partial_i \left( \partial_k{\sf D}_{ij} \partial_j n \right) + \sum_i \partial_i\left( \partial_k\left({\epsilon}{\sf W}_i +{\sf E}_i\right) n \right)=:({\cal D}^\prime+{\cal G}^\prime+{\cal E}^\prime) n,$$ $\nabla n$ satisfies $$\label{eq:diffnabla} \frac{\partial}{\partial t}(\nabla n)\:\;=\:\;({\cal D}+{\cal G}+{\cal E})\nabla n +({\cal D}^\prime+{\cal G}^\prime+{\cal E}^\prime) n\,.$$ The solution of the previous equation can be expressed by the Duhamel formula via the analytic semigroup $S(t)$ generated by $({\cal D}+{\cal G}+{\cal E})$, as $$\label{eq:duh1} \nabla n (t) = S(t) \nabla n (0) + \int_0^t \!\! S(t-s) ({\cal D}^\prime+{\cal G}^\prime+{\cal E}^\prime) n(s)\, ds\,.$$ Moreover we can estimate $$\begin{aligned} \|\nabla n (t) \|_{L^2_x}&\leq& C\|\nabla n (0)\|_{L^2_x} + C\int_0^t \!\! \| ({\cal D}^\prime+{\cal G}^\prime+{\cal E}^\prime) n(s)\|_{L^2_x}\, ds\nonumber\\ \label{eq:stimarefined} &\leq& C\|\nabla n(0)\|_{L^2_x} + C \int_0^t \!\!({\epsilon}\|n(s)\|_{H^2_x}+\|n(s)\|_{H^1_x})\,ds\\ &\leq& C \|n(0)\|_{H^2_x}+ C \int_0^t \!\!\|\nabla n(s)\|_{L^2_x}\,ds\,.\nonumber\end{aligned}$$ by using with $j=2$, provided $n(0)\in H^2_x$. Therefore, by the Gronwall lemma, we derive .
In order to prove , we apply the first inequality in with $j=2$ to the function $n(t)=S(t)n(0)$ and obtain $$\|n(t)\|_{H^2_x}\leq\frac{C}{{\epsilon}}\|({\cal D}+{\cal G}+{\cal E})S(t)n(0)\|_{L^2_x}\,.$$ Then we use for the term $({\cal D}+{\cal G}+{\cal E})^{1/2}S(t)\left[({\cal D}+{\cal G}+{\cal E})^{1/2}n(0)\right]$ with $j=1$, and we get $$\begin{aligned} \|n(t)\|_{H^2_x}&\leq& \frac{C}{{\epsilon}}\|({\cal D}+{\cal G}+{\cal E})^{1/2}S(t)({\cal D}+{\cal G}+{\cal E})^{1/2}n(0)\|_{L^2_x}\nonumber\\ \label{eq:stiman1c} &\leq& \frac{C}{{\epsilon}} t^{-1/2}\|({\cal D}+{\cal G}+{\cal E})^{1/2}n(0)\|_{L^2_x}\nonumber\\ &\leq& \frac{C}{{\epsilon}} t^{-1/2}\|n(0)\|_{H^1_x}\,,\end{aligned}$$ where for the last inequality the estimate with $m=1$ is used. Hence it holds for all $n(0)\in {\cal D}(({\cal D}+{\cal G}+{\cal E})^{1/2})\equiv {H^1_x}$. By using in , we get in the case $j=1$. ------------------------------------------------------------------------ \ The other (non-null) term of the bulk part is ${\bar \psi}_1$, which is of first order in ${\epsilon}$. Since it satisfies $${\bar \psi}_1 = -({\cal Q}({\cal A}+{\cal C}) {{\cal Q}})^{-1} {{\cal Q}S{\cal P}}(n M)$$ (cf. Eq. 
), by using the definitions and , it can be written explicitly as $$\begin{gathered} \label{eq:psi1explicit} {\bar \psi}_1 (x,v,t)= \nabla n (x,t) \cdotp\, { \cal F}^{-1}\left\{\frac{1}{i\delta V-\nu}{\cal F}\left( v M + \frac{M \nabla V}{\nu m}\right)\right\} (x,v)+\\ n (x,t)\, { \cal F}^{-1}\left\{\frac{1}{i\delta V-\nu}{\cal F}\left( v \cdotp \nabla_xM+ \frac{M \Delta V}{\nu m}\right)\right\} (x,v)\,, \,\forall \,(x,v,t)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}\times {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^+\,.\end{gathered}$$ Thus, the estimates for the solution $n$ in the previous corollary are the crucial ingredient to establish that the definition is well posed and to control the behaviour with respect to time of the function ${\bar \psi}_1$ and of its derivatives.\ Another fundamental aspect is the shape of the initial datum $n(0)$ for Eq. : by , it is given by $n(0)=n_0+{\epsilon}n_1$, and the following estimate holds $$\label{eq:stimacx0} \|n(0)\|_{H^j_x}\leq \|n_0\|_{H_x^j}+{\epsilon}\|n_1\|_{H_x^{j}}\leq \|w_0\|_{H_k^j}+{\epsilon}\, C(\|V\|_{H^{k+j+2}_x})\|w_0\|_{H_{k+1}^{j+1}},$$ for all $d$-admissible $k$, by using the estimate . \[prop:estimates\_bulk\] Let $n$ be a solution of the drift-diffusion equation with initial value $n(0)$ given by , with $w_0\in H^4_{k+1}$, and with $V$ satisfying Assumption \[ass:assumptiononV\] and $\nabla\Delta V\in W_x^{5, \infty}$.
Then ${\bar \psi}_1$ is strongly differentiable with respect to $t>0$, and for every $t>0$ it satisfies $${\bar \psi}_1(t)\in {\cal D}({\cal Q}({\cal A}+{\cal C}){\cal Q})\cap {\cal D}({\cal Q}{\cal S}{\cal Q})\,.$$ Moreover there exists a constant $M>0$ such that, for ${\epsilon}, t \to 0^+$, $$\begin{aligned} \label{stimapsi1diff0} \left\| \partial_t {\bar \psi}_1 (t) \right\|_{X_{k}}&\leq& M \| w_0 \|_{H^4_{k+1}} \,,\\ \label{stimapsi1diff} \left\| \partial_t {\bar \psi}_1 (t) \right\|_{H^1_{k}}&\leq& M (1+1/t)\| w_0 \|_{H^4_{k+1}} \,,\\ \label{stimapsi1Q} \left\| SQ {\bar \psi}_1 (t) \right\|_{H^1_k} &\leq& M \| w_0 \|_{H^4_{k+1}}\,. \end{aligned}$$ [**Proof**]{}. If we differentiate with respect to $t$ the expression , the only $t$-dependent functions are $\nabla n$ and $n$, explicitly $$\label{eq:psi1explicitdiff} \partial_t {\bar \psi}_1 (x,v,t)= \partial_t (\nabla n) (x,t) \cdotp\, A(x,v)+ \partial_t n (x,t)\,B (x,v)\,, \quad \forall \,(x,v,t)\in {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^{2d}\times {{\ensuremath{\mathrm{I}\!\mathrm{R}}}}^+$$ where the functions $A_i, B$, defined in , are sufficiently regular, because of the assumptions on $V$. The differentiability of $n$ with respect to $t$ depends on the analyticity of the semigroup $(S(t))_{t\geq 0}$.
The differentiability of $\nabla n$, instead, follows from the expression : since each term is continuously differentiable in time, so is $\nabla n$.\ Moreover, by using $\partial_t \nabla n=\nabla\partial_t n$ and the evolution equation for $n$, $$\begin{aligned} \label{perstimapsi1diff0} \|{\partial_t}(\nabla n)\|_{L^2_x} &=&\|\nabla({\cal D}+{\cal G}+{\cal E})S( t)n(0)\|_{L^2_x}\nonumber\\ &\leq&\|({\cal D}+{\cal G}+{\cal E})S( t)n_0\|_{H^1_x}+{\epsilon}\|({\cal D}+{\cal G}+{\cal E})S( t)n_1\|_{H^1_x}\nonumber\\ &\leq&\|S( t)n_0\|_{H^3_x}+{\epsilon}\|S( t)n_1\|_{H^3_x} \:\;\leq\:\; C (\|w_0\|_{H^3_{k}}+{\epsilon}\|w_0\|_{H^4_{k+1}})\nonumber\\ &\leq& C \|w_0\|_{H^4_{k+1}}\end{aligned}$$ where we split $n(0)=n_0+{\epsilon}n_1$ and we use , together with . Similarly, $$\begin{aligned} \label{perstimapsi1diff} \|{\partial_t}(\nabla n)\|_{H^1_x} &\leq& C \left (\|\nabla({\cal D}+{\cal G}+{\cal E})S( t)n(0)\|_{L^2_x}+ \|\nabla\cdotp \nabla({\cal D}+{\cal G}+{\cal E})S( t)n(0)\|_{L^2_x}\right )\nonumber\\ &\leq& C \left(\|w_0\|_{H^4_{k+1}}+ \|({\cal D}+{\cal G}+{\cal E})n_0\|_{H^2_x}+ \frac{{\epsilon}}{{\epsilon}t}\|({\cal D}+{\cal G}+{\cal E})n_1\|_{L^2_x}\right)\nonumber\\ &\leq& C \left(\|w_0\|_{H^4_{k+1}}+ \|w_0\|_{H^4_{k}}+\frac{1}{t}\|w_0\|_{H^3_{k+1}}\right)\,,\end{aligned}$$ where the first summand in the inequality comes from . The second and the third terms come from exchanging $S(t)$ with $({\cal D}+{\cal G}+{\cal E})$ and using $n(0)=n_0+{\epsilon}n_1$; then we apply estimate to get the second term, and estimate to obtain the third term. Finally, inequality follows from estimates and . In order to prove , let us consider again the abstract definition of ${\bar \psi}_1$ (see ): $${\bar \psi}_1(t)=-({\cal Q}({\cal A}+{\cal C}){\cal Q})^{-1}({\cal Q}S{\cal P})(n(t)M)\,.$$ Since ${{\cal Q}S{\cal P}}{\bar \varphi}$ reads (see ) $${{\cal Q}S{\cal P}}nM = n\left[-v\cdotp \nabla_x M+M\int\!\!
v \cdotp \nabla_xM dv \right] +\nabla_x n\cdotp \left[ M\left(-v+ \int\!\! v M dv \right) \right],$$ under the present hypotheses, ${{\cal Q}S{\cal P}}(nM)$ belongs to $H^2_{k}$, thus ${\bar \psi}_1(t)\in {\cal D}({\cal Q}({\cal A}+{\cal C}){\cal Q})\cap {\cal D}({\cal Q}S{\cal Q})$. By , $$\begin{gathered} S{\cal Q}{\bar \psi}_1(x,v,t)= \nabla \cdotp \nabla n (x,t)\, v\,\cdotp A(x,v) \\ +\nabla n (x,t) \cdotp ( v \,\cdotp \nabla \cdotp A+ v B)(x,v) + n(x,t)\, v\,\cdotp \nabla B(x,v)\,.\nonumber\end{gathered}$$ Thus, in order to estimate $\|S{\cal Q}{\bar \psi}_1(t)\|_{X_k}$ and $\|S{\cal Q}{\bar \psi}_1(t)\|_{H^1_k}$, it is necessary to evaluate $\|\nabla \cdotp \nabla n (t)\|_{L^2_x}$ and $\|\nabla \cdotp \nabla n (t)\|_{H^1_x}$, $\|\nabla n(t)\|_{L^2_x}$ and $\|\nabla n(t)\|_{H^1_x},$ respectively. In particular, $$\|\nabla\cdotp\nabla n(t)\|_{H^1_x}\leq C\| S(t)n(0)\|_{H^3_x}\leq C\|n(0)\|_{H^3_x}\leq C \|w_0\|_{H^3_k}+{\epsilon}C\|w_0\|_{H^4_{k+1}}\,,$$ again by . Thus, we can conclude by using the regularity properties of $A_i, B$. ------------------------------------------------------------------------ \ [*Observe that it is possible to remove the singularity for $t\to 0^+$ in the estimate , by assuming $w_0\in H^5_{k+1}$ and modifying the last two lines of as follows $$\begin{aligned} \label{perstimapsi1diff1} \|{\partial_t}(\nabla n)\|_{H^1_x} &\leq& C \left(\|w_0\|_{H^4_{k+1}}+ \|({\cal D}+{\cal G}+{\cal E})n_0\|_{H^2_x}+ {{\epsilon}}\|({\cal D}+{\cal G}+{\cal E})n_1\|_{H^2_x}\right)\nonumber\\ &\leq& C \left(\|w_0\|_{H^4_{k+1}}+ \|w_0\|_{H^4_{k}}+ {\epsilon}\|w_0\|_{H^5_{k+1}}\right)\,.\end{aligned}$$* ]{}\ Estimate of the error ===================== In this section we prove rigorously that the high-field QDD equation, obtained from the asymptotic expansion up to the first order in ${\epsilon}$, is an approximation of order ${\epsilon}^2$ of the high-field Wigner-BGK system .
To this end, we consider the errors obtained by replacing the functions ${\cal P}w={\varphi}$ and ${\cal Q}w=\psi$ by the terms of their expansion up to first order in ${\epsilon}$. We shall prove the following \[maintheorem\] If the initial value $w_0$ belongs to ${H^4_{k+2}}$ and $V$ satisfies Assumption \[ass:assumptiononV\] and $\nabla \Delta V \in W_x^{5,\infty}$, then for any $T$, $0<T<\infty$, there is a constant $C$ independent of ${\epsilon}$ such that $$\label{eq:finalestimate} \left\|{\varphi}(t) + \psi(t)- [{\bar \varphi}(t) + {\epsilon}{\bar \psi}_1(t) + {\tilde \psi}_0(t/{\epsilon}) +{\epsilon}{\tilde \varphi}_1(t/{\epsilon})+ {\epsilon}{\tilde \psi}_1(t/{\epsilon})] \right\|_{X_k} \leq C {\epsilon}^2 \,,$$ uniformly for $ 0 \le t \le T$. This result relies on the estimates established in Propositions \[prop:estimates\_initial\] and \[prop:estimates\_bulk\], which describe the behaviour with respect to time of the initial layer functions ${\tilde \varphi}_1$, ${\tilde \psi}_1$ and the bulk functions. Let us split the error into two contributions $$\label{eq:def_errors} y(t) ={\varphi}(t) - [{\bar \varphi}(t) + {\epsilon}{\tilde \varphi}_1({\tau})]\,,\qquad z(t)= \psi(t) - [{\tilde \psi}_0(\tau)+{\epsilon}{\bar \psi}_1(t) + {\epsilon}{\tilde \psi}_1({\tau}) ] $$ where $ {\tau}= \frac{t}{{\epsilon}}$. The evolution equations for the errors $y$ and $z$ can be deduced from those satisfied by their components (cf. systems ,,). Hence, we have $$\left\{\begin{array}{lcl} \displaystyle \frac{{\partial}y}{{\partial}t} & = & {{\cal P}S{\cal P}}y + {{\cal P}S{\cal Q}}z + f \\[4mm] \displaystyle \frac{{\partial}z}{{\partial}t} & = & {{\cal Q}S{\cal P}}y + {{\cal Q}S{\cal Q}}z + \displaystyle \frac{1}{{\epsilon}} {\cal Q(A+C)Q} z + g \end{array} \right.
\label{errorsystem}$$ with initial conditions $$y(0) = 0\,, \qquad z(0) = 0 \,,$$ and inhomogeneous terms $f$ and $g$ defined by $$\begin{aligned} f(t) &=& {\epsilon}\,\left[{{\cal P}S{\cal P}}{\tilde \varphi}_1({\tau}) + {{\cal P}S{\cal Q}}{\tilde \psi}_1({\tau})\right] \\ g(t) &=& {\epsilon}\left[- \frac{{\partial}{\bar \psi}_1}{{\partial}t} + {{\cal Q}S{\cal Q}}{\bar \psi}_1(t) + {{\cal Q}S{\cal P}}{\tilde \varphi}_1({\tau}) + {{\cal Q}S{\cal Q}}{\tilde \psi}_1({\tau}) \right]\,.\end{aligned}$$ It is convenient to separate the evolution of the error relative to the initial layer part from the one corresponding to the bulk part. Let us define $$r = y + z = r_i + r_b$$ with $$\label{rirb} r_i = - {\epsilon}{\tilde \varphi}_1 - {\tilde \psi}_0 - {\epsilon}{\tilde \psi}_1 \,,\qquad r_b = {\varphi}+ \psi - {\bar \varphi}- {\epsilon}{\bar \psi}_1 \,.$$ The derivation of estimate is split according to in the next two Lemmata. Under the assumptions $V\in H^{k+4}_x$ and $w_0\in H^2_{k+2}$, for any $T$, $0<T<\infty$, there is a constant $C$ independent of ${\epsilon}$ such that $$\label{eq:finalestimatei} \| r_i(t) \|_{X_k} \leq C {\epsilon}^2 \,,$$ uniformly for $ 0 \le t \le T$. [**Proof**]{}. The initial layer error $r_i= r_i(t)$ satisfies the equation $$\label{eqri} \frac{{\partial}r_i}{{\partial}t} (t)= S r_i(t) + \frac{1}{{\epsilon}} {\cal (A+C)} r_i(t) + {\epsilon}S({\tilde \varphi}_1 + {\tilde \psi}_1) \left(\frac{t}{{\epsilon}}\right) \,, \quad r_i(0)=0\,.$$ The operator $S + {\cal (A+C)}/{\epsilon}$ generates a uniformly bounded semigroup $Z(t)$ in $X_k$, cf. [@FroPavVan]. Thus, the mild solution of is given by $$r_i(t) = Z(t) r_i(0) + {\epsilon}\int_0^t Z(t-s) S({\tilde \varphi}_1 + {\tilde \psi}_1) (s/{\epsilon}) ds \,,$$ with $$\begin{aligned} \|r_i(t)\|_{X_k} &\leq& C {\epsilon}\int_0^t \|S({\tilde \varphi}_1 + {\tilde \psi}_1)\left({s}/{{\epsilon}}\right)\|_{X_k} \, ds \,.
\label{mildri}\end{aligned}$$ The estimate of $\|S({\tilde \varphi}_1 + {\tilde \psi}_1)\left({s}/{{\epsilon}}\right)\|_{X_k}$ is a bit tedious, thus we simply sketch it. It is convenient to use the projections ${\cal P}, {\cal Q}$ and evaluate ${{\cal P}S{\cal P}}{\tilde \varphi}_1$, ${{\cal Q}S{\cal P}}{\tilde \varphi}_1$, ${{\cal P}S{\cal Q}}{\tilde \psi}_1$, and ${{\cal Q}S{\cal Q}}{\tilde \psi}_1$ separately. By their definitions (cf. ,) ${{\cal P}S{\cal P}},{{\cal Q}S{\cal P}}\in {\cal L}(H^1_{k},X_k)$, provided $V\in H^{k+2}_x$; thus the following modification of the estimate for ${\tilde \varphi}_1$ holds: $$\begin{aligned} \label{eq:tilde_zf_1proofbis} \|{{\cal P}S{\cal P}}\tilde{\varphi}_1(\tau)\|_{X_k} &\leq & |\|{{\cal P}S{\cal P}}\||\,|\|{{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}\|| \; \|G(\tau)\psi_0\|_{H^2_{k+1}} \nonumber\\ &\leq&{\rm M}\,e^{-\nu_{k+3}\tau}\|\psi_0\|_{H^2_{k+1}}\,,\end{aligned}$$ since $ {{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}\in{\cal L}(H_{k+1}^2,H^1_k)$, provided $V\in H^{k+4}_x$. Analogously, $$\begin{aligned} \label{eq:tilde_zf_1prooftris} \|{{\cal Q}S{\cal P}}\tilde{\varphi}_1(\tau)\|_{X_k} &\leq & |\|{{\cal Q}S{\cal P}}\||\,|\|{{\cal P}S{\cal Q}}[{\cal Q(A+C)Q}]^{-1}\|| \; \|G(\tau)\psi_0\|_{H^2_{k+1}} \nonumber\\ &\leq&{\rm M}\,e^{-\nu_{k+3}\tau}\|\psi_0\|_{H^2_{k+1}}\,.\end{aligned}$$ Let us recall the expression for $\tilde\psi_1$ (cf. ) $$\tilde\psi_1(\tau) = G(\tau)\tilde \psi_1(0) + \int_{0}^{\tau}G(\tau - \sigma){{\cal Q}S{\cal Q}}G(\sigma)\psi_0d\sigma\,.$$ We evaluate $\| {{\cal P}S{\cal Q}}{\tilde \psi}_1 (\tau) \|_{X_k}$ and $ \| {{\cal Q}S{\cal Q}}{\tilde \psi}_1 (\tau) \|_{X_k}$. Both ${{\cal P}S{\cal Q}}$ and ${{\cal Q}S{\cal Q}}$ belong to ${\cal L}(H^1_{k+1},X_k)$.
Moreover $[{\cal Q(A+C)Q}]^{-1}{{\cal Q}S{\cal P}}\in {\cal L}(H^1_k, X_k)$ by definition, provided $V\in H^k_x$; thus $$\begin{aligned} \|{{\cal Q}S{\cal Q}}G(\tau)\tilde\psi_1(0)\|_{X_k}&\leq& |\|{{\cal Q}S{\cal Q}}G(\tau)\||\|({\cal Q(A+C)Q})^{-1}{{\cal Q}S{\cal P}}{\varphi}_0\|_{H^1_{k+1}}\\ &\leq& K{\mathrm e}^{-\nu_k \tau}\|{\varphi}_0\|_{H^2_{k+1}}\,,\end{aligned}$$ provided $V\in H^{k+2}_x\,.$ Concerning the second term, we obtain $$\begin{aligned} \left\|{{\cal Q}S{\cal Q}}\int_{0}^{\tau}G(\tau - \sigma){{\cal Q}S{\cal Q}}G(\sigma)\psi_0d\sigma\right\|_{X_k} \!\!\!\!\!\!\!\!&\leq& |\|{{\cal Q}S{\cal Q}}\||\left\|\int_{0}^{\tau}G(\tau - \sigma){{\cal Q}S{\cal Q}}G(\sigma)\psi_0d\sigma\right\|_{H^1_{k+1}}\\ \!\!\!\!\!\!\!\!&\leq&K{\mathrm e}^{-\nu_{k+2} \tau}\int_{0}^{\tau}{\mathrm e}^{(\nu_{k+2}-\nu_{k+4}) \sigma}\,d\sigma\,\|{\varphi}_0\|_{H^2_{k+2}}\\ \!\!\!\!\!\!\!\!&\leq& K{\mathrm e}^{-\nu_{k+4} \tau}\|{\varphi}_0\|_{H^2_{k+2}}\,,\end{aligned}$$ provided $V\in H_x^{k+4}$. In conclusion, $$\|{{\cal Q}S{\cal Q}}{\tilde \psi}_1 (\tau)\|_{X_k}\leq K{\mathrm e}^{-\nu_{k+4} \tau}\|{\varphi}_0\|_{H^2_{k+2}}\,,$$ and analogously, $$\|{{\cal P}S{\cal Q}}{\tilde \psi}_1 (\tau)\|_{X_k}\leq L\, {\mathrm e}^{-\nu_{k+4} \tau}\|{\varphi}_0\|_{H^2_{k+2}}\,,$$ for some constant $L>0$.
Finally, it is possible to find constants $\bar\nu>0$ and ${\overline M}(\|V\|_{H_x^{k+4}})>0$ such that $$\| S({\tilde \varphi}_1 + {\tilde \psi}_1) (\tau) \|_{X_k} \le {\overline M}(\|V\|_{H_x^{k+4}}) \,{\rm e}^{-\bar \nu \tau} \| w_0 \|_{H^2_{k+2}}.$$ Coming back to the estimate of $r_i$, for any time $t$ we have $$\|r_i(t)\|_{X_k} \leq C {\overline M}{\epsilon}\int_0^t {\rm e}^{-\bar\nu s/{\epsilon}} \| w_0 \|_{H^2_{k+2}} ds \leq C \| w_0 \|_{H^2_{k+2}} {\epsilon}^2.$$ ------------------------------------------------------------------------ Under the same assumptions as Proposition \[prop:estimates\_bulk\], for any $T$, $0<T<\infty$, there is a constant $C$ independent of ${\epsilon}$ such that $$\label{eq:finalestimateb} \| r_b(t) \|_{X_k} \leq C {\epsilon}^2 \,,$$ uniformly for $ 0 \le t \le T$. [**Proof**]{}. The error of the bulk part of the asymptotic expansion satisfies with $f=0$ and, instead of $g$, $$\begin{aligned} g_b(t) &=& {\epsilon}\left[- \frac{{\partial}{\bar \psi}_1}{{\partial}t} + {{\cal Q}S{\cal Q}}{\bar \psi}_1(t) \right] \, . \end{aligned}$$ Since the inhomogeneous term $g_b(t)$ has a non-uniform behaviour with respect to ${\epsilon}$ for small times, we split it into the sum of two functions, say $g_{b0}$ and $g_{b1}$, as follows $$g_{b0}(t) = \eta_{\epsilon}g_b(t) \,, \qquad g_{b1} = g_b(t) - g_{b0}(t)\,,$$ where $ \eta_{\epsilon}$ is a non-increasing $C^\infty$-function such that $$\eta_{\epsilon}(t) = \left\{\!\!\!\!\!\!\!\! \begin{array}{lclcc} & \, 1 &\quad& {\rm for} &t<{\epsilon}/2 \,,\\[-2mm] \\ & \, 0 &\quad& {\rm for} &t > 3{\epsilon}/2 \,. \end{array} \right.$$ We write the error $r_b$ as the sum of two parts $r_b = r_{b0} + r_{b1}$, with $r_{b0}$ solving the equation $$\label{eqribis} \frac{{\partial}r_{b0}}{{\partial}t} = S r_{b0} + \frac{1}{{\epsilon}} {\cal (A+C)} r_{b0} + {\epsilon}g_{b0} \,,\;\;\; r_{b0}(0)=0\,,$$ and an analogous one with the inhomogeneous term $g_{b1}$.
Concerning the error $r_{b0}$, the following estimate holds by using Prop. \[prop:estimates\_bulk\] $$\begin{aligned} \|r_{b0}(t)\|_{X_k} &\le& K {\epsilon}\int_0^{3{\epsilon}/2} \|g_{b0}(s)\|_{X_k} ds \:\;\le\:\; K {\epsilon}\int_0^{3{\epsilon}/2} \left( \left\| \frac{\partial {\bar \psi}_1}{{\partial}s} (s) \right\|_{X_k} + \left\| SQ {\bar \psi}_1 (s) \right\|_{X_k} \right) ds \\ &\le& K {\epsilon}\int_0^{3{\epsilon}/2} \| w_0 \|_{H^4_{k+1}} ds \le K \| w_0 \|_{H^4_{k+1}} {\epsilon}^2\,.\end{aligned}$$ Finally, we consider the evolution equation for $r_{b1}$: we decompose again such an error as $$r_{b1} = \hat r_{b1} + h(t)\,,$$ by introducing the auxiliary function $h$, which solves the problem $$\frac{{\partial}h}{{\partial}t} = \frac{1}{{\epsilon}} {{{\cal Q}({\cal A+C}){\cal Q}}} h + {\epsilon}g_{b1} \,,\;\;\; h(0)=0\,.$$ Consequently, the function $\hat r_{b1}$ satisfies the initial value problem $$\frac{{\partial}\hat r_{b1}}{{\partial}t} = S \hat r_{b1} + \frac{1}{{\epsilon}} {\cal (A+C)} \hat r_{b1} + S {\cal Q} h \,,\;\;\; \hat r_{b1}(0)=0\,,$$ thus it can be easily estimated in terms of the auxiliary function $h$ as $$\|\hat r_{b1}(t)\|_{X_k} \le \int_{{\epsilon}/2}^t\| S {\cal Q} h(s) \|_{X_k} \,ds\,.$$ Again by the properties of the operator ${\cal (A+C)}$, the solution reads as follows $$h(t) = \left\{\!\!\!\!\!\!\!\! \begin{array}{lclcc} & \, 0 &\quad& {\rm for} &t<{\epsilon}/2 \,,\\[-2mm] \\ & \, {\epsilon}\int_{{\epsilon}/2}^t G_{{\epsilon}}(t-s) g_{b1}(s) ds &\quad& {\rm for} &t > {\epsilon}/2 \,, \end{array} \right. $$ with $G_{{\epsilon}}({\tau})$ the bounded semigroup generated by $(1/{\epsilon})\cal Q(A+C)Q$.
$$\begin{aligned} \label{rbcaporalebis} \|\hat r_{b1}(t)\|_{X_k} &\le& \int_{{\epsilon}/2}^t\| S {\cal Q} h(s) \|_{X_k}\, ds \\ &\le& {\epsilon}K \int_{{\epsilon}/2}^t \int_{{\epsilon}/2}^s {\rm e}^{-\nu_{k+1} \frac{s-s'}{{\epsilon}}}\|g_{b1}(s') \|_{H^1_k} ds' ds \\ &\le& {\epsilon}K \int_{{\epsilon}/2}^t \int_{{\epsilon}/2}^s {\rm e}^{-\nu_{k+1} \frac{s-s'}{{\epsilon}}} \left(1+\frac{1}{s'} \right) \| w_0 \|_{H^4_{k+1}}ds' ds \\ &\le& K \| w_0 \|_{{H^4_{k+1}}}{{\epsilon}^2}\,,\end{aligned}$$ by applying again Prop. \[prop:estimates\_bulk\], for any $t \in [0,T]$, where the constants $K$ depend on $T$. In conclusion, $$\|r_{b}(t)\|_{X_k} \le K \| w_0 \|_{{H^4_{k+1}}}{{\epsilon}^2}\,.$$ The authors are grateful to Luigi Barletti and Jacek Banasiak for many helpful discussions on the formulation of the problem. This work was performed under the auspices of the [*National Group for Mathematical Physics*]{} of the [*Istituto Nazionale di Alta Matematica*]{} and was partly supported by the [*Italian Ministry of University (MIUR*]{} National Project [“Mathematical Problems of Kinetic Theories", Cofin2004).]{} [99]{} A.M. Anile, G. Mascali and V. Romano, [Recent developments in hydrodynamical modeling of semiconductors]{}, in: [Mathematical Problems in Semiconductor Physics]{}, [A.M. Anile]{}, ed., [Lecture Notes in Math.]{} [**1823**]{} Springer, Berlin, 2003, pp. 1-56. A. Arnold, [Self-consistent relaxation-time models in quantum mechanics]{}, [Comm. Partial Differential Equations]{} [**21(3-4)**]{} (1996), 473-506. A. Arnold, J.A. Carrillo, I. Gamba and C.W. Shu, [Low and high field scaling limits for the Vlasov and the Wigner-Poisson-Fokker-Planck systems]{}, [Transp. Theory Stat. Phys.]{} [**30(2-3)**]{} (2001), 43-100. A. Arnold, E. Dhamo, and C. Manzini, The Wigner-Poisson-Fokker-Planck system: global-in-time solutions and dispersive effects, [Ann. Inst. H. Poincaré Anal. Non Linéaire]{} (2006) (to appear). A. Arnold, E. Dhamo, and C.
Manzini, [Dispersive effects in quantum kinetic equations]{}, [Indiana Univ. Math. J.]{} (2006) (to appear). A. Arnold and A. Jüngel, [Multi-scale modeling of quantum semiconductor devices]{} in: [Analysis, Modeling and Simulation of Multiscale Problems]{}, A. Mielke, ed., Springer, Berlin, 2006, pp. 331-363. A. Arnold and C. Sparber, Quantum dynamical semigroups for diffusion models with Hartree interaction, [Comm. Math. Phys.]{} [**251(1)**]{} (2004), 179-207. J. Banasiak, Singularly perturbed linear and semilinear hyperbolic systems: kinetic theory approach to some folk’s theorems, [Acta Appl. Math.]{} [**49**]{} (1997), 199-228. N. Ben Abdallah, P. Degond, P. Markowich, and C. Schmeiser, High field approximations of the spherical harmonics expansion model for semiconductors, [Z. Angew. Math. Phys.]{} [**52**]{} (2001), 201-230. L.L. Bonilla and R. Escobedo, [Wigner-Poisson and non-local drift-diffusion equation for semiconductor superlattices]{}, [Math. Models Methods Appl. Sci.]{} [**15(8)**]{} (2005), 1253-1272. A.O. Caldeira and A.J. Leggett, Path integral approach to quantum Brownian motion, [Physica A]{} [**121**]{} (1983), 587-616. F. Castella, L. Erdös, F. Fromlet, and P.A. Markowich, Fokker-Planck equations as scaling limits of reversible quantum systems, [J. Stat. Phys.]{} [**100**]{} (2000), 543-601. P. Degond and A. Jüngel, [High-field approximations of the energy-transport model for semiconductors with non-parabolic band structure]{}, [Z. Angew. Math. Phys.]{} [**52**]{} (2001), 1053-1070. P. Degond, F. Méhats, and C. Ringhofer, Quantum energy-transport and drift-diffusion models, [J. Stat. Phys.]{} [**118**]{} (2005), 625-665. P. Degond and C. Ringhofer, Quantum moment hydrodynamics and the entropy principle, [J. Stat. Phys.]{} [**112**]{} (2003), 587-628. K-J. Engel and R. Nagel, [One-Parameter Semigroups for Linear Evolution Equations]{}, Springer, New York, 1999. F. Fromlet, P.A. Markowich, and C.
Ringhofer, A Wigner function Approach to Phonon Scattering, [VLSI Design]{} [**9**]{} (1999), 339-350. G. Frosali, C. van der Mee, and S. Paveri-Fontana, Conditions for runaway phenomena in the kinetic theory of particle swarms, [J. Math. Phys.]{} [**30(5)**]{} (1989), 1177-1186. C. Gardner, The Quantum Hydrodynamic Model for Semiconductor Devices, [SIAM J. App. Math.]{} [**54(2)**]{} (1994), 409-427. C. Gardner and C. Ringhofer, The Chapman-Enskog Expansion and the Quantum Hydrodynamic Model for Semiconductor Devices, [VLSI Design]{} [**10**]{} (2000), 415-435. I. Gasser and P. Markowich, Quantum hydrodynamics, Wigner transforms and the classical limit, [Asymptotic Analysis]{} [**14**]{} (1997), 97-116. A. Jüngel, [Quasi-hydrodynamic Semiconductor Equations]{}, Birkhäuser, Basel, 2001. A. Jüngel and D. Matthes, A derivation of the isothermal quantum hydrodynamic equations using entropy minimization, [Z. Angew. Math. Mech.]{} [**85**]{} (2005), 806-814. A. Jüngel and R. Pinnau, [Global non-negative solutions of a nonlinear fourth-order parabolic equation for quantum systems]{}, [SIAM J. Math. Anal.]{} [**32**]{} (2000), 760-777. C.D. Levermore, Moment Closure Hierarchies for Kinetic Theories, [J. Stat. Phys.]{} [**83**]{} (1996), 1021-1065. G. Lindblad, [On the generators of Quantum Dynamical Semigroups]{}, [Comm. Math. Phys.]{} [**48**]{} (1976), 119-130. P.L. Lions and T. Paul, [Sur les mesures de Wigner]{}, [Rev. Mat. Iberoam.]{} [**9(3)**]{} (1993), 553-618. C. Manzini, The three dimensional Wigner-Poisson problem with inflow boundary conditions, [J. Math. Anal. Appl.]{} [**313(1)**]{} (2006), 184-196. C. Manzini and L. Barletti, An analysis of the Wigner-Poisson problem with time-dependent, inflow boundary conditions, [Nonlinear Anal.]{}, [**60(1)**]{} (2004), 77-100. P.A. Markowich, C. Ringhofer, and C. Schmeiser, Semiconductor equations, Springer, Wien, 1990. J.R. Mika and J.
Banasiak, [Singularly perturbed evolution equations with applications to kinetic theory]{}, World Scientific, Singapore, 1995. F. Poupaud, Runaway phenomena and fluid approximation under high fields in semiconductor kinetic theory, [Z. Angew. Math. Mech.]{} [**72**]{} (1992), 359-372. E. Wigner, On the quantum correction for thermodynamic equilibrium, [Phys. Rev.]{} [**40**]{} (1932), 749-759.
--- abstract: 'We have used far-infrared data from IRAS, ISO, SWIRE, SCUBA and MAMBO to constrain statistically the mean far-infrared luminosities of quasars. Our quasar compilation at redshifts $0<z<6.5$ and $I$-band luminosities $-20<I_{\rm AB}<-32$ is the first to distinguish evolution from quasar luminosity dependence in such a study. We carefully cross-calibrate IRAS against Spitzer and ISO, finding evidence that IRAS 100$\,\mu$m fluxes at $<1\,$Jy are overestimated by $\sim30$%. We find evidence for a correlation between star formation in quasar hosts and the quasar optical luminosities, varying as SFR $\propto L_{\rm opt}^{0.44\pm0.07}$ at any fixed redshift below $z=2$. We also find evidence for evolution of the mean star formation rate in quasar host galaxies, scaling as $(1+z)^{1.6\pm0.3}$ at $z<2$ for any fixed quasar $I$-band absolute magnitude fainter than $-28$. We find no evidence for any correlation between star formation rate and black hole mass at $0.5<z<4$. Our data are consistent with feedback from black hole accretion regulating stellar mass assembly at all redshifts.' author: - | Stephen Serjeant$^{1}$ and Evanthia Hatziminaoglou$^{2}$\ $^{1}$Department of Physics & Astronomy, Venables Building, The Open University, Milton Keynes, MK7 6AA, UK\ $^{2}$European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching bei München, Germany date: 'Accepted; Received; in original form' title: The evolution of star formation in quasar host galaxies --- \[firstpage\] submillimetre – infrared: galaxies – galaxies: high-redshift – galaxies: active – quasars: general. Introduction ============ The interaction between the accretion process into a supermassive black hole residing at the centre of an active nucleus and star formation in the host galaxy is fundamental in regulating both galaxy evolution and the growth of the black hole.
In order to understand the link between the two processes and assess the possibility of the two occurring concomitantly, it is important to quantify and constrain the star formation activity in quasar host galaxies. This is, however, a difficult task, as the star formation signature can be diluted by the strong AGN emission, especially at short (e.g. UV/optical) wavelengths. Emission emerging from star formation activity should, therefore, be looked for in the far-infrared (FIR), where the contribution of the AGN (in the form of thermal emission from dust) should be less important. Combined AGN studies with IRAS and ISO already established the presence of strong FIR emission in quasars (see e.g. Verma et al. 2005 and references therein). According to some models, this radiation might be explained as the emission of dust distributed in a “cloud” around the central engine with a $0.5\,$kpc radius (Siebenmorgen et al. 2004). However, other models estimate dusty tori extending to several kpc (e.g. Fritz et al. 2006, Hatziminaoglou et al. 2008) with additional very large covering factors ($\ge 90\%$). Spitzer IRS spectroscopy revealed, in addition to the FIR emission, the presence of PAH features in the mid-IR spectra of optically-selected quasars (Schweitzer et al. 2006) that are difficult to reproduce in models assuming dust heated by the hard AGN photons. A number of other arguments, including the likely evaporation of PAH features in the presence of AGN emission and in the absence of high column densities (a necessary condition for the AGN to be able to heat the dust at such large distances), and the simultaneous presence of star formation evidence at other wavelengths, suggest that the FIR in quasar host galaxies is more likely to be a tracer of star formation. Even though Spitzer observations have increased the number of FIR detections of low-redshift quasars (e.g. Schweitzer et al.
2006) and their analysis consistently points toward star formation driven FIR emission, the number of FIR-detected quasars is still low. In preparation for the Herschel ATLAS (Astrophysical Terahertz Large Area Survey), an Open Time $\sim500\,$deg$^2$ blank-field survey, and in preparation for targeted Herschel surveys of AGN, we need the best possible estimates for the quasar fluxes in Herschel bands. Most Sloan Digital Sky Survey (SDSS) quasars are not detected individually in the Spitzer SWIRE Legacy Survey $70\,\mu$m and $160\,\mu$m data (Lonsdale et al. 2003, 2004) in the SWIRE-SDSS overlap region (e.g. Hatziminaoglou et al. 2005, 2008). Submm and mm-wave observations of $z\simeq2$ and $z>4$ quasars have yielded only a small number of direct detections (e.g. Omont et al. 1996, Omont et al. 2001, Carilli et al. 2001, Isaak et al. 2002, Priddey et al. 2003a, 2003b, Omont et al. 2003, Robson et al. 2004, Beelen et al. 2006, Petric et al. 2006, Wang et al. 2007). IRAS and ISO detected just over half of the Palomar-Green quasar sample at $60\,\mu$m (e.g. Sanders et al. 1989, Haas et al. 2000, 2003). In this paper, we will use stacking analyses to constrain the mean far-infrared luminosities of quasars, selected over a very wide range in redshift and absolute $I$-band magnitude. Our quasar compilation spans enough of the $I$-magnitude–redshift plane to be able to distinguish evolution from quasar luminosity dependence, which would be impossible in a single $I$-magnitude-limited quasar sample. Previous authors have stacked quasar fluxes at one (usually submm) wavelength, but we will combine a large body of multi-wavelength far-infrared, submm and mm-wave quasar photometry using an assumed common SED. We will then show that our conclusions are robust to reasonable choices of SED.
We follow SDSS in adopting the concordance cosmology of $\Omega_{\rm M}=0.3$, $\Omega_\Lambda=0.7$, $H_0=70\,$km/s/Mpc, and in assuming an optical spectral index of $d\ln S_\nu/d\ln\nu=-0.5$ for the quasars. In the far-infrared we assume an M82 SED shape from Efstathiou, Rowan-Robinson & Siebenmorgen 2000, unless otherwise stated. Methodology =========== Sample selection ---------------- Figure \[fig:iz\] shows the absolute I magnitudes of the SDSS quasars with SWIRE $70\,\mu$m and/or $160\,\mu$m coverage, against redshift. The SWIRE data was retrieved on 17th August 2007 and comprises version 2 products in the ELAIS N1 and ELAIS N2 fields, and version 3 products in the Lockman Hole field. There are 281 DR5 SDSS quasars (Adelman-McCarthy et al. 2007) in the SWIRE fields with 70$\,\mu$m and/or $160\,\mu$m coverage. Of these, 264 have 70$\,\mu$m coverage and 261 have 160$\,\mu$m coverage. ELAIS N1 is only partly covered by SDSS. There is no SDSS data for the Southern SWIRE fields XMM-LSS, ELAIS S1 or CDF-S, though this data was retrieved for calibration (see below); the version numbers are 2, 3 and 3 respectively. Figure \[fig:iz\] also shows for comparison the Palomar-Green sample with far-infrared photometry from IRAS and ISO (Sanders et al. 1989, Haas et al. 2000, 2003), in which non-detections have been re-measured using the SCANPI IRAS fluxes discussed below. Figure \[fig:iz\] also shows a compilation of the quasars observed at 850$\,\mu$m and 1200$\,\mu$m (Omont et al. 1996, Omont et al. 2001, Carilli et al. 2001, Isaak et al. 2002, Omont et al. 2003, Priddey et al. 2003a, 2003b, Robson et al. 2004, Wang et al. 2007). The addition of the SDSS quasars to these data sets greatly improves the coverage of the optical luminosity-redshift plane. This will make it possible to place the first constraints on the evolution and luminosity dependence of star formation in quasar hosts.
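For reference, the adopted cosmology fixes the luminosity distances used to convert observed fluxes into luminosities. The following is a minimal numerical sketch under those parameters (the function name and integration scheme are our own illustration, not code from the paper's pipeline):

```python
import math

def luminosity_distance_mpc(z, omega_m=0.3, omega_l=0.7, h0=70.0, steps=10000):
    """Luminosity distance in Mpc for a flat LCDM cosmology:
    D_L = (1+z) (c/H0) * integral_0^z dz'/E(z'),
    with E(z) = sqrt(Omega_M (1+z)^3 + Omega_Lambda),
    evaluated by the trapezoidal rule."""
    c_km_s = 299792.458            # speed of light, km/s
    hubble_dist = c_km_s / h0      # Hubble distance, Mpc
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):     # trapezoidal rule over [0, z]
        zp = i * dz
        e = math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
        integral += (0.5 if i in (0, steps) else 1.0) / e
    integral *= dz
    return (1.0 + z) * hubble_dist * integral
```

For the adopted parameters this gives $D_L\simeq6.6\,$Gpc at $z=1$.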
The right-hand panel of figure \[fig:iz\] demonstrates the far-infrared luminosities probed by the various multi-wavelength data sets. SWIRE photometry ---------------- The SWIRE 70$\,\mu$m and 160$\,\mu$m images are supplied calibrated to surface brightness units of MJy/sr. We intend to measure the total mean point source flux of our targets, rather than the mean surface brightness at their locations, so we need to convert the units. We first note that a Gaussian point source with a peak flux of unity will have a total flux of $F=\pi\theta^2_{\rm FWHM}/(4\ln2)$, where $\theta_{\rm FWHM}$ is the full-width half maximum of the point spread function. If $\theta_{\rm FWHM}$ is measured in arcseconds, then this also gives the conversion between total point source flux and the flux per square arcsecond at the peak. Using $\theta_{\rm FWHM}=1.22\lambda/D$, where $\lambda$ is the observed wavelength and $D=85\,$cm is the diameter of the telescope’s primary mirror, we predicted the conversion between point source flux and surface brightness. This is shown in table \[tab:conversions\]. We also derived an empirical conversion obtained by comparing the SWIRE point source catalogue fluxes with the background-subtracted map fluxes measured at the positions of the SWIRE sources. This comparison is shown in figure \[fig:swire\_calibration\]. The empirical conversion is some 29-36% higher than the theoretical prediction, which may be due to the finite size of the map pixels, or a non-Gaussian point spread function shape (e.g. one more closely resembling an Airy pattern). We adopt the empirical conversion in the analysis below.

  ------------- ---------------------- ---------------------
                Predicted conversion   Empirical conversion
  $70\,\mu$m    11.44                  14.76
  $160\,\mu$m   59.76                  81.14
  ------------- ---------------------- ---------------------

  : \[tab:conversions\]Point source fluxes in mJy for a source in the SWIRE maps with a central surface brightness of 1MJy/sr.
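The predicted entries in table \[tab:conversions\] follow directly from the Gaussian-beam formula above. A minimal sketch (the function name is ours, for illustration only) that reproduces the tabulated values:

```python
import math

def predicted_conversion_mjy(wavelength_m, diameter_m=0.85):
    """Total point-source flux in mJy for a 1 MJy/sr peak surface brightness,
    assuming a Gaussian beam with FWHM = 1.22 * lambda / D."""
    rad_per_arcsec = math.pi / (180.0 * 3600.0)
    # Diffraction-limited FWHM, converted from radians to arcsec:
    theta_fwhm = 1.22 * wavelength_m / diameter_m / rad_per_arcsec
    # Gaussian beam area in arcsec^2: pi * theta^2 / (4 ln 2)
    beam_area = math.pi * theta_fwhm ** 2 / (4.0 * math.log(2.0))
    # 1 MJy/sr expressed as mJy per square arcsec at the peak:
    peak_mjy_per_arcsec2 = 1.0e9 * rad_per_arcsec ** 2
    return beam_area * peak_mjy_per_arcsec2
```

Evaluating this at $70\,\mu$m and $160\,\mu$m recovers 11.44 and 59.76 mJy respectively, matching the "Predicted conversion" column.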
Although the maps are supplied with foreground DC offsets removed, we opted to subtract the mean flux level from each image. This ensures that the average total flux contribution is identically zero from each point source not associated with the SDSS quasars. The stacking methodology is to measure the point source fluxes from the SWIRE images at the positions of the SDSS quasars, and search for a significant deviation from zero flux. Photometric errors were estimated by measuring the standard deviation of the SWIRE images. The SDSS quasar astrometric errors are negligible compared to the SWIRE 70-160$\,\mu$m beams. IRAS photometry --------------- SCANPI IRAS flux estimates have recently been adapted to report negative AMP flux estimates, where acceptable fits are available. This would make this estimator suitable for stacking analyses, but we found positive fluxes on average reported at positions randomly offset from our targets (as noted also by e.g. Morel et al. 2001). This was found to be due to the SCANPI algorithm allowing the source position to vary, so that the fits tended to gravitate to nearby noise features when the signal-to-noise ratio is very low. Therefore, we re-fit the coadded SCANPI profiles allowing only the point source flux to vary, using the appropriate point source response function for the predominant scan direction (A. Alexov, priv. comm.). We used the background-subtracted median coadded scans. At 60$\,\mu$m, we subtracted an additional background estimate obtained by fitting a Gaussian to the off-source data histogram for each coadded scan (where “off-source” is defined as where the template is $<1$% of its maximum value). At 100$\,\mu$m, the stronger baseline drifts visible in the coadded scans suggested a more local background subtraction. We defined the 100$\,\mu$m on-source width to be where the template is $>0.1$% of its maximum, and subtracted the mean of the coadded scan data in one template-width either side of the target.
Note that our flux calibrations discussed below were found to depend weakly on the sky subtraction algorithm, but were not independent of it. We tested the flux calibration of stacked fluxes using 70$\,\mu$m sources selected from the Spitzer legacy surveys SWIRE and Formation and Evolution of Planetary Systems (FEPS, e.g. Meyer et al. 2004, Hillenbrand et al. 2008), and using 90$\,\mu$m sources selected from the European Large Area ISO Survey (ELAIS, Rowan-Robinson et al. 2004). Note that the SWIRE and ELAIS surveys were conducted in regions of low cirrus; the FEPS survey, while having fewer sources, is more widely-distributed. We selected all SWIRE 70$\,\mu$m sources in the flux range 12-300mJy, then starting from the brightest, we rejected any source closer than 30’ to any selected source. This ensured that both our source fluxes and their background estimates are statistically independent. We adopted the same selection procedure for ELAIS. For FEPS, we selected all sources with 70$\,\mu$m detections at $5\sigma$ or above. For FEPS, SWIRE and ELAIS sources, we extracted IRAS fluxes in a $3\times3$ grid centred on the target, with grid positions separated by 20’. To estimate the noise in each SCANPI flux estimate, we tried two approaches: firstly, estimating the variance in the coadded scan from Gaussian fits to the data histogram, then propagating the noise in the best-fit point source amplitude assuming the data points are statistically independent; secondly, measuring the variance of the point source amplitude estimates in the eight offset sky positions. The latter should give signal-to-noise histograms with zero mean and unit variance, while for the former the histogram of $(S_{\rm IRAS}-S_{\rm Spitzer})/N_{\rm IRAS}$ (where $S$ and $N$ are signal and noise respectively) should also have zero mean and unit variance. 
The SCANPI data points are not statistically independent, and we found that propagating the noise led to error estimates too small by a factor of around 3.6 at 60$\,\mu$m, and 5.7 at 100$\,\mu$m. The offset positions had consistent noise estimates, but have the disadvantage that the estimates are not local to the target. A visual inspection of the IRAS maps around our Palomar-Green targets (A. Alexov, priv. comm.) suggested problematic cirrus structure near several targets. We therefore adopted the propagated noise estimate, scaled by a factor of 3.7 (5.7) at 60$\,\mu$m (100$\,\mu$m) to account for correlations between the SCANPI data points. Cirrus structure can in principle be alleviated with a matched filter tuned to the power spectrum of the background (e.g. Vio et al. 2002), though in this case the IRAS detector responsivity variations would present a significant complication. We found the SCANPI 1002 coadded scans (median combination of IRAS scans) to give the best signal-to-noise. Figures \[fig:60um\_calibration\] and \[fig:100um\_calibration\] show the unweighted mean IRAS fluxes of the fainter Spitzer and ISO sources in broad flux bins. This stacking methodology appears to give unbiased estimates even at the faintest 70$\,\mu$m fluxes tested, e.g. $\sim40\,$mJy at 70$\,\mu$m. This is much fainter than the mean 60$\,\mu$m fluxes of Palomar-Green quasars, which we estimate to be $201\pm35\,$mJy. For the offset positions, we estimated the signal and noise from a Gaussian fit to the histogram of the measurements, in order to avoid serendipitous sources. The stacked 60$\,\mu$m fluxes at random offset positions were consistent with zero (e.g. $3.0\pm1.5$mJy for SWIRE, see figure \[fig:60um\_calibration\]). At 100$\,\mu$m, the stacked flux at blank-field positions offset from SWIRE sources was $-6.7\pm6.3\,$mJy. However, we found evidence for a flux calibration discrepancy between ELAIS and our 100$\,\mu$m stacked fluxes (figure \[fig:100um\_calibration\]).
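The rescaling of the propagated errors can be summarised schematically: the correction factor is the width of the histogram of calibration residuals divided by their propagated noise. This is a hypothetical sketch of that check (our own names, not the actual SCANPI machinery):

```python
import statistics

def noise_rescale_factor(flux_est, flux_ref, noise_prop):
    """Factor by which propagated errors underestimate the true noise:
    the standard deviation of the (estimate - reference) / propagated-noise
    histogram, which should be ~1 if the errors were correct."""
    ratios = [(s - r) / n for s, r, n in zip(flux_est, flux_ref, noise_prop)]
    return statistics.pstdev(ratios)
```

A signal-to-noise histogram with unit width implies no rescaling; the values of $\sim3.6$ and $\sim5.7$ quoted above correspond to histograms 3.6 and 5.7 times too wide for the propagated errors.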
Héraudeau et al. (2002) found that the ELAIS 90$\,\mu$m point source fluxes were lower than IRAS FSC 100$\,\mu$m fluxes by a factor $0.76\pm0.17$, after taking into account the colour corrections. They attributed this to systematic IRAS overestimates of the faintest catalogue fluxes, citing Moshir et al. (1992); however, we find no evidence for such a calibration offset in our 60$\,\mu$m IRAS stacks. We compared the ELAIS and SWIRE flux calibrations by interpolating the 70$\,\mu$m and 160$\,\mu$m fluxes to obtain 90$\,\mu$m flux estimates, and found a 1:1 correlation with the ELAIS fluxes, so we think it unlikely that the ELAIS flux calibration is at fault. We also calculated the 100$\,\mu$m:90$\,\mu$m colour for our three SED models (figure \[fig:100um\_calibration\]), and found this too small an effect to account for the discrepancy. A similar discrepancy has been noted by Jeong et al. (2007), comparing IRAS 100$\,\mu$m and AKARI 90$\,\mu$m fluxes. We have therefore chosen to apply the Héraudeau et al. correction factor of 0.76 to our stacked 100$\,\mu$m IRAS fluxes. In the appendix, we present our SCANPI photometry of PG quasars and discuss the problematic cases.

Results
=======

Stacked flux results
--------------------

Figure \[fig:sdss\_swire\] shows the SWIRE 70-160$\,\mu$m fluxes for the SDSS quasars as a function of redshift and absolute magnitude. Error bars have been suppressed for clarity, except for a single representative example in each panel. The SDSS quasar flux distribution appears bimodal, or at least significantly skewed towards bright fluxes for a minority of objects, with a small number of far-IR-loud objects at both 70$\,\mu$m and 160$\,\mu$m. For the bulk of the population, there also appears to be a significant positive flux at the positions of the SDSS quasars at redshifts $z<3$, and at most absolute magnitudes. Of the quasars at $0.5<z<1.5$, 6/114 have 70$\,\mu$m fluxes above $20\,$mJy, and 6/113 have 160$\,\mu$m fluxes above $70\,$mJy.
In the $1.5<z<2.5$ redshift bin, 3/86 have 70$\,\mu$m fluxes above $20\,$mJy, and 3/84 have 160$\,\mu$m fluxes above $70\,$mJy. The green curves show the predicted flux for an M82 SED, normalised to $10^{11}L_\odot$, $10^{12}L_\odot$ and $10^{13}L_\odot$. To test whether the skewed distribution is caused by a distinct population of far-IR-loud quasars, or whether unrelated far-IR-bright foreground companion galaxies are responsible, we compared the fluxes at the positions of SDSS quasars with those in the map as a whole, following the methodology of Serjeant et al. (2004). This test is preferable to Monte Carlo randomizations of the quasar positions, since it compares the quasar fluxes with the entire distribution of map fluxes, rather than a randomly-selected subset. The results are shown in figure \[fig:histograms\]. The observed skewness in the quasar far-infrared fluxes is therefore not due to a pathological distribution of map fluxes. In table \[tab:swire\_stacks\], we present the average far-infrared fluxes for the SDSS quasars in redshift bins, both with and without the high-flux population.

Table \[tab:swire\_stacks\]: average far-infrared fluxes (mJy) of the SDSS quasars, with and without the far-IR-bright population.

| Wavelength | $0.5<z<1.5$ | $0.5<z<1.5$, no far-IR-bright QSOs | $1.5<z<2.5$ | $1.5<z<2.5$, no far-IR-bright QSOs | $0.5<z<2.5$ | $0.5<z<2.5$, no far-IR-bright QSOs |
|---|---|---|---|---|---|---|
| 70$\,\mu$m | 7.64$\pm$0.95 | 5.98$\pm$0.56 | 6.11$\pm$0.65 | 5.48$\pm$0.56 | 6.98$\pm$0.61 | 5.77$\pm$0.40 |
| 160$\,\mu$m | 19.3$\pm$3.0 | 14.2$\pm$2.3 | 16.1$\pm$3.6 | 12.3$\pm$3.0 | 17.9$\pm$2.3 | 13.4$\pm$1.8 |

It is clear from the curves in figure \[fig:sdss\_swire\] that although the SDSS quasars span a narrow range of far-infrared fluxes, they span a wide range of far-infrared luminosities.
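The comparison of the quasar-position fluxes with the distribution of fluxes over the whole map can be illustrated with a two-sample test. This is a minimal sketch of the idea, not the exact statistic of Serjeant et al. (2004), and the toy numbers are ours:

```python
import numpy as np
from scipy import stats

# Toy example: map pixel fluxes (pure noise) versus fluxes measured at
# quasar positions, which carry a small positive mean signal (all in mJy).
rng = np.random.default_rng(1)
map_fluxes = rng.normal(0.0, 5.0, 100_000)
qso_fluxes = rng.normal(7.0, 5.0, 200)

# If the quasar positions were blank sky, the two samples would share one
# parent distribution; a two-sample KS test quantifies the departure.
stat, p = stats.ks_2samp(qso_fluxes, map_fluxes)
# A vanishingly small p here means the quasar fluxes are inconsistent with
# being random draws from the map as a whole.
```

The advantage over randomizing the source positions is the same as noted in the text: the test uses the full empirical distribution of map fluxes rather than a random subset of it.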
The right-hand panel in figure \[fig:iz\] demonstrates that the stacked 160$\,\mu$m signal from $z\simeq2$ quasars in SDSS covers comparable far-infrared luminosities to the expected stacked signal (a few times fainter than the IRAS PSC) from $z\simeq0.5$ Palomar-Green quasars, for the assumed M82 SED. Also, it is clear that combining the Palomar-Green, SDSS and SCUBA quasars gives enough coverage of the optical luminosity-redshift plane to remove the degeneracy between trends in luminosity and in redshift (essentially Malmquist bias in its original sense, Malmquist 1924; see also Teerikorpi 1984).

Stacked luminosity results {#sec:stacked_luminosity_results}
--------------------------

It is not obvious what the best stacking statistic is when a large number of the sample have high signal-to-noise direct detections, as is the case with our quasars. Variance weighting the stacks would result in the high signal-to-noise measurements dominating, but these measurements need not be in agreement with each other, because the population has an intrinsic dispersion. For example, if one has two quasars with fluxes $100\pm1\,$mJy and $0\pm1\,$mJy, what can one say about the average in this population? Clearly, it would not be right to quote $50\pm0.7\,$mJy for this average on the evidence of those two quasars. We have opted to regard flux measurements of individual quasars as attempts to measure the mean of the population, so the dispersion in the population is a noise term on these measurements. The noise on any particular estimate of the population mean is therefore the quadrature sum of the flux error and the population dispersion. But what is an appropriate value for this population dispersion? We have opted to determine this from our data simultaneously with the population mean.
If $x_i$ are our measurements, each with a measurement noise level $\sigma_i$, then our data should have the following distribution: $$\rho(x,\sigma)=\frac{1}{\sqrt{2\pi(\sigma^2+\sigma_0^2)}} \exp\left (-\frac{(x-\mu)^2}{2(\sigma^2+\sigma_0^2)}\right ) \label{eqn:rho}$$ where $\mu$ is the population mean and $\sigma_0$ is the population dispersion. We find the maximum-likelihood solutions for $\mu$ and $\sigma_0$ from our data by maximising $\Pi_i\rho(x_i,\sigma_i)$, and estimate the parameters’ 68% confidence bounds directly from the likelihood surface. Note that there is no covariance between these parameters; this follows from the fact that the expected values of the measurements are independent of their noise levels. We used numerical simulations to verify that the maximum-likelihood solutions from this procedure are unbiased estimators of the underlying values, and to verify our confidence bounds. This estimator worked well where we had more than three objects in a bin, but encountered numerical problems in some bins containing only two or three objects. There is also not enough information to constrain both parameters when there is only a single object. One option is to neglect the population dispersion $\sigma_0$ in these problematic cases, but this would lead to over-optimistic error estimates. Another option is to ignore these bins altogether; for readers who wish to do so, we give the number of objects in each bin. However, we found that the ratio $\sigma_0/\mu$ in bins with $>3$ objects ranged from 0.51 to 1.41, with a mean of 0.84 and standard deviation of 0.24. On this basis, we chose to adopt $0.84\mu$ as our estimator for $\sigma_0$ in bins with three or fewer objects. We chose redshift and absolute magnitude bins and made a noise-weighted stack of the starburst far-infrared luminosities of the quasars in figure \[fig:iz\] in each of these bins, using the procedure above. Far-infrared luminosities were estimated assuming an M82 SED.
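A minimal sketch of this maximum-likelihood estimator for $\mu$ and $\sigma_0$ (using a general-purpose optimiser rather than the likelihood-surface scan described above; the function name and toy data are ours):

```python
import numpy as np
from scipy.optimize import minimize

def stack_ml(x, sigma):
    """Maximise the product of rho(x_i, sigma_i) over the population mean mu
    and intrinsic dispersion sigma_0: each measurement has total variance
    sigma_i**2 + sigma_0**2, as in the distribution above."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)

    def neg_log_like(params):
        mu, s0 = params
        var = sigma ** 2 + s0 ** 2
        return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

    res = minimize(neg_log_like, x0=[x.mean(), x.std()], method="Nelder-Mead")
    mu, s0 = res.x
    return mu, abs(s0)

# Simulated bin: true mean 6 mJy, intrinsic dispersion 5 mJy,
# per-object measurement noise 1-3 mJy.
rng = np.random.default_rng(0)
sigma = rng.uniform(1.0, 3.0, 500)
x = rng.normal(6.0, np.sqrt(sigma ** 2 + 5.0 ** 2))
mu, s0 = stack_ml(x, sigma)  # recovers roughly (6, 5)
```

Note that a naive variance-weighted mean of the same simulated bin would understate the uncertainty, because it ignores the intrinsic dispersion term.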
The results are listed in table \[tab:m82\_stacks\], and presented graphically in figures \[fig:stacked\_luminosity\_vs\_i\] and \[fig:stacked\_luminosity\_vs\_z\]. We will discuss the effects of varying the SED in section 3.4. We used 160$\,\mu$m fluxes as our far-infrared luminosity estimator for the SDSS quasars. For PG quasars we used the 60$\,\mu$m fluxes at redshifts up to 0.3, and 100$\,\mu$m fluxes at higher redshifts. Our results are not sensitive to the choice of $z=0.3$ for this transition.

Table \[tab:m82\_stacks\]: noise-weighted stacked far-infrared luminosities in units of $10^{12}L_\odot$ (M82 SED), with the number of quasars in each bin in parentheses.

| | $0.05<z<0.5$ | $0.5<z<1$ | $1<z<2$ | $2<z<4$ | $4<z<7$ |
|---|---|---|---|---|---|
| $I_{\rm abs}<-28$ | | $5.5\pm4.5$ (2) | $4.9\pm2.0$ (31) | $5.43\pm0.76$ (66) | $4.32\pm0.70$ (63) |
| $-26>I_{\rm abs}>-28$ | $0.52\pm0.38$ (9) | $1.9\pm1.4$ (3) | $2.17\pm0.42$ (48) | $3.71\pm0.87$ (76) | $2.41\pm0.47$ (55) |
| $-24>I_{\rm abs}>-26$ | $0.33\pm0.13$ (28) | $0.30\pm0.11$ (25) | $0.83\pm0.22$ (74) | $0.7\pm2.3$ (3) | $2.0\pm2.2$ (1) |
| $-22>I_{\rm abs}>-24$ | $0.086\pm0.018$ (55) | $0.18\pm0.08$ (20) | $0.62\pm0.66$ (2) | $1.7\pm1.9$ (1) | |
| $I_{\rm abs}>-22$ | $0.041\pm0.021$ (3) | | | | |

Table \[tab:imag\_fits\]: fits of the optical luminosity dependence of the stacked far-infrared luminosities in redshift slices.

| $z$ range | $p_1$ | $p_2$ | $\chi^2_\nu$ | Pr($\chi^2,\nu$) | $\chi^2_\nu$ | Pr($\chi^2,\nu$) |
|---|---|---|---|---|---|---|
| $0.05<z<0.5$ | 28.3 $\pm$ 1.5 | $-0.196\pm0.057$ | 0.52 | 0.59 | 2.67 | 0.046 |
| $0.5<z<1$ | 27.00 $\pm$ 0.81 | $-0.220\pm0.070$ | 0.62 | 0.53 | 1.23 | 0.295 |
| $1<z<2$ | 25.21 $\pm$ 0.53 | $-0.157\pm0.036$ | 0.38 | 0.68 | 3.96 | 0.008 |
| $2<z<4$ | 19.4 $\pm$ 5.5 | $-0.070\pm0.038$ | 0.77 | 0.38 | 2.58 | 0.076 |
| $z>4$ | 22.1 $\pm$ 2.9 | $-0.080\pm0.034$ | 0.05 | 0.95 | 1.90 | 0.127 |

Table \[tab:evol\_fits\]: power-law evolution fits to the stacked far-infrared luminosities in absolute magnitude slices.

| $I_{\rm abs}$ range | $p_3$ | $p_4$ | $\chi^2_\nu$ | Pr($\chi^2,\nu$) | $\chi^2_\nu$ | Pr($\chi^2,\nu$) |
|---|---|---|---|---|---|---|
| $-22>I_{\rm abs}>-24$ | 0.055 $\pm$ 0.015 | 1.93 $\pm$ 0.57 | 0.16 | 0.85 | 0.89 | 0.444 |
| $-24>I_{\rm abs}>-26$ | 0.180 $\pm$ 0.077 | 1.43 $\pm$ 0.54 | 0.60 | 0.61 | 1.39 | 0.234 |
| $-26>I_{\rm abs}>-28$ | 0.97 $\pm$ 0.30 | 0.57 $\pm$ 0.21 | 2.73 | 0.04 | 4.59 | 0.001 |
| $-28>I_{\rm abs}>-32$ | 7.3 $\pm$ 3.5 | $-0.26\pm0.31$ | 0.23 | 0.80 | 0.39 | 0.759 |

Table \[tab:m82\_bh\_stacks\]: as table \[tab:m82\_stacks\], but binned by black hole mass and redshift.

| | $0.05<z<0.5$ | $0.5<z<1$ | $1<z<2$ | $2<z<4$ | $4<z<7$ |
|---|---|---|---|---|---|
| $10^5<M_{\rm BH}<10^6$ | $2.4\pm3.6$ (1) | $2.5\pm2.4$ (2) | | | |
| $10^6<M_{\rm BH}<10^7$ | $0.017\pm0.012$ (2) | $9.3\pm8.6$ (1) | $0.54\pm0.49$ (2) | | |
| $10^7<M_{\rm BH}<10^8$ | $0.25\pm0.13$ (21) | $2.4\pm1.8$ (3) | $2.8\pm1.1$ (6) | | |
| $10^8<M_{\rm BH}<10^9$ | $0.121\pm0.034$ (20) | $1.7\pm2.7$ (1) | $5.4\pm1.0$ (11) | $2.7\pm1.3$ (13) | |
| $10^9<M_{\rm BH}<10^{10}$ | $0.37\pm0.35$ (3) | $10.0\pm9.0$ (1) | $5.0\pm2.7$ (14) | $2.8\pm1.1$ (15) | $4.8\pm1.4$ (11) |

Taking bins in isolation, there are no high signal-to-noise detections. Despite this, however, there are trends apparent in figures \[fig:stacked\_luminosity\_vs\_i\] and \[fig:stacked\_luminosity\_vs\_z\]. Firstly, at all redshifts there appears to be a significant optical luminosity dependence, scaling as $L_{\rm opt}^{0.441\pm0.069}$ at $z<2$ (see figure \[fig:stacked\_luminosity\_vs\_i\] and table \[tab:imag\_fits\]) and with a shallower scaling at higher redshifts. Secondly, at all absolute magnitudes fainter than $-28$ and redshifts $z<2$, there is a weaker signature of evolution (see figure \[fig:stacked\_luminosity\_vs\_z\] and table \[tab:evol\_fits\]).
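Power-law evolution fits of this kind can be reproduced with a standard weighted least-squares fit. The snippet below is a sketch with toy values loosely based on the $-26>I_{\rm abs}>-28$ stacked luminosities; the implementation choices and function name are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_evolution(z, L, L_err):
    """Weighted fit of L = p3 * (1 + z)**p4 to stacked luminosities in
    redshift bins; returns the parameters, their errors and reduced chi^2."""
    model = lambda z, p3, p4: p3 * (1.0 + z) ** p4
    popt, pcov = curve_fit(model, z, L, sigma=L_err, absolute_sigma=True,
                           p0=[1.0, 1.0])
    chi2_nu = np.sum(((L - model(z, *popt)) / L_err) ** 2) / (len(z) - 2)
    return popt, np.sqrt(np.diag(pcov)), chi2_nu

# Stacked luminosities (10^12 L_sun) in four redshift bins (toy values)
z = np.array([0.25, 0.75, 1.5, 3.0])
L = np.array([0.52, 1.9, 2.17, 3.71])
L_err = np.array([0.38, 1.4, 0.42, 0.87])
(p3, p4), (p3_err, p4_err), chi2_nu = fit_evolution(z, L, L_err)  # p4 > 0
```

A positive $p_4$ with an acceptable reduced $\chi^2$ is the signature of evolution discussed in the text.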
The poor $\chi^2$ for the power-law evolution model in the $-26>I_{\rm abs}>-28$ bin is due to the highest redshift data point, in which the positive evolution is reversed, curiously mirroring the evolution of the quasar number density. If this data point is excluded, the power-law fit parameters for this $I_{\rm abs}$ slice (table \[tab:evol\_fits\]) are $\nu_{100}L_{100}=(0.53\pm0.25)\times10^{12}L_\odot\times(1+z)^{1.46\pm0.43}$, with $\chi^2=0.92$ and Pr$(\chi^2,\nu)=0.37$. This is consistent with the evolution at brighter absolute magnitudes, which justifies an estimate of the average evolution rate of $I_{\rm abs}>-28$ quasars of $(1+z)^{1.57\pm0.29}$.

Stacked black hole mass results
-------------------------------

We next examined our data for trends with black hole mass. The black hole mass computation is based on an extrapolation of the reverberation mapping technique, which uses the velocity (full-width at half maximum) of emission lines and relates the size of the Broad Line Region (BLR) to the continuum luminosity. Assuming the dynamics of the BLR are dominated by the gravity of the black hole, the black hole mass is then expressed as $$M_{\rm BH}\simeq R_{\rm BLR}\times v^2/G \label{eqn:mbh}$$ where $R_{\rm BLR}$ is the radius of the BLR and $v$ is the velocity of the emission line gas. The velocity $v$ is estimated from the FWHM of H$_\beta$, MgII or CIV depending on the redshift (Kaspi et al. 2000). For quasars with redshifts greater than $z\simeq0.8$, H$_\beta$ is not present in optical spectra, and MgII and CIV have been suggested as alternative estimators. For a detailed analysis of the method and the use of the various emission lines see Kaspi et al. (2000), McLure & Dunlop (2004), and Warner et al. (2004).
Depending on the redshift of the object, and up to a redshift of 4.8, the following relations are used, which these authors have argued provide equivalent estimates: $$0.0<z<0.8: \frac{M_{\rm BH}}{M_\odot}=4.7\left (\frac{L_{5100}}{10^{37}\rm W}\right )^{0.61}\left (\frac{{\rm FWHM}({\rm H}_\beta)}{\rm km/s}\right )^2 \label{eqn:mbh2}$$ $$0.8<z<2.1: \frac{M_{\rm BH}}{M_\odot}=3.2\left (\frac{L_{3000}}{10^{37}\rm W}\right )^{0.62}\left (\frac{{\rm FWHM}({\rm MgII})}{\rm km/s}\right )^2 \label{eqn:mbh3}$$ $$2.1<z<4.8: \frac{M_{\rm BH}}{M_\odot}=1.4\left (\frac{L_{1450}}{10^{37}\rm W}\right )^{0.70}\left (\frac{{\rm FWHM}({\rm CIV})}{\rm km/s}\right )^2 \label{eqn:mbh4}$$ where $L_\lambda$ is the monochromatic luminosity at wavelength $\lambda$. Black hole masses were estimated for objects with available SDSS DR5 spectroscopy. The FWHMs of the lines were derived from the emission line sigmas given by the SDSS pipeline, present in the second extension of the FITS spectra. The error bars on the black hole masses are estimated from the uncertainties in the FWHMs of the lines. We also searched the literature for additional black hole mass estimates. For the Palomar-Green quasars in Boroson & Green (1992), we recomputed the black hole masses using the relations above. For the $z=6.28$ quasar SDSSJ1148+5251, we applied equation \[eqn:mbh2\] to the data in Shields et al. (2006). For other $z>5$ quasars, black hole masses were taken from Kurk et al. (2007) and Jiang et al. (2007), using our equation \[eqn:mbh3\] where possible, and equation \[eqn:mbh4\] where only CIV is available. The black hole mass estimates vary between these two groups, even when using consistent conversions based on the same emission line, but because our mass bins are very broad (1 dex), these fractional variations ($\sim50\%$) are not enough to move quasars between bins.
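The three relations can be wrapped in a single helper. This is a sketch: the redshift branching and coefficients follow equations \[eqn:mbh2\]-\[eqn:mbh4\], while the function name and example values are ours:

```python
def black_hole_mass(z, L_lambda_W, fwhm_kms):
    """Virial black hole mass in solar masses. L_lambda_W is the continuum
    luminosity in W at 5100A (z<0.8, H_beta), 3000A (0.8<z<2.1, MgII) or
    1450A (2.1<z<4.8, CIV); fwhm_kms is the line FWHM in km/s."""
    if z < 0.8:
        a, b = 4.7, 0.61
    elif z < 2.1:
        a, b = 3.2, 0.62
    elif z < 4.8:
        a, b = 1.4, 0.70
    else:
        raise ValueError("no calibration used beyond z = 4.8")
    return a * (L_lambda_W / 1e37) ** b * fwhm_kms ** 2

# e.g. a z = 0.3 quasar with L_5100 = 1e38 W and FWHM(H_beta) = 4000 km/s
m_bh = black_hole_mass(0.3, 1e38, 4000.0)  # ~3e8 M_sun
```

Because the mass bins used in the stacks are 1 dex wide, the $\sim50\%$ systematic differences between calibrations noted in the text rarely change a quasar's bin.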
The results of the stacks in black hole mass and redshift bins are given in table \[tab:m82\_bh\_stacks\], and plotted in figures \[fig:stacked\_luminosity\_vs\_bh\] and \[fig:stacked\_luminosity\_vs\_z\_bh\]. These figures give the host galaxy far-infrared luminosities as a function of black hole mass and of redshift. Tables \[tab:mbh\_fits\] and \[tab:evol\_fits\_bh\] demonstrate the statistical significances of the trends in these figures.

Table \[tab:mbh\_fits\]: fits of the black hole mass dependence of the stacked far-infrared luminosities in redshift slices.

| $z$ range | $p_5$ | $p_6$ | $\chi^2_\nu$ | Pr($\chi^2,\nu$) | $\chi^2_\nu$ | Pr($\chi^2,\nu$) |
|---|---|---|---|---|---|---|
| $0.05<z<0.5$ | 0.42 $\pm$ 0.14 | $-10.66\pm0.86$ | 1.20 | 0.302 | 4.01 | 0.007 |
| $2<z<4$ | 0.000 $\pm$ 0.082 | $3\times10^3$ $\pm$ $1\times10^6$ | 1.48 | 0.217 | 1.11 | 0.348 |
| $z>4$ | 0.247 $\pm$ 0.082 | $-6.70\pm0.75$ | 0.87 | 0.454 | 3.03 | 0.017 |

Table \[tab:evol\_fits\_bh\]: power-law evolution fits to the stacked far-infrared luminosities in black hole mass slices.

| $M_{\rm BH}$ range | $p_7$ | $p_8$ | $\chi^2_\nu$ | Pr($\chi^2,\nu$) | $\chi^2_\nu$ | Pr($\chi^2,\nu$) |
|---|---|---|---|---|---|---|
| $10^{6}<M_{\rm BH}<10^{7}$ | 0.0099 $\pm$ 0.0085 | 2.14 $\pm$ 0.71 | 1.12 | 0.289 | 1.14 | 0.319 |
| $10^{7}<M_{\rm BH}<10^{8}$ | 0.18 $\pm$ 0.11 | 1.49 $\pm$ 0.38 | 0.31 | 0.581 | 3.55 | 0.029 |
| $10^{8}<M_{\rm BH}<10^{9}$ | 0.082 $\pm$ 0.024 | 2.12 $\pm$ 0.24 | 8.47 | 0.0002 | 10.71 | $4\times10^{-7}$ |
| $10^{9}<M_{\rm BH}<10^{10}$ | 0.36 $\pm$ 0.26 | 1.40 $\pm$ 0.43 | 1.04 | 0.374 | 4.09 | 0.003 |

Robustness to SED assumptions
-----------------------------

Any multi-wavelength compilation such as this will inevitably rely on SED assumptions to relate the multi-wavelength observations.
We have assumed an M82 SED up to this point, and quoted far-infrared luminosities on that basis, but this will inevitably neglect any AGN dust torus contributions in the mid-infrared. At best, our far-infrared luminosities can only be considered estimates of the starburst bolometric contributions. Starbursts, too, have a variety of SEDs, and in this section we test the robustness of our results to the assumed starburst SED shape. The most far-infrared-luminous quasars (i.e. those with direct SWIRE detections) were found by Hatziminaoglou et al. (2008) to resemble the heavily-obscured starburst Arp 220 more often than M82, though they conjectured that fainter quasars would be more likely to resemble M82. We tested our SED dependence by re-running our stacking analyses with an Arp 220 spectrum. Reassuringly, very similar trends are present as in the M82 case, perhaps because most of the rest-frame measurements are in spectral regions where the M82 and Arp 220 SEDs are similar (see tables \[tab:arp220\_stacks\] and \[tab:arp220\_bh\_stacks\]). Moreover, it would not appear to be possible to attribute the trends in figures \[fig:stacked\_luminosity\_vs\_i\], \[fig:stacked\_luminosity\_vs\_z\], \[fig:stacked\_luminosity\_vs\_bh\] and \[fig:stacked\_luminosity\_vs\_z\_bh\] to variations in SED shape as a function of quasar absolute magnitude, black hole mass or redshift.
Table \[tab:arp220\_stacks\]: as table \[tab:m82\_stacks\], but assuming an Arp 220 SED.

| | $0.05<z<0.5$ | $0.5<z<1$ | $1<z<2$ | $2<z<4$ | $4<z<7$ |
|---|---|---|---|---|---|
| $I_{\rm abs}<-28$ | | $8.4\pm6.9$ (2) | $6.2\pm2.8$ (31) | $3.95\pm0.52$ (66) | $3.96\pm0.64$ (63) |
| $-26>I_{\rm abs}>-28$ | $0.53\pm0.38$ (9) | $2.3\pm1.6$ (3) | $2.51\pm0.48$ (48) | $4.4\pm1.1$ (76) | $2.19\pm0.43$ (55) |
| $-24>I_{\rm abs}>-26$ | $0.47\pm0.20$ (28) | $0.29\pm0.11$ (25) | $0.84\pm0.21$ (74) | $1.0\pm3.5$ (3) | $2.0\pm2.2$ (1) |
| $-22>I_{\rm abs}>-24$ | $0.107\pm0.021$ (55) | $0.18\pm0.08$ (20) | $0.59\pm0.63$ (2) | $1.7\pm2.0$ (1) | |
| $I_{\rm abs}>-22$ | $0.055\pm0.028$ (3) | | | | |

Table \[tab:arp220\_bh\_stacks\]: as table \[tab:m82\_bh\_stacks\], but assuming an Arp 220 SED.

| | $0.05<z<0.5$ | $0.5<z<1$ | $1<z<2$ | $2<z<4$ | $4<z<7$ |
|---|---|---|---|---|---|
| $10^5<M_{\rm BH}<10^6$ | $1.8\pm2.7$ (1) | $2.4\pm2.3$ (2) | | | |
| $10^6<M_{\rm BH}<10^7$ | $0.023\pm0.017$ (2) | $7.2\pm6.6$ (1) | $0.46\pm0.42$ (2) | | |
| $10^7<M_{\rm BH}<10^8$ | $0.35\pm0.19$ (21) | $1.6\pm1.1$ (3) | $2.38\pm0.90$ (6) | | |
| $10^8<M_{\rm BH}<10^9$ | $0.183\pm0.053$ (20) | $1.0\pm1.6$ (1) | $4.25\pm0.80$ (11) | $2.5\pm1.2$ (13) | |
| $10^9<M_{\rm BH}<10^{10}$ | $0.37\pm0.34$ (3) | $15\pm14$ (1) | $6.0\pm3.8$ (14) | $2.16\pm0.86$ (15) | $4.6\pm1.3$ (11) |

Few objects in our compilation have photometry at more than one wavelength, except the SWIRE quasars. We tested our SED assumption by fitting M82 and Arp 220 SEDs to the SWIRE photometry. Of 236 quasars with photometric measurements at both 70$\,\mu$m and 160$\,\mu$m, 167 had $\chi^2<1$ for either M82 or Arp 220 SEDs. Of these, 96/167 (57%) had a lower $\chi^2$ for M82.
Since most of this photometry consists of non-detections, all this can show is that the underlying mean SED is more likely to resemble M82 than Arp 220, in agreement with the suggestion of Hatziminaoglou et al. (2008). Objects with $\chi^2>1$ typically had evidence of a mid-infrared excess, suggestive of an AGN dust torus. We also tried comparing the 70$\,\mu$m:160$\,\mu$m flux ratio with the predictions for redshifted Arp 220 and M82 SEDs. The results are shown in figure \[fig:chi2\]. If an SED model is correct on average, the histogram in figure \[fig:chi2\] should be centred around zero. This again suggests that an M82 SED is a better match to the [*average*]{} quasar far-infrared-submm SED than Arp 220 (though the most luminous quasars may nevertheless more closely resemble Arp 220). This is in contrast with the bright quasars studied by Hatziminaoglou et al. (2008), though in keeping with their suggestion that fainter quasars have less heavily obscured SEDs. Note, however, that we have already excluded SED dependence on optical luminosity, redshift or black hole mass as explanations for the trends in figures \[fig:stacked\_luminosity\_vs\_i\], \[fig:stacked\_luminosity\_vs\_z\], \[fig:stacked\_luminosity\_vs\_bh\] and \[fig:stacked\_luminosity\_vs\_z\_bh\].

Discussion
==========

Predictions for Herschel
------------------------

Stacking analyses in general only yield information on the mean fluxes, and yield little information on the dispersion within the population. However, in the case considered here, we find evidence for a subset of quasars with bright far-infrared fluxes up to ten times the mean of the population. The 40 beams/source confusion limits predicted for Herschel by Rowan-Robinson (2001) are $4.6\,$mJy at 70$\,\mu$m and $59\,$mJy at 160$\,\mu$m. If we accept this estimate, then most of the targets in the AGN survey would be very challenging for PACS direct detection.
However, HSPOT reports a $5\sigma$ confusion limit of only $1.2\,$mJy at $110\,\mu$m, so this forecast may be pessimistic. We used a crude fit to the data in table \[tab:m82\_stacks\] to predict the individual SDSS quasar fluxes (figures \[fig:stacked\_luminosity\_vs\_i\] and \[fig:stacked\_luminosity\_vs\_z\]), from which we estimate that the Herschel ATLAS survey will detect 92% of SDSS quasars at $z<0.2$ (though the $z<0.2$ quasars represent only 0.5% of all SDSS quasars). At higher redshifts, only the far-infrared-loud subset will be detectable. We estimate that 66% of SDSS QSOs with far-infrared luminosities $5\times$ larger than the mean will be detectable in this survey, corresponding to about $221(f_5/0.05)$ quasars detected over $\simeq500\,$deg$^2$, where $f_5$ is the fraction of quasars with luminosities $5\times$ larger than the mean. The detected fraction of $5\times$ over-luminous quasars at $z>3$ is only 5%. However, the Herschel ATLAS survey should detect about 98% of all the SDSS quasars with luminosities $10\times$ larger than the mean, i.e. about $333(f_{10}/0.05)$ quasars over $500\,$deg$^2$, where $f_{10}$ is the fraction with luminosities $10\times$ larger than the mean. We have neglected type 2 AGN in this analysis; these will double or triple the total number of AGN detected by the Herschel ATLAS survey. AGN not detected individually in this survey will be detectable in stacking analyses. It will be illuminating to test whether sub-classes of quasars have a greater tendency to be far-infrared-loud in this survey (e.g. broad absorption line quasars, nitrogen-rich quasars).

Physical interpretation
-----------------------

It might be possible for AGN dust torus models to account for the far-infrared and submm luminosities of quasars, but only by assuming very high equatorial optical depths and large physical sizes.
If quasar heating dominated the far-infrared outputs throughout our sample, we would expect a linear correlation between quasar optical luminosity and far-infrared luminosity (figure \[fig:stacked\_luminosity\_vs\_i\]), whereas the observed correlation is shallower. While we cannot exclude AGN heating in a subset of our objects, we will follow Efstathiou & Rowan-Robinson (1995) in treating the far-infrared and submm luminosities of quasars as being typically dominated by star formation. The far-infrared luminosities scale linearly with the star formation rates as $$SFR=\frac{L_{\rm FIR}}{5.8\times10^9L_\odot}M_\odot/{\rm yr}$$ where $L_{\rm FIR}$ is the bolometric luminosity of the starburst (see Kennicutt 1998), assuming a Salpeter initial mass function (IMF) from 0.1 to $100\,M_\odot$. This implies that our quasars are forming stars at around $200-2000\,M_\odot$/yr. The local spheroid associated with a $10^8$ ($10^9$) $M_\odot$ black hole has a mass of about $5\times10^{10}$ ($5\times10^{11}$) $M_\odot$ (Marconi & Hunt 2003, Häring & Rix 2004), though there are indications that the spheroids are around a factor of 4 less massive at $z=2$ (e.g. McLure et al. 2006). Sustained star formation at these rates could assemble the $z=0$ stellar mass of the hosts in a few $\times10^7$ to a few $\times10^9$ years. The correlation between inferred black hole accretion and star formation is similar to the one reported by Hao et al. (2007), though they combined low-luminosity, low-redshift quasars with high-luminosity, high-redshift quasars, so their results could also have been attributable to evolution. We span a much larger range of the optical luminosity-redshift parameter space (figure \[fig:iz\]), so these caveats do not apply to our results. The quasar magnitudes in our lowest redshift bin may in principle be contaminated by the host galaxies.
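The conversion above, together with the spheroid masses quoted, gives the assembly timescales directly (a worked check; the variable names are ours):

```python
def sfr_from_lfir(l_fir_lsun):
    """Star formation rate (M_sun/yr) from the starburst far-infrared
    luminosity in L_sun, using the Kennicutt (1998) calibration above
    (Salpeter IMF, 0.1-100 M_sun)."""
    return l_fir_lsun / 5.8e9

sfr = sfr_from_lfir(1e12)    # a 10^12 L_sun starburst: ~170 M_sun/yr
t_assemble = 5e10 / sfr      # ~3e8 yr to build a 5e10 M_sun spheroid
```

Scaling the same arithmetic over the $10^{11.5}$-$10^{13}$ L$_\odot$ range of the stacks reproduces the few $\times10^7$ to few $\times10^9$ yr assembly times quoted in the text.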
If the host galaxies do contaminate these magnitudes, correcting for the effect would only strengthen the dependence of star formation on quasar luminosity, since the correction would apply preferentially at the faintest optical magnitudes. One difficulty in the interpretation of these results is the possibility of luminosity-dependent reddening of quasars. However, even an $A_V$ of one at the faintest end and zero at the brightest would have little impact on our correlations, given the size of our errors and range of absolute magnitudes. We have assumed a single optical spectral index for our quasars, but a slightly better approach would be to use the optical spectra themselves, correcting for dust absorption using the Balmer decrement where possible, or to use rest-frame hard X-ray luminosities. Alexander et al. (2005) found an approximately linear relationship between hard X-ray and far-infrared luminosities in a heterogeneous sample of AGN-dominated submm-selected galaxies, though if one adds the submm galaxies classified as starbursts, their correlation becomes shallower, with a wide dispersion. We have not excluded candidate starburst-dominated objects. Alexander et al. (2005) also demonstrate that local AGN show a large scatter in their star formation - black hole accretion relationship; our lowest-redshift quasars in figure \[fig:stacked\_luminosity\_vs\_i\] show marginal evidence for a steeper correlation than at higher redshifts, broadly in agreement with these observations of local active galaxies. The implicit correlations in figure \[fig:stacked\_luminosity\_vs\_i\] between star formation rate and black hole accretion rate hint at common physical parameters (such as gas supply), despite the disparity of spatial scales, but in keeping with qualitative expectations from the black hole - spheroid connections in local galaxies (e.g. Magorrian et al. 1998, Ferrarese & Merritt 2000). There is no insight to be gained by supposing the only common parameter is the total mass of the system, i.e.
that this is simply reflecting only a size dependence, because one must still hypothesize some mechanism to tie both parameters to the total mass (e.g. Serjeant et al. 1998); in any case, the trends in figure \[fig:stacked\_luminosity\_vs\_i\] follow approximately $L_{\rm FIR}\propto L_{\rm opt}^{0.5}$ rather than a linear relationship. Furthermore, the lack of any obvious correlation with black hole mass at redshifts $0.5<z<4$ (see below) argues against any simple scaling with the size of the system, at least outside the local Universe. This non-linear relationship and its evolution do not follow expectations from some semi-analytic models. According to the model of Croton (2006), the ratio of black hole accretion rate to star formation rate is constant with scale but increases with redshift. This is partly due to increased disk disruption at high redshifts generating starbursts but not black hole accretion, and partly due to evolution in the black hole feeding rate in this model. The model does succeed in reproducing the evidence for evolution in the black hole - bulge mass relationships and (qualitatively at least) our evolving normalisation of the black hole accretion – star formation relation from $z<0.5$ to $1<z<2$. However, it does not reproduce our scale dependence. Our observations may prove to be a useful constraint on AGN feedback models. If AGN feedback directly regulates stellar mass assembly in the host, then we may expect stronger trends of far-infrared luminosity with black hole accretion than with black hole mass. Not all of our sample have black hole mass estimates, so our tests of dependence on black hole mass and redshift are noisier than our correlations against quasar luminosity, though we span a larger logarithmic range of black hole mass than quasar luminosity. In the local Universe, we see a hint of a relation between star formation rate and black hole mass (figure \[fig:stacked\_luminosity\_vs\_bh\], table \[tab:mbh\_fits\]).
At these lower redshifts, the hosts may already have assembled a large fraction of their $z\simeq0$ stellar masses, so this may partly represent mutual size dependence. However, at $0.5<z<4$ there seems to be no evidence for a dependence on black hole mass (figure \[fig:stacked\_luminosity\_vs\_bh\]) despite the dependence on inferred black hole accretion (figure \[fig:stacked\_luminosity\_vs\_i\]). At $z>4$ there is some evidence for a weak trend with black hole mass (see table \[tab:mbh\_fits\]), varying roughly as SFR$\propto M_{\rm BH}^{1/4}$; the weakness of this trend suggests that it is less closely related to the primary underlying physical mechanism than the SFR-$L_{\rm opt}$ relation. There are nevertheless hints of trends with redshift at fixed black hole mass (figure \[fig:stacked\_luminosity\_vs\_z\_bh\], tables \[tab:m82\_bh\_stacks\] and \[tab:evol\_fits\_bh\]). More black hole mass estimates are needed to improve the statistics, but our measurements are consistent with feedback from black hole accretion at $z>1$ regulating the stellar mass assembly in their hosts. It is likely that the e-folding timescale for black hole growth ($\tau_{\rm BH}\simeq4(\epsilon/0.1)\lambda^{-1}\times10^7\,$yr, where $\lambda$ is the Eddington ratio and $\epsilon$ the accretion efficiency) is much shorter than the stellar mass assembly timescale (e.g. Malbon et al. 2007). Typical quasar lifetimes at $z<3.5$ may not be much longer than a single e-folding timescale (e.g. Martini & Weinberg 2001, Shen et al. 2007), though may be several e-foldings at higher redshifts (Shen et al. 2007). There are about 3.7 e-foldings in luminosity from $I_{\rm AB}=-22$ to $I_{\rm AB}=-26$, making it unlikely that the relationships in figure \[fig:stacked\_luminosity\_vs\_i\] represent the evolutionary tracks of individual objects. We have shown that active nuclei are on average associated with luminous or ultraluminous starbursts at all redshifts, and at all absolute magnitudes brighter than about $I_{\rm AB}=-22$.
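The count of e-foldings across the luminosity range quoted above follows directly from the magnitude interval (a quick arithmetic check):

```python
import math

def luminosity_efoldings(delta_mag):
    """ln(L2/L1) for a magnitude difference delta_mag, since
    L2/L1 = 10**(0.4 * delta_mag)."""
    return 0.4 * math.log(10.0) * delta_mag

# From I_AB = -22 to I_AB = -26 (4 magnitudes): about 3.7 e-foldings,
# i.e. several e-folding timescales of black hole growth.
n_efold = luminosity_efoldings(4.0)
```

Since typical quasar lifetimes at $z<3.5$ span only about one such e-folding, individual objects cannot traverse this luminosity range within a single episode.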
This relationship does not on its own tell us whether the AGN initiates the starburst, starts concurrently with it, occurs at some midway point, or quenches it. However, if quasar lifetimes are as long as $600\,$Myr at $z>3.5$, which is the upper limit to the lifetimes suggested by Shen et al. (2007), then we would expect AGN feedback to have quenched the star formation in nearly all $z>3.5$ quasars. Our tentative high-redshift detections suggest this is not the case. The high-redshift constraints will shortly be made much stronger by the large far-infrared and submm photometric surveys of $z>3.5$ quasars from the Herschel ATLAS key project, SCUBA-2 and other facilities. Conversely, the shorter inferred quasar lifetimes at lower redshifts, the lack of evidence for any dependence of star formation on black hole mass, the observed dependence of star formation rate on quasar luminosity, and the local bulge - black hole relationships are all consistent with feedback from black hole accretion regulating stellar mass assembly at lower redshifts.

Acknowledgments {#acknowledgments .unnumbered}
===============

This research has made use of the NASA/IPAC Infrared Science Archive and the NASA/IPAC Extragalactic Database (NED), which are operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We would particularly like to thank Anastasia Alexov, John C. Good and Anastasia Laity at IPAC for their invaluable help with the IRAS archive. We would also like to thank the anonymous referee for several helpful suggestions. This research was supported by STFC grants PP/D002400/1 and PP/E001408.
References
==========

Baird, S.R., 1981, ApJ, 245, 208
Adelman-McCarthy, J.K., et al., 2007, ApJS, 172, 634
Alexander, D.M., et al., 2005, ApJ, 632, 736
Beelen, A., et al., 2006, ApJ, 642, 694
Boroson, T.A., Green, R.F., 1992, ApJS, 80, 109
Carilli, C.L., et al., 2001, ApJ, 555, 625
Croton, D., 2006, MNRAS, 369, 1808
Efstathiou, A., Rowan-Robinson, M., 1995, MNRAS, 273, 649
Efstathiou, A., Rowan-Robinson, M., Siebenmorgen, R., 2000, MNRAS, 313, 734
Ferrarese, L., Merritt, D., 2000, ApJ, 539, L9
Fritz, J., Franceschini, A., Hatziminaoglou, E., 2006, MNRAS, 366, 767
Haas, M., et al., 2000, A&A, 354, 453
Haas, M., et al., 2003, A&A, 402, 87
Hao, C.N., Xia, X.Y., Mao, S., Deng, Z.D., Wu, H., 2007, preprint (arXiv:0704.3247)
Häring, N., Rix, H.W., 2004, ApJ, 604, L89
Hatziminaoglou, E., et al., 2005, AJ, 129, 1198
Hatziminaoglou, E., et al., 2008, MNRAS, 386, 1252
Hillenbrand, L.A., et al., 2008, ApJ, in press
Isaak, K.G., et al., 2002, MNRAS, 329, 149
Jeong, W.-S., et al., 2007, PASJ, 59, SP2, 429
Jiang, L., et al., 2007, AJ, 134, 1150
Kaspi, S., et al., 2000, ApJ, 533, 631
Kennicutt, R.C., 1998, ApJ, 498, 541
Kurk, J.D., et al., 2007, ApJ, 669, 32
Lonsdale, C.J., et al., 2003, PASP, 115, 897
Lonsdale, C., et al., 2004, ApJS, 154, 54
Magorrian, J., et al., 1998, AJ, 115, 2285
Malbon, R.K., Baugh, C.M., Frenk, C.S., Lacey, C.G., 2007, MNRAS, 382, 1394
Malmquist, K.G., 1924, Medd. Lund. Astron. Obs., 2(32), 64
Marconi, A., Hunt, L.K., 2003, ApJ, 589, L21
Martini, P., Weinberg, D.H., 2001, ApJ, 547, 12
Mayer, M.L., et al., 2004, ApJS, 154, 422
Morel, T., et al., 2001, MNRAS, 327, 1187
Moshir, M., Kopman, G., Conrow, T.A.O., 1992, IRAS Faint Source Survey, Explanatory Supplement version 2, JPL D-10015 8/92 (Pasadena: JPL)
McLure, R.J., Dunlop, J.S., 2004, MNRAS, 352, 1390
McLure, R.J., et al., 2006, MNRAS, 368, 1395
Omont, A., et al., 1996, A&A, 315, 10
Omont, A., Cox, P., Bertoldi, F., McMahon, R.G., Carilli, C., Isaak, K.G., 2001, A&A, 374, 371
Omont, A., et al., 2003, A&A, 398, 857
Petric, A.O., et al., 2006, AJ, 132, 1307
Priddey, R.S., et al., 2003a, MNRAS, 339, 1183
Priddey, R.S., et al., 2003b, MNRAS, 344, L74
Robson, I., et al., 2004, MNRAS, 344, L74
Rowan-Robinson, M., 2001, ApJ, 549, 745
Rowan-Robinson, M., et al., 2004, MNRAS, 351, 1290
Sanders, D.B., et al., 1989, ApJ, 347, 29
Schweitzer, M., et al., 2006, ApJ, 649, 79
Serjeant, S., et al., 1998, MNRAS, 298, 321
Serjeant, S., et al., 2004, ApJS, 154, 118
Shen, Y., et al., 2007, AJ, 133, 2222
Shields, G.A., Menezes, K.L., Massart, C.A., Vanden Bout, P., 2006, ApJ, 641, 683
Siebenmorgen, R., Freudling, W., Krügel, E., Haas, M., 2004, A&A, 421, 129
Teerikorpi, P., 1984, A&A, 141, 407
Verma, A., Charmandaris, V., Klaas, U., Haas, M., 2005, SSRv, 119, 355
Vio, R., Tenorio, L., Wamsteker, W., 2002, A&A, 391, 789
Wang, R., et al., 2007, AJ, 134, 617
Warner, C., Hamann, F., Dietrich, M., 2004, ApJ, 608, 136

Discussion of Palomar-Green quasars
===================================

We use our SCANPI 100$\,\mu$m measurements in preference to other IRAS 100$\,\mu$m measurements (e.g. Sanders et al. 1989); that notwithstanding, we use the 100$\,\mu$m photometry with the smallest errors, with the exceptions of the cases discussed below. At 60$\,\mu$m we use the measurements with the smallest errors regardless of their source, again with the exceptions discussed below. Haas et al.
(2000, 2003) do not quote errors on their ISO photometry, but state that the detections range from $3-10\sigma$. We conservatively assume $3\sigma$ unless stated otherwise. Objects dominated by non-thermal emission at 60-100$\,\mu$m have been eliminated from our stacking analyses. The adopted photometry for the Palomar-Green sample is given in table \[tab:pg\_photometry\]. 0007+106, or MRK1501, is a radio-loud flat-spectrum quasar, and is likely to have non-thermal emission dominating at 60-100$\,\mu$m. 0050+124, or 1Zw1, has a 60 (100) $\mu$m measurement from ISO of 1752 (2339) mJy reported by Haas et al. 2000, but these are inconsistent with our SCANPI measurements of 2161$\pm$52 (1749$\pm$187) mJy which show no obvious anomalies in the fits. We have opted to use our SCANPI photometry. 0157+001 has a 60$\,\mu$m flux measurement of 2210mJy reported by Haas et al. 2003, though errors are not quoted. Sanders et al. 1989 quotes an IRAS measurement of 2377$\pm$56mJy. Our IRAS SCANPI measurement of 2348$\pm$73mJy is consistent with the latter rather than the former. We have opted to use our SCANPI photometry, which shows no obvious anomalies in the fit. 0832+251 (z=0.320) is reported as $<126$mJy at 60$\,\mu$m in Sanders et al. 1989 but has a SCANPI measurement of 352$\pm$60mJy. There is a similar discrepancy at 100$\,\mu$m. However this is because SCANPI’s fit is strongly affected by the nearby IRAS galaxy IRAS 08325+2512 at z=0.017, which is 2.5 arcmin from the QSO. Both the 60 and 100$\,\mu$m coadded scans appear to be fairly flat off-source. We therefore found the maximum-likelihood fit for the amplitudes of a source fixed at the target position, and another with a position allowed to vary. 0838+770 has an ISO 100$\,\mu$m flux measurement of 180mJy from Haas et al. 2003, which disagrees with the Sanders et al. 1989 IRAS measurement of 426$\pm$30mJy. Our IRAS SCANPI measurement is 293$\pm$184mJy, though with strong baseline drifts in the coadded timeline. 
Given the uncertainties we have uncovered in the IRAS 100$\,\mu$m flux calibration at this level, and the baseline drifts in our SCANPI data, we have opted to use the Haas et al. photometry with an assumed 33% error. 1001+054 has an ISO 60$\,\mu$m flux measurement of 140mJy from Haas et al. 2003, which disagrees with our IRAS SCANPI measurement of 23$\pm$49mJy. There are no hints of flux at the target position in our coadded scans. Sanders et al. 1989 report 27$\pm$9mJy, which has a remarkably small quoted error. Nothing is reported at this position in either the IRAS PSC or FSC, and nothing is evident on the ISSA plates. We have opted to use the Sanders photometry. 1022+519 has a 100$\,\mu$m flux measurement from the IRAS Faint Source Reject catalogue of 798$\pm$176mJy. Our SCANPI photometry is 200$\pm$103mJy, with a fairly stable coadded baseline. Owing to this, and the lower quoted error of our SCANPI measurements, together with the uncertainties we have uncovered in the catalogued IRAS 100 micron fluxes at this level, we have opted to use our SCANPI measurement. 1100+772, or 3C249.1, is radio-loud and probably synchrotron-dominated at both 60$\,\mu$m and 100$\,\mu$m. 1103-006 has a 60$\,\mu$m IRAS flux of 130$\pm$51mJy quoted in Sanders et al. 1989, while our SCANPI measurement is -8$\pm$98mJy, though the coadded scans are affected by baseline drifts. We have used the Sanders measurement. 1211+143 The Haas et al. 60$\,\mu$m measurement of 518mJy disagrees both with the Sanders et al. 1989 measurement of 305$\pm$53mJy and our own IRAS SCANPI measurement of 284$\pm$81mJy. At 100$\,\mu$m the disagreement is more striking, with Haas et al. reporting an upper limit of $<279$mJy, while our SCANPI measurement is 427$\pm$182mJy and Sanders et al. report 689$\pm$119mJy. Although baseline drifts are evident in our coadded scans, the background subtractions at the position of our target appear to be reliable, and the profile is well-fit. 
We have opted to use the lowest noise IRAS measurements. 1226+023, or 3C273, has a clearly non-thermal spectrum in IRAS passbands. 1302-102 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 1351+640 has a 100$\,\mu$m flux measurement from Haas et al. 2003 of 526mJy, but the IRAS SCANPI measurement is 912$\pm$156mJy. Although baseline drifts are apparent in our coadded scans, the background subtractions at the position of our target appear to be reliable. Sanders et al. 1989 report 1184$\pm$26mJy. We have opted to use our SCANPI measurement. 1501+106 has a 60$\,\mu$m measurement from Haas et al. 2003 of 750mJy, but our SCANPI measurement is 473$\pm$36mJy. Sanders et al. 1989 report 486$\pm$42mJy. We adopt our SCANPI measurement as the most likely lowest-noise choice. 1545+210 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 1613+658 has a 100$\,\mu$m measurement from Haas et al. 2000 of 1002mJy, consistent with the Sanders et al. 1989 measurement of 1090$\pm$59mJy, but our SCANPI measurement is 474$\pm$70mJy. Our SCANPI measurement shows a baseline drift that may be over-corrected, so we opt to use the Haas et al. measurement, and assume an error of 10%. 1704+608, or 3C351, is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 1718+481 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 2209+184 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 2251+113 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 2344+092 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 2349-014 is radio-loud and probably synchrotron-dominated at 60 and 100$\,\mu$m. 
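The consistency judgements above (e.g. comparing our SCANPI flux for 0157+001 with the Sanders et al. 1989 value) amount to weighing a flux difference against the quadrature-combined uncertainties. A hypothetical helper sketching that comparison (not part of the paper's pipeline):

```python
import math

def discrepancy_sigma(flux1, err1, flux2, err2):
    """Difference between two flux measurements, in units of the
    quadrature-combined uncertainty."""
    return abs(flux1 - flux2) / math.hypot(err1, err2)

# 0157+001 at 60 um: SCANPI 2348 +/- 73 mJy vs. Sanders et al. 2377 +/- 56 mJy
print(discrepancy_sigma(2348, 73, 2377, 56))  # ~0.3 sigma: consistent
```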
Only contains objects which are not dominated by synchrotron at 60-100$\,\mu$m.

---------- ----------------- --------------- -------------- -------------- ----------
Name       Right Ascension   Declination     $S_{60}$       $S_{100}$      Redshift
           (J2000)           (J2000)         (mJy)          (mJy)
---------- ----------------- --------------- -------------- -------------- ----------
0002+051   00 05 20.2155     +05 24 10.800   $15\pm58$      $-43\pm232$    1.900
0003+158   00 05 59.200      +16 09 48.00    $37\pm70$      $-75\pm378$    0.45
0003+199   00 06 19.521      +20 12 10.49    $260\pm93$     $-509\pm781$   0.025
0026+129   00 29 13.6        +13 16 03       $0.9\pm109$    $-437\pm105$   0.142
0043+039   00 45 47.3        +04 10 24       $-2.7\pm48$    $165\pm154$    0.385
0044+030   00 47 05.91       +03 19 55.0     $70\pm49$      $158\pm79$     0.623
0049+171   00 51 54.800      +17 25 58.40    $16\pm73$      $642\pm286$    0.064
0050+124   00 53 34.940      +12 41 36.20    $2161\pm52$    $1749\pm187$   0.061
0052+251   00 54 52.1        +25 25 38       $93\pm18$      $163\pm54$     0.155
0117+213   01 20 17.2        +21 33 46       $-0.7\pm71$    $-96\pm173$    1.493
0119+229   01 22 40.58       +23 10 15.1     $921\pm63$     $773\pm264$    0.053
0157+001   01 59 50.211      +00 23 40.62    $2348\pm73$    $1915\pm168$   0.163
0804+761   08 10 58.600      +76 02 42.00    $191\pm42$     $121\pm36$     0.1
0832+251   08 35 35.820      +24 59 40.65    $182\pm17$     $194\pm68$     0.320
0838+770   08 44 45.26       +76 53 09.5     $174\pm9$      $180\pm60$     0.131
0844+349   08 47 42.4        +34 45 04       $163\pm41$     $178\pm337$    0.064
0906+484   09 10 10.010      +48 13 41.80    $172\pm10$     $210\pm134$    0.118
0921+525   09 25 12.870      +52 17 10.52    $131\pm52$     $-171\pm120$   0.035
0923+201   09 25 54.700      +19 54 05.00    $271\pm58$     $858\pm291$    0.190
0923+129   09 26 03.292      +12 44 03.63    $590.1\pm53$   $675\pm239$    0.029
0931+437   09 35 02.540      +43 31 10.70    $107\pm72$     $110\pm113$    0.457
0934+013   09 37 01.030      +01 05 43.48    $190\pm102$    $-30\pm673$    0.050
0935+417   09 38 57.00       +41 28 20.79    $28\pm56$      $-3.1\pm156$   1.980
0946+301   09 49 41.113      +29 55 19.24    $36\pm45$      $-68\pm116$    1.216
0947+396   09 50 48.380      +39 26 50.50    $201\pm47$     $279\pm137$    0.206
0953+414   09 56 52.4        +41 15 22       $107\pm56$     $47\pm117$     0.239
1001+054   10 04 20.140      +05 13 00.50    $27\pm9$       $146\pm49$     0.161
1004+130   10 07 26.100      +12 48 56.20    $191\pm42$     $0.5\pm130$    0.24
1008+133   10 11 10.857      +13 04 11.90    $58\pm58$      $-91\pm224$    1.287
1011-040   10 14 20.69       -04 18 40.5     $163\pm42$     $-53\pm153$    0.058
1012+008   10 14 54.900      +00 33 37.30    $-25\pm74$     $-23\pm348$    0.185
1022+519   10 25 31.278      +51 40 34.87    $153\pm40$     $200\pm103$    0.045
1048+342   10 51 43.900      +33 59 26.70    $7.3\pm70$     $-9.7\pm168$   0.167
1048-090   10 51 29.900      -09 18 10.00    $69\pm60$      $215\pm389$    0.344
1049-005   10 51 51.450      -00 51 17.70    $191\pm56$     $-11\pm255$    0.357
1103-006   11 06 31.775      -00 52 52.47    $130\pm51$     $-234\pm241$   0.425
1112+431   11 15 06.020      +42 49 48.90    $182\pm47$     $132\pm117$    0.302
1114+445   11 17 06.400      +44 13 33.30    $191\pm47$     $200\pm60$     0.144
1115+080   11 18 16.950      +07 45 58.20    $769\pm96$     $871\pm242$    1.722
1115+407   11 18 30.290      +40 25 54.00    $265\pm71$     $143\pm162$    0.154
1119+120   11 21 47.103      +11 44 18.26    $452\pm49$     $481\pm267$    0.049
1121+422   11 24 39.190      +42 01 45.00    $-54\pm77$     $66\pm237$     0.234
1126-041   11 29 16.661      -04 24 07.59    $669\pm26$     $415\pm686$    0.06
1138+040   11 41 16.530      +03 46 59.60    $-1.1\pm54$    $-69\pm151$    1.876
1148+549   11 51 20.460      +54 37 33.10    $213\pm32$     $239\pm78$     0.969
1149-110   11 52 03.544      -11 22 24.32    $215\pm67$     $314\pm105$    0.049
1151+117   11 53 49.270      +11 28 30.40    $137\pm70$     $218\pm147$    0.176
1202+281   12 04 42.1        +27 54 11       $176\pm41$     $154\pm133$    0.165
1206+459   12 08 58.012      +45 40 35.87    $215\pm64$     $383\pm89$     1.158
1211+143   12 14 17.7        +14 03 13       $305\pm53$     $427\pm182$    0.084
1216+069   12 19 20.9        +06 38 38       $48\pm60$      $150\pm144$    0.331
1222+228   12 25 27.4        +22 35 13       $67\pm65$      $-78\pm159$    2.046
1229+204   12 32 03.605      +20 09 29.21    $154\pm64$     $317\pm105$    0.063
1241+176   12 44 10.859      +17 21 04.32    $132\pm55$     $217\pm72$     1.273
1244+026   12 46 35.240      +02 22 08.70    $280\pm51$     $362\pm121$    0.048
1247+267   12 50 05.7        +26 31 08       $102\pm55$     $174\pm58$     2.038
1248+401   12 50 48.368      +39 51 39.80    $224\pm51$     $-53\pm190$    1.03
1254+047   12 56 59.959      +04 27 34.16    $98\pm51$      $242\pm81$     1.024
1259+593   13 01 12.930      +59 02 06.70    $34\pm51$      $-6.5\pm125$   0.478
1307+085   13 09 47.0        +08 19 49       $117\pm51$     $155\pm52$     0.155
1309+355   13 12 17.767      +35 15 21.24    $147\pm46$     $-29\pm104$    0.184
1310-108   13 13 05.8        -11 07 42       $102\pm77$     $29\pm288$     0.035
1322+659   13 23 49.5        +65 41 48       $90\pm30$      $100\pm33$     0.168
1329+412   13 31 41.130      +41 01 58.70    $136\pm58$     $123\pm131$    1.93
1333+176   13 36 02.0        +17 25 13       $121\pm53$     $157\pm168$    0.554
1338+416   13 41 00.780      +41 23 14.10    $30\pm45$      $125\pm149$    1.219
1341+258   13 43 56.7        +25 38 48       $84\pm40$      $527\pm185$    0.087
1351+236   13 54 06.432      +23 25 49.09    $364\pm51$     $306\pm192$    0.055
1351+640   13 53 15.808      +63 45 45.41    $757\pm8$      $912\pm156$    0.088
1352+183   13 54 35.6        +18 05 17       $-85\pm47$     $-29\pm218$    0.152
1352+011   13 54 58.7        +00 52 10       $104\pm59$     $109\pm165$    1.121
1354+213   13 56 32.7        +21 03 52       $-15\pm54$     $-84\pm158$    0.3
1402+261   14 05 16.195      +25 55 34.93    $318\pm47$     $213\pm71$     0.164
1404+226   14 06 21.8        +22 23 46       $51\pm52$      $-42\pm164$    0.098
1407+265   14 09 23.9        +26 18 21       $171\pm51$     $-16\pm126$    0.94
1411+442   14 13 48.3        +44 00 14       $162\pm17$     $140\pm47$     0.09
1415+451   14 17 00.820      +44 56 06.40    $112\pm37$     $147\pm49$     0.114
1416-129   14 19 03.800      -13 10 44.00    $30\pm67$      $198\pm398$    0.129
1425+267   14 27 35.540      +26 32 13.61    $79\pm58$      $-16\pm144$    0.366
1426+015   14 29 06.588      +01 17 06.48    $318\pm47$     $62\pm102$     0.086
1427+480   14 29 43.070      +47 47 26.20    $82\pm25$      $92\pm31$      0.221
1435-067   14 38 16.1        -06 58 21       $-16\pm75$     $-229\pm233$   0.126
1440+356   14 42 07.463      +35 26 22.92    $652\pm21$     $793\pm87$     0.079
1444+407   14 46 45.940      +40 35 05.70    $57\pm30$      $80\pm27$      0.267
1448+273   14 51 08.8        +27 09 27       $117\pm37$     $-34\pm100$    0.065
1501+106   15 04 01.201      +10 26 16.15    $473\pm36$     $77\pm144$     0.036
1512+370   15 14 43.042      +36 50 50.41    $61\pm20$      $160\pm159$    0.37
1519+226   15 21 14.2        +22 27 43       $-21\pm49$     $155\pm150$    0.137
1522+101   15 24 24.6        +09 58 30       $37\pm71$      $-38\pm201$    1.321
1534+580   15 35 52.361      +57 54 09.21    $140\pm51$     $136\pm128$    0.03
1535+547   15 36 38.361      +54 33 33.21    $61\pm32$      $81\pm148$     0.038
1538+477   15 39 34.8        +47 35 31       $97\pm39$      $107\pm121$    0.770
1543+489   15 45 30.240      +48 46 09.10    $348\pm26$     $371\pm79$     0.4
1552+085   15 54 44.6        +08 22 22       $-82\pm103$    $-276\pm140$   0.119
1612+261   16 14 13.210      +26 04 16.20    $252\pm72$     $330\pm629$    0.131
1613+658   16 13 57.179      +65 43 09.58    $635\pm19$     $1002\pm100$   0.129
1617+175   16 20 11.288      +17 24 27.70    $52\pm45$      $45\pm108$     0.114
1626+554   16 27 56.0        +55 22 31       $-28\pm46$     $70\pm23$      0.133
1630+377   16 32 01.120      +37 37 50.00    $5.9\pm36$     $-105\pm110$   1.466
1634+706   16 34 28.884      +70 31 33.04    $318\pm23$     $444\pm80$     1.334
1700+518   17 01 24.800      +51 49 20.00    $480\pm36$     $374\pm125$    0.292
1715+535   17 16 35.5        +53 28 15       $3.2\pm60$     $-88\pm123$    1.920
2112+059   21 14 52.6        +06 07 42       $105\pm19$     $193\pm370$    0.466
2130+099   21 32 27.813      +10 08 19.46    $479\pm12$     $485\pm162$    0.062
2214+139   22 17 12.26       +14 14 20.9     $337\pm11$                    0.067
2233+134   22 36 07.680      +13 43 55.30    $80\pm68$      $-647\pm302$   0.325
2302+029   23 04 45.0        +03 11 46       $130\pm66$     $118\pm174$    1.044
2304+042   23 07 02.9        +04 32 57       $60\pm63$      $70\pm130$     0.042
2308+098   23 11 17.758      +10 08 15.46    $83\pm87$      $-539\pm420$   0.433
---------- ----------------- --------------- -------------- -------------- ----------
--- abstract: 'In a filtered measure space, a characterization of weights for which the trace inequality of a positive operator holds is given by the use of discrete Wolff’s potential. A refinement of the Carleson embedding theorem is also introduced. A Sawyer type characterization of weights for which a two-weight norm inequality for a generalized Doob’s maximal operator holds is established by an application of our Carleson embedding theorem. Moreover, a Hytönen-Pérez type one-weight norm estimate for Doob’s maximal operator is obtained by the use of our two-weight characterization.' address: - 'Graduate School of Mathematical Sciences, The University of Tokyo, Tokyo, 153-8914, Japan' - 'Graduate School of Mathematical Sciences, The University of Tokyo, Tokyo, 153-8914, Japan' author: - Hitoshi Tanaka - Yutaka Terasawa title: Positive operators and maximal operators in a filtered measure space --- [^1] [^2] Introduction {#sec1} ============ Weighted norm inequalities in Harmonic Analysis are an old subject whose systematic investigation was initiated by [@Mu], [@CoFe] and [@MuWh], among others. A classical reference in the field is [@GaRu]. Dyadic Harmonic Analysis has recently attracted renewed attention because of its wide applicability to Classical Harmonic Analysis, including weighted norm inequalities. The works of Petermichl [@Pe] and Nazarov-Treil-Volberg [@NaTrVo] were cornerstones, and their investigations have been continued by many authors. This subject is also old; early instances can be found in [@Sa1] and [@GaJo]. For more complete references, we refer to the bibliographies of [@NaTrVo] and [@La]. Two of the important topics in the intersection of these subjects are to get sharp one-weight estimates of usual operators in Classical Harmonic Analysis and to get necessary and sufficient conditions on weights for the boundedness of those operators in the two-weight setting. Interestingly, these two topics are closely related.
One way to attack these problems is a dyadic discretization technique. For the first problem, one of the important steps of a solution is getting a sharp one-weight estimate for a dyadic discretization of a singular integral operator, i.e., a generalized Haar shift operator. A sharp one-weight estimate of general singular integral operators, i.e., the $A_2$-conjecture, which had been an open problem in this field for a long time, was settled by Hytönen [@Hy3] along this line, and simpler proofs were found by several authors (cf. [@HyLaPe; @Le2] etc.). For (linear) positive operators, one example of which is a fractional integral operator, investigations along this line were carried out by several authors [@KeSa; @SaWh; @SaWhZh; @VeWh; @CaOrVe1; @CaOrVe2] and more recently in [@LaMoPeTo; @LaSaUr; @Ka1; @Ka2; @Tr]. For the Hardy-Littlewood maximal operator (including the fractional maximal operator), Sawyer [@Sa1] obtained a two-weight characterization by considering the dyadic Hardy-Littlewood (fractional) maximal operator. Recently, using similar techniques, sharp weighted estimates of the Hardy-Littlewood (fractional) maximal operator were established in the works [@Le1; @LaMoPeTo; @HyKa; @HyPe], which are continuations of the work of Buckley [@Bu]. For a survey of these developments, we refer to [@Per], [@Hy4] and [@Hy5]. On the other hand, Martingale Harmonic Analysis is a subject which has also been well studied. Doob’s maximal operator, which is a generalization of the dyadic Hardy-Littlewood maximal operator, and the martingale transform, which is an analogue of a singular integral in Classical Harmonic Analysis, are important tools in stochastic analysis. This field is called Martingale Harmonic Analysis and is well explained in the books by Dellacherie and Meyer [@DeMe], Long [@Lo] and Kazamaki [@Ka]. For Doob’s maximal operator, a one-weight estimate was first studied by Izumisawa and Kazamaki [@IzKa], assuming some regularity condition on $A_p$ weights.
Later, Jawerth [@Ja] found that the added property is superfluous (see Remark \[rem4.6\] below). For two-weight norm inequalities, the first study was done by Uchiyama [@Uc], who obtained a necessary and sufficient condition on weights for weak type $(p,p)$ inequalities to hold. Concerning strong $(p,q)$ type inequalities, Long and Peng [@LoPe] found necessary and sufficient conditions on weights, which are analogous to Sawyer’s condition for the boundedness of the Hardy-Littlewood maximal operator. There is also a recent work by Chen and Liu [@ChLi] on this topic. For positive operators, there seems to be no work done in a filtered probability space or in a filtered measure space, and we shall try to generalize the results on the weighted estimates for dyadic positive operators in the Euclidean space to a martingale setting. (For fractional integral operators in a martingale setting, there is a recent work by Nakai and Sadasue [@NaSa].) The study of boundedness properties of positive operators and maximal operators is closely related to the Carleson embedding (or measure) theorem, which is a martingale analogue of the Carleson embedding theorem of a Hardy space into a weighted Lebesgue space. In the dyadic setting in the Euclidean space, this coincides with the dyadic Carleson embedding theorem. The Carleson measure in a continuously filtered probability space was first introduced by Arai [@Ar], with an application to the corona theorem on Complex Brownian Spaces. It was rediscovered later by Long [@Lo] in a discrete case, with an application to a characterization of $BMO$ martingales. Since a dyadic martingale is a special martingale in many ways, it might be useful to see which parts of the theory of Dyadic Harmonic Analysis can be generalized to Martingale Harmonic Analysis, and which parts are special to Dyadic Harmonic Analysis. Our contributions can be regarded as such an attempt.
We also expect that such results have some applications to stochastic analysis and analysis on metric spaces. The purpose of this paper is to develop a theory of weights for positive operators and generalized Doob’s maximal operators in a filtered measure space. Martingale Harmonic Analysis in a filtered (infinite) measure space is treated in [@St; @Sc; @Hy2; @Ke; @HyKe]. In this contribution, we generalize the results on dyadic positive operators in the Euclidean space [@KeSa; @SaWh; @CaOrVe1; @CaOrVe2] to a filtered measure space. The generalization of the results in [@LaSaUr] or [@Tr] to our setting seems difficult, since they rely extensively on arguments involving inclusions of cubes. We also investigate a necessary and sufficient condition on weights for a two-weight norm inequality for the generalized Doob’s maximal operator in a filtered measure space, which is a generalization of both the dyadic Hardy-Littlewood maximal operator and the dyadic fractional maximal operator. To state our main theorem, let us introduce some notation and terminology, most of which is standard (cf. [@Hy2]). Let a triplet $(\Omega,{{\mathcal F}},\mu)$ be a measure space. Denote by ${{\mathcal F}}^0$ the collection of sets in ${{\mathcal F}}$ with finite measure. The measure space $(\Omega,{{\mathcal F}},\mu)$ is called $\sg$-finite if there exist sets $E_i\in{{\mathcal F}}^0$ such that $\bigcup_{i=0}^{\infty}E_i=\Omega$. In this paper all measure spaces are assumed to be $\sg$-finite. Let ${{\mathcal A}}\subset{{\mathcal F}}^0$ be an arbitrary subset of ${{\mathcal F}}^0$. An ${{\mathcal F}}$-measurable function $f:\,\Omega\to{{\mathbb R}}$ is called ${{\mathcal A}}$-integrable if it is integrable on all sets of ${{\mathcal A}}$, i.e., $$1_{E}f\in L^1({{\mathcal F}},\mu) \text{ for all } E\in{{\mathcal A}}.$$ Denote the collection of all such functions by $L_{{{\mathcal A}}}^1({{\mathcal F}},\mu)$.
If ${{\mathcal G}}\subset{{\mathcal F}}$ is another $\sg$-algebra, it is called a sub-$\sg$-algebra of ${{\mathcal F}}$. A function $g\in L_{{{\mathcal G}}^0}^1({{\mathcal G}},\mu)$ is called the conditional expectation of $f\in L_{{{\mathcal G}}^0}^1({{\mathcal F}},\mu)$ with respect to ${{\mathcal G}}$ if there holds $$\int_{G}f\,d\mu=\int_{G}g\,d\mu \text{ for all } G\in{{\mathcal G}}^0.$$ The conditional expectation of $f$ with respect to ${{\mathcal G}}$ will be denoted by $E[f|{{\mathcal G}}]$, which exists uniquely in $L_{{{\mathcal G}}^0}^1({{\mathcal G}},\mu)$ due to the $\sg$-finiteness of $(\Omega,{{\mathcal G}},\mu)$. A family of sub-$\sg$-algebras $({{\mathcal F}}_i)_{i\in{{\mathbb Z}}}$ is called a filtration of ${{\mathcal F}}$ if ${{\mathcal F}}_i\subset{{\mathcal F}}_j\subset{{\mathcal F}}$ whenever $i,j\in{{\mathbb Z}}$ and $i<j$. We call a quadruplet $(\Omega,{{\mathcal F}},\mu;({{\mathcal F}}_i)_{i\in{{\mathbb Z}}})$ a $\sg$-finite filtered measure space. We write $${{\mathcal L}}:= \bigcap_{i\in{{\mathbb Z}}}L_{{{\mathcal F}}_i^0}^1({{\mathcal F}},\mu).$$ Notice that $L_{{{\mathcal F}}_i^0}^1({{\mathcal F}},\mu) \supset L_{{{\mathcal F}}_j^0}^1({{\mathcal F}},\mu)$ whenever $i<j$. For a function $f\in{{\mathcal L}}$ we will denote $E[f|{{\mathcal F}}_i]$ by ${{\mathcal E}}_if$. By the tower rule of conditional expectations, the family of functions ${{\mathcal E}}_if\in L_{{{\mathcal F}}_i^0}^1({{\mathcal F}}_i,\mu)$ becomes a martingale (see Definition \[def2.1\] below). By a weight we mean a nonnegative function which belongs to ${{\mathcal L}}$ and, by convention, we will denote the set of all weights by ${{\mathcal L}}^{+}$. Let $\al_i$, $i\in{{\mathbb Z}}$, be a nonnegative bounded ${{\mathcal F}}_i$-measurable function and set $\al=(\al_i)$.
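For a concrete picture, on the dyadic filtration of $[0,1)$ with Lebesgue measure the conditional expectations ${{\mathcal E}}_if$ reduce to block averages. The following toy sketch (hypothetical code, discretizing $[0,1)$ into $2^n$ cells; here a larger `level` means a coarser $\sg$-algebra) also computes the pointwise supremum $\sup_i|{{\mathcal E}}_if|$, i.e. Doob's maximal function introduced below:

```python
import numpy as np

def cond_exp(f, level):
    """E_i f for the dyadic filtration: average f over blocks of 2**level
    cells. level = 0 is the finest sigma-algebra (identity on this grid)."""
    blocks = f.reshape(-1, 2 ** level)
    return np.repeat(blocks.mean(axis=1), 2 ** level)

def doob_maximal(f, n_levels):
    """Doob's maximal function f* = sup_i |E_i f| over the finite filtration."""
    return np.max([np.abs(cond_exp(f, i)) for i in range(n_levels + 1)], axis=0)

rng = np.random.default_rng(0)
f = rng.normal(size=16)      # a "function" on 16 dyadic cells
fstar = doob_maximal(f, 4)   # levels 0..4 (coarsest = global mean)
# Each E_i preserves the integral, and f* >= |f| pointwise (level 0 is f itself).
```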
For a function $f\in{{\mathcal L}}$ we define a positive operator $T_{\al}$ by $$T_{\al}f:=\sum_{i\in{{\mathbb Z}}}\al_i{{\mathcal E}}_if,$$ and define a generalized Doob’s maximal operator $M_{\al}$ by $$M_{\al}f:=\sup_{i\in{{\mathbb Z}}}\al_i|{{\mathcal E}}_if|.$$ When $\al=(1_{\Omega})$ this is Doob’s maximal operator and we will then write $M_{\al}f=:f^{*}$. In this paper we shall first investigate the characterization of the weights $w\in{{\mathcal L}}^{+}$ for which the trace inequality for the discrete positive operator $T_{\al}$ $$\label{1.1} \|T_{\al}f\|_{L^q(wd\mu)} \le C_{\al,w} \|f\|_{L^p(d\mu)}$$ holds with $0<q<\infty$ and $1<p<\infty$. In order to guess a sufficient condition for this inequality to hold, we argue heuristically in the following. We now assume that the inequality holds for $1<p\le q<\infty$. Then, since the conditional expectation operators are selfadjoint, by duality there holds $$\label{1.2} \|T_{\al}(gw)\|_{L^{p'}(d\mu)} \le C \|g\|_{L^{q'}(wd\mu)},$$ where $p'=\ds\frac{p}{p-1}$ is the conjugate exponent of $p$. Following a principle of weight theory due to Sawyer [@Sa2], to verify the inequality it might suffice to test it and its dual over the characteristic functions $1_{E}$. More precisely, one can expect that the conditions $$\label{1.3} \l(\int_{E}\l(\sum_{j\ge i}\al_j\r)^qw\,d\mu\r)^{\frac1q} \le C \mu(E)^{\frac1p}$$ and $$\label{1.4} \l(\int_{E}\l(\sum_{j\ge i}\al_j{{\mathcal E}}_jw\r)^{p'}\,d\mu\r)^{\frac1{p'}} \le C [wd\mu](E)^{\frac1{q'}}$$ for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}$, are sufficient for the inequality to hold. This fact was verified for positive operators associated with the dyadic lattices in ${{\mathbb R}}^n$ [@LaSaUr] (and also [@Tr]). For technical reasons, instead of this condition we must postulate the following stronger condition, and we shall then prove that it is sufficient for the inequality to hold (cf. [@KeSa; @SaWh] in the Euclidean space case).
> The function $\al_i$, $i\in{{\mathbb Z}}$, is nonnegative, bounded and ${{\mathcal F}}_i$-measurable, and $\ol{\al}_i\in{{\mathcal L}}^{+}$, where $\ds\ol{\al}_i:=\sum_{j\ge i}\al_j$. Moreover, $$\label{1.5} > {{\mathcal E}}_i\ol{\al}_i\approx\ol{\al}_i,$$ holds. \[thm1.1\] Let $1<p\le q<\infty$, let $\al$ satisfy the condition above and let $w\in{{\mathcal L}}^{+}$ be a weight. Then the following statements are equivalent: 1. There exists a constant $C_1>0$ such that $$\|T_{\al}f\|_{L^q(wd\mu)} \le C_1 \|f\|_{L^p(d\mu)};$$ 2. There exists a constant $C_2>0$ such that $$\l(\int_{E}\l(\sum_{j\ge i}\al_j{{\mathcal E}}_jw\r)^{p'}\,d\mu\r)^{\frac1{p'}} \le C_2 [wd\mu](E)^{\frac1{q'}}$$ for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}.$ Moreover, the least possible $C_1$ and $C_2$ are equivalent. In their papers [@CaOrVe1] and [@CaOrVe2], Cascante, Ortega and Verbitsky established the characterization of the weights $w$ for which the inequality holds for $0<q<p<\infty$ and $1<p<\infty$ in terms of discrete Wolff’s potential in the case where the discrete positive integral operators are associated with the dyadic cubes in ${{\mathbb R}}^n$. The following theorem is an extension of their results to a filtered measure space (cf. [@Ta; @TaGu] in the Euclidean space). Our condition corresponds to the "dyadic logarithmic bounded oscillation condition" introduced in [@CaOrVe1]. \[thm1.2\] Let $\al$ satisfy the condition above, let $w\in{{\mathcal L}}^{+}$ be a weight and consider the following statements: (a) There exists a constant $C_1>0$ such that $$\|T_{\al}f\|_{L^q(wd\mu)} \le C_1 \|f\|_{L^p(d\mu)};$$ (b) There exists a constant $C_2>0$ such that, for $\ds\frac1r=\frac1q-\frac1p$, $$\|({{\mathcal W}}_{\al}[w])^{\frac1{p'}}\|_{L^r(wd\mu)}\le C_2,$$ where $${{\mathcal W}}_{\al}[w] := \sum_{i\in{{\mathbb Z}}}\al_i\ol{\al}_i^{p'-1}({{\mathcal E}}_iw)^{p'-1}$$ is the discrete Wolff’s potential in a filtered measure space. Then, if $0<q<p<\infty$ and $1<p<\infty$, [(b)]{} implies [(a)]{} with $C_1\le CC_2$.
Conversely, if $1<q<p<\infty$, [(a)]{} implies [(b)]{} with $C_2\le CC_1$. \[rem1.3\] In [@CaOrVe2], in the case where the discrete positive integral operators are associated with the dyadic cubes in ${{\mathbb R}}^n$, Cascante, Ortega and Verbitsky proved the equivalence between (a) and (b) in the full range $0<q<p<\infty$ and $1<p<\infty$. Thanks to a powerful lemma (Lemma \[lem2.3\] below) and the condition above, the proofs of Theorems \[thm1.1\] and \[thm1.2\] can be reduced to the Carleson embedding (or measure) theorem. In Section \[sec3\] we shall investigate that theorem in the setting of a filtered measure space (see Theorems \[thm3.1\] and \[thm3.5\]). In Section \[sec4\], as an application of that theorem, we establish a Sawyer type characterization of weights for which the two-weight norm inequality for the generalized Doob’s maximal operator $M_{\al}$ holds (see Theorem \[thm4.1\]). In Section \[sec5\] we also establish a Hytönen-Pérez type one-weight norm estimate for Doob’s maximal operator $f^{*}$ (see Theorem \[5.1\]). Finally, we would like to comment on our weight class ${{\mathcal L}}^{+}$. \[rem1.4\] Let $(\Omega,{{\mathcal F}},\mu;({{\mathcal F}}_i)_{i\in{{\mathbb Z}}})$ be a $\sg$-finite filtered measure space. Then, it naturally contains a filtered probability space with a filtration indexed by ${{\mathbb N}}$ and a Euclidean space with a dyadic filtration. It also contains a doubling metric measure space with the dyadic lattice constructed by Hytönen and Kairema [@HyKa]. Our weight class ${{\mathcal L}}^{+}$ coincides with the set of all locally integrable weights in the case of the Euclidean space with Lebesgue measure and a dyadic filtration. Since the dyadic $A_p$ weights in Euclidean space are locally integrable, it seems natural to introduce the class ${{\mathcal L}}^{+}$. We could not find this class of weights in a filtered measure space in the literature.
We notice that the class $L_{{{\mathcal F}}^0}^1({{\mathcal F}},\mu)$ used in several places in the literature does not include functions which grow at spatial infinity in the Euclidean space with ${{\mathcal F}}$ the $\sigma$-algebra of Lebesgue measurable sets and $\mu$ the Lebesgue measure. The letter $C$ will be used for constants that may change from one occurrence to another. Constants with subscripts, such as $C_1$, $C_2$, do not change in different occurrences. By $A\approx B$ we mean that $c^{-1}B\le A\le cB$ with some positive constant $c$ independent of appropriate quantities. Proof of Theorems 1.1 and 1.2 {#sec2} ============================= In what follows we prove Theorems \[thm1.1\] and \[thm1.2\]. We first list three basic properties of the conditional expectation and the definition of a martingale. Let $(\Omega,{{\mathcal F}},\mu)$ be a $\sg$-finite measure space and ${{\mathcal G}}$ be a sub-$\sg$-algebra of ${{\mathcal F}}$. Then the following hold. 1. Let $f\in L_{{{\mathcal G}}^0}^1({{\mathcal F}},\mu)$ and $g$ be a ${{\mathcal G}}$-measurable function. Then the two conditions $fg\in L_{{{\mathcal G}}^0}^1({{\mathcal F}},\mu)$ and $gE[f|{{\mathcal G}}]\in L_{{{\mathcal G}}^0}^1({{\mathcal G}},\mu)$ are equivalent and, assuming one of these conditions, we have $$E[fg|{{\mathcal G}}]=gE[f|{{\mathcal G}}];$$ 2. Let $f_1,f_2\in L_{{{\mathcal G}}^0}^1({{\mathcal F}},\mu)$. Then the three conditions $$E[f_1|{{\mathcal G}}]f_2\in L_{{{\mathcal G}}^0}^1({{\mathcal G}},\mu),\quad E[f_1|{{\mathcal G}}]E[f_2|{{\mathcal G}}]\in L_{{{\mathcal G}}^0}^1({{\mathcal G}},\mu) \text{ and } f_1E[f_2|{{\mathcal G}}]\in L_{{{\mathcal G}}^0}^1({{\mathcal G}},\mu)$$ are all equivalent and, assuming one of these conditions, we have $$E[E[f_1|{{\mathcal G}}]f_2|{{\mathcal G}}] = E[f_1|{{\mathcal G}}]E[f_2|{{\mathcal G}}] = E[f_1E[f_2|{{\mathcal G}}]|{{\mathcal G}}].$$ 3.
Let $ {{\mathcal G}}_1 \subset {{\mathcal G}}_2~(\subset {{\mathcal F}}) $ be two sub-$\sigma$-algebras of ${{\mathcal F}}$ and let $ f \in L_{{{\mathcal G}}_2^0}^1({{\mathcal F}}, \mu). $ Then $$E[f | {{\mathcal G}}_1] = E[E[f | {{\mathcal G}}_2] | {{\mathcal G}}_1].$$ \(i) can be proved by an approximation by simple functions. The property (ii) means that conditional expectation operators are selfadjoint and can be easily deduced from (i). (iii) can be proved easily and is called the tower rule of conditional expectations. \[def2.1\] Let $(\Omega,{{\mathcal F}},\mu;({{\mathcal F}}_i)_{i\in{{\mathbb Z}}})$ be a $\sg$-finite filtered measure space. Let $(f_i)_{i\in{{\mathbb Z}}}$ be a sequence of ${{\mathcal F}}_i$-measurable functions. Then the sequence $(f_i)_{i\in{{\mathbb Z}}}$ is called a martingale if $f_i\in L_{{{\mathcal F}}_i^0}^1({{\mathcal F}}_i,\mu)$ and $f_i={{\mathcal E}}_if_j$ whenever $i<j$. We also introduce the notion of a stopping time for later use. \[def2.2\] Let $(\Omega,{{\mathcal F}},\mu;({{\mathcal F}}_i)_{i\in{{\mathbb Z}}})$ be a $\sg$-finite filtered measure space. Then a function $\tau:\,\Omega\rightarrow\{-\infty\}\cup{{\mathbb Z}}\cup\{+\infty\}$ is called a stopping time if for any $i\in{{\mathbb Z}}$ $$\{\tau\le i\} = \{\omega\in\Omega:\,\tau(\omega)\le i\} \in{{\mathcal F}}_i.$$ Let $f_i$, $i\in{{\mathbb Z}}$, be an ${{\mathcal F}}_i$-measurable function and let $\la\in{{\mathbb R}}$. Then, it is easy to see that $\tau:=\inf\{i:\,f_i>\la\}$ is a stopping time. All the stopping times we will use are of this type. Next we will state the principal lemma, which plays a key role in the proof of Theorems \[thm1.1\] and \[thm1.2\]. Principal lemma {#ssec2.1} --------------- The following is the principal lemma, which is an extension of [@CaOrVe1 Theorem 2.1] to a filtered measure space.
\[lem2.3\] Let $\al_i$, $i\in{{\mathbb Z}}$, be a nonnegative bounded ${{\mathcal F}}_i$-measurable function, let $s>1$ and $w\in{{\mathcal L}}^{+}$ be a weight. Then the following quantities are equivalent: $$\begin{aligned} {2} A_1&:=\int_{\Omega}\l(\sum_{i\in{{\mathbb Z}}}\al_i{{\mathcal E}}_iw\r)^s\,d\mu; \\ A_2&:=\int_{\Omega}\sum_{i\in{{\mathbb Z}}}\al_i{{\mathcal E}}_iw\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-1}\,d\mu; \\ A_3&:=\int_{\Omega}\l(\sup_{i\in{{\mathbb Z}}}{{\mathcal E}}_i(\ol{\al}_iw)\r)^s\,d\mu,\end{aligned}$$ where $\ds\ol{\al}_i:=\sum_{j\ge i}\al_j$. By a standard limiting argument, we may assume without loss of generality that there are only a finite number of $\al_i\ne 0$ and $w$ is bounded and summable. [**(i)**]{}   We prove $A_1\le CA_2$. We use the elementary inequality $$\label{2.1} \l(\sum_ia_i\r)^s \le s \sum_ia_i\l(\sum_{j\ge i}a_j\r)^{s-1},$$ where $(a_i)_{i\in{{\mathbb Z}}}$ is a sequence of summable nonnegative reals. First, we verify the simple case $1<s\le 2$. It follows from the elementary inequality that $$\begin{aligned} {2} \lefteqn{ \int_{\Omega}\l(\sum_i{{\mathcal E}}_i(\al_iw)\r)^s\,d\mu }\\ &\le s \sum_i \int_{\Omega} {{\mathcal E}}_i(\al_iw) \l(\sum_{j\ge i}{{\mathcal E}}_j(\al_jw)\r)^{s-1} \,d\mu \\ &= s \sum_i \int_{\Omega} {{\mathcal E}}_i{{\mathcal E}}_i(\al_iw) \l(\sum_{j\ge i}{{\mathcal E}}_j(\al_jw)\r)^{s-1} \,d\mu \\ &= s \sum_i \int_{\Omega} {{\mathcal E}}_i(\al_iw) {{\mathcal E}}_i\l[\l(\sum_{j\ge i}{{\mathcal E}}_j(\al_jw)\r)^{s-1}\r] \,d\mu,\end{aligned}$$ where we have used the fact that conditional expectation operators are selfadjoint. We notice that $s-1\le 1$. From Jensen’s inequality and the tower rule of conditional expectations, $$\begin{aligned} {2} &\le s \sum_i \int_{\Omega} {{\mathcal E}}_i(\al_iw) \l(\sum_{j\ge i}{{\mathcal E}}_i{{\mathcal E}}_j(\al_jw)\r)^{s-1} \,d\mu \\ &= s \int_{\Omega} \sum_i {{\mathcal E}}_i(\al_iw) \l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-1} \,d\mu.\end{aligned}$$ Next, we prove the case $s>2$.
Let $k=\lceil s-2 \rceil$ be the smallest integer not less than $s-2$. Applying the elementary inequality $(k+1)$ times, we have $$\begin{aligned} {2} A_1 &= \int_{\Omega}\l(\sum_i{{\mathcal E}}_i(\al_iw)\r)^s\,d\mu \\ &\le s(s-1)\cdots(s-k) \\ &\times \sum_{i_k\ge\cdots\ge i_1\ge i_0} \int_{\Omega} {{\mathcal E}}_{i_0}(\al_{i_0}w) {{\mathcal E}}_{i_1}(\al_{i_1}w) \cdots {{\mathcal E}}_{i_k}(\al_{i_k}w) \l(\sum_{j\ge i_k}{{\mathcal E}}_j(\al_jw)\r)^{s-k-1} \,d\mu.\end{aligned}$$ Since ${{\mathcal E}}_{i_0}(\al_{i_0}w) {{\mathcal E}}_{i_1}(\al_{i_1}w) \cdots {{\mathcal E}}_{i_k}(\al_{i_k}w)$ is an ${{\mathcal F}}_{i_k}$-measurable function, each integral on the right-hand side is equal to $$\begin{aligned} {2} \lefteqn{ \int_{\Omega} {{\mathcal E}}_{i_k} \l[ {{\mathcal E}}_{i_0}(\al_{i_0}w) {{\mathcal E}}_{i_1}(\al_{i_1}w) \cdots {{\mathcal E}}_{i_k}(\al_{i_k}w) \r] \l(\sum_{j\ge i_k}{{\mathcal E}}_j(\al_jw)\r)^{s-k-1} \,d\mu }\\ &= \int_{\Omega} {{\mathcal E}}_{i_0}(\al_{i_0}w) {{\mathcal E}}_{i_1}(\al_{i_1}w) \cdots {{\mathcal E}}_{i_k}(\al_{i_k}w) {{\mathcal E}}_{i_k} \l[\l(\sum_{j\ge i_k}{{\mathcal E}}_j(\al_jw)\r)^{s-k-1}\r] \,d\mu \\ &\le \int_{\Omega} {{\mathcal E}}_{i_0}(\al_{i_0}w) {{\mathcal E}}_{i_1}(\al_{i_1}w) \cdots {{\mathcal E}}_{i_k}(\al_{i_k}w) \l({{\mathcal E}}_{i_k}(\ol{\al}_{i_k}w)\r)^{s-k-1} \,d\mu,\end{aligned}$$ where we have used $0<s-k-1\le 1$.
This yields $$A_1 \le C \int_{\Omega} \l(\sum_i{{\mathcal E}}_i(\al_iw)\r)^k \l(\sum_i{{\mathcal E}}_i(\al_iw)\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-k-1}\r) \,d\mu.$$ Hölder’s inequality with exponents satisfying $\ds\frac{k}{s-1}+\frac{s-k-1}{s-1}=1$ gives $$\sum_i{{\mathcal E}}_i(\al_iw)\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-k-1} \le \l(\sum_i{{\mathcal E}}_i(\al_iw)\r)^{\frac{k}{s-1}} \l(\sum_i{{\mathcal E}}_i(\al_iw)\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-1}\r)^{\frac{s-k-1}{s-1}},$$ and, hence, $$A_1 \le C \int_{\Omega} \l(\sum_i{{\mathcal E}}_i(\al_iw)\r)^{\frac{sk}{s-1}} \l(\sum_i{{\mathcal E}}_i(\al_iw)\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-1}\r)^{\frac{s-k-1}{s-1}} \,d\mu.$$ Hölder’s inequality with the same exponents gives $$A_1 \le C \l\{\int_{\Omega}\l(\sum_i{{\mathcal E}}_i(\al_iw)\r)^s\,d\mu\r\}^{\frac{k}{s-1}} \l\{\int_{\Omega}\sum_i{{\mathcal E}}_i(\al_iw)\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-1}\,d\mu\r\}^{\frac{s-k-1}{s-1}}.$$ Thus, we obtain $$A_1\le CA_1^{\frac{k}{s-1}}A_2^{\frac{s-k-1}{s-1}}.$$ This implies $A_1\le CA_2$. [**(ii)**]{}   We prove $A_2\le CA_3$. It follows that $$\begin{aligned} {2} A_2 &= \int_{\Omega}\sum_i\al_i{{\mathcal E}}_iw\l({{\mathcal E}}_i(\ol{\al}_iw)\r)^{s-1}\,d\mu \\ &\le \int_{\Omega} \l\{\sum_i\al_i{{\mathcal E}}_iw\r\} \l\{\sup_j{{\mathcal E}}_j(\ol{\al}_jw)\r\}^{s-1} \,d\mu.\end{aligned}$$ Hölder’s inequality gives $$A_2 \le A_1^{\frac1s} A_3^{\frac1{s'}}.$$ Since we have already shown $A_1\le CA_2$, we obtain $$A_2 \le C A_2^{\frac1s} A_3^{\frac1{s'}},$$ and hence $A_2\le CA_3$. [**(iii)**]{}   We prove $A_3\le CA_1$. It follows that $$\begin{aligned} {2} A_3 &= \int_{\Omega}\l(\sup_i{{\mathcal E}}_i(\ol{\al}_iw)\r)^s\,d\mu \\ &\le \int_{\Omega} \l\{\sup_i{{\mathcal E}}_i\l(\sum_j\al_jw\r)\r\}^s\,d\mu \\ &\le C \int_{\Omega}\l(\sum_i\al_iw\r)^s\,d\mu \\ &= CA_1,\end{aligned}$$ where we have used $s>1$ and Doob’s maximal inequality. This completes the proof.
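The proof of step (i) rests on the elementary inequality $\l(\sum_i a_i\r)^s \le s\sum_i a_i\l(\sum_{j\ge i}a_j\r)^{s-1}$, which follows by telescoping $T_i^s-T_{i+1}^s\le s\,a_iT_i^{s-1}$ for the tail sums $T_i:=\sum_{j\ge i}a_j$. As an illustrative numerical sanity check (ours, not part of the paper; the helper names are our own), the following Python sketch tests the inequality on random nonnegative sequences.

```python
import random

def lhs(a, s):
    """Left-hand side: (sum_i a_i)^s."""
    return sum(a) ** s

def rhs(a, s):
    """Right-hand side: s * sum_i a_i * (sum_{j>=i} a_j)^(s-1)."""
    tails = [sum(a[i:]) for i in range(len(a))]  # tail sums T_i
    return s * sum(x * t ** (s - 1) for x, t in zip(a, tails))

def holds(a, s, tol=1e-12):
    """Check lhs <= rhs up to a floating-point tolerance."""
    return lhs(a, s) <= rhs(a, s) * (1 + tol)

random.seed(0)
assert all(
    holds([random.random() for _ in range(random.randint(1, 12))],
          1.0 + 4.0 * random.random())          # random s in (1, 5]
    for _ in range(2000)
)
print("inequality verified on 2000 random samples")
```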
Proof of Theorem 1.1 {#ssec2.2} -------------------- Without loss of generality we may assume that $f$ is a nonnegative function. By duality, (a) is equivalent to $$\label{2.2} \|T_{\al}(gw)\|_{L^{p'}(d\mu)} \le C \|g\|_{L^{q'}(wd\mu)}.$$ It follows from Lemma \[lem2.3\] that $$\begin{aligned} {2} \|T_{\al}(gw)\|_{L^{p'}(d\mu)}^{p'} &\approx \int_{\Omega} \sum_i\al_i{{\mathcal E}}_i(gw)\l({{\mathcal E}}_i(\ol{\al}_igw)\r)^{p'-1} \,d\mu \\ &\approx \int_{\Omega} \sum_i\al_i\ol{\al}_i^{p'-1}\l({{\mathcal E}}_i(gw)\r)^{p'} \,d\mu,\end{aligned}$$ where we have used the condition . We denote by ${{\mathcal E}}^w_if$ the conditional expectation of $f$ with respect to ${{\mathcal F}}_i$, taken with $wd\mu$ in place of $d\mu$. We now claim that $$\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw} = {{\mathcal E}}^w_ig.$$ Indeed, by a simple limiting argument, if necessary, we can assume that $g$ is a bounded function. Then, since $\ds\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}$ is an ${{\mathcal F}}_i$-measurable function and belongs to ${{\mathcal L}}^{+}$, for any $E\in{{\mathcal F}}_i$, $$\begin{aligned} {2} \lefteqn{ \int_{E}\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}w\,d\mu = \int_{E} {{\mathcal E}}_i\l(\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}\r)w\,d\mu }\\ &= \int_{E} \frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}{{\mathcal E}}_iw\,d\mu = \int_{E}{{\mathcal E}}_i(gw)\,d\mu = \int_{E}gw\,d\mu.\end{aligned}$$ This claim yields $$\|T_{\al}(gw)\|_{L^{p'}(d\mu)}^{p'} \approx \int_{\Omega}\sum_ia_i({{\mathcal E}}_iw)^{p'}({{\mathcal E}}^w_ig)^{p'}\,d\mu,$$ where $a_i:=\al_i\ol{\al}_i^{p'-1}$.
Thus, the inequality above is equivalent to $$\label{2.3} \l(\int_{\Omega}\sum_ia_i({{\mathcal E}}_iw)^{p'}({{\mathcal E}}^w_ig)^{p'}\,d\mu\r)^{\frac1{p'}} \le C \|g\|_{L^{q'}(wd\mu)}.$$ By the Carleson embedding theorem (Corollary \[cor3.4\] below), this is equivalent to the statement that there exists a constant $C>0$ such that $$\label{2.4} \int_{E}\sum_{j\ge i}a_j({{\mathcal E}}_jw)^{p'}\,d\mu \le C [wd\mu](E)^{\frac{p'}{q'}}$$ holds for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}$. From the condition and Lemma \[lem2.3\][^3] there holds $$\begin{aligned} {2} \lefteqn{ \int_{E}\sum_{j\ge i}a_j({{\mathcal E}}_jw)^{p'}\,d\mu }\\ &= \int_{E}\sum_{j\ge i}\al_j\ol{\al}_j^{p'-1}({{\mathcal E}}_jw)^{p'}\,d\mu \\ &\approx \int_{E} \sum_{j\ge i} \al_j{{\mathcal E}}_jw \l({{\mathcal E}}_j(\ol{\al}_jw)\r)^{p'-1} \,d\mu \\ &\approx \int_{E}\l(\sum_{j\ge i}\al_j{{\mathcal E}}_jw\r)^{p'}\,d\mu.\end{aligned}$$ Hence, this is equivalent to $$\l(\int_{E}\l(\sum_{j\ge i}\al_j{{\mathcal E}}_jw\r)^{p'}\,d\mu\r)^{\frac1{p'}} \le C [wd\mu](E)^{\frac1{q'}}.$$ This finishes the proof. $\square$ Proof of Theorem 1.2 {#ssec2.3} -------------------- We need another lemma. \[lem2.4\] Let $1<p<\infty$, $\al$ satisfy the condition and $w$ be a weight. Then $$\|T_{\al}f\|_{L^p(vd\mu)} \le C \|f\|_{L^p(d\mu)},$$ where $$v:=\frac{w}{{{\mathcal W}}_{\al}[w]^{p-1}} \text{ and } {{\mathcal W}}_{\al}[w] := \sum_i\al_i\ol{\al}_i^{p'-1}({{\mathcal E}}_iw)^{p'-1}.$$ We need only verify that the weight $v$ fulfills the condition with $q=p$. It suffices to show that there exists a constant $C>0$ such that $$\int_{E}\sum_{j\ge i}a_j({{\mathcal E}}_jv)^{p'}\,d\mu \le C [vd\mu](E)$$ holds for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}$, where $a_j=\al_j\ol{\al}_j^{p'-1}$.
By conditional Hölder’s inequality we see that $$({{\mathcal E}}_jv)^{p'} \le ({{\mathcal E}}_jw)^{p'-1} {{\mathcal E}}_j\l(\frac{w}{{{\mathcal W}}_{\al}[w]^p}\r).$$ This implies $$\begin{aligned} {2} \lefteqn{ \int_{E}\sum_{j\ge i}a_j({{\mathcal E}}_jv)^{p'}\,d\mu }\\ &\le \int_{E} \sum_{j\ge i} a_j({{\mathcal E}}_jw)^{p'-1} {{\mathcal E}}_j\l(\frac{w}{{{\mathcal W}}_{\al}[w]^p}\r) \,d\mu \\ &\approx \int_{E} \sum_{j\ge i} {{\mathcal E}}_j\l[a_j({{\mathcal E}}_jw)^{p'-1}\frac{w}{{{\mathcal W}}_{\al}[w]^p}\r] \,d\mu \\ &\approx \int_{E} \sum_{j\ge i} a_j({{\mathcal E}}_jw)^{p'-1}\frac{w}{{{\mathcal W}}_{\al}[w]^p} \,d\mu \\ &\le C \int_{E}\frac{w}{{{\mathcal W}}_{\al}[w]^{p-1}}\,d\mu =C [vd\mu](E),\end{aligned}$$ where we have used the condition . This is our desired inequality. Recall that in this case $0<q<p<\infty$, $1<p<\infty$ and $\ds\frac1r=\frac1q-\frac1p$. Hölder’s inequality gives $$\begin{aligned} {2} \|T_{\al}f\|_{L^q(wd\mu)} &= \|{{\mathcal W}}_{\al}[w]^{\frac1{p'}}\cdot {{\mathcal W}}_{\al}[w]^{-\frac1{p'}}T_{\al}f\|_{L^q(wd\mu)} \\ &\le \|{{\mathcal W}}_{\al}[w]^{\frac1{p'}}\|_{L^r(wd\mu)} \|T_{\al}f({{\mathcal W}}_{\al}[w])^{-\frac1{p'}}\|_{L^p(wd\mu)} \\ &\le C \|{{\mathcal W}}_{\al}[w]^{\frac1{p'}}\|_{L^r(wd\mu)} \|f\|_{L^p(wd\mu)},\end{aligned}$$ where in the last inequality we have used Lemma \[lem2.4\]. Recall now that $1<q<\infty$ and $\ds\frac1r=\frac1q-\frac1p$. Our standing assumption is that (a) holds. Then the statement (b) is also a consequence of the Carleson embedding theorem (Corollary \[cor3.6\] below). Carleson embedding theorem {#sec3} ========================== In this section we will discuss the well-known Carleson embedding theorem in a filtered measure space. The Carleson embedding theorem proved here is a refinement of several previous results which are generalizations of the classical Carleson embedding theorem. The related works we would like to mention are [@Ke; @Ar; @Lo; @BlJa; @Gra].
Kemppainen [@Ke] treats the Carleson embedding theorem in a $\sg$-finite filtered measure space. His result corresponds to the case $p=2$ of Corollary \[cor3.4\] below. Although his argument can be adapted to our situation, our assumptions about a filtered measure space are weaker than his, and we also treat not only weighted measures but also general measures for the Carleson measure. Treating the case $p\ne q$ is also new compared with his result. Related results which treat the vector-valued case are in [@Hy1; @HyMcPo; @HyKe]. Arai [@Ar] treats the Carleson measure in a continuously filtered probability space. His result corresponds to Corollaries \[cor3.3\] and \[cor3.4\]. While he only treats a probability space, we treat a $\sg$-finite measure space. Notice also that his Carleson measure definition uses arbitrary stopping times, whereas our definition uses only a special type of stopping time. Long [@Lo] treats the Carleson measure in a discretely filtered probability space and proves the Carleson embedding theorem. His result can be regarded as the discrete version of the result in Arai [@Ar]. He treats all $({{\mathcal F}}_i)_{i\in{{\mathbb Z}}}$-measurable functions in his formulation of the Carleson embedding theorem, and Theorem \[thm3.1\] is similar to it in this respect. The work of Blasco and Jarchow [@BlJa] investigates the Carleson measure on $\bar{D}$, i.e., a finite positive Borel measure $\mu$ on $\bar{D}$ such that, for given values of $0<p,q<\infty$, the embedding $J_{\mu}:\,H^p(D)\rightarrow L^q(\bar{D},\mu)$ exists. Here, $D$ is the unit ball in the plane and $\bar{D}$ is its closure. Our theorems in this section which treat different exponents $p, q$ are the analogues of their results in the setting of a filtered measure space. Theorem \[thm3.1\] corresponds to [@Gra Theorem 7.3.5.] which is the Carleson embedding theorem for functions in the half-space.
Theorem \[thm3.1\] (resp. [@Gra Theorem 7.3.5]) treats arbitrary measurable functions instead of martingales (resp. harmonic functions). Throughout this section we let $(\Omega,{{\mathcal F}},\mu;({{\mathcal F}}_i)_{i\in{{\mathbb Z}}})$ be a $\sg$-finite filtered measure space. We also let $f_i$, $i\in{{\mathbb Z}}$, be an ${{\mathcal F}}_i$-measurable nonnegative real-valued function and $\nu_i$ be a measure on ${{\mathcal F}}_i$. Define the maximal function of $f=(f_i)$ by $f^{*}:=\sup_if_i$. \[thm3.1\] Let $\theta\ge 1$ be arbitrarily taken and fixed. Then the following conditions are equivalent: 1. There exists a constant $C_0>0$ such that $$\sum_{j\ge i}\nu_j(E)\le C_0\mu(E)^{\theta}$$ for any $E\in{{\mathcal F}}_i$, $i\in{{\mathbb Z}};$ 2. For any $p\in(0,\infty)$ there exists a constant $C_p>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}f_i^{p\theta}\,d\nu_i\r)^{\frac1{p\theta}} \le C_p \|f^{*}\|_{L^p(d\mu)};$$ 3. For some $p_0\in(0,\infty)$ there exists a constant $C_{p_0}>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}f_i^{p_0\theta}\,d\nu_i\r)^{\frac1{p_0\theta}} \le C_{p_0} \|f^{*}\|_{L^{p_0}(d\mu)}.$$ Moreover, the least possible $C_0$ and $C_p$ enjoy $$C_p\le (C_0\theta)^{\frac1{p\theta}}, \qquad C_0\le C_p^{p\theta}.$$ By a standard limiting argument, we can replace ${{\mathbb Z}}$ by ${{\mathbb N}}$. Hence we consider $f_i$, $i\in{{\mathbb N}}$, and $\nu_i$, $i\in{{\mathbb N}}$. [**(i) $\Rightarrow$ (ii)**]{}   For $\la>0$ we set $F=\{f^{*}>\la\}$ and $F_i=\{f_i>\la\}$, $i\in{{\mathbb N}}$. Then we have $F=\ds\bigcup_iF_i$. We define a stopping time $\tau$ by $$\tau:=\inf\{i:\,f_i>\la\}.$$ Using this, we set $$G_i=\{\tau=i\}$$ for $i\in{{\mathbb N}}$. Then we easily see that the $G_i$’s are disjoint and that $F_i\subset\ds\bigcup_{j=0}^iG_j$. Hence, we have $F=\ds\bigcup_{i\in{{\mathbb N}}}G_i$. We define a measure space $(\Omega\times{{\mathbb N}},{{\mathcal G}},\nu)$ by the following: 1.
${{\mathcal G}}$ is generated by $(\{i\}\times{{\mathcal F}}_i)_{i\in{{\mathbb N}}}$; 2. $\nu|_{\{i\}\times{{\mathcal F}}_i}=\nu_i$. We can easily see that there exists a unique measure $\nu$ on ${{\mathcal G}}$ satisfying [(2)]{}. We regard $f=(f_i)$ as a function on $\Omega\times{{\mathbb N}}$. Then we see that $f$ is a ${{\mathcal G}}$-measurable function on $\Omega\times{{\mathbb N}}$. We estimate $\nu(\{f>\la\})$ from above by $\mu(\{f^{*}>\la\})$ as follows: $$\begin{aligned} {2} \nu(\{f>\la\}) &\le \sum_i\nu_i(F_i) \le \sum_i\nu_i\l(\bigcup_{0\le j\le i}G_j\r) = \sum_j\sum_{i\ge j}\nu_i(G_j) \le C_0 \sum_j\mu(G_j)^{\theta} \\ &\le C_0 \l(\sum_j\mu(G_j)\r)^{\theta} \le C_0 \mu\l(\bigcup_jG_j\r)^{\theta} \le C_0 \mu(F)^{\theta} =C_0 \mu(\{f^{*}>\la\})^{\theta},\end{aligned}$$ where we have used the assertion (i) and the fact that $\theta\ge 1$. Thus, we obtain $$\begin{aligned} {2} \lefteqn{ \frac1{p\theta} \int_{\Omega\times{{\mathbb N}}}f^{p\theta}\,d\nu = \int_{0}^{\infty} \la^{p\theta -1}\nu(\{f>\la\}) \,d\la }\\ &\le C_0 \int_{0}^{\infty} \la^{p\theta-1}\mu(\{f^{*}>\la\})^{\theta}\,d\la =C_0 \int_{0}^{\infty} \l(\la^p\mu(\{f^{*}>\la\})\r)^{\theta-1} \l(\la^{p-1}\mu(\{f^{*}>\la\})\r) \,d\la \\ &\le \frac{C_0}{p} \|f^{*}\|_{L^p(d\mu)}^{p\theta-p} \cdot p\int_{0}^{\infty}\la^{p-1}\mu(\{f^{*}>\la\})\,d\la =\frac{C_0}{p} \|f^{*}\|_{L^p(d\mu)}^{p\theta-p}\|f^{*}\|_{L^p(d\mu)}^p \\ &=\frac{C_0}{p} \|f^{*}\|_{L^p(d\mu)}^{p\theta},\end{aligned}$$ where we have used Chebyshev’s inequality. Taking the $\ds\frac1{p\theta}$th power on both sides, we obtain $$\l(\sum_i\int_{\Omega}f_i^{p\theta}\,d\nu_i\r)^{\frac1{p\theta}} \le (C_0\theta)^{\frac1{p\theta}} \|f^{*}\|_{L^p(d\mu)}.$$ Hence we obtain the assertion (ii). [**(ii) $\Rightarrow$ (iii)**]{}   Obvious. [**(iii) $\Rightarrow$ (i)**]{}   It suffices to take $\ds f_j := \begin{cases} 0\text{ for }j<i, \\ 1_{E}\text{ for }j\ge i. \end{cases}$\ This completes the proof. \[rem3.2\] Let $ \theta \geq 1.
$ Let the $\sg$-algebra $ {{\mathcal G}}$ on $ \Omega\times{{\mathbb Z}}$ be generated by $ (\{i\} \times {{\mathcal F}}_i)_{i \in {{\mathbb Z}}}. $ We call a measure $\nu$ which is defined on $(\Omega\times{{\mathbb Z}},{{\mathcal G}})$ a $\theta$-Carleson measure on $\Omega\times{{\mathbb Z}}$ if $ \nu_i := \nu|_{\{i\} \times {{\mathcal F}}_i} $, $ i \in {{\mathbb Z}}$, satisfy the condition (i) in Theorem \[thm3.1\]. We call the infimum of $C_0$ in (i) in Theorem \[thm3.1\] the $\theta$-Carleson measure norm of $ \nu $. It is easy to see that the condition (i) in Theorem \[thm3.1\] is equivalent to $$\begin{aligned} {1}\label{3.1} \sup_{\tau} \mu(\{\tau<\infty\})^{-\theta} \nu(\{(\omega,k)\in\Omega\times{{\mathbb Z}}:\,k\ge \tau(\omega)\}) <\infty,\end{aligned}$$ where $\tau$ runs through all stopping times for which $\mu(\{\tau<\infty\})$ is nonzero and finite, and that the $\theta$-Carleson measure norm of $\nu$ is equal to this quantity. The concept of a “$\theta$-Carleson measure” was first introduced in [@Ar; @Lo], using the condition above as the definition in the case $\theta = 1$. Thanks to Doob’s maximal inequality, we have the following corollary of the theorem. \[cor3.3\] Let $(f_i)_{i\in{{\mathbb Z}}}$ be a martingale on $(\Omega,{{\mathcal F}},\mu)$ and $\theta\ge 1$ be arbitrarily taken and fixed. Then the following conditions are equivalent: 1. There exists a constant $C_0>0$ such that $$\sum_{j\ge i}\nu_j(E)\le C_0\mu(E)^{\theta}$$ for any $E\in{{\mathcal F}}_i$, $i\in{{\mathbb Z}};$ 2. For any $p\in(1,\infty)$ there exists a constant $C_p>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}|f_i|^{p\theta}\,d\nu_i\r)^{\frac1{p\theta}} \le C_p \sup_{i\in{{\mathbb Z}}}\|f_i\|_{L^p(d\mu)};$$ 3.
For some $p_0\in(1,\infty)$ there exists a constant $C_{p_0}>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}|f_i|^{p_0\theta}\,d\nu_i\r)^{\frac1{p_0\theta}} \le C_{p_0} \sup_{i\in{{\mathbb Z}}}\|f_i\|_{L^{p_0}(d\mu)}.$$ Moreover, the least possible $C_0$ and $C_p$ enjoy $$C_p\le C(C_0\theta)^{\frac1{p\theta}}, \qquad C_0\le C_p^{p\theta}.$$ We have another corollary, where we only consider martingales consisting of the conditional expectations of a function. \[cor3.4\] Let $\theta\ge 1$ be arbitrarily taken and fixed. Then the following conditions are equivalent: 1. There exists a constant $C_0>0$ such that $$\sum_{j\ge i}\nu_j(E)\le C_0\mu(E)^{\theta}$$ for any $E\in{{\mathcal F}}_i$, $i\in{{\mathbb Z}};$ 2. For any $p\in(1,\infty)$ there exists a constant $C_p>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}|{{\mathcal E}}_if|^{p\theta}\,d\nu_i\r)^{\frac1{p\theta}} \le C_p \|f\|_{L^p(d\mu)};$$ 3. For some $p_0\in(1,\infty)$ there exists a constant $C_{p_0}>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}|{{\mathcal E}}_if|^{p_0\theta}\,d\nu_i\r)^{\frac1{p_0\theta}} \le C_{p_0} \|f\|_{L^{p_0}(d\mu)}.$$ Moreover, the least possible $C_0$ and $C_p$ enjoy $$C_p\le C(C_0\theta)^{\frac1{p\theta}}, \qquad C_0\le C_p^{p\theta}.$$ We next consider the Carleson embedding theorem for the case $q<p$. \[thm3.5\] Let $w_i$, $i\in{{\mathbb Z}}$, be an ${{\mathcal F}}_i$-measurable nonnegative real-valued function and $w\in{{\mathcal L}}^{+}$ be a weight. Suppose that $\ds\frac{w_i}{{{\mathcal E}}_iw}$, $i\in{{\mathbb Z}}$, belong to the class ${{\mathcal L}}^{+}$. Let $\theta>1$ be arbitrarily taken and fixed. Then the following conditions are equivalent: 1. There exists a constant $C_0>0$ such that $$\l\|\sum_{i\in{{\mathbb Z}}}\frac{w_i}{{{\mathcal E}}_iw}\r\|_{L^{\theta'}(wd\mu)} \leq C_0;$$ 2.
For any $q\in(0,\infty)$ there exists a constant $C_q>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}w_if_i^q\,d\mu\r)^{\frac1q} \le C_q \|f^{*}\|_{L^{q\theta}(wd\mu)};$$ 3. For some $q_0\in(0,\infty)$ there exists a constant $C_{q_0}>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}w_if_i^{q_0}\,d\mu\r)^{\frac1{q_0}} \le C_{q_0} \|f^{*}\|_{L^{q_0\theta}(wd\mu)}.$$ Moreover, the least possible $C_0$ and $C_q$ enjoy $$C_q\le C_0^{\frac1q}, \qquad C_0\le CC_q.$$ [**(i) $\Rightarrow$ (ii)**]{}   It follows that $$\sum_i\int_{\Omega}w_if_i^q\,d\mu = \sum_i\int_{\Omega}\frac{w_i}{{{\mathcal E}}_iw}f_i^q{{\mathcal E}}_iw\,d\mu.$$ By a simple limiting argument, if necessary, we can assume that $f_i$ is a bounded function. Then, since $\ds\frac{w_i}{{{\mathcal E}}_iw}f_i^q$ is an ${{\mathcal F}}_i$-measurable function and belongs to the class ${{\mathcal L}}^{+}$, $$\begin{aligned} {2} &= \sum_i\int_{\Omega}{{\mathcal E}}_i\l[\frac{w_i}{{{\mathcal E}}_iw}f_i^qw\r]\,d\mu \\ &= \sum_i\int_{\Omega}\frac{w_i}{{{\mathcal E}}_iw}f_i^qw\,d\mu \\ &\le \int_{\Omega} \l(\sum_i\frac{w_i}{{{\mathcal E}}_iw}\r) \l(\sup_jf_j\r)^q w\,d\mu.\end{aligned}$$ Hölder’s inequality with exponent $\theta$ gives $$\le \l\|\sum_i\frac{w_i}{{{\mathcal E}}_iw}\r\|_{L^{\theta'}(wd\mu)} \|f^{*}\|_{L^{q\theta}(wd\mu)}^q.$$ This yields the assertion (ii) with $\ds C_q\le C_0^{\frac1q}$. [**(ii) $\Rightarrow$ (iii)**]{}   Obvious.
[**(iii) $\Rightarrow$ (i)**]{}   It follows that, for nonnegative $g\in L^{\theta}(wd\mu)\cap L^{\infty}(wd\mu)$, $$\begin{aligned} {2} \lefteqn{ \int_{\Omega}\l(\sum_i\frac{w_i}{{{\mathcal E}}_iw}\r)gw\,d\mu = \sum_i\int_{\Omega}\frac{w_i}{{{\mathcal E}}_iw}gw\,d\mu }\\ &= \sum_i\int_{\Omega}{{\mathcal E}}_i\l(\frac{w_i}{{{\mathcal E}}_iw}\r)gw\,d\mu = \sum_i\int_{\Omega}\frac{w_i}{{{\mathcal E}}_iw}{{\mathcal E}}_i(gw)\,d\mu \\ &= \sum_i\int_{\Omega}w_i\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}\,d\mu = \sum_i\int_{\Omega}w_i\l\{\l(\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}\r)^{\frac1{q_0}}\r\}^{q_0}\,d\mu \\ &\le C_{q_0} \l\|\l(\sup_i\frac{{{\mathcal E}}_i(gw)}{{{\mathcal E}}_iw}\r)^{\frac1{q_0}}\r\|_{L^{q_0\theta}(wd\mu)}^{q_0} \le CC_{q_0} \|g\|_{L^{\theta}(wd\mu)},\end{aligned}$$ where we have used the assertion (iii) and Doob’s maximal inequality. By a limiting argument and duality we must have $$\l\|\sum_i\frac{w_i}{{{\mathcal E}}_iw}\r\|_{L^{\theta'}(wd\mu)} \le CC_{q_0}.$$ This yields $C_0\le CC_{q_0}$ and completes the proof. \[cor3.6\] Let $w_i$, $i\in{{\mathbb Z}}$, be an ${{\mathcal F}}_i$-measurable nonnegative real-valued function and $w\in{{\mathcal L}}^{+}$ be a weight. Suppose that $\ds\frac{w_i}{{{\mathcal E}}_iw}$, $i\in{{\mathbb Z}}$, belong to the class ${{\mathcal L}}^{+}$. Let $0<q<p<\infty$ and $1<p<\infty$. Then the following conditions are equivalent: 1. For $\ds\frac1r=\frac1q-\frac1p$ there exists a constant $C_1>0$ such that $$\l\|\l(\sum_{i\in{{\mathbb Z}}}\frac{w_i}{{{\mathcal E}}_iw}\r)^{\frac1q}\r\|_{L^r(wd\mu)} \leq C_1;$$ 2. There exists a constant $C_2>0$ such that $$\l(\sum_{i\in{{\mathbb Z}}}\int_{\Omega}w_i|{{\mathcal E}}_if|^q\,d\mu\r)^{\frac1q} \le C_2 \|f\|_{L^p(wd\mu)}.$$ Moreover, the least possible $C_1$ and $C_2$ are equivalent. 
It suffices to notice Doob’s maximal inequality and the fact that $$\frac1{(p/q)'}\frac1q = \l(1-\frac{q}{p}\r)\frac1q = \frac{p-q}{pq} =\frac1r.$$ Two-weight norm inequalities for generalized Doob’s maximal operator {#sec4} ==================================================================== In this section, by the use of the Carleson embedding theorem, we give a simple proof of the analogue of Sawyer’s theorem [@Sa1] characterizing the weights governing the two-weight strong-type norm inequality for the generalized Doob’s maximal operator $M_{\al}$. The following is the analogue of Sawyer’s theorem in a martingale setting. \[thm4.1\] Let $1<p\le q<\infty$, $\al_i$, $i\in{{\mathbb Z}}$, be a nonnegative bounded ${{\mathcal F}}_i$-measurable function and $u,v\in{{\mathcal L}}^{+}$ be weights. Then the following statements are equivalent: 1. There exists a constant $C_1>0$ such that $$\|M_{\al}f\|_{L^q(ud\mu)} \le C_1 \|f\|_{L^p(vd\mu)};$$ 2. If $\sg=v^{1-p'}\in{{\mathcal L}}^{+}$, then there exists a constant $C_2>0$ such that $$\l(\int_{E}\l(\sup_{j\ge i}\al_j{{\mathcal E}}_j\sg\r)^qu\,d\mu\r)^{\frac1q} \le C_2 [\sg d\mu](E)^{\frac1p}$$ for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}.$ Moreover, the least possible $C_1$ and $C_2$ are equivalent. We follow the argument in [@Cr]. The proof of (a) $\Rightarrow$ (b) follows at once if we substitute the test function $f=1_{E}\sg$. We shall prove the converse. Without loss of generality we may assume that $f$ is a nonnegative function. For $j\in{{\mathbb Z}}$ define a stopping time $$\tau_j:=\inf\{i:\,\al_i{{\mathcal E}}_if>2^j\}.$$ Clearly, $\tau_j\le\tau_{j+1}$.
If we let $$F_j := \{-\infty<\tau_j<\infty\} \setminus \{-\infty<\tau_{j+1}<\infty\},$$ then we see that the $F_j$’s are disjoint and $$\{M_{\al}f>0\} = \bigcup_jF_j.$$ We now set $$E_j^i := F_j\cap\{\tau_j=i\}.$$ It follows that the $E_j^i$’s are disjoint, $F_j=\ds\bigcup_iE_j^i$ and, if $E_j^i\ne\emptyset$, then $$M_{\al}f \approx \al_i{{\mathcal E}}_if \text{ on } E_j^i.$$ We now estimate as follows: $$\begin{aligned} {2} \int_{\Omega}(M_{\al}f)^qu\,d\mu &= \sum_{i,j} \int_{E_j^i}(M_{\al}f)^qu\,d\mu \le C \sum_{i,j} \int_{E_j^i}(\al_i{{\mathcal E}}_if)^qu\,d\mu \\ =C \sum_i \int_{\Omega} \l(\sum_j1_{E_j^i}(\al_i{{\mathcal E}}_i\sg)^q\r) \l(\frac{{{\mathcal E}}_if}{{{\mathcal E}}_i\sg}\r)^q u\,d\mu.\end{aligned}$$ Since $\ds \frac{{{\mathcal E}}_if}{{{\mathcal E}}_i\sg} = {{\mathcal E}}^{\sg}_i\l[\frac{f}{\sg}\r] $, we shall evaluate $$\sum_i \int_{\Omega} \l(\sum_j1_{E_j^i}(\al_i{{\mathcal E}}_i\sg)^q\r) \l({{\mathcal E}}^{\sg}_i\l[\frac{f}{\sg}\r]\r)^q u\,d\mu.$$ Applying the Carleson embedding theorem (Corollary \[cor3.4\]), we need only verify that there exists a constant $C>0$ such that $$\sum_{j\ge i} \int_{E} \l(\sum_k1_{E_k^j}(\al_j{{\mathcal E}}_j\sg)^q\r) u\,d\mu \le C [\sg d\mu](E)^{\frac{q}{p}}$$ holds for any $E\in{{\mathcal F}}_i$, $i\in{{\mathbb Z}}.$ The fact that the $E_k^j$’s are disjoint and the assertion (b) yield $$\int_{E} \sum_{j\ge i}\sum_k 1_{E_k^j}(\al_j{{\mathcal E}}_j\sg)^qu\,d\mu \le \int_{E}\l(\sup_{j\ge i}\al_j{{\mathcal E}}_j\sg\r)^qu\,d\mu \le C [\sg d\mu](E)^{\frac{q}{p}}.$$ This completes the proof. The following lemma was proved in [@Ch Theorem 1]. For the sake of completeness, the full proof is given here. \[lem4.2\] Let $1<p<\infty$, $w\in{{\mathcal L}}^{+}$ be a weight and $\sg=w^{1-p'}\in{{\mathcal L}}^{+}$. Then the following statements are equivalent: 1. There exists a constant $C_1>0$ such that $$\sup_{i\in{{\mathbb Z}}} \|({{\mathcal E}}_iw)({{\mathcal E}}_i\sg)^{p-1}\|_{L^{\infty}(d\mu)}<C_1;$$ 2.
There exists a constant $C_2>0$ such that $$\int_{E}\l(\sup_{j\ge i}{{\mathcal E}}_j\sg\r)^pw\,d\mu \le C_2^p [\sg d\mu](E)$$ for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}.$ Moreover, the least possible $C_1$ and $C_2$ enjoy $$C_1\le C_2^p, \qquad C_2\le CC_1^{\frac1{p-1}}.$$ Proof of (b) $\Rightarrow$ (a). It follows that, for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}$, $$\begin{aligned} {2} \int_{E}({{\mathcal E}}_iw)({{\mathcal E}}_i\sg)^p\,d\mu &= \int_{E}{{\mathcal E}}_i[({{\mathcal E}}_i\sg)^pw]\,d\mu = \int_{E}({{\mathcal E}}_i\sg)^pw\,d\mu \le \int_{E} \l(\sup_{j\ge i}{{\mathcal E}}_j\sg\r)^p w\,d\mu \\ &\le C_2^p \int_{E}\sg\,d\mu =C_2^p \int_{E}{{\mathcal E}}_i\sg\,d\mu.\end{aligned}$$ This implies $$({{\mathcal E}}_iw)({{\mathcal E}}_i\sg)^p \le C_2^p {{\mathcal E}}_i\sg$$ and, hence, yields (a) with $C_1\le C_2^p$. We now verify the converse (a) $\Rightarrow$ (b). By the assertion (a) we have $$({{\mathcal E}}_i\sg)^p \le C_1^{p'} ({{\mathcal E}}_iw)^{-p'} =C_1^{p'} \l({{\mathcal E}}^w_i[w^{-1}]\r)^{p'}.$$ This yields, for any $E\in{{\mathcal F}}_i^0$, $i\in{{\mathbb Z}}$, $$\begin{aligned} {2} \int_{E}\l(\sup_{j\ge i}{{\mathcal E}}_j\sg\r)^pw\,d\mu &\le C_1^{p'} \int_{E}1_{E}\l(\sup_{j\ge i}{{\mathcal E}}^w_j[w^{-1}]\r)^{p'}w\,d\mu = C_1^{p'} \int_{E}\l(\sup_{j\ge i}{{\mathcal E}}^w_j[1_{E}w^{-1}]\r)^{p'}w\,d\mu \\ &\le CC_1^{p'} \int_{E}w^{1-p'}\,d\mu,\end{aligned}$$ where we have used Doob’s maximal inequality. Thus, we obtain $$\int_{E}\l(\sup_{j\ge i}{{\mathcal E}}_j\sg\r)^pw\,d\mu \le CC_1^{p'}[\sg d\mu](E)$$ and have (b) with $\ds C_2\le CC_1^{\frac1{p-1}}$. This proves the lemma. \[cor4.3\] Let $1<p<\infty$, $\al_i$, $i\in{{\mathbb Z}}$, be a nonnegative bounded ${{\mathcal F}}_i$-measurable function, $u,v\in{{\mathcal L}}^{+}$ be weights and $\sg=v^{1-p'}\in{{\mathcal L}}^{+}$.
Then, the two-weight norm inequality $$\|M_{\al}f\|_{L^p(ud\mu)} \le C_1 \|f\|_{L^p(vd\mu)}$$ holds if and only if there exists a constant $C_2>0$ such that $${{\mathcal E}}_i\l[\l(\sup_{j\ge i}\al_j{{\mathcal E}}_j\sg\r)^pu\r] \le C_2^p{{\mathcal E}}_i\sg$$ for any $i\in{{\mathbb Z}}.$ Moreover, the least possible $C_1$ and $C_2$ are equivalent. \[rem4.4\] Long and Peng [@LoPe] showed that Corollary \[cor4.3\] holds for Doob’s maximal operator in a filtered probability space. (See also a recent work by Chen and Liu [@ChLi].) \[cor4.5\] Let $1<p<\infty$, $w\in{{\mathcal L}}^{+}$ be a weight and $\sg=w^{1-p'}\in{{\mathcal L}}^{+}$. Then, the one-weight norm inequality $$\|f^{*}\|_{L^p(wd\mu)} \le C_1 \|f\|_{L^p(wd\mu)}$$ holds if and only if $$\sup_{i\in{{\mathbb Z}}} \|({{\mathcal E}}_iw)({{\mathcal E}}_i\sg)^{p-1}\|_{L^{\infty}(d\mu)}<C_2<\infty.$$ Moreover, the least possible $C_1$ and $C_2$ enjoy $$C_2\le C_1^p, \qquad C_1\le CC_2^{\frac1{p-1}}.$$ \[rem4.6\] For $A_p$ weights satisfying an additional regularity condition, Izumisawa and Kazamaki [@IzKa] first proved that Corollary \[cor4.5\] holds in a filtered probability space. Jawerth [@Ja] found that the additional condition is superfluous. One-weight norm estimates of Hytönen-Pérez type for Doob’s maximal operator {#sec5} =========================================================================== In this section, by an application of Theorem \[thm4.1\], we will sharpen Corollary \[cor4.5\] following the argument due to Hytönen and Pérez (see [@Hy4; @HyPe]). Let $1<p<\infty$, $w\in{{\mathcal L}}^{+}$ be a weight and $\sg:=w^{1-p'}\in{{\mathcal L}}^{+}$.
We define $$[w]_{A_p} := \sup_{i\in{{\mathbb Z}}} \|({{\mathcal E}}_iw)({{\mathcal E}}_i\sg)^{p-1}\|_{L^{\infty}(d\mu)}$$ and define $$[w]_{A_{\infty}} := \sup_{i\in{{\mathbb Z}}} \l\|({{\mathcal E}}_iw)\exp\l(-{{\mathcal E}}_i(\log w)\r)\r\|_{L^{\infty}(d\mu)}.$$ Then, one sees that $[w]_{A_{\infty}}\le[w]_{A_p}$ for $1<p<\infty$ and, using the dominated convergence theorem for conditional expectations, one sees also that $({{\mathcal E}}_i\sg)^{p-1}$ converges a.e. to $\ds\exp\l(-{{\mathcal E}}_i(\log w)\r)$ as $p\rightarrow\infty$. Corollary \[cor4.5\] asserts that there exists a constant $C_p>0$ such that $$\|(\cdot)^{*}\|_{L^p(wd\mu)\rightarrow L^p(wd\mu)} \le C_p [w]_{A_p}^{\frac1{p-1}},$$ where $C_p$ depends on $p$ but not on $w$. Since $\ds[w]_{A_p}=[\sg]_{A_{p'}}^{p-1}$, we have $$\label{5.1} \|(\cdot)^{*}\|_{L^p(wd\mu)\rightarrow L^p(wd\mu)} \le C_p \l([w]_{A_p}[\sg]_{A_{p'}}\r)^{\frac1p}.$$ The following theorem sharpens \eqref{5.1}. \[thm5.1\] $$\|(\cdot)^{*}\|_{L^p(wd\mu)\rightarrow L^p(wd\mu)} \le C_p \l([w]_{A_p}[\sg]_{A_{\infty}}\r)^{\frac1p},$$ where $C_p$ depends on $p$ but not on $w$. Let $i\in{{\mathbb Z}}$ be arbitrarily chosen and fixed. By Theorem \[thm4.1\], we have to prove that, for any $E\in{{\mathcal F}}_i^0$, $$\|1_{E}\sup_{j\ge i}{{\mathcal E}}_j\sg\|_{L^p(wd\mu)}^p \le C [w]_{A_p}[\sg]_{A_{\infty}} [\sg d\mu](E).$$ Let us now apply the construction of principal sets as follows. Since we have $$\|1_{E}\sup_{j\ge i}{{\mathcal E}}_j\sg\|_{L^p(wd\mu)}^p = \|1_{E}\sup_{j\ge i}{{\mathcal E}}_j[1_{E}\sg]\|_{L^p(wd\mu)}^p,$$ we may assume that $E=P_0$ satisfies $P_0\in{{\mathcal F}}_i^0$, $\mu(P_0)>0$ and, for some $k\in{{\mathbb Z}}$, $$2^{k-1} 1_{P_0}<{{\mathcal E}}_i[1_{P_0}\sg]\le 2^k 1_{P_0}$$ by a simple dyadic decomposition argument. We write $\kp_1(P_0):=i$ and $\kp_2(P_0):=k$. We let ${{\mathcal P}}_1:=\{P_0\}$, which we call the first generation of principal sets.
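As a concrete sanity check of these two characteristics (an illustration of ours, not part of the paper's argument; all helper names are hypothetical), one can evaluate $[w]_{A_p}$ and $[w]_{A_{\infty}}$ for a weight on a finite dyadic grid, where the filtration is given by the dyadic blocks. For $p=2$ one has $\sg=w^{-1}$ and hence $[w]_{A_2}=[\sg]_{A_2}$, while Jensen's inequality gives $[w]_{A_{\infty}}\le[w]_{A_p}$:

```python
import math

def dyadic_blocks(n):
    """All dyadic blocks (start, size) of a grid of length n (a power of two)."""
    size = n
    while size >= 1:
        for start in range(0, n, size):
            yield start, size
        size //= 2

def avg(vals, start, size):
    return sum(vals[start:start + size]) / size

def a_p(w, p):
    """[w]_{A_p} over the dyadic filtration; sigma = w^{1-p'} = w^{-1/(p-1)}."""
    sigma = [x ** (-1.0 / (p - 1)) for x in w]
    return max(avg(w, s, m) * avg(sigma, s, m) ** (p - 1)
               for s, m in dyadic_blocks(len(w)))

def a_inf(w):
    """[w]_{A_infty}: sup over blocks of (E_i w) * exp(-E_i log w)."""
    logw = [math.log(x) for x in w]
    return max(avg(w, s, m) * math.exp(-avg(logw, s, m))
               for s, m in dyadic_blocks(len(w)))
```

On the toy weight $w=(1,2,4,8,16,8,2,1)$ with $p=2$, both claimed relations can be verified numerically.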
To get the second generation of principal sets we define a stopping time $$\tau_{P_0} := \inf\{j\ge i:\,{{\mathcal E}}_j[1_{P_0}\sg]>2^{k+1} 1_{P_0}\}.$$ We say that a set $P\subset P_0$ is a principal set with respect to $P_0$ if it satisfies $\mu(P)>0$ and there exist $j>i$ and $l>k+1$ such that $$P = \{2^{l-1} 1_{P_0} <{{\mathcal E}}_j[1_{\{\tau_{P_0}=j\}}\sg]\le 2^l 1_{P_0}\}.$$ Noticing that such $j$ and $l$ are unique, we write $\kp_1(P):=j$ and $\kp_2(P):=l$. We let ${{\mathcal P}}(P_0)$ be the set of all principal sets with respect to $P_0$ and let ${{\mathcal P}}_2:={{\mathcal P}}(P_0)$, which we call the second generation of principal sets. We now need to verify that $$\label{5.2} \mu(P_0)\le 2\mu(E(P_0))$$ where $$E(P_0) := P_0\cap\{\tau_{P_0}=\infty\} = P_0\setminus\bigcup_{P\in{{\mathcal P}}(P_0)}P.$$ Indeed, it follows from the weak-$(1,1)$ boundedness of Doob’s maximal operator that $$\mu(P_0\cap\{\tau_{P_0}<\infty\}) \le 2^{-k-1}\int_{P_0}\sg\,d\mu = 2^{-k-1}\int_{P_0}{{\mathcal E}}_i\sg\,d\mu \le 2^{-1}\mu(P_0).$$ This clearly implies \eqref{5.2}. The next generations are defined inductively, $${{\mathcal P}}_{n+1} := \bigcup_{P\in{{\mathcal P}}_n}{{\mathcal P}}(P),$$ and we define the collection of principal sets ${{\mathcal P}}$ by $${{\mathcal P}}:=\bigcup_{n=1}^{\infty}{{\mathcal P}}_n.$$ It is easy to see that the collection of principal sets ${{\mathcal P}}$ satisfies the following properties: (i) the sets $E(P)$, where $P\in{{\mathcal P}}$, are disjoint and $P_0=\ds\bigcup_{P\in{{\mathcal P}}}E(P)$; (ii) $P\in{{\mathcal F}}_{\kp_1(P)}$; (iii) $\mu(P)\le 2\mu(E(P))$; (iv) $2^{\kp_2(P)-1} <{{\mathcal E}}_{\kp_1(P)}\sg\le 2^{\kp_2(P)} $ on $P$; (v) $\ds\sup_{j\ge i}{{\mathcal E}}_j[1_{P}\sg] \le 2^{\kp_2(P)+1} $ on $E(P)$.
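To make the stopping-time step concrete, here is a minimal numerical sketch of ours (hypothetical helper names) of $\tau_{P_0}$ on a finite dyadic grid, taking $P_0$ to be the whole space and $i=0$. The weak-$(1,1)$ bound guarantees that at least half of $P_0$ never escapes, which is exactly $\mu(P_0)\le 2\mu(E(P_0))$:

```python
def level_avgs(vals, j):
    """E_j on a dyadic grid: replace each block of size len(vals) // 2**j by its mean."""
    n = len(vals)
    size = n >> j
    out = [0.0] * n
    for start in range(0, n, size):
        mean = sum(vals[start:start + size]) / size
        for t in range(start, start + size):
            out[t] = mean
    return out

def stopping_time(sigma, k):
    """tau(x) = first level j with (E_j sigma)(x) > 2**(k+1); None stands for infinity."""
    n = len(sigma)
    levels = n.bit_length() - 1
    tau = [None] * n
    for j in range(levels + 1):
        ej = level_avgs(sigma, j)
        for x in range(n):
            if tau[x] is None and ej[x] > 2 ** (k + 1):
                tau[x] = j
    return tau
```

For $\sg=(1,1,1,1,1,1,1,9)$ the average is $2$, so $k=1$, and only the two rightmost points escape (at level $j=2$), well within the guaranteed bound.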
We estimate as follows: $$\begin{aligned} {2} (*) &:= \|1_{P_0}\sup_{j\ge i}{{\mathcal E}}_j[1_{P_0}\sg]\|_{L^p(wd\mu)}^p \\ &= \sum_{P\in{{\mathcal P}}} \|1_{E(P)}\sup_{j\ge i}{{\mathcal E}}_j[1_{P_0}\sg]\|_{L^p(wd\mu)}^p \\ &\le 2^p \sum_{P\in{{\mathcal P}}} [wd\mu](E(P))2^{p\kp_2(P)} \le 2^p2^{p-1} \sum_{P\in{{\mathcal P}}} 2^{\kp_2(P)} [wd\mu](E(P))2^{(p-1)(\kp_2(P)-1)} \\ &\le 2^p2^{p-1} \sum_{P\in{{\mathcal P}}} 2^{\kp_2(P)} \int_{E(P)}\l({{\mathcal E}}_{\kp_1(P)}\sg\r)^{p-1}w\,d\mu \le 2^p2^{p-1} \sum_{P\in{{\mathcal P}}} 2^{\kp_2(P)} \int_{P}\l({{\mathcal E}}_{\kp_1(P)}\sg\r)^{p-1}w\,d\mu \\ &= 2^p2^{p-1} \sum_{P\in{{\mathcal P}}} 2^{\kp_2(P)} \int_{P}\l({{\mathcal E}}_{\kp_1(P)}\sg\r)^{p-1}{{\mathcal E}}_{\kp_1(P)}w\,d\mu,\end{aligned}$$ where in the last two steps we have used $E(P)\subset P$ and (ii). The definition of $A_p$ and (iii) yield $$(*) \le 4^p[w]_{A_p} \sum_{P\in{{\mathcal P}}} 2^{\kp_2(P)}\mu(E(P)).$$ Since the definition of $A_{\infty}$ and (iv) imply $$2^{\kp_2(P)} \le 2{{\mathcal E}}_{\kp_1(P)}\sg \le 2[\sg]_{A_{\infty}} \exp\l({{\mathcal E}}_{\kp_1(P)}\log\sg\r) \text{ on }E(P),$$ we have further that $$\begin{aligned} {2} (*) &\le 2\cdot 4^p[w]_{A_p}[\sg]_{A_{\infty}} \sum_{P\in{{\mathcal P}}} \int_{E(P)} \sup_{j\ge i} \exp\l({{\mathcal E}}_j[\log1_{P_0}\sg]\r) \,d\mu \\ &= 2\cdot 4^p[w]_{A_p}[\sg]_{A_{\infty}} \int_{P_0} \sup_{j\ge i} \exp\l({{\mathcal E}}_j[\log1_{P_0}\sg]\r) \,d\mu.\end{aligned}$$ For any $q>1$ we have $$\exp\l({{\mathcal E}}_j[\log1_{P_0}\sg]\r) = \l\{\exp\l({{\mathcal E}}_j[\log(1_{P_0}\sg)^{\frac1q}]\r)\r\}^q \le \l({{\mathcal E}}_j[(1_{P_0}\sg)^{\frac1q}]\r)^q$$ by Jensen’s inequality for conditional expectation. 
This yields $$\sup_{j\ge i} \exp\l({{\mathcal E}}_j[\log1_{P_0}\sg]\r) \le \l( \sup_{j\ge i} {{\mathcal E}}_j[(1_{P_0}\sg)^{\frac1q}] \r)^q.$$ Finally, Doob’s maximal inequality gives us that $$(*) \le 2\cdot 4^p[w]_{A_p}[\sg]_{A_{\infty}} (q')^q [\sg d\mu](P_0).$$ Letting $q\rightarrow\infty$, we obtain $$(*) \le 2\cdot 4^p[w]_{A_p}[\sg]_{A_{\infty}}e [\sg d\mu](P_0).$$ This completes the proof. [999]{} Arai H., *Measures of Carleson type on filtrated probability spaces and the corona theorem on complex Brownian spaces*, Proc. Amer. Math. Soc., **96** (1986), 643–647. Buckley S., *Estimates of operator norms on weighted spaces and reverse Jensen inequalities*, Trans. Amer. Math. Soc., **340** (1993), 253–272. Blasco O. and Jarchow H., *A note on Carleson measures on Hardy spaces*, Acta Sci. Math. (Szeged), **71** (2005), 371–389. Cascante C., Ortega J. and Verbitsky I. E., *Nonlinear potentials and two weight trace inequalities for general dyadic and radial kernels*, Indiana Univ. Math. J., **53** (2004), 845–882. —-, *On $L^p$-$L^q$ trace inequalities*, J. London Math. Soc. (2), **74** (2006), 497–511. Chang X., *Some Sawyer type inequalities for martingales*, Studia Math., **111** (1994), 187–194. Chen W. and Liu P., *Weighted inequalities for the generalized maximal operator in martingale spaces*, Chin. Ann. Math. Ser. B, **32** (2011), 781–792. Coifman R. R. and Fefferman C., *Weighted norm inequalities for maximal functions and singular integrals*, Studia Math., **51** (1974), 241–250. Cruz-Uribe D., *New proofs of two-weight norm inequalities for the maximal operator*, Georgian Math. J., **7** (2000), 33–42. Dellacherie C. and Meyer P. A., [*Probabilities and potential*]{}, North-Holland Publishing Co., Amsterdam, 1988. Garnett J. B. and Jones P. W., *BMO from dyadic BMO*, Pacific J. Math., **99** (1982), 351–371. García-Cuerva J. and Rubio de Francia J.
[*Weighted norm inequalities and related topics*]{}, North-Holland Mathematics Studies, [**116**]{}, Notas de Matemática \[Mathematical Notes\], [**104**]{}, North-Holland Publishing Co., Amsterdam, 1985. x+604 pp. Grafakos, L., [*Classical and Modern Fourier Analysis*]{}, Pearson Education, Inc. (2004). Hytönen T., *The vector-valued non-homogeneous $Tb$ theorem*, arXiv:0809.3097v3 (2009). —-, *Representation of singular integrals by dyadic operators, and the $A_2$ theorem*, arXiv:1108.5119 (2011). —-, *The sharp weighted bound for general Calderón-Zygmund operators*, Ann. of Math., **175** (2012), 1473–1506. —-, [*Martingales and Harmonic Analysis*]{}, lecture notes on the author’s webpage (2008). `http://www.helsinki.fi/~tpehyton/maha/maha-eng.pdf` —-, [*Weighted norm inequalities*]{}, lecture notes of a course at the University of Helsinki, Winter 2011. `http://wiki-app.it.helsinki.fi/download/attachments/64424417/weighted.pdf` Hytönen T. and Kairema A., *Systems of dyadic cubes in a doubling metric space*, Colloq. Math., **126** (2012), 1–33. Hytönen T. and Kemppainen M., *On the relation of Carleson’s embedding and the maximal theorem in the context of Banach space geometry*, Math. Scand., **109** (2011), 269–284. Hytönen T., Lacey M. T. and Pérez C., *Non-probabilistic proof of the $A_2$ theorem, and sharp weighted bounds for the $q$-variation of singular integrals*, arXiv:1202.2229 (2012). Hytönen T., McIntosh A. and Portal P., *Kato’s square root problem in Banach spaces*, J. Funct. Anal., **254** (2008), 675–726. Hytönen T. and Pérez C., *Sharp weighted bounds involving $A_\infty$*, arXiv:1103.5562 (2011). Izumisawa M. and Kazamaki N., *Weighted norm inequalities for martingales*, Tôhoku Math. J. (2), **29** (1977), 115–124. Jawerth B., *Weighted inequalities for maximal operators: linearization, localization and factorization*, Amer. J. Math., **108** (1986), 361–414.
Kairema A., *Two-weight norm inequalities for potential type and maximal operators in a metric space*, Publ. Mat., **57** (2013), 3-56. —-, *Sharp weighted bounds for fractional integral operators in a space of homogeneous type*, arXiv:1202.6587 (2012), to appear in Math. Scand.. Kazamaki N., [*Continuous exponential martingales and BMO*]{}, Lecture Notes in Mathematics, **1579** Springer-Verlag, Berlin, 1994. Kemppainen M., *On the Rademacher Maximal Function*, Licentiate’s thesis, University of Helsinki, Department of Mathematics and Statistics (2010). Kerman R. and Sawyer E., *The trace inequality and eigenvalue estimates for Schrödinger operators*, Ann. Inst. Fourier (Grenoble), **36** (1986), 207–228. Lacey M., *The linear bound in $A_2$ for Calderón-Zygmund operators: a survey*, arXiv:1011.5784 (2010). Lacey M., Moen K., Pérez C. and Torres R. H., *Sharp weighted bounds for fractional integral operators*, J. Funct. Anal. **259** (2010), 1073–1097. Lacey M., Sawyer E. and Uriarte-Tuero I., *Two weight inequalities for discrete positive operators*, arXiv:0911.3437 (2009). Lerner A., *An elementary approach to several results on the Hardy-Littlewood maximal operator*, Proc. Amer. Math. Soc., **136** (2008), 2829–2833. —-, *A simple proof of the $A_2$ conjecture*, Int. Math. Res. Not. (2012); doi: 10.1093/imrn/rns145. Long R.-L., [*Martingale spaces and inequalities*]{}, Peking University Press, Beijing; Friedr. Vieweg & Sohn, Braunschweig, 1993. Long R.-L. and Peng L.-Z., *$(p, q)$ maximal inequalities with two weights in martingale theory*, (Chinese) Acta Math. Sinica, **29** (1986), 253–258. Muckenhoupt B., *Weighted norm inequalities for the Hardy maximal function*, Trans. Amer. Math. Soc., **165** (1972), 207–226. Muckenhoupt B. and Wheeden R., *Weighted norm inequalities for fractional integrals*, Trans. Amer. Math. Soc. **192** (1974), 261–274. Nazarov F., Treil S. 
and Volberg A., *The $Tb$-theorem on non-homogeneous spaces*, Acta Math., **190** (2003), 151–239. Nakai E. and Sadasue G., *Martingale Morrey-Campanato spaces and fractional integrals*, J. Funct. Spaces Appl. (2012), Article ID 673929, 29 p. Pérez C., [*A Course on Singular Integrals and Weights*]{}, Advanced Courses in Mathematics CRM Barcelona, Birkhäuser Verlag, 2011. Petermichl S., *Dyadic shifts and a logarithmic estimate for Hankel operators with matrix symbol*, C. R. Acad. Sci. Paris Sér. I Math., **330** (2000), 455–460. Sawyer E., *A characterization of a two-weight norm inequality for maximal operators*, Studia Math., **75** (1982), 1–11. —-, *A characterization of two weight norm inequalities for fractional and Poisson integrals*, Trans. Amer. Math. Soc., **308** (1988), 533–545. Sawyer E. and Wheeden R. L., *Weighted inequalities for fractional integrals on Euclidean and homogeneous spaces*, Amer. J. Math., **114** (1992), 813–874. Sawyer E., Wheeden R. L. and Zhao S., *Weighted norm inequalities for operators of potential type and fractional maximal functions*, Potential Anal., **5** (1996), 523–580. Schilling R., [*Measures, integrals and martingales*]{}, Cambridge University Press, New York, 2005. Stroock D. W., [*Probability theory. An analytic view*]{}, Second edition, Cambridge University Press, Cambridge, 2011. Uchiyama A., *Weight functions on probability spaces*, Tôhoku Math. J. (2), **30** (1978), 463–470. Tanaka H., *Two weighted norm inequalities for potential type integral operators in the case $p>q>0$ and $p>1$*, submitted. Tanaka H. and Gunawan H., *The local trace inequality for potential type integral operators*, to appear in Potential Analysis. Treil S., *A remark on two weight estimates for positive dyadic operators*, arXiv:1201.1455 (2012). Verbitsky I. E. and Wheeden R. L., *Weighted norm inequalities for integral operators*, Trans. Amer. Math. Soc., **350** (1998), 3371–3391.
[^1]: The first author is supported by the Global COE program at the Graduate School of Mathematical Sciences, the University of Tokyo, by Grant-in-Aid for Scientific Research (C) (No. 23540187), the Japan Society for the Promotion of Science, and was supported by the Fūjyukai Foundation. [^2]: The second author is a Research Fellow of the Japan Society for the Promotion of Science. [^3]: We let $\ds \al_j := \begin{cases} 0\text{ for }j<i, \\ 1_{E}\al_j\text{ for }j\ge i. \end{cases} $
--- abstract: | We present a short proof of the Jordan-Hölder theorem with uniqueness for semimodular semilattices: Given two maximal chains in a semimodular semilattice of finite height, they both have the same length. Moreover, there is a unique bijection that takes the prime intervals of the first chain to the prime intervals of the second chain such that the interval and its image are up-and-down projective. The theorem generalizes the classical result that all composition series of a finite group have the same length and isomorphic factors. Moreover, it shows that the isomorphism is in some sense unique. author: - 'Pavel Paták[^1]' bibliography: - 'jh.bib' title: 'Jordan-Hölder with uniqueness for semimodular semilattices' --- Introduction ============ The classical Jordan-Hölder theorem [@Jordan1869; @Jordan1870; @Hoelder1889; @Baumslag2006] tells us that any two composition series[^2] of a finite group have the same length and, up to a permutation, isomorphic factors. It is an essential structural result, which generalizes the fundamental theorem of arithmetic and allows us to decompose each finite group into uniquely determined basic building blocks, called simple groups. Thus in order to classify all finite groups it suffices to classify all simple groups and the ways in which they can be composed. One can easily extend the Jordan-Hölder theorem to structures that extend groups, e.g. rings, modules, vector spaces or, more generally, groups with operators. Given the importance of the theorem it is natural to ask whether the underlying group structure is needed for the theorem to hold. It was already clear to Dedekind that the first part of the statement is not specific to the lattice of subnormal groups. He observed that any two maximal chains in an arbitrary finite (semi)modular lattice[^3] have the same length [@Dedekind1900]. However, a lattice-theoretic generalization of the whole statement was only proven by Grätzer and Nation in 2010 [@Graetzer2010].
To do so, they introduced the concept of projectivity, a lattice-theoretic analogue of the second isomorphism theorem in groups. They showed that for any two maximal chains in a finite semimodular lattice there is a bijection that takes the prime intervals of one chain to the prime intervals of the second chain such that the interval and its image are up-and-down projective. In that form the range of applications extends from groups to a much broader class of structures, which for example includes matroids[^4] and antimatroids. The statement has been further generalized to semimodular posets [@Ronse2018]. In 2011 Czédli and Schmidt [@Czedli2011] established the strongest form of the theorem for semimodular lattices by showing that for them the permutation is unique (Theorem \[thm:jh\]). In their proof Czédli and Schmidt compare the two chains by looking at the join semilattice generated by them and showing that it is planar. Then they use the theory of planar semimodular semilattices to deduce the result. Eventually, these ideas led to a developed theory of planar semimodular lattices, which is very valuable by itself [@Graetzer2014 Chapter 3]. However, for the proof of the uniqueness part of the Jordan-Hölder theorem the theory can be bypassed, which shortens the proof significantly. Here we present a short, distilled proof of the Jordan-Hölder theorem together with its uniqueness part, based on the original inductive approach of Grätzer and Nation. The paper is organized as follows. First we recall some basic notions. Then we introduce the concept of projectivity in a (semi)lattice, show several of its properties and compare it to the second isomorphism theorem in groups. After that we present the inductive proof of Theorem \[thm:jh\]. Preliminaries ============= We recall the basic notions for readers not familiar with (semimodular) lattices.
By a lattice we mean a poset $L$ where every two elements $a,b\in L$ have a greatest lower bound, called *meet* and denoted $a\wedge b$, and a least upper bound, called *join* and denoted $a\vee b$. The operations $\vee$ and $\wedge$ are commutative, associative, idempotent and satisfy the following absorption laws: $(a\wedge b)\vee a =a$; $(a\vee b)\wedge a =a$. A poset $S$ is called a *join semilattice* if the least upper bound $a\vee b$ exists for every two elements $a,b\in S$. We write $a\preceq b$ iff $a\leq b$ and there is no $c$ with $a<c<b$. If $a\preceq b$ and $a\neq b$, we write $a\prec b$. An interval $[a,b]$ is called *prime* if $a\prec b$. A join semilattice $L$ is called *semimodular* if $a\preceq b$ implies $a\vee c\preceq b\vee c$ for all $a,b,c\in L$. A *chain* in a poset $P$ is a linearly ordered subset of $P$. A poset $P$ is of *finite height* if all its chains are finite. Projectivity ============ If $L$ is a lattice, we say that an interval $[a,b]$ is up-projective to $[x,y]$, written $[a,b]\nearrow [x,y]$, if and only if $b\wedge x=a$ and $b\vee x=y$. Equivalently we can write $[x,y]\searrow [a,b]$. If $[a,b]$ and $[c,d]$ are two intervals and there is $[x,y]$ such that $[a,b]\nearrow [x,y]\searrow [c,d]$, we say that $[a,b]$ is *up-and-down projective to $[c,d]$* and write $[a,b]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[c,d]$, see Figure \[fig:perspectives\]. (Figure \[fig:perspectives\] shows two Hasse diagrams: one with $[a,b]\nearrow[x,y]$, $[x,y]\searrow[c,d]$, hence $[a,b]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[c,d]$; and one with $[a,b]\nearrow[c,d]\nearrow[e,f]$.) Let us now compare this notion to the second isomorphism theorem for groups. If $L$ is the lattice of subgroups of some group, $[a,b]\nearrow[x,y]$ and $x$ is normal in $y$, the second isomorphism theorem tells us that $a$ is normal in $b$ and the quotient groups $b/a$ and $y/x$ are isomorphic.
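These definitions can be checked mechanically on small finite examples. The sketch below (our own code, not from the paper; `leq` is the order relation) tests join-semimodularity: the divisor lattice of $12$ under divisibility is semimodular, while the pentagon $N_5$ is not.

```python
from itertools import product

def covers(elems, leq, a, b):
    """b covers a: a < b with no element strictly in between."""
    return a != b and leq(a, b) and not any(
        c not in (a, b) and leq(a, c) and leq(c, b) for c in elems)

def join(elems, leq, a, b):
    """Least upper bound of a and b (assumed to exist)."""
    ubs = [x for x in elems if leq(a, x) and leq(b, x)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def is_semimodular(elems, leq):
    """a covered-by-or-equal-to b implies (a v c) covered-by-or-equal-to (b v c)."""
    def preceq(a, b):
        return a == b or covers(elems, leq, a, b)
    return all(preceq(join(elems, leq, a, c), join(elems, leq, b, c))
               for a, b, c in product(elems, repeat=3) if preceq(a, b))
```

In $N_5$ the cover $0\preceq b$ joined with $a$ yields the pair $a$, $1$, which is not a cover because $c$ lies strictly between, so the check fails there.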
We are going to use the following two properties of projectivity. \[lem:primeperspectivity\] Let $L$ be a lattice, $a,b, x,y \in L$ and $a \prec b$. Then $[a,b]\nearrow [x,y]$ if and only if $x\neq y$, $a\vee x = x$ and $b\vee x =y$. If $[a,b]\nearrow [x,y]$, then $b\vee x =y$ by definition and $a\vee x=(b\wedge x)\vee x = x$. The relation $a\prec b$ implies $a=b\wedge x\neq b$, that is $b\nleq x$, and consequently $y=b\vee x\neq x$. If $x\neq y$, $a\vee x = x$, $b\vee x =y$, and $a\prec b$, then $a$ is a lower bound of $\{b,x\}$. Therefore, $a\leq b\wedge x\leq b$. By $a\prec b$ there is no $c\in L$ with $a<c<b$, so either $b\wedge x = b$, leading to the forbidden $x=b\vee x=y$, or $b\wedge x=a$, which together with $b\vee x=y$ shows $[a,b]\nearrow [x,y]$. Observe that for $a\prec b$, Lemma \[lem:primeperspectivity\] characterizes projectivity by joins only. Hence, in the case $a\prec b$ and $c\prec d$, it allows us to extend the definitions of $[a,b]\nearrow[c,d]$ and $[a,b]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[c,d]$ to join semilattices. \[obs:perspTransitivity\] Let $L$ be a semimodular join semilattice. If $a\prec b$, then $[a,b]\nearrow[c,d]$, $[c,d]\nearrow[e,f]$ implies $[a,b]\nearrow[e,f]$. First of all, the semimodularity and $c\neq d$ imply $c\prec d$. Thus we can use the semilattice definition for $[c,d]\nearrow [e,f]$ as well. Since $[c,d]\nearrow [e,f]$ we have $e\neq f$ and $e=c\vee e$, which implies $a\vee e=e$ and $b\vee e=f$, as required. Jordan-Hölder theorem ===================== Finally we can state the main theorem. \[thm:jh\] Let $L$ be an upper semimodular join semilattice of finite height. Let $0=c_0\prec c_1\prec \ldots \prec c_n=1$ and $0=d_0\prec d_1\prec \ldots \prec d_m=1$ be two maximal chains in $L$. Then 1. $m=n$. 2.
There is a unique permutation $\pi\in S_n$ such that\[it:uniquePermutation\] $[c_{i-1},c_i]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[d_{\pi(i)-1},d_{\pi(i)}]$ for all $i=1,2,\ldots, n$. 3. If $[c_{i-1},c_i]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[d_{j-1},d_{j}]$, then $j\leq \pi(i)$, where $\pi$ is the same as in \[it:uniquePermutation\].\[it:uniqueness\] We proceed by induction on the height of $L$. The statement is obviously true for height $0$ or $1$. So let the height of $L$ be larger and let $l$ be the largest integer such that $c_1\nleq d_l$. Clearly $l<m$. We set $d'_j:=c_1\vee d_j$ for all $j=0, \ldots, m$. Then $d'_0=c_1$, $d'_l=d'_{l+1}=d_{l+1}$ and $d'_j = d_j$ for $j\geq l+1$. Furthermore, we define $e_0=d_1$ and $e_i=d'_i$ for $i>0$, see Fig. \[fig:induction\]. ![Illustration of the induction steps.[]{data-label="fig:induction"}](induction) The “red” chain $c_1=d'_0\preceq d'_1\preceq\ldots\preceq d'_l = d'_{l+1}\preceq\ldots \preceq d'_m=1$ and the “blue” chain $d_1=e_0\preceq e_1\preceq\ldots\preceq e_l=e_{l+1}\preceq \ldots \preceq e_m=1$ obviously have the same length and, by semimodularity, they are maximal. By induction applied in $[c_1,1]$ the length of the red chain equals the length of $c_1\prec c_2\prec\ldots\prec c_n=1$. By induction in $[d_1,1]$, the length of the blue chain is the same as the length of $d_1\prec d_2\prec\ldots\prec d_m$. Thus $m=n$, and the first part of the theorem is proven. Consequently $d_0'\prec d_1'\prec \ldots \prec d_l'=d_{l+1}'\prec d_{l+2}' \prec\ldots \prec d_m'$. Let us now find $\pi$. By induction in $[c_1,1]$, there is a unique bijection $\sigma\colon\{2,3,\ldots, n\}\to \{1,2,\ldots, l, l+2, l+3,\ldots, n\}$ such that $[c_{i-1},c_i]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[d'_{\sigma(i)-1},d'_{\sigma(i)}]$. By construction, $[d'_{\sigma(i)-1},d'_{\sigma(i)}]\searrow[d_{\sigma(i)-1},d_{\sigma(i)}]$ and $[c_0,c_1]\nearrow [d_l,d_{l+1}]$.
Therefore, Observation \[obs:perspTransitivity\] implies $[c_{i-1},c_i]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[d_{\pi(i)-1},d_{\pi (i)}]$ for $i=1,\ldots, n$ if we set $$\pi(i):=\begin{cases} \sigma(i) & \text{if $i>1$,}\\ l+1 & \text{if $i=1$.} \end{cases}$$ We now prove that $[c_{i-1},c_i]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[d_{j-1},d_j]$ implies $j\leq \pi(i)$. This clearly implies the uniqueness of $\pi$. So let $[c_{i-1},c_i]\nearrow [x,y]\searrow [d_{j-1},d_j]$ for some $x,y\in L$. By Lemma \[lem:primeperspectivity\], $x\neq y$. There are two cases: 1. $i=1$. Then $x\neq y=x\vee c_1$ implies $c_1\nleq x$. Thus $d_{j-1}\leq x$ gives $c_1\nleq d_{j-1}$, so $j-1\leq l$ by the definition of $l$. Hence $j\leq l+1 = \pi(1)$. 2. $i>1$. Then $[c_{i-1},c_i]\nearrow [x,y]$ implies $y>x\geq c_{i-1}\geq c_1$, see the top right part of Figure \[fig:induction\]. From Lemma \[lem:primeperspectivity\], we immediately obtain $x,y\in [c_1,1]$, $x\neq y$, $x\vee d'_{j-1} = x\vee (c_1\vee d_{j-1}) = (x\vee c_1) \vee d_{j-1} = x\vee d_{j-1} = x$; and $x\vee d'_j = x\vee (c_1\vee d_j) = (x\vee c_1)\vee d_j = x\vee d_j = y$. By Lemma \[lem:primeperspectivity\] this implies that in $[c_1,1]$ one has $[c_{i-1},c_i]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[d'_{j-1},d'_j]$. So by the induction hypothesis $j\leq\sigma(i)=\pi(i)$, which finishes the proof. [^1]: The research stay of P.P. at IST Austria is funded by the project CZ.02.2.69/0.0/0.0/17\_050/0008466 Improvement of internationalization in the field of research and development at Charles University, through the support of quality projects MSCA-IF. [^2]: A composition series is a maximal chain $1=G_0\vartriangleleft G_1\vartriangleleft G_2\vartriangleleft\ldots\vartriangleleft G_n=G$ of subgroups of $G$. The quotient groups $G_{i+1}/G_i$ are called its *factors*. Any subgroup that appears in some composition series of $G$ is called *subnormal*.
[^3]: We note that the subnormal groups form a sublattice of the lattice of all subgroups [@Wielandt1939]. It is not hard to see that this lattice is dually semimodular, see for example [@Stern1999 p. 302]. [^4]: In matroids we do not have the second isomorphism theorem, so up-and-down projectivity plays a smaller role. However, it still tells us something. For example, if $a$ and $b$ are points of a single matroid $M$, then $[0,a]{\raisebox{0.15ex}{\ensuremath{\diagup\kern-0.40em\searrow}}}[0,b]$ in the lattice of flats if and only if $a$ and $b$ lie in the same component of $M$ [@graetzer2011 Proofs of Theorems 393 and 396].
--- abstract: 'Precision tests of the Kobayashi-Maskawa model of CP violation are discussed, pointing out possible signatures for other sources of CP violation and for new flavor-changing operators. The current status of the most accurate tests is summarized.' address: | Physics Department, Technion – Israel Institute of Technology\ 32000 Haifa, Israel\ [email protected] author: - MICHAEL GRONAU title: 'CP VIOLATION IN BEAUTY DECAYS[^1]' --- Introduction ============ It took thirty-seven years from the discovery of a tiny CP violating effect of order $10^{-3}$ in $K_L\to\pi^+\pi^-$ [@Christenson:1964fg] to a first observation of a breakdown of CP symmetry outside the strange meson system. A large CP asymmetry of order one between rates of initial $B^0$ and $\bar B^0$ decays to $J/\psi K_S$ was measured in summer 2001 by the BaBar and Belle Collaborations.[@Aubert:2001nu] A sizable, though smaller, asymmetry had been anticipated twenty years earlier[@Carter:1980hr] in the framework of the Kobayashi-Maskawa (KM) model of CP violation,[@Kobayashi:1973fv] in the absence of crucial information on $b$ quark couplings.
The asymmetry was observed in a time-dependent measurement as suggested,[@Dunietz:1986vi] thanks to the long $B^0$ lifetime and the large $B^0$-$\bar B^0$ mixing.[@Albrecht:1987dr] The measured asymmetry, fixing (in the standard phase convention[@Wolfenstein:1983yz]) the sine of the phase $2\beta~(\equiv 2\phi_1) \equiv 2{\rm arg}(V_{tb}V^*_{td})$ of the top-quark dominated $B^0$-$\bar B^0$ mixing amplitude, was found to be in good agreement with other determinations of Cabibbo-Kobayashi-Maskawa (CKM) parameters,[@Charles:2006yw; @Bona:2006ah] including a recent precise measurement of $B_s$-$\bar B_s$ mixing.[@Abazov:2006dm] This showed that the CKM phase $\gamma~(\equiv \phi_3) \equiv {\rm arg}(V^*_{ub})$, which seems to be unable to account for the observed cosmological baryon asymmetry,[@Dolgov:2005wf] is the dominant source of CP violation in flavor-changing processes. With this confirmation the next pressing question became whether small contributions beyond the CKM framework occur in CP violating flavor-changing processes, and whether such effects can be observed in beauty decays. One way of answering this question is by over-constraining the CKM unitarity triangle through precise CP conserving measurements related to the lengths of the sides of the triangle. An alternative and more direct way, focusing on the origin of CP violation in the CKM framework, is to measure $\beta$ and $\gamma$ in a variety of $B$ decay modes. Different values obtained from asymmetries in several processes, or values different from those imposed by other constraints, could provide clues for new sources of CP violation and for new flavor-changing interactions.
Such phases and interactions occur in the low energy effective Hamiltonian of extensions of the Standard Model (SM) including models based on supersymmetry.[@Gabrielli:1995bd] In this presentation we will focus on the latter approach based primarily on CP asymmetries, using also complementary information on hadronic $B$ decay rates which are expected to be related to each other in the CKM framework. In the next section we outline several of the most relevant processes and the theoretical tools applied for their studies, quoting numerous papers where these ideas were originally proposed and where more details can be found.[@References] Sections 3, 4 and 5 describe a number of methods in some detail, summarizing at the end of each section the current experimental situation. Section 6 discusses several tests for NP effects, while Section 7 concludes. Processes, methods and New Physics effects ========================================== Whereas testing the KM origin of CP violation in most hadronic $B$ decays requires separating strong and weak interaction effects, in a few “golden modes” CP asymmetries are unaffected by strong interactions. For instance, the decay $B^0\to J/\psi K_S$ is dominated by a single tree-level quark transition $\bar b\to \bar c c \bar s$, up to a correction smaller than a fraction of a percent.[@Gronau:1989ia; @Boos:2004xp; @Ciuchini:2005mg; @Li:2006vq] Thus, the asymmetries measured in this process and in other decays dominated by $\bar b\to \bar c c \bar s$ have already provided a rather precise measurement of $\sin 2\beta$,[@Aubert:2006aq; @Chen:2006nk; @HFAG] $$\sin 2\beta = 0.678 \pm 0.025\,.$$ This value permits two solutions for $\beta$ at $21.3^\circ$ and at $68.7^\circ$.
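The two-fold ambiguity is just the ambiguity of the arcsine: a short computation with the central value quoted above reproduces both solutions (our illustration; angles in degrees, assuming the conventional range $0\le 2\beta\le 180^\circ$).

```python
import math

sin2beta = 0.678  # central value of the average quoted above
two_beta = math.degrees(math.asin(sin2beta))
# sin(2*beta) is unchanged under 2*beta -> 180 deg - 2*beta, hence two solutions
solutions = (two_beta / 2.0, (180.0 - two_beta) / 2.0)
```

The two solutions come out near $21.3^\circ$ and $68.7^\circ$; by construction they are complementary, summing to $90^\circ$.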
Time-dependent angular studies of $B^0\to J/\psi K^{*0}$,[@Aubert:2004cp] and time-dependent Dalitz analyses of $B^0 \to Dh^0~(D\to K_S\pi^+\pi^-, h^0=\pi^0, \eta, \omega)$[@Krokovny:2006sv] measuring $\cos 2\beta > 0$ have excluded the second solution at a high confidence level, implying $$\beta = (21.3 \pm 1.0)^{\circ}\,.$$ Since $B^0\to J/\psi K_S$ proceeds through a CKM-favored quark transition, contributions to the decay amplitude from physics at a higher scale are expected to be very small, potentially identifiable by a tiny direct asymmetry in this process or in $B^+\to J/\psi K^+$.[@Fleischer:2001cw] Another process where the determination of a weak phase is not affected by strong interactions is $B^+\to DK^+$, proceeding through tree-level amplitudes $\bar b\to \bar c u \bar s$ and $\bar b\to \bar u c \bar s$. The interference of these two amplitudes, from $\bar D^0$ and $D^0$ which can always decay to a common hadronic final state, leads to decay rates and a CP asymmetry which measure very cleanly the relative phase $\gamma$ between these amplitudes.[@Gronau:1990ra; @Gronau:1991dp] The trick here lies in recognizing the measurements which yield this fundamental CP-violating quantity. Physics beyond the SM is expected to have a negligible effect on this determination of $\gamma$ which relies on the interference of two tree amplitudes. $B$ decays into pairs of charmless mesons, such as $B\to \pi\pi$ (or $B\to \rho\rho$) and $B\to K\pi$ (or $B\to K^*\rho$), involve contributions of both tree and penguin amplitudes which carry different weak and strong phases.[@Gronau:1989ia; @London:1989ph; @Grinstein:1989df] Contrary to the case of $B\to DK$, the determination of $\beta$ and $\gamma$ using CP asymmetries in charmless $B$ decays involves two correlated aspects which must be considered: its dependence on strong interaction dynamics and its sensitivity to potential New Physics (NP) effects.
This sensitivity follows from the CKM and loop suppression of penguin amplitudes, implying that new heavy particles in the TeV mass range, replacing the $W$ boson and the top quark in the penguin loop, may have sizable effects.[@Gronau:1996rv] In order to claim evidence for physics beyond the SM from a determination of $\beta$ and $\gamma$ in these processes one must first handle the question of dynamics. There are two approaches for treating the dynamics of charmless hadronic $B$ decays:

(1) Study strong interaction effects systematically in the framework of QCD.

(2) Identify, by symmetry, observables which do not depend on QCD dynamics.

The first approach faces the difficulty of having to treat precisely long distance effects of QCD including final state interactions. Remarkable theoretical progress has been made recently in proving a leading-order (in $1/m_b$) factorization formula for these amplitudes in a heavy quark effective theory approach to perturbative QCD.[@Beneke:1999br; @Keum:2000ph; @Bauer:2004tj] However, there remain differences among approaches in the treatment of power counting, the scale of Wilson coefficients, end-point quark distribution functions of light mesons, and nonperturbative contributions from charm loops.[@Ciuchini:1997hb] Also, the nonperturbative input parameters in these calculations involve non-negligible uncertainties. These parameters include heavy-to-light form factors at small momentum transfer, light-cone distribution amplitudes, and the average inverse momentum fraction of the spectator quark in the $B$ meson. The resulting inaccuracies in calculating magnitudes and strong phases of amplitudes prohibit a precise determination of $\gamma$ from measured decay rates and CP asymmetries. Also, the calculated rates and asymmetries cannot provide a clear case for physics beyond the SM when the results of a calculation deviate only slightly from the measurements.
In the second approach one applies isospin symmetry to obtain relations among several decay amplitudes. For instance, using the distinct behavior under isospin of tree and penguin operators contributing to $B\to\pi\pi$, a judicious choice of observables permits a determination of $\gamma$ or $\alpha~(\equiv \phi_2) = \pi-\beta - \gamma$.[@Gronau:1990ka] The same analysis applies to $B$ decays into pairs of longitudinally polarized $\rho$ mesons. If an observable related to the subdominant penguin amplitude is not measured with sufficient precision, it may be replaced in the analysis by a CKM-enhanced SU(3)-related observable, in which case a large theoretical uncertainty is translated into a small error in $\gamma$. The precision of this method is increased by including contributions of higher order electroweak penguin amplitudes, which are related by isospin to tree amplitudes.[@Buras:1998rb; @Gronau:1998fn] With sufficient statistics one should also take into account isospin-breaking corrections of order $(m_d - m_u)/\Lambda_{\rm QCD} \sim 0.02$,[@Gardner:1998gz; @Gronau:2005pq] and an effect caused by the $\rho$ meson width.[@Falk:2003uq] A similar analysis proposed for extracting $\gamma$ in $B\to K\pi$ [@Nir:1991cu; @Deshpande:1994pw] requires using flavor SU(3) instead of isospin for relating electroweak penguin contributions and tree amplitudes.[@Gronau:1998fn; @Neubert:1998pt] While flavor SU(3) is usually assumed to be broken by corrections of order $(m_s-m_d)/\Lambda_{\rm QCD}\sim 0.3$, in this particular case a rather precise recipe for SU(3) breaking is provided by QCD factorization, reducing the theoretical uncertainty in $\gamma$ to only a few degrees.[@Neubert:1998re] Charmless $B$ decays, which are sensitive to physics beyond the SM,[@Gronau:1996rv] provide a rich laboratory for studying various signatures of NP.
A large variety of theories have been studied in this context, including supersymmetric models, models involving tree-level flavor-changing $Z$ or $Z'$ couplings, models with anomalous three-gauge-boson couplings and other models involving an enhanced chromomagnetic dipole operator.[@Grossman:1996ke; @Ciuchini:1997zp] The following effects have been studied and will be discussed in Section 6 in a model-independent manner: (1) Within the SM, the three values of $\gamma$ extracted from $B\to\pi\pi$, $B\to K\pi$ and $B^+\to DK^+$ are equal. As we will explain, these three values are expected to be different in extensions of the SM involving new low energy four-fermion operators behaving as $\Delta I=3/2$ in $B\to\pi\pi$ and as $\Delta I = 1$ in $B\to K\pi$. (2) Other signatures of anomalously large $\Delta I = 1$ operators contributing to $B\to K\pi$ are violations of isospin sum rules, holding in the SM for both decay rates and CP asymmetries in these decays.[@Gronau:1998ep; @Atwood:1997iw; @Gronau:2005gz] (3) Time-dependent asymmetries in $B^0 \to \pi^0K_S$, $B^0\to \phi K_S$ and $B^0\to \eta'K_S$ and in other $b\to s$ penguin-dominated decays may differ substantially from the asymmetry $\sin 2\beta\sin\Delta mt$, predicted approximately in the SM.[@London:1989ph; @Grossman:1996ke; @London:1997zk] Significant deviations are expected in models involving anomalous $|\Delta S|=1$ operators behaving as $\Delta I=0$ or $\Delta I=1$. (4) An interesting question, which may provide a clue to the underlying New Physics once deviations from SM predictions are observed, is how to diagnose the value of $\Delta I$ in NP operators contributing to $|\Delta S|=1$ charmless $B$ decays. 
We will discuss an answer to this question which has been proposed recently.[@Gronau:2007ut]

Determining $\gamma$ in $B\to DK$
=================================

In this section we will discuss at some length a rather rich and very precise method for determining $\gamma$ in processes of the form $B\to D^{(*)}K^{(*)}$, which uses both charged and neutral $B$ mesons and a large variety of final states. It is based on the broad idea that any coherent admixture of a state involving a $\bar D^0$ from $\bar b\to \bar cu \bar s$ and a state with $D^0$ from $\bar b \to \bar uc\bar s$ can decay to a common final state.[@Gronau:1990ra; @Gronau:1991dp] The interference between the two channels, $B\to D^{(*)0}K^{(*)},~D^0\to f_D$ and $B\to \bar D^{(*)0}K^{(*)},~\bar D^0\to f_D$, involves the weak phase difference $\gamma$, which may be determined with high theoretical precision using a suitable choice of measurements. Effects of $D^0$-$\bar D^0$ mixing are negligible.[@Grossman:2005rp] While some of these processes are statistically limited, combining them together is expected to reduce the experimental error in $\gamma$. In addition to (quasi) two-body $B$ decays, the $D$ or $D^*$ in the final state may be accompanied by any multi-body final state with the quantum numbers of a kaon.[@Gronau:1991dp] Each process in this large class of neutral and charged $B$ decays is characterized by two pairs of parameters, describing complex ratios of amplitudes for $D^0$ and $\bar D^0$ in the two steps of the decay chain (we use a convention $r_B, r_f \ge 0, 0\le \delta_B, \delta_f < 2\pi$), \[ratios\] $$\frac{A(B\to D^0K)}{A(B\to \bar D^0K)} = r_B\,e^{i(\delta_B +\gamma)}\,, \qquad \frac{A(D^0\to f_D)}{A(\bar D^0\to f_D)} = r_f\,e^{i\delta_f}\,.$$ In three-body decays of $B$ and $D$ mesons, such as $B\to DK\pi$ and $D\to K\pi\pi$, the two pairs of parameters $(r_B,\delta_B)$ and $(r_f,\delta_f)$ are actually functions of the two corresponding Dalitz variables describing the kinematics of the above three-body decays.
The sensitivity of determining $\gamma$ depends on $r_B$ and $r_f$ because this determination relies on an interference of $D^0$ and $\bar D^0$ amplitudes. For $D$ decay modes with $r_f\sim 1$ (see discussion below) the sensitivity increases with the magnitude of $r_B$. For each of the eight sub-classes of processes, $B^{+,0}\to D^{(*)}K^{(*)+,0}$, one may study a variety of final states in neutral $D$ decays. The states $f_D$ may be divided into four families, distinguished qualitatively by their parameters $(r_f,\delta_f)$ defined in Eq. (\[ratios\]):

(1) $f_D=$ CP-eigenstate[@Gronau:1990ra; @Gronau:1991dp; @Gronau:1998vg] ($K^+K^-$, $K_S\pi^0$, etc.); $r_f =1$, $\delta_f =0,\pi$.

(2) $f_D=$ flavorless but non-CP state[@Grossman:2002aq] ($K^+K^{*-}$, $K^{*+}K^-$, etc.); $r_f ={\cal O}(1)$.

(3) $f_D=$ flavor state[@Atwood:1996ci] ($K^+\pi^-$, $K^+\pi^-\pi^0$, etc.); $r_f \sim \tan^2\theta_c$.

(4) $f_D=$ 3-body self-conjugate state[@Giri:2003ty] ($K_S\pi^+\pi^-$); $r_f,\delta_f$ vary across the Dalitz plane.

In the first family, CP-odd states occur in Cabibbo-favored $D^0$ and $\bar D^0$ decays, while CP-even states occur in singly Cabibbo-suppressed decays. The second family of states occurs in singly Cabibbo-suppressed decays, the third family occurs in Cabibbo-favored $\bar D^0$ decays and in doubly Cabibbo-suppressed $D^0$ decays, while the last state is formally a Cabibbo-favored mode for both $D^0$ and $\bar D^0$.
The parameters $r_B$ and $\delta_B$ in $B\to D^{(*)}K^{(*)}$ depend on whether the $B$ meson is charged or neutral, and may differ for $K$ vs $K^*$,[@Dunietz:1991yd] and for $D$ vs $D^*$, where a neutral $D^*$ can be observed in $D^*\to D\pi^0$ or $D^*\to D\gamma$.[@Bondar:2004bi] The ratio $r_B$ involves a CKM factor $|V_{ub}V_{cs}/V_{cb}V_{us}|\simeq 0.4$ in both $B^+$ and $B^0$ decays, and a color-suppression factor in $B^+$ decays, while in $B^0$ decays both the $\bar b\to \bar cu \bar s$ and $\bar b \to \bar uc\bar s$ amplitudes are color-suppressed. A rough estimate of the color-suppression factor in these decays may be obtained from the color-suppression measured in corresponding CKM-favored decays, $B\to D\pi, D^*\pi, D\rho, D^*\rho$, where the suppression is found to be in the range $0.3-0.5$.[@Yao:2006px] Thus, one expects $r_B(B^0) \sim 0.4,~ r_B(B^+) = (0.3-0.5)\, r_B(B^0)$ in all the processes $B^{+,0}\to D^{(*)}K^{(*)+,0}$. We note that three-body $B^+$ decays, such as $B^+\to D^0 K^+\pi^0$, are not color-suppressed, making these processes advantageous through their potentially large value of $r_B$, which varies across phase space.[@Aleksan:2002mh; @Gronau:2002mu] The above comparison of $r_B(B^+)$ and $r_B(B^0)$ may be quantified more precisely by expressing the four ratios $r_B(B^0)/r_B(B^+)$ in $B\to D^{(*)}K^{(*)}$ in terms of reciprocal ratios of known magnitudes of amplitudes:[@Gronau:2004gt] \[r-ratio\] $$\frac{r_B(B^0)}{r_B(B^+)} = \frac{|A(B^+\to \bar D^{(*)0}K^{(*)+})|}{|A(B^0\to \bar D^{(*)0}K^{(*)0})|}\,.$$ This follows from an approximation, \[A0=A+\] $$A(B^0\to D^{(*)0}K^{(*)0}) \simeq A(B^+\to D^{(*)0}K^{(*)+})\,,$$ where the $B^0$ and $B^+$ processes are related to each other by replacing a spectator $d$ quark by a $u$ quark. While formally Eq.
(\[A0=A+\]) is not an isospin prediction, it may be obtained using an isospin triangle relation,[@Gronau:1998un] \[iso\] $$A(B^0\to D^{(*)0}K^{(*)0}) = A(B^+\to D^{(*)0}K^{(*)+}) + A(B^+\to D^{(*)+}K^{(*)0})\,,$$ and neglecting the second amplitude on the right-hand-side which is “pure annihilation".[@Blok:1997yj] This amplitude is expected to be suppressed by a factor of four or five relative to the other two amplitudes appearing in (\[iso\]), which are color-suppressed. Evidence for this kind of suppression is provided by corresponding ratios of CKM-favored amplitudes,[@Yao:2006px] $|A(B^0\to D^-_s K^+)/\sqrt{2}A(B^0\to\bar D^0\pi^0)|=0.23 \pm 0.03$, $|A(B^0\to D^{*-}_s K^+)/\sqrt{2}A(B^0\to\bar D^{*0}\pi^0)|<0.24$. Applying Eq. (\[r-ratio\]) to measured branching ratios,[@Yao:2006px; @Aubert:2006qn] one finds \[ratio-r\] $$\frac{r_B(B^0)}{r_B(B^+)} = \left\{ \begin{array}{cccc} ~~B\to DK~~ & ~~B\to DK^*~~ & ~~B\to D^*K~~ & ~~B\to D^*K^*~~ \\ 2.9\pm 0.4 & 3.7\pm 0.3 & >2.2 & >3.0 \end{array} \right.$$ This agrees with values of $r_B(B^0)$ near 0.4 and $r_B(B^+)$ between 0.1 and 0.2. Note that in spite of the expected larger values of $r_B$ in $B^0$ decays, from the point of view of statistics alone (without considering the question of flavor tagging and the efficiency of detecting a $K_S$ in $B^0\to D^{(*)}K^0$), $B^+$ and $B^0$ decays may fare comparably when studying $\gamma$. This follows from (\[A0=A+\]) because the statistical error on $\gamma$ scales roughly as the inverse of the smaller of the two interfering amplitudes. We will now discuss the actual manner by which $\gamma$ can be determined using [*separately*]{} three of the above-mentioned families of final states $f_D$. We will mention advantages and disadvantages in each case. For illustration of the method we will consider $B^+\to f_DK^+$. We will also summarize the current status of these measurements in all eight decay modes $B^{+,0}\to D^{(*)}K^{(*)+,0}$.
$f_D=$ CP-eigenstates
---------------------

One considers four observables consisting of two charge-averaged decay rates for even and odd CP states, normalized by the decay rate into a $D^0$ flavor state, $$R_{{\rm CP}\pm} \equiv \frac{2\left[\Gamma(B^-\to D_{{\rm CP}\pm}K^-) + \Gamma(B^+\to D_{{\rm CP}\pm}K^+)\right]}{\Gamma(B^-\to D^0K^-) + \Gamma(B^+\to \bar D^0K^+)}\,,$$ and two CP asymmetries for even and odd CP states, $$A_{{\rm CP}\pm} \equiv \frac{\Gamma(B^-\to D_{{\rm CP}\pm}K^-) - \Gamma(B^+\to D_{{\rm CP}\pm}K^+)}{\Gamma(B^-\to D_{{\rm CP}\pm}K^-) + \Gamma(B^+\to D_{{\rm CP}\pm}K^+)}\,.$$ In order to avoid dependence of $R_{{\rm CP}\pm}$ on errors in $D^0$ and $D_{\rm CP}$ branching ratio measurements one uses a definition of $R_{{\rm CP}\pm}$ in terms of ratios of $B$ decay branching ratios into $DK$ and $D\pi$ final states.[@Gronau:2002mu] The four observables $R_{{\rm CP}\pm}$ and $A_{{\rm CP}\pm}$ provide three independent equations for $r_B, \delta_B$ and $\gamma$, $$R_{{\rm CP}\pm} = 1 + r_B^2 \pm 2r_B\cos\delta_B\cos\gamma\,, \qquad A_{{\rm CP}\pm} = \pm 2r_B \sin\delta_B \sin\gamma/R_{{\rm CP}\pm}\,.$$ While in principle this is the simplest and most precise method for extracting $\gamma$, up to a discrete ambiguity, in practice this method is sensitive to $r_B^2$, because $(R_{{\rm CP}+}+R_{{\rm CP}-})/2 = 1 +r_B^2$. This becomes very difficult for charged $B$ decays, where one expects $r_B\sim 0.1-0.2$, but may be feasible for neutral $B$ decays, where $r_B\sim 0.4$. An obvious signature for a non-zero value of $r_B$ would be observing a difference between $R_{{\rm CP}+}$ and $R_{{\rm CP}-}$, which is linear in this quantity. Studies of $B^+\to D_{\rm CP} K^+, B^+ \to D_{\rm CP}K^{*+}$ and $B^+\to D^*_{\rm CP}K^+$ have been carried out recently,[@Aubert:2005cc; @Aubert:2005rw; @Abe:2006hc] each consisting of a few tens of events. A nonzero difference $R_{{\rm CP}+}-R_{{\rm CP}-}$ at $2.6$ standard deviations, measured in $B^+\to D_{\rm CP}K^{*+}$,[@Aubert:2005cc] is probably a statistical fluctuation. A larger difference is anticipated in $B^0\to D_{CP}K^{*0}$, as the value of $r_B$ in this process is expected to be three or four times larger than in $B^+\to DK^{*+}$. \[See Eq. (\[ratio-r\]).\] Higher statistics is required for a measurement of $\gamma$ using this method.
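How the four observables overconstrain the three unknowns can be illustrated with a small numerical sketch (the input values $r_B=0.4$, $\delta_B=30^\circ$, $\gamma=70^\circ$ are illustrative assumptions, not measured values): generating $R_{{\rm CP}\pm}$, $A_{{\rm CP}\pm}$ and inverting them recovers $\gamma$ up to the discrete ambiguity mentioned above, which here takes the form of a $\delta_B \leftrightarrow \gamma$ exchange.

```python
import math

# GLW-type observables: R_CP± = 1 + r_B² ± 2 r_B cosδ_B cosγ,
#                       A_CP± = ±2 r_B sinδ_B sinγ / R_CP±
def observables(rB, dB, g):
    R = [1 + rB**2 + s*2*rB*math.cos(dB)*math.cos(g) for s in (+1, -1)]
    A = [s*2*rB*math.sin(dB)*math.sin(g)/Ri for s, Ri in zip((+1, -1), R)]
    return R, A

# illustrative (assumed) parameters, roughly a neutral-B-like r_B
(Rp, Rm), (Ap, Am) = observables(0.4, math.radians(30.0), math.radians(70.0))

# Inversion: (R+ + R-)/2 = 1 + r_B² fixes r_B; then
# x ≡ cosδ_B cosγ = (R+ - R-)/(4 r_B),  y ≡ sinδ_B sinγ = A+ R+/(2 r_B)
rB = math.sqrt((Rp + Rm)/2 - 1)
x = (Rp - Rm)/(4*rB)
y = Ap*Rp/(2*rB)

# cos²δ_B + sin²δ_B = 1 gives a quadratic in c = cos²γ:
#   c² - c(1 + x² - y²) + x² = 0 ; the two roots are cos²γ and cos²δ_B
b = 1 + x*x - y*y
roots = [(b + s*math.sqrt(b*b - 4*x*x))/2 for s in (+1, -1)]
angles = sorted(round(math.degrees(math.acos(math.sqrt(c))), 6) for c in roots)
print(angles)  # [30.0, 70.0]: γ = 70° plus the δ_B ↔ γ discrete ambiguity
```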
$f_D=$ flavor state
-------------------

Consider a flavor state $f_D$ occurring in Cabibbo-favored $\bar D^0$ decays, accessible also through doubly Cabibbo-suppressed $D^0$ decays, such that one has $r_f\sim \tan^2\theta_c$ in Eq. (\[ratios\]). One studies the ratio of the two charge-averaged decay rates, into the doubly-suppressed final state $f_DK$ and into the favored final state $\bar f_DK$, $$R_f \equiv \frac{\Gamma(B^-\to f_DK^-) + \Gamma(B^+\to \bar f_DK^+)}{\Gamma(B^-\to \bar f_DK^-) + \Gamma(B^+\to f_DK^+)}\,,$$ and the CP asymmetry, $$A_f \equiv \frac{\Gamma(B^-\to f_DK^-) - \Gamma(B^+\to \bar f_DK^+)}{\Gamma(B^-\to f_DK^-) + \Gamma(B^+\to \bar f_DK^+)}\,.$$ These observables are given by \[Rf\] $$R_f = r_B^2 + r^2_f + 2r_Br_f\cos(\delta_B - \delta_f)\cos\gamma\,, \qquad A_f = 2r_Br_f\sin(\delta_B - \delta_f)\sin\gamma/R_f\,,$$ where a multiplicative correction $1+{\cal O}(r_Br_f)\sim 1.01$ has been neglected in (\[Rf\]). These two observables involve three unknowns, $r_B, \delta_B-\delta_f$ and $\gamma$. One assumes $r_f$ to be given by the measured ratio of doubly Cabibbo-suppressed and Cabibbo-favored branching ratios. Thus, one needs at least two flavor states, $f_D$ and $f'_D$, for which the two pairs of observables ($R_f, A_f$) and ($R_{f'}, A_{f'}$) provide four equations for the four unknowns, $r_B, \delta_B-\delta_f, \delta_B - \delta_{f'}, \gamma$. The strong phase differences $\delta_f, \delta_{f'}$ can actually be measured at a $\psi''$ charm factory,[@Silva:1999bd] thereby reducing the number of unknowns to three. While the decay rate in the numerator of $R_f$ is rather low, the asymmetry $A_f$ may be large for small values of $r_B$ around 0.1, as it involves two interfering amplitudes with a relative magnitude $r_f/r_B$. So far, only upper bounds have been measured for $R_f$, implying upper limits on $r_B$ in several processes: $r_B(B^+\to DK^+)<0.2$,[@Aubert:2005pj; @Abe:2005gi; @Aubert:2006ga] $r_B(B^+\to D^*K^+) < 0.2$,[@Aubert:2005pj] $r_B(B^+\to DK^{*+})<0.4$,[@Aubert:2005cr] and $r_B(B^0\to DK^{*0}) < 0.4$.[@Aubert:2006qn; @Krokovny:2002ua] Further constraints on $r_B$ in the first three processes have been obtained by studying $D$ decays into CP-eigenstates and into the state $K_S\pi^+\pi^-$.
Using $r_B(B^0\to DK^{*0})/r_B(B^+\to DK^{*+}) = 3.7 \pm 0.3$ from (\[ratio-r\]) and assuming that $r_B(B^+\to DK^{*+})$ is not smaller than about 0.1, one may conclude that a nonzero value of $r_B(B^0\to DK^{*0})$ should be measured soon. The signature for $B^0\to D^0K^{*0}$ events would be two kaons with opposite charges.

$f_D=K_S\pi^+\pi^-$
-------------------

The amplitude for $B^+ \to (K_S\pi^+\pi^-)_DK^+$ is a function of the two invariant-mass variables, $m^2_{\pm} \equiv (p_{K_S}+p_{\pi^{\pm}})^2$, and may be written as $$A(B^+\to (K_S\pi^+\pi^-)_DK^+) = f(m^2_+,m^2_-) + r_Be^{i(\delta_B+\gamma)}f(m^2_-,m^2_+)\,.$$ In $B^-$ decay one replaces $m_+ \leftrightarrow m_-,~\gamma \to -\gamma$. The function $f$ may be written as a sum of about twenty resonant and nonresonant contributions modeled to describe the amplitude for flavor-tagged $\bar D^0\to K_S\pi^+\pi^-$, which is measured separately.[@Poluektov:2006ia; @Aubert:2006am] This introduces a model-dependent uncertainty into the analysis. Using the measured function $f$ as an input and fitting the rates for $B^{\pm}\to (K_S\pi^+\pi^-)_DK^{\pm}$ to the parameters $r_B,\delta_B$ and $\gamma$, one then determines these three parameters. The advantage of using $D\to K_S\pi^+\pi^-$ decays over CP and flavor states is that this mode is Cabibbo-favored and involves regions in phase space with a potentially large interference between $D^0$ and $\bar D^0$ decay amplitudes. The main disadvantage is the uncertainty introduced by modeling the function $f$.
Two recent analyses of comparable statistics by Belle and Babar, combining $B^\pm\to DK^\pm, B^\pm \to D^*K^\pm$ and $B^\pm \to DK^{*\pm}$, obtained the values $\gamma = [53^{+15}_{-18}\pm 3 \pm 9 ({\rm model})]^\circ$ [@Poluektov:2006ia] and $\gamma = [92\pm 41\pm 11\pm 12 ({\rm model})]^\circ$.[@Aubert:2006am] \[This second value does not use the process $B^+\to D(K_S\pi^+)_{K^*}$, also studied by the same group.[@Aubert:2005yj]\] The larger errors in the second analysis are correlated with smaller values of the extracted parameters $r_B$ in comparison with those extracted in the first study. The model-dependent errors may be reduced by studying at CLEO-c the decays $D_{CP\pm}\to K_S\pi^+\pi^-$, providing further information on strong phases in $D$ decays.[@Silva:1999bd]

[**Conclusion**]{}: The currently most precise value of $\gamma$ is $\gamma = [53^{+15}_{-18}\pm 3 \pm 9 ({\rm model})]^\circ$, obtained from $B^{\pm}\to D^{(*)}K^{(*)\pm}$ using $D\to K_S\pi^+\pi^-$. These errors may be reduced in the future by combining the study of [*all $D$ decay modes*]{} in $B^{+,0}\to D^{(*)}K^{(*)+,0}$. The decay $B^0\to DK^{*0}$ seems to carry a high potential because of its expected large value of $r_B$. Decays $B^0\to D^{(*)}K^0$ may also turn out to be useful, as they have been shown to provide information on $\gamma$ without the need for flavor tagging of the initial $B^0$.[@Gronau:2004gt; @Gronau:2007bh]

The currently most precise determination of $\gamma$: $B\to\pi\pi, \rho\rho, \rho\pi$
=====================================================================================

$B\to\pi\pi$
------------

The amplitude for $B^0\to\pi^+\pi^-$ contains two terms, conventionally denoted “tree" ($T$) and “penguin" ($P$) amplitudes,[@Gronau:1989ia; @London:1989ph] involving a weak CP-violating phase $\gamma$ and a strong CP-conserving phase $\delta$, respectively: $$A(\pi^+\pi^-) = |T|\,e^{i\gamma} + |P|\,e^{i\delta}\,.$$
Time-dependent decay rates, for an initial $B^0$ or $\bar B^0$, are given by \[Asym\] $$\Gamma(B^0(t)/\bar B^0(t)\to\pi^+\pi^-) = e^{-\Gamma t}\,\Gamma_{\pi^+\pi^-}\left[1 \mp S_{+-}\sin(\Delta m\,t) \pm C_{+-}\cos(\Delta m\,t)\right]\,,$$ where \[SC\] $$S_{+-} = \frac{2\,{\rm Im}\,\lambda_{+-}}{1+|\lambda_{+-}|^2}\,, \qquad C_{+-} = \frac{1-|\lambda_{+-}|^2}{1+|\lambda_{+-}|^2}\,, \qquad \lambda_{+-} \equiv e^{-2i\beta}\,\frac{\bar A(\pi^+\pi^-)}{A(\pi^+\pi^-)}\,.$$ One has[@Gronau:1989ia] $$\begin{aligned} S_{+-} & = & \sin 2\alpha + 2|P/T|\cos\delta\,\cos 2\alpha\,\sin(\beta+\alpha) + {\cal O}(|P/T|^2)\,, \\ C_{+-} & = & 2|P/T|\sin\delta\,\sin(\beta+\alpha) + {\cal O}(|P/T|^2)\,.\end{aligned}$$ This tells us two things:

(1) The deviation of $\spp$ from $\sin 2\alpha$ and the magnitude of $\cpp$ increase with $|P/T|$, which can be estimated to be $|P/T|\sim 0.5$ by comparing $B\to \pi\pi$ rates with penguin-dominated $B\to K\pi$ rates.[@Gronau:2004ej]

(2) $\Gamma_{\pi^+\pi^-}$, $\spp$ and $\cpp$ are insufficient for determining $|T|, |P|, \delta$ and $\gamma$ (or $\alpha$).

Further information on these quantities may be obtained by applying isospin symmetry to all $B\to\pi\pi$ decays. In order to carry out an isospin analysis,[@Gronau:1990ka] one uses the fact that the three physical $B\to\pi\pi$ decay amplitudes and the three $\bar B\to\pi\pi$ decay amplitudes, each depending on two isospin amplitudes, obey triangle relations of the form \[isotr\] $$A(B^0\to\pi^+\pi^-)/\sqrt{2} + A(B^0\to\pi^0\pi^0) - A(B^+\to\pi^+\pi^0)=0\,.$$ Furthermore, the penguin amplitude is pure $\Delta I= 1/2$; hence the $\Delta I=3/2$ amplitude carries a weak phase $\gamma$, $A(B^+\to\pi^+\pi^0)=e^{2i\gamma} A(B^-\to\pi^-\pi^0)$. Defining $\sin 2\alpha_{\rm eff} \equiv S_{+-}/(1 - C^2_{+-})^{1/2}$, the difference $\alpha_{\rm eff}-\alpha$ is then determined by an angle between corresponding sides of the two isospin triangles sharing a common base, $|A(B^+\to\pi^+\pi^0)|= |A(B^-\to\pi^-\pi^0)|$. A sign ambiguity in $\alpha_{\rm eff}-\alpha$ is resolved by two model-independent features which are confirmed experimentally, $|P|/|T|\le 1, |\delta|\le \pi/2$. This implies $\alpha < \alpha_{\rm eff}$.[@Gronau:2004sj] Current CP-averaged branching ratios and CP asymmetries for $B\to\pi\pi$ and $B\to\rho\rho$ decays are given in Table I,[@HFAG] where $A_{CP}\equiv-C$ for decays to CP eigenstates.
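The leading-order expressions for $S_{+-}$ and $C_{+-}$ can be checked numerically against the exact $\lambda_{+-}$ (a sketch; the values of $\alpha$, $\beta$, $\delta$ and $|P/T|$ below are illustrative assumptions, with $|P/T|$ chosen small so the expansion should be accurate):

```python
import cmath, math

# Exact S, C from λ = e^{-2iβ} Ā/A with A = |T|e^{iγ} + |P|e^{iδ} (|T| = 1),
# compared with the O(|P/T|) expansion quoted in the text.
beta, alpha = math.radians(21.3), math.radians(95.0)   # assumed angles
gamma = math.pi - alpha - beta                         # α + β + γ = π
r, delta = 0.05, math.radians(40.0)                    # small |P/T| for the test

A    = cmath.exp(1j*gamma)  + r*cmath.exp(1j*delta)    # B0 amplitude
Abar = cmath.exp(-1j*gamma) + r*cmath.exp(1j*delta)    # B̄0 amplitude
lam  = cmath.exp(-2j*beta)*Abar/A

S = 2*lam.imag/(1 + abs(lam)**2)
C = (1 - abs(lam)**2)/(1 + abs(lam)**2)

S_lin = math.sin(2*alpha) + 2*r*math.cos(delta)*math.cos(2*alpha)*math.sin(beta+alpha)
C_lin = 2*r*math.sin(delta)*math.sin(beta+alpha)

# the residuals should be O(|P/T|²) ~ 2.5e-3
print(abs(S - S_lin), abs(C - C_lin))
```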
Impressive experimental progress has been achieved in the past two years in extracting a precise value of $\alpha_{\rm eff}$, $\alpha_{\rm eff}=(110.6^{+3.6}_{-3.2})^\circ$. However, the error on $\alpha_{\rm eff}-\alpha$ obtained using the isospin triangles is still large. An upper bound, given by CP-averaged rates and a direct CP asymmetry in $B^0\to\pi^+\pi^-$,[@Gronau:2001ff; @Grossman:1997jr] \[bound\] $$\cos 2(\alpha_{\rm eff}-\alpha) \ge \frac{1 - 2\,\b(\pi^0\pi^0)/\b(\pi^+\pi^0)}{\sqrt{1 - C_{+-}^2}}\,,$$ leads to $0<\alpha_{\rm eff}-\alpha<31^\circ$ at $1\sigma$. Adding in quadrature the error in $\alpha_{\rm eff}$ and the uncertainty in $\alpha-\alpha_{\rm eff}$, this implies $\alpha=(95\pm 16)^\circ$, or $\gamma =(64\pm 16)^\circ$ using $\beta = (21.3\pm 1.0)^\circ$. A similar central value but a smaller error, $\alpha = (97\pm 11)^\circ$, has been reported recently by the Belle Collaboration.[@Ishino:2006if] The possibility that a penguin amplitude in $B^0\to\pi^+\pi^-$ may lead to a large CP asymmetry $S_{+-}$ for values of $\alpha$ near $90^\circ$, where $\sin 2\alpha=0$, was anticipated fifteen years ago.[@Gronau:1992rm] The bound on $\alpha_{\rm eff}-\alpha$ may be improved considerably by measuring a nonzero direct CP asymmetry in $B^0\to\pi^0\pi^0$. This asymmetry can be shown to be [*large and positive*]{} (see Eq. (\[ACPpi0pi0\]) in Sec. 5.2), implying a large rate for $\bar B^0$ but a small rate for $B^0$. Namely, the $B$ triangle (\[isotr\]) is expected to be squashed, while the $\bar B$ triangle is roughly equilateral. An alternative way of treating the penguin amplitude in $B^0\to\pi^+\pi^-$ is to combine, within flavor SU(3), the decay rate and asymmetries in this process with rates and asymmetries in $B^+\to K^0\pi^+$ or $B^0\to K^+\pi^-$.[@Gronau:2004ej] The ratio of $\Delta S=1$ and $\Delta S=0$ tree amplitudes in these processes, excluding CKM factors, is taken to be given by $f_K/f_\pi$ assuming factorization, while the ratio of corresponding penguin amplitudes is allowed to vary by $\pm 0.22$ around one.
A current update of this rather conservative analysis obtains[@GR] \[gamma-pipi\] $$\gamma = (73 \pm 4\,^{+10}_{-8})^\circ\,,$$ where the first error is experimental, while the second is due to the uncertainty in SU(3) breaking. A discussion of the SU(3) breaking factors relating $B^0\to\pi^+\pi^-$ and $B^0\to K^+\pi^-$ is included in Section 5.2.

$B\to\rho\rho$
--------------

Angular analyses of the pions in $\rho$ decays have shown that $B^0\to \rho^+\rho^-$ is dominated almost $100\%$ by longitudinal polarization.[@HFAG] This simplifies the isospin analysis of CP asymmetries in these decays, making it similar to that for $B^0\to\pi^+\pi^-$. The advantage of $B\to \rho\rho$ over $B\to \pi\pi$ is the relatively small value of $\b(\rho^0\rho^0)$ in comparison with $\b(\rho^+\rho^-)$ and $\b(\rho^+\rho^0)$ (see Table I), indicating a smaller $|P/T|$ in $B\to\rho^+\rho^-$ ($|P/T|< 0.3$ [@Charles:2006yw]) than in $B^0\to\pi^+\pi^-$ ($|P/T|\sim 0.5$ [@Gronau:2004ej]). Eq. (\[bound\]) leads to an upper bound on $\alpha_{\rm eff}-\alpha$ in $B\to\rho\rho$, $0<\alpha_{\rm eff}-\alpha< 17^\circ$ (at $1\sigma$). The asymmetries for longitudinal $\rho$'s given in Table I imply $\alpha_{\rm eff} = (91.7^{+5.3}_{-5.2})^\circ$. Thus, one finds $\alpha = (83 \pm 10)^\circ$ or $\gamma = (76\pm 10)^\circ$ by adding errors in quadrature. A stronger bound on $|P/T|$ in $B^0\to\rho^+\rho^-$, leading to a more precise value of $\gamma$, may be obtained by relating this process to $B^+\to K^{*0}\rho^+$ within flavor SU(3).[@Beneke:2006rb] One uses the branching ratio and fraction of longitudinal rate measured for this process,[@HFAG] $\b(K^{*0}\rho^+)=(9.2 \pm 1.5)\times 10^{-6}, f_L(K^{*0}\rho^+)= 0.48\pm 0.08$, to normalize the penguin amplitude in $B^0\to\rho^+\rho^-$. Including a conservative uncertainty from SU(3) breaking and smaller amplitudes, one finds the value \[gamma-rhorho\] $$\gamma = (71.4\,^{+5.8}_{-8.8}\,{}^{+4.7}_{-1.7})^\circ\,,$$ where the first error is experimental and the second one theoretical.
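Reading the two determinations above as $\gamma(\pi\pi) = (73 \pm 4\,^{+10}_{-8})^\circ$ and $\gamma(\rho\rho) = (71.4\,^{+5.8}_{-8.8}\,{}^{+4.7}_{-1.7})^\circ$, a naive inverse-variance combination with symmetrized errors, together with the $1.5^\circ$ electroweak-penguin shift discussed below, lands close to the average quoted in the Conclusion (a sketch only; the published average may treat the asymmetric errors more carefully):

```python
import math

# γ(ππ) = (73 ± 4 +10/−8)°,  γ(ρρ) = (71.4 +5.8/−8.8 +4.7/−1.7)°
def symmetrize(*pairs):
    # symmetrize each (+,−) error to its half-width and add in quadrature
    return math.sqrt(sum(((p + m)/2)**2 for p, m in pairs))

g1, s1 = 73.0, symmetrize((4.0, 4.0), (10.0, 8.0))
g2, s2 = 71.4, symmetrize((5.8, 8.8), (4.7, 1.7))

w1, w2 = 1/s1**2, 1/s2**2
avg = (w1*g1 + w2*g2)/(w1 + w2) + 1.5   # EWP correction Δγ_EWP = +1.5°
err = 1/math.sqrt(w1 + w2)
print(round(avg, 1), round(err, 1))  # 73.5 6.2 (the text quotes (73.5 ± 5.7)°)
```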
The current small theoretical error in $\gamma$ requires including isospin-breaking effects in studies based on isospin symmetry. The effect of electroweak penguin amplitudes on the isospin analyses of $B\to\pi\pi$ and $B\to \rho\rho$ has been calculated and was found to move $\gamma$ slightly higher, by an amount $\Delta\gamma_{\rm EWP}=1.5^\circ$.[@Buras:1998rb; @Gronau:1998fn] Other corrections, relevant to methods using $\pi^0$ and $\rho^0$, including $\pi^0$-$\eta$-$\eta'$ mixing, $\rho$-$\omega$ mixing, and a small $I=1$ $\rho\rho$ contribution allowed by the $\rho$ width, are each smaller than one degree.[@Gardner:1998gz; @Gronau:2005pq; @Falk:2003uq]

[**Conclusion**]{}: Taking an average of the two values of $\gamma$ in (\[gamma-pipi\]) and (\[gamma-rhorho\]) obtained from $B^0\to\pi^+\pi^-$ and $B^0\to \rho^+\rho^-$, and including the above-mentioned EWP correction, one finds \[gamma\] $$\gamma=(73.5\pm 5.7)^\circ\,.$$ A third method of measuring $\gamma$ (or $\alpha$), in time-dependent Dalitz analyses of $B^0\to (\rho\pi)^0$, involves a much larger error[@Snyder:1993mx] and has a small effect on the overall averaged value of the weak phase. We note that $\sin\gamma$ is close to one and its relative error is only $3\%$, the same as the relative error in $\sin 2\beta$ and slightly smaller than the relative error in $\sin\beta$.

Rates, asymmetries, and $\gamma$ in $B\to K\pi$
===============================================

Extracting $\gamma$ in $B\to K\pi$
----------------------------------

The four decays $B^0\to K^+\pi^-, B^0\to K^0\pi^0, B^+\to K^0\pi^+, B^+\to K^+\pi^0$ offer the potential of extracting $\gamma$, provided that one is sensitive to interference between a dominant isoscalar penguin amplitude and a small tree amplitude contributing to these processes.
This idea has led to numerous suggestions for determining $\gamma$ in these decays, starting with a proposal made in 1994.[@Gronau:1994rj; @Gronau:1994bn] An interference between penguin and tree amplitudes may be identified in two ways:

(1) Two different properly normalized $B\to K\pi$ rates.

(2) Nonzero direct CP asymmetries.

Current branching ratios and CP asymmetries are summarized in Table II.[@HFAG] Three ratios of rates, calculated using the ratio of $B^+$ and $B^0$ lifetimes, $\tau_+/\tau_0 = 1.076 \pm 0.008$,[@HFAG] are: $$\begin{aligned} R & \equiv & \frac{\Gamma(B^0\to K^+\pi^-)}{\Gamma(B^+\to K^0\pi^+)} = 0.90 \pm 0.05\,, \\ R_c & \equiv & \frac{2\Gamma(B^+\to K^+\pi^0)}{\Gamma(B^+\to K^0\pi^+)} = 1.11 \pm 0.07\,, \\ R_n & \equiv & \frac{\Gamma(B^0\to K^+\pi^-)}{2\Gamma(B^0\to K^0\pi^0)} = 0.97 \pm 0.07\,.\end{aligned}$$ The largest deviation from one, observed in the ratio $R$ at 2$\sigma$, is insufficient for claiming unambiguous evidence for a non-penguin contribution. An upper limit, $R< 0.965$ at $90\%$ confidence level, would imply $\gamma \le 79^\circ$ using $\sin^2\gamma \le R$,[@Fleischer:1997um] which however neglects “color-suppressed" EWP contributions.[@Gronau:1997an] As we will argue now, these contributions and “color-suppressed" tree amplitudes are actually not as suppressed as naively expected. The nonzero asymmetry measured in $B^0\to K^+\pi^-$ provides first evidence for an interference between penguin ($P'$) and tree ($T'$) amplitudes with a nonzero relative strong phase. Such an interference occurs also in $B^+\to K^+\pi^0$, where no asymmetry has been observed. An assumption that other contributions to the latter asymmetry are negligible has raised some questions about the validity of the CKM framework. In fact, a color-suppressed tree amplitude ($C'$), also occurring in $B^+\to K^+\pi^0$,[@Gronau:1994rj] resolves this “puzzle" if this amplitude is comparable in magnitude to $T'$.
Indeed, several studies have shown that this is the case,[@Chiang:2004nm; @Baek:2004rp; @Buras:2003dj; @Li:2005kt; @Beneke:2005vv] also implying that color-suppressed and color-favored EWP amplitudes are of comparable magnitudes.[@Gronau:1998fn] For consistency between the two CP asymmetries in $B^0\to K^+\pi^-$ and $B^+\to K^+\pi^0$, the strong phase difference between $C'$ and $T'$ must be negative and cannot be very small.[@Gronau:2006ha] This seems to stand in contrast to QCD calculations using a factorization theorem.[@Beneke:1999br; @Bauer:2004tj; @Beneke:2005vv] The small asymmetry $A_{CP}(B^+\to K^+\pi^0)$ implies bounds on the sine of the strong phase difference $\delta_c$ between $T'+C'$ and $P'$. The cosine of this phase affects $R_c-1$, involving the decay rates for $B^+\to K^0\pi^+$ and $B^+\to K^+\pi^0$. A question studied recently is whether the two upper bounds on $|\sin\delta_c|$ and $|\cos\delta_c|$ are consistent with each other or, perhaps, indicate effects of NP. Consistency was shown by proving a sum rule involving $A_{CP}(B^+\to K^+\pi^0)$ and $R_c-1$, in which an electroweak penguin (EWP) amplitude plays an important role. We will now present a proof of the sum rule, which may provide important information on $\gamma$.[@Gronau:2006ha] The two amplitudes for $B^+\to K^0\pi^+, K^+\pi^0$ are given in terms of topological contributions including $P', T'$ and $C'$, \[AmpKpi\] $$\begin{aligned} A(B^+\to K^0\pi^+) & = & (P'-\tfrac{1}{3}P'^c_{EW}) + A'\,, \\ \sqrt{2}\,A(B^+\to K^+\pi^0) & = & (P'-\tfrac{1}{3}P'^c_{EW}) + (T'+P'^c_{EW}) + (C'+P'_{EW}) + A'\,,\end{aligned}$$ where $P'_{EW}$ and $P'^c_{EW}$ are color-favored and color-suppressed EWP contributions. The small annihilation amplitude $A'$ and a small $u$ quark contribution to $P'$ involving a CKM factor $V^*_{ub}V_{us}$ will be neglected ($|V^*_{ub}V_{us}|/|V^*_{cb}V_{cs}|=0.02$). Evidence for the smallness of these terms can be found in the small CP asymmetry measured for $B^+\to K^0\pi^+$.
Large such terms would require rescattering and a sizable strong phase difference between these terms and $P'$. Flavor SU(3) symmetry relates $\Delta I=1, I(K\pi)=3/2$ electroweak penguin and tree amplitudes through a calculable ratio $\delta_{EW}$,[@Gronau:1998fn; @Neubert:1998pt] \[eqn:delta\_EW\] $$\begin{aligned} T' + C' + P'_{EW} + P'^c_{EW} & = & (T' + C')(1-\delta_{EW}\,e^{-i\gamma})\,, \\ \delta_{EW} & = & -\frac{3}{2}\,\frac{c_9+c_{10}}{c_1+c_2}\,\left|\frac{V_{cb}V^*_{cs}}{V_{ub}V^*_{us}}\right| = 0.60 \pm 0.05\,.\end{aligned}$$ The error in $\delta_{EW}$ is dominated by the current uncertainty in $|V_{ub}|/|V_{cb}| = 0.104 \pm 0.007$,[@Yao:2006px] and includes also a smaller error from SU(3) breaking estimated using QCD factorization. Eqs. (\[AmpKpi\]) and (\[eqn:delta\_EW\]) imply [@Gronau:2001cj] \[eqn:Rc\] $$R_c = 1 - 2 r_c \cos\delta_c\, (\cos\gamma - \delta_{EW}) + r_c^2(1 - 2 \delta_{EW}\cos\gamma + \delta_{EW}^2)\,,$$ \[eqn:Acp\] $$A_{CP}(B^+ \to K^+ \pi^0) = - 2 r_c \sin\delta_c \sin\gamma/R_c\,,$$ where $r_c\equiv |T'+C'|/|P'-\frac{1}{3}P'^c_{EW}|$ and $\delta_c$ is the strong phase difference between $T'+C'$ and $P'-\frac{1}{3}P'^c_{EW}$. The parameter $r_c$ is calculable in terms of measured decay rates, using broken flavor SU(3) which relates $T'+C'$ to the combination $T+C$ dominating $B^+\to \pi^+\pi^0$ through a factorization factor $f_K/f_\pi$ (neglecting a tiny EWP term in $B^+\to \pi^+\pi^0$),[@Gronau:1994bn] \[T'+C'\] $$|T'+C'| = \sqrt{2}\,\frac{|V_{us}|}{|V_{ud}|}\,\frac{f_K}{f_\pi}\,|A(B^+\to\pi^+\pi^0)|\,.$$ Using branching ratios from Tables I and II, one finds $$r_c = \sqrt{2}\,\frac{|V_{us}|}{|V_{ud}|}\,\frac{f_K}{f_\pi}\left[\frac{\b(\pi^+\pi^0)}{\b(K^0\pi^+)}\right]^{1/2} = 0.198 \pm 0.008\,.$$ The error in $r_c$ does not include an uncertainty from assuming factorization for SU(3) breaking in $T'+C'$. While this assumption should hold well for $T'$, it may not be a good approximation for $C'$, which, as we have mentioned, is comparable in magnitude to $T'$ and carries a strong phase relative to it. Thus one should allow a $10\%$ theoretical error when using factorization to relate the $B \to K \pi$ and $B \to \pi \pi$ $T+C$ amplitudes, so that $$r_c = 0.20 \pm 0.01\,({\rm exp}) \pm 0.02\,({\rm th})\,.$$ Eliminating $\delta_c$ in Eqs.
(\[eqn:Rc\]) and (\[eqn:Acp\]) by retaining terms which are linear in $r_c$, one finds

\[eqn:sr\]
$$\left(\frac{1-R_c}{\cos\gamma-\delta_{EW}}\right)^2 +
\left(\frac{A_{CP}(B^+\to K^+\pi^0)}{\sin\gamma}\right)^2 = (2r_c)^2 + {\cal O}(r_c^3)\,.$$

This sum rule implies that at least one of the two terms whose squares occur on the left-hand side must be sizable, of the order of $2r_c=0.4$. The second term, $|A_{CP}(B^+\to K^+\pi ^0)|/\sin\gamma$, is already smaller than $\simeq 0.1$, using the current $2\sigma$ bounds on $\gamma$ and $|A_{CP}(B^+\to K^+\pi^0)|$. Thus, the first term must provide a dominant contribution. For $R_c\simeq 1$, this implies $\gamma\simeq \arccos\delta_{EW} \simeq (53.1\pm 3.5)^\circ$. This range is expanded by including errors in $R_c$ and $A_{CP}(B^+\to K^+\pi^0)$. For instance, an upper bound $R_c < 1.1$ would imply an important upper limit, $\gamma < 70^\circ$. Currently one only obtains an upper limit $\gamma \le 88^\circ$ at $90\%$ confidence level.[@Gronau:2006ha] This bound is consistent with the value obtained in (\[gamma\]) from $B\to\pi\pi$ and $B\to\rho\rho$, but is not competitive with the latter in precision.

[**Conclusion**]{}: The current constraint obtained from $R_c$ and $A_{CP}(B^+\to K^+\pi^0)$ is $\gamma \le 88^\circ$ at $90\%$ confidence level. Further improvement in the measurement of $R_c$ (which may, in fact, be very close to one) is required in order to achieve a precision in $\gamma$ comparable to that obtained in $B\to\pi\pi, \rho\rho$. (A conclusion concerning the different CP asymmetries measured in $B^0\to K^+\pi^-$ and $B^+\to K^+\pi^0$ will be given at the end of the next subsection.)

Symmetry relations for $B\to K\pi$ rates and asymmetries
--------------------------------------------------------

The following two features imply rather precise sum rules in the CKM framework, both for $B\to K\pi$ decay rates and CP asymmetries:

\(1) The dominant penguin amplitude is $\Delta I=0$.

\(2) The four decay amplitudes obey a linear isospin relation,[@Nir:1991cu]

$$A(K^+\pi^-) - A(K^0\pi^+) - \sqrt{2}\,A(K^+\pi^0) + \sqrt{2}\,A(K^0\pi^0) = 0\,.$$
An immediate consequence of these features is a pair of isospin sum rules, which hold up to terms quadratic in small ratios of non-penguin to penguin amplitudes,[@Gronau:1998ep; @Atwood:1997iw; @Gronau:2005gz]

\[RSR\]
$$\Gamma(K^+\pi^-) + \Gamma(K^0\pi^+) = 2\Gamma(K^+\pi^0) + 2\Gamma(K^0\pi^0)\,,$$

\[DSR\]
$$\Delta(K^+\pi^-) + \Delta(K^0\pi^+) = 2\Delta(K^+\pi^0) + 2\Delta(K^0\pi^0)\,,$$

where

\[Delta\]
$$\Delta(K\pi) \equiv \Gamma(\bar B\to \bar K\bar\pi) - \Gamma(B\to K\pi)\,.$$

Quadratic corrections to (\[RSR\]) have been calculated in the SM and were found to be a few percent.[@Gronau:2003kj; @Beneke:2003zv; @Bauer:2005kd] This is the level expected in general for isospin-breaking corrections, which must therefore also be considered. The above two features imply that these $\Delta I=1$ corrections are suppressed by a small ratio of non-penguin to penguin amplitudes and are therefore negligible.[@Gronau:2006eb] Indeed, this sum rule holds experimentally within a $5\%$ error.[@Gronau:2006xu] One expects the other sum rule (\[DSR\]) to hold at a similar precision. The CP rate asymmetry sum rule (\[DSR\]), relating the four CP asymmetries, leads to a prediction for the asymmetry in $B^0\to K^0\pi^0$ in terms of the other three asymmetries, which have been measured with higher precision,

\[ACPK0pi0\]
$$A_{CP}(B^0\to K^0\pi^0) = -0.140\pm 0.043\,.$$

While this value is consistent with experiment (see Table II), higher accuracy in this asymmetry measurement is required for testing this straightforward prediction. Relations between CP asymmetries in $B\to K\pi$ and $B\to\pi\pi$ following from approximate flavor SU(3) symmetry of QCD [@Zeppenfeld:1980ex] are not expected to hold as precisely as isospin relations, but may still be interesting and useful. An important question relevant to such relations is how to include SU(3)-breaking effects, which are expected to be at a level of 20-30$\%$.
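As a numerical illustration of how the CP asymmetry sum rule yields this prediction, one can solve it for $A_{CP}(B^0\to K^0\pi^0)$. The branching ratios (in units of $10^{-6}$), asymmetries and lifetime ratio below are illustrative numbers close to the 2007 world averages; the actual inputs are those of Tables I and II:

```python
# Widths are proportional to BR/lifetime, so B+ entries carry a factor
# tau(B0)/tau(B+) relative to B0 entries.  All inputs are illustrative.
r_tau = 0.92                       # tau(B0)/tau(B+), approximate
d_Kp_pim = -0.093 * 19.7           # Delta(K+ pi-),  B0 mode
d_K0_pip =  0.009 * 23.1 * r_tau   # Delta(K0 pi+),  B+ mode
d_Kp_pi0 =  0.047 * 12.8 * r_tau   # Delta(K+ pi0),  B+ mode
br_K0pi0 = 10.0                    # B(B0 -> K0 pi0)

# Sum rule: Delta(K+pi-) + Delta(K0pi+) = 2 Delta(K+pi0) + 2 Delta(K0pi0)
acp_K0pi0 = (d_Kp_pim + d_K0_pip - 2*d_Kp_pi0) / (2*br_K0pi0)
print(f"A_CP(B0 -> K0 pi0) = {acp_K0pi0:.3f}")
```

With these inputs the result lands close to the quoted central value $-0.140$; the quoted error $\pm 0.043$ follows from propagating the input uncertainties.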
Here we wish to discuss two SU(3) relations proposed twelve years ago,[@Deshpande:1994ii; @Gronau:1995qd] one of which holds experimentally within expectation, providing some lesson about SU(3) breaking, while the other has an interesting implication for future applications of the isospin analysis in $B\to \pi\pi$. A most convenient proof of SU(3) relations is based on a diagrammatic approach, in which diagrams with given flavor topologies replace reduced SU(3) matrix elements.[@Gronau:1994rj] In this language, the amplitudes for $B^0$ decays into pairs of charged or neutral pions, and pairs of charged or neutral $\pi$ and $K$, are given by:

$$\begin{aligned}
-A(B^0\to\pi^+\pi^-) & = & T + (P+\tfrac{2}{3}P^c_{EW}) + E + PA\,, \\
-\sqrt{2}\,A(B^0\to\pi^0\pi^0) & = & C - (P-P_{EW}-\tfrac{1}{3}P^c_{EW}) - E - PA\,, \\
-A(B^0\to K^+\pi^-) & = & T' + (P'+\tfrac{2}{3}P'^c_{EW})\,, \\
-\sqrt{2}\,A(B^0\to K^0\pi^0) & = & C' - (P'-P'_{EW}-\tfrac{1}{3}P'^c_{EW})\,.\end{aligned}$$

The combination $E+PA$, representing exchange and penguin annihilation topologies, is expected to be $1/m_b$-suppressed relative to $T$ and $C$,[@Bauer:2004tj; @Blok:1997yj] as demonstrated by the small branching ratio measured for $B^0\to K^+K^-$.[@HFAG] This term will be neglected.
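The relation derived next, equal and opposite CP rate differences for $B^0\to K^+\pi^-$ and $B^0\to\pi^+\pi^-$ in the SU(3) limit, can be checked numerically. The sketch below builds the two amplitudes from the $T$ and $P$ topologies above with common, purely illustrative hadronic magnitudes and strong phase, and an illustrative set of CKM parameters; the cancellation rests only on CKM unitarity:

```python
import cmath, math

# Standard CKM parametrization with illustrative (approximate) parameters.
s12, s23, s13, delt = 0.2257, 0.0415, 0.0036, 1.20
c12, c23, c13 = (math.sqrt(1 - s**2) for s in (s12, s23, s13))
Vud, Vus, Vub = c12*c13, s12*c13, s13*cmath.exp(-1j*delt)
Vtd = s12*s23 - c12*c23*s13*cmath.exp(1j*delt)
Vts = -c12*s23 - s12*c23*s13*cmath.exp(1j*delt)
Vtb = c23*c13

# Common SU(3)-invariant magnitudes and strong phase (illustrative).
Tmag, Pmag, delta_s = 1.0, 3.0, 0.7

def rate_diff(Vq_u, Vq_t):
    """Delta = Gamma(Bbar -> fbar) - Gamma(B -> f) for A = T + P."""
    A    = Vub.conjugate()*Vq_u*Tmag + Vtb*Vq_t*Pmag*cmath.exp(1j*delta_s)
    Abar = Vub*Vq_u.conjugate()*Tmag + Vtb*Vq_t.conjugate()*Pmag*cmath.exp(1j*delta_s)
    return abs(Abar)**2 - abs(A)**2

d_pipi = rate_diff(complex(Vud), complex(Vtd))   # B0 -> pi+ pi-
d_Kpi  = rate_diff(complex(Vus), complex(Vts))   # B0 -> K+ pi-
print(d_Kpi, -d_pipi)   # equal up to numerical precision
```

The equality $\Delta(K^+\pi^-)=-\Delta(\pi^+\pi^-)$ here is exact for any choice of the hadronic parameters, since it follows from the unitarity identity for the imaginary parts of the CKM quartets.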
Expressing topological amplitudes in terms of CKM factors, SU(3)-invariant amplitudes and SU(3)-invariant strong phases, one may write

$$\begin{aligned}
T & \equiv & V^*_{ub}V_{ud}\,|{\cal T}+{\cal P}_{uc}|\,, \qquad
P+\tfrac{2}{3}P^c_{EW} \equiv V^*_{tb}V_{td}\,|{\cal P}_{tc}|\,e^{i\delta}\,, \\
T' & \equiv & V^*_{ub}V_{us}\,|{\cal T}+{\cal P}_{uc}|\,, \qquad
P'+\tfrac{2}{3}P'^c_{EW} \equiv V^*_{tb}V_{ts}\,|{\cal P}_{tc}|\,e^{i\delta}\,, \\
C & \equiv & V^*_{ub}V_{ud}\,|{\cal C}-{\cal P}_{uc}|\,, \qquad
P-P_{EW}-\tfrac{1}{3}P^c_{EW} \equiv V^*_{tb}V_{td}\,|\tilde{\cal P}_{tc}|\,e^{i\tilde\delta}\,, \\
C' & \equiv & V^*_{ub}V_{us}\,|{\cal C}-{\cal P}_{uc}|\,, \qquad
P'-P'_{EW}-\tfrac{1}{3}P'^c_{EW} \equiv V^*_{tb}V_{ts}\,|\tilde{\cal P}_{tc}|\,e^{i\tilde\delta}\,.\end{aligned}$$

Unitarity of the CKM matrix, $V^*_{cb}V_{cd(s)} = - V^*_{tb}V_{td(s)} - V^*_{ub}V_{ud(s)}$, has been used to absorb in $T^{(')}$ and $C^{(')}$ a penguin term ${\cal P}_{uc}\equiv {\cal P}_u-{\cal P}_c$ multiplying $V^*_{ub}V_{ud(s)}$, while ${\cal P}_{tc}\equiv {\cal P}_t-{\cal P}_c$ and $\tilde{\cal P}_{tc}\equiv \tilde{\cal P}_t- \tilde{\cal P}_c$ contain two distinct combinations of EWP contributions. Using the identity

$${\rm Im}(V^*_{ub}V_{ud}V_{tb}V^*_{td}) = -{\rm Im}(V^*_{ub}V_{us}V_{tb}V^*_{ts})\,,$$

one finds[@Deshpande:1994ii; @Gronau:1995qd]

\[Delta+-\]
$$\Delta(B^0\to K^+\pi^-) = -\Delta(B^0\to \pi^+\pi^-)\,,$$

\[Delta00\]
$$\Delta(B^0\to K^0\pi^0) = -\Delta(B^0\to \pi^0\pi^0)\,,$$

where $\Delta$ is the CP rate difference defined in (\[Delta\]). Quoting products of branching ratios and asymmetries from Tables I and II, Eq. (\[Delta+-\]) reads

$$-1.88 \pm 0.24 = -1.96 \pm 0.37\,.$$

This SU(3) relation works well and requires no SU(3) breaking. An SU(3)-breaking factor $f_K/f_\pi$ in ${\cal T}$ but not in ${\cal P}$, or in both ${\cal T}$ and ${\cal P}$, is currently excluded at a level of $1.0\sigma$ or $1.75\sigma$, respectively. More precise CP asymmetry measurements in $B^0\to K^+\pi^-$ and $B^0\to \pi^+\pi^-$ are required for determining the pattern of SU(3) breaking in tree and penguin amplitudes. Using the prediction (\[ACPK0pi0\]) of the $B\to K\pi$ asymmetry sum rule, Eq.
(\[Delta00\]) predicts

\[ACPpi0pi0\]
$$A_{CP}(B^0\to\pi^0\pi^0) = 1.07 \pm 0.38\,.$$

The error is dominated by current errors in CP asymmetries for $B^+\to K^0\pi^+$ and $B^+\to K^+\pi^0$, and to a lesser extent by the error in ${\cal B}(\pi^0\pi^0)$. SU(3) breaking in amplitudes could modify this prediction by a factor $f_\pi/f_K$ if this factor applies to ${\cal C}$, and less likely by $(f_\pi/f_K)^2$. A large positive CP asymmetry, favored in all three cases, will affect future applications of the isospin analysis in $B\to\pi\pi$. It implies that while the $\bar B$ isospin triangle is roughly equal-sided, the $B$ triangle is squashed. A twofold ambiguity in the value of $\gamma$ disappears in the limit of a flat $B$ triangle.[@Gronau:1990ra]

[**Conclusion**]{}: The isospin sum rule for $B\to K\pi$ decay rates holds well, while the CP asymmetry sum rule predicts $A_{CP}(B^0\to K^0\pi^0)=-0.140\pm 0.043$. The different asymmetries in $B^0\to K^+\pi^-$ and $B^+\to K^+\pi^0$ can be explained by an amplitude $C'$ comparable to $T'$ and involving a relative negative strong phase, and should not be considered a “puzzle”. An SU(3) relation for $B^0\to \pi\pi$ and $B^0\to K\pi$ CP asymmetries works well for charged modes. The corresponding relation for neutral modes predicts a large positive asymmetry in $B^0\to\pi^0\pi^0$. Improving asymmetry measurements can provide tests for SU(3)-breaking factors.

Tests for small New Physics effects
===================================

Values of $\gamma$
------------------

We have described three ways for extracting a value for $\gamma$, relying on interference of distinct pairs of quark amplitudes, $(b\to c\bar us,b\to u\bar c s), (b\to c\bar c s, b\to u\bar u s)$ and $(b\to c\bar c d, b\to u\bar u d)$. The three pairs provide a specific pattern for CP violation in the CKM framework, which is expected to be violated in many extensions of the SM.
The rather precise value of $\gamma$ (\[gamma\]) extracted from $B\to \pi\pi, \rho\rho, \rho\pi$ is consistent with constraints on $\gamma$ from CP-conserving measurements related to the sides of the unitarity triangle.[@Charles:2006yw; @Bona:2006ah] The values of $\gamma$ obtained in $B\to D^{(*)}K^{(*)}$ and $B\to K\pi$ are consistent with those extracted in $B\to \pi\pi, \rho\rho, \rho\pi$, but are not yet sufficiently precise for testing small NP effects in charmless $B$ decays. Further experimental improvements are required, in particular in the former two types of processes. While the value of $\gamma$ in $B\to D^{(*)}K^{(*)}$ is not expected to be affected by NP, the other two classes of processes involving penguin loops are susceptible to such effects. The extraction of $\gamma$ in $B\to\pi\pi, \rho\rho$ assumes that $\gamma$ is the phase of a $\Delta I=3/2$ tree amplitude, while an additional $\Delta I=3/2$ EWP contribution is included using isospin. The extracted value could be modified by a new $\Delta I=3/2$ effective operator originating in physics beyond the SM, but not by a new $\Delta I=1/2$ operator. Similarly, the value of $\gamma$ extracted in $B\to K\pi$ is affected by a potential new $\Delta I=1$ operator, but not by a new $\Delta I=0$ operator, because the amplitude (\[eqn:delta\_EW\]), playing an essential role in this method, is pure $\Delta I=1$.

$B\to K\pi$ sum rule
--------------------

Charmless $|\Delta S|=1$ $B$ and $B_s$ decays are particularly sensitive to NP effects, as new heavy particles in the TeV mass range may replace the $W$ boson and top quark in the penguin loop dominating these amplitudes.[@Gronau:1996rv] The sum rule (\[RSR\]) for $B\to K\pi$ decay rates provides a test for such effects. However, as we have argued from isospin considerations, it is only affected by quadratic $\Delta I=1$ amplitudes including NP contributions.
Small NP amplitudes, contributing quadratically to the sum rule, cannot be separated from SM corrections, which are by themselves at a level of a few percent. This is the level to which the sum rule has already been tested. We will present evidence below showing that potential NP contributions to $|\Delta S|=1$ charmless decays must be suppressed by roughly an order of magnitude relative to the dominant $b\to s$ penguin amplitudes.

Values of $S,C$ in $|\Delta S|=1$ $B^0\to f_{CP}$ decays
--------------------------------------------------------

A class of $b\to s$ penguin-dominated $B^0$ decays to CP eigenstates has recently attracted considerable attention. This includes final states $XK_S$ and $XK_L$, where $X=\phi, \pi^0, \eta', \omega, f_0, \rho^0, K^+K^-, K_SK_S, \pi^0\pi^0$, for which the measured asymmetries $-\eta_{CP}S$ and $C$ are quoted in Table III. \[The asymmetries $S$ and $C=-A_{CP}$ were defined in (\[Asym\]) for $B^0\to\pi^+\pi^-$. Observed modes with $K_L$ in the final states obey $\eta_{CP}(XK_L)=-\eta_{CP}(XK_S)$.\] In these processes, a value $S= -\eta_{CP}\sin2 \beta$ (for states with CP eigenvalue $\eta_{CP}$) is expected approximately.[@London:1989ph; @Grossman:1996ke] These predictions involve hadronic uncertainties at a level of several percent, of order $\lambda^2,~\lambda \sim 0.2$. It was pointed out some time ago[@Atwood:1997zr] that it is difficult to separate these hadronic uncertainties within the SM from NP contributions to decay amplitudes if the latter are small. In the next subsection we will discuss indirect experimental evidence showing that NP contributions to $S$ and $C$ must be small.
Corrections to $S= -\eta_{CP}\sin2 \beta$ and values for the asymmetries $C$ have been calculated in the SM using methods based on QCD factorization[@Beneke:2005pu; @Cheng:2005bg] and flavor SU(3),[@Chiang:2004nm; @Grossman:2003qp; @Gronau:2003ep] and were found to range from a few percent up to above ten percent within hadronic uncertainties. Whereas the deviation of $S$ from $-\eta_{CP}\sin 2\beta$ is process-dependent, a generic result was proven a long time ago for both $S$ and $C$, to first order in $|c/p|$,[@Gronau:1989ia]

\[DeltaS\]
$$\Delta S \equiv -\eta_{CP}S - \sin 2\beta = 2\left|\frac{c}{p}\right| \cos 2\beta\, \sin\gamma\, \cos\Delta\,, \qquad
C = 2\left|\frac{c}{p}\right| \sin\gamma\, \sin\Delta\,.$$

Here $p$ and $c$ are penguin and color-suppressed tree amplitudes involving a small ratio and relative weak and strong phases $\gamma$ and $\Delta$, respectively. This implies $\Delta S > 0$ for $|\Delta|<\pi/2$, which can be argued for several of the above decays using QCD arguments[@Beneke:2005pu; @Cheng:2005bg] or SU(3) fits.[@Gronau:2003ep] (Note that while $|p|$ is measurable in certain decay rates up to first order corrections, $|c|$ and $\Delta$ involve sizable hadronic uncertainties in QCD calculations.) In contrast to this expectation, the central values measured for $\Delta S$ are negative for all decays. (See Table III.) Consequently, one finds an averaged value $\sin 2\beta_{\rm eff}=0.53\pm 0.05$,[@HFAG] to be compared with $\sin 2\beta = 0.678\pm 0.025$. Two measurements which seem particularly interesting are $-\eta_{CP}S_{\phi K_S}=0.39\pm 0.18$, where a positive correction of a few percent to $\sin 2\beta$ is expected in the SM,[@Beneke:2005pu; @Cheng:2005bg] and $-\eta_{CP}S_{\pi^0 K_S}=0.33\pm 0.21$, where a rather large positive correction to $\sin 2\beta$ is expected, shifting this asymmetry to a value just above $0.8$.[@Chiang:2004nm] While the current averaged value of $\sin 2\beta_{\rm eff}$ is tantalizing, experimental errors in $S$ and $C$ must be reduced further to make a clear case for physics beyond the SM.
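The first-order expressions above are easy to verify against the exact time-dependent asymmetries. The sketch below uses one common convention (an assumption, not unique: $p$ carries no weak phase, $c$ carries weak phase $\gamma$, and $\eta_{CP}=+1$) with purely illustrative numerical inputs:

```python
import cmath, math

beta, gamma = 0.38, 1.15     # illustrative CKM phases (radians)
r, Delta = 0.05, 0.40        # |c/p| and relative strong phase (illustrative)

A    = 1.0 + r*cmath.exp(1j*(Delta + gamma))   # A(B0 -> f), p normalized to 1
Abar = 1.0 + r*cmath.exp(1j*(Delta - gamma))   # A(B0bar -> f)
lam  = cmath.exp(-2j*beta) * Abar / A          # eta_CP = +1
S = 2*lam.imag / (1 + abs(lam)**2)
C = (1 - abs(lam)**2) / (1 + abs(lam)**2)

dS_exact = -S - math.sin(2*beta)               # Delta S for eta_CP = +1
dS_first = 2*r*math.cos(2*beta)*math.sin(gamma)*math.cos(Delta)
print(dS_exact, dS_first)   # agree up to O(|c/p|^2)
```

Both $\Delta S$ values come out positive for $|\Delta|<\pi/2$, as stated in the text; the sign of $C$ depends on phase conventions.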
Assuming that the discrepancy between improved measurements and calculated values of $S$ and $C$ persists beyond theoretical uncertainties, can this provide a clue to the underlying New Physics? Since many models could give rise to a discrepancy,[@Gronau:1996rv; @Grossman:1996ke; @Ciuchini:1997zp] one would seek signatures characterizing classes of models rather than studying the effects in specific models. One way of classifying extensions of the SM is by the isospin behavior of the new effective operators contributing to $b\to s q\bar q$ transitions.

Diagnosis of $\Delta I$ for New Physics operators
-------------------------------------------------

Four-quark operators in the effective Hamiltonian associated with NP in $b \to s q \bar q$ transitions can be either isoscalar or isovector operators. We will now discuss a study proposed recently in order to isolate $\Delta I=0$ or $\Delta I=1$ operators, thus determining corresponding NP amplitudes and CP-violating phases.[@Gronau:2007ut] We will show that since $S$ and $C$ in the above processes combine $\Delta I=0$ and $\Delta I=1$ contributions, separating these contributions requires also using information from two additional asymmetries, which are provided by isospin-reflected decay processes. Two $|\Delta S|=1$ charmless $B$ (or $B_s$) decay processes, related by isospin reflection, $R_I: u\leftrightarrow d,~\bar u\leftrightarrow -\bar d$, can always be expressed in terms of common $\Delta I=0$ and $\Delta I=1$ amplitudes $B$ and $A$ in the form:

\[B+-A\]
$$A(B^+\to f) = B + A\,, \qquad A(B^0\to R_If) = B - A\,.$$

A proof of this relation uses a sign change of Clebsch-Gordan coefficients under $m \leftrightarrow -m$.[@Gronau:2007ut] The description (\[B+-A\]) applies, in particular, to pairs of processes involving all the $B^0$ decay modes listed in Table III, and $B^+$ decay modes where final states are obtained by isospin reflection from corresponding $B^0$ decay modes.
Decay rates for pairs of isospin-reflected $B$ decay processes, and for $\bar B$ decays to the corresponding charge-conjugate final states, are therefore given by (we omit inessential common kinematic factors)

$$\begin{aligned}
\Gamma_+ & = & |B+A|^2\,, \qquad \Gamma_0 = |B-A|^2\,, \\
\Gamma_- & = & |\bar B + \bar A|^2\,, \qquad \Gamma_{\bar 0} = |\bar B - \bar A|^2\,.\end{aligned}$$

The amplitudes $\bar B$ and $\bar A$ are related to $B$ and $A$ by a change in sign of all weak phases, whereas strong phases are left unchanged. For each pair of processes one defines four asymmetries: an isospin-dependent CP-conserving asymmetry,

\[A\_I\]
$$A_I \equiv \frac{\Gamma_+ + \Gamma_- - \Gamma_0 - \Gamma_{\bar 0}}
{\Gamma_+ + \Gamma_- + \Gamma_0 + \Gamma_{\bar 0}}\,,$$

two CP-violating asymmetries for $B^+$ and $B^0$,

\[CP-asym\]
$$A^+_{CP} \equiv \frac{\Gamma_- - \Gamma_+}{\Gamma_- + \Gamma_+}\,, \qquad
A^0_{CP} \equiv -C \equiv \frac{\Gamma_{\bar 0} - \Gamma_0}{\Gamma_{\bar 0} + \Gamma_0}\,,$$

and the time-dependent asymmetry $S$ in $B^0$ decays,

\[eqn:S\]
$$S = \frac{2\,{\rm Im}\,\lambda_{CP}}{1+|\lambda_{CP}|^2}\,, \qquad
\lambda_{CP} \equiv e^{- 2 i \beta}\,\frac{\bar B - \bar A}{B - A}\,.$$

In the Standard Model, the isoscalar amplitude $B$ contains a dominant penguin contribution, $B_P$, with a CKM factor $V^*_{cb}V_{cs}$. The residual isoscalar amplitude,

$$\Delta B \equiv B - B_P\,,$$

and the amplitude $A$ each consist of contributions smaller than $B_P$ by about an order of magnitude.[@Beneke:1999br; @Keum:2000ph; @Bauer:2004tj; @Ciuchini:1997hb; @Gronau:1994rj] These contributions include terms with a much smaller CKM factor $V^*_{ub}V_{us}$, and a higher order electroweak penguin amplitude with CKM factor $V^*_{tb}V_{ts}$. Thus, one expects

\[hierarchy\]
$$|\Delta B| \ll |B_P|\,, \qquad |A| \ll |B_P|\,.$$

Consequently, the asymmetries $A_I$, $A^{+,0}_{CP}$ and $\Delta S$ are expected to be small, of order $2|A|/|B_P|$ and $2|\Delta B|/|B_P|$. In contrast, potentially large contributions to $\Delta B$ and $A$ from NP, comparable to $B_P$, would most likely lead to large asymmetries of order one. An unlikely exception is the case when both $\Delta B/B_P$ and $A/B_P$ are purely imaginary, or almost purely imaginary. This would require very special circumstances such as fine-tuning in specific models.
Excluding cancellations between NP and SM contributions in both CP-conserving and CP-violating asymmetries, tests for the hierarchy (\[hierarchy\]) become tests for the smallness of corresponding potential NP contributions to $B$ and $A$. There exists ample experimental information showing that the asymmetries $A^+_{CP}$ are small in processes related by isospin reflection to the decay modes in Table III. Upper limits on the magnitudes of most asymmetries are at a level of ten or fifteen percent \[e.g., $A^+_{CP}(K^+\phi)=0.034\pm 0.044$, $A^+_{CP}(K^+\eta')=0.031\pm 0.026$\], while others may be as large as twenty or thirty percent \[$A^+_{CP}(K^+\rho^0)=0.31^{+0.11}_{-0.10}$\]. Similar values have been measured for isospin asymmetries $A_I$ \[e.g., $A_I(K^+\phi)=-0.037\pm 0.077$, $A_I(K^+\eta')=-0.001\pm 0.033$, $A_I(K^+\rho^0)=-0.16\pm 0.10$\].[@Gronau:2007ut] Since these two types of asymmetries are of order $2|\Delta B|/|B_P|$ and $2|A|/|B_P|$, this confirms the hierarchy (\[hierarchy\]), which can be assumed to hold also in the presence of NP. We will take by convention the dominant penguin amplitude $B_P$ to have a zero weak phase and a zero strong phase, referring all other strong phases to it. Writing

\[Bconvention\]
$$B = B_P + \Delta B\,, \qquad \bar B = B_P + \Delta\bar B\,,$$

and expanding the four asymmetries to leading order in $\Delta B/B_P$ or $A/B_P$, one has

\[eqn:obs1\]
$$\Delta S = \frac{\cos 2\beta}{B_P}\,{\rm Im}\left(\Delta\bar B - \Delta B + A - \bar A\right)\,,$$

\[eqn:obs2\]
$$A_I = \frac{{\rm Re}(A + \bar A)}{B_P}\,,$$

\[eqn:obs3\]
$$A^+_{CP} = \frac{{\rm Re}(\bar A - A)}{B_P} + \frac{{\rm Re}(\Delta\bar B - \Delta B)}{B_P}\,,$$

\[eqn:obs4\]
$$A^0_{CP} = -\frac{{\rm Re}(\bar A - A)}{B_P} + \frac{{\rm Re}(\Delta\bar B - \Delta B)}{B_P}\,.$$

The four asymmetries provide the following information:

- The $\Delta I = 0$ and $\Delta I = 1$ contributions in CP asymmetries are separated by taking sums and differences,

\[ACP0\]
$$A^{\Delta I=0}_{CP} \equiv \frac{1}{2}(A^+_{CP} + A^0_{CP}) = \frac{{\rm Re}(\Delta\bar B - \Delta B)}{B_P}\,,$$

\[ACP1\]
$$A^{\Delta I=1}_{CP} \equiv \frac{1}{2}(A^+_{CP} - A^0_{CP}) = \frac{{\rm Re}(\bar A - A)}{B_P}\,.$$

- ${\rm Re}\,A/B_P$ and ${\rm Re}\,\bar A/B_P$ may be separated by using information from $A^{\Delta I=1}_{CP}$ and $A_I$.
- $\Delta S$ is governed by an [*imaginary*]{} part of a combination of $\Delta I = 0$ and $\Delta I = 1$ terms which cannot be separated in $B$ decays. Such a separation is possible in $B_s$ decays to pairs of isospin-reflected final states, e.g. $B_s\to K^+K^-, K_SK_S$ or $B_s\to K^{*+}K^{*-}, K^{*0}\bar K^{*0}$, where $2\beta$ in the definition of $\Delta S$ (\[DeltaS\]) is now replaced by the small phase of $B_s$-$\bar B_s$ mixing.

One may take one step further under the assumption that strong phases associated with NP amplitudes are small relative to those of the SM and can be neglected.[@Datta:2004re] This assumption, which must be confronted with data, is reasonable because rescattering from a leading $b\to sc\bar c$ amplitude is likely the main source of strong phases, while rescattering from a smaller $b\to sq\bar q$ NP amplitude is then a second-order effect. In the convention (\[Bconvention\]), where the strong phase of $B_P$ is set equal to zero, $\Delta B$ and $A$ have the same CP-conserving strong phase $\delta$, and involve CP-violating phases $\phi_B$ and $\phi_A$, respectively,

\[DeltaB,A\]
$$\Delta B = |\Delta B|\,e^{i\delta}\,e^{i\phi_B}\,, \qquad
A = |A|\,e^{i\delta}\,e^{i\phi_A}\,.$$

Since the four asymmetries (\[eqn:obs1\])-(\[eqn:obs4\]) are first order in small ratios of amplitudes, one may take $B_P$ in their expressions to be given by the square root of $\Gamma_+$ or $\Gamma_0$, thereby neglecting second-order terms. These four observables can then be shown to determine $|A|, \phi_A$ and $|\Delta B|\sin\phi_B$.[@Gronau:2007ut] The combination $|\Delta B| \cos \phi_B$ adds coherently to $B_P$ and cannot be fixed independently. The amplitudes $\Delta B$ and $A$ consist of process-dependent SM and potential NP contributions.
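A quick numerical cross-check of this construction: build $B$, $\bar B$, $A$, $\bar A$ from $B_P$ and the phase parametrization above with illustrative magnitudes and phases, compute the CP and isospin asymmetries exactly from the four rates, and compare with their leading-order combinations:

```python
import cmath, math

# Illustrative inputs obeying the hierarchy |Delta B|, |A| << B_P.
BP = 1.0                       # penguin amplitude: zero weak and strong phase
b, a = 0.08, 0.06              # |Delta B| and |A|
delta, phiB, phiA = 0.5, 0.9, -0.7

dB  = b*cmath.exp(1j*(delta + phiB)); dBb = b*cmath.exp(1j*(delta - phiB))
A_  = a*cmath.exp(1j*(delta + phiA)); Ab  = a*cmath.exp(1j*(delta - phiA))

B, Bb = BP + dB, BP + dBb
Gp, G0  = abs(B + A_)**2, abs(B - A_)**2     # B+ -> f,  B0 -> R_I f
Gm, G0b = abs(Bb + Ab)**2, abs(Bb - Ab)**2   # charge-conjugate modes

A_I   = (Gp + Gm - G0 - G0b) / (Gp + Gm + G0 + G0b)
Acp_p = (Gm - Gp) / (Gm + Gp)
Acp_0 = (G0b - G0) / (G0b + G0)

# Compare exact values (left) with leading-order expectations (right).
print(A_I,               2*a*math.cos(delta)*math.cos(phiA)/BP)
print((Acp_p + Acp_0)/2, 2*b*math.sin(delta)*math.sin(phiB)/BP)
print((Acp_p - Acp_0)/2, 2*a*math.sin(delta)*math.sin(phiA)/BP)
```

The residual differences are second order in the small ratios, consistent with taking $B_P$ from $\sqrt{\Gamma_+}$ or $\sqrt{\Gamma_0}$ as described above.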
Assuming that the former are calculable, either using methods based on QCD factorization or by fitting within flavor SU(3) these and other $B$ decay rates and asymmetries, the four asymmetries determine the magnitude and CP-violating phase of a $\Delta I=1$ NP amplitude and the imaginary part of a $\Delta I=0$ NP amplitude. In certain cases, e.g., $B\to \phi K$ or $B\to\eta'K_S$, stringent upper bounds on SM contributions to $\Delta B$ and $A$ may suffice if some of the four measured asymmetries are larger than permitted by these bounds. In the pair $B^+\to K^+\pi^0, B^0\to K^0\pi^0$, the four measured asymmetries \[using the predicted value (\[ACPK0pi0\])\] are $A_I=0.087 \pm 0.038, A^{\Delta I=0}_{CP}=-0.047\pm 0.025, A^{\Delta I=1}_{CP}=0.094\pm 0.025, \Delta S=-0.35\pm 0.21$. Some reduction of errors is required for a useful implementation of this method.

[**Conclusion**]{}: There exists ample experimental evidence in pairs of isospin-reflected $b\to s$ penguin-dominated decays that potential NP amplitudes must be small. Assuming that these amplitudes involve negligible strong phases, and assuming that small SM non-penguin contributions are calculable or can be strictly bounded, one may determine the magnitude and CP-violating phase of a NP $\Delta I=1$ amplitude, and the imaginary part of a NP $\Delta I=0$ amplitude, in each pair of isospin-reflected decays.

Null or nearly-null tests
-------------------------

We have not discussed null tests of the CKM framework.[@Gershon:2006mt] Evidence for physics beyond the Standard Model may show up as (small) nonzero asymmetries in processes where they are predicted to be extremely small in the CKM framework. A well-known example is $B^+\to \pi^+\pi^0$, where the CP asymmetry is expected to be a small fraction of a percent including EWP amplitudes.[@Buras:1998rb; @Gronau:1998fn] We have only discussed [*exclusive hadronic*]{} $B$ decays, where QCD calculations involve hadronic uncertainties.
A more robust calculation exists for the direct CP asymmetry in [*inclusive radiative*]{} decays $B\to X_s\gamma$, found to be smaller than one percent.[@Soares:1991te] The current upper limit on this asymmetry is at least an order of magnitude larger.[@Aubert:2004hq] Time-dependent asymmetries in radiative decays $B^0\to K_S\pi^0\gamma$, for a $K_S\pi^0$ invariant mass in the $K^*$ region and for a larger invariant-mass range including this region, are interesting because they test the photon helicity, predicted to be dominantly right-handed in $B^0$ decays and left-handed in $\bar B^0$ decays.[@Atwood:1997zr; @Atwood:2004jj] The asymmetry, suppressed by $m_s/m_b$, is expected to be several percent in the SM, and can be very large in extensions where spin-flip is allowed in $b\to s\gamma$. While dimensional arguments seem to indicate a possibly larger asymmetry in the SM, of order $\Lambda_{\rm QCD}/m_b\sim 10\%$,[@Grinstein:2004uu] calculations using perturbative QCD[@Matsumori:2005ax] and QCD factorization[@Ball:2006cv] find asymmetries of a few percent. The current averaged values, for the $K^*$ region and for a larger invariant-mass range including this region, are $S((K_S\pi^0)_{K^*}\gamma)=-0.28\pm 0.26$ and $S(K_S\pi^0\gamma)= -0.09\pm 0.24$.[@HFAG; @Aubert:2005bu] These measurements must be improved in order to become sensitive to the level predicted in the SM, or to provide evidence for physics beyond the SM.

Summary
=======

The Standard Model has passed numerous tests in the flavor sector with great success, including a variety of measurements of CP asymmetries related to the CKM phases $\beta$ and $\gamma$. Small potential New Physics corrections may occur in $\Delta S=0$ and $|\Delta S|=1$ penguin amplitudes, affecting the extraction of $\gamma$ and modifying CP-violating and isospin-dependent asymmetries in $|\Delta S|=1$ $B^0$ decays and isospin-related $B^+$ decays.
Higher precision than achieved so far is required for claiming evidence for such effects and for sorting out their isospin structure. Similar studies can be performed with $B_s$ mesons produced at hadron colliders and at $e^+e^-$ colliders running at the $\Upsilon(5S)$ resonance. Time-dependence in $B_s\to D^-_sK^+$ and $B_s\to J/\psi\phi$ or $B_s\to J/\psi\eta$ measures $\gamma$ and the small phase of the $B_s$-$\bar B_s$ mixing amplitude.[@Aleksan:1991nh] Comparing time-dependence and angular analysis in $B_s\to J/\psi\phi$ with $b\to s$ penguin-dominated processes including $B_s\to \phi\phi, B_s\to K^{*+}K^{*-}, B_s\to K^{*0}\bar K^{*0}$ provides a methodic search for potential NP effects. Work on $B_s$ decays has just begun at the Tevatron.[@Paulini:2007mf] One is looking forward to first results from the LHC. Acknowledgments {#acknowledgments .unnumbered} =============== I am grateful to numerous collaborators, in particular to Jonathan Rosner whose collaboration continued without interruption for many years. This work was supported in part by the Israel Science Foundation under Grant No. 1052/04 and by the German-Israeli Foundation under Grant No. I-781-55.14/2003. [0]{} J. H. Christenson, J. W. Cronin, V. L. Fitch and R. Turlay, Phys. Rev. Lett.  [**13**]{}, 138 (1964). B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. Lett.  [**87**]{}, 091801 (2001); K. Abe [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett.  [**87**]{}, 091802 (2001). A. B. Carter and A. I. Sanda, Phys. Rev. Lett.  [**45**]{}, 952 (1980); Phys. Rev. D [**23**]{}, 1567 (1981); I. I. Y. Bigi and A. I. Sanda, Nucl. Phys. B [**193**]{}, 85 (1981). M. Kobayashi and T. Maskawa, Prog. Theor. Phys.  [**49**]{}, 652 (1973). I. Dunietz and J. L. Rosner, Phys. Rev. D [**34**]{}, 1404 (1986); I. I. Y. Bigi and A. I. Sanda, Nucl. Phys. B [**281**]{}, 41 (1987). H. Albrecht [*et al.*]{} \[ARGUS Collaboration\], Phys. Lett. B [**192**]{}, 245 (1987); S. L. Wu, Nucl. Phys. Proc. Suppl.  
[**3**]{}, 39 (1988). L. Wolfenstein, Phys. Rev. Lett.  [**51**]{}, 1945 (1983). We use a standard phase convention in which $V_{ub}$ and $V_{td}$ are complex, while all other CKM matrix elements are real to a good approximation. J. Charles [*et al.*]{} \[CKMfitter Collaboration\], eConf [**C060409**]{}, 043 (2006), presenting updated results periodically on the web site [http://www.slac.stanford.edu/xorg/ckmfitter/]{}. M. Bona [*et al.*]{} \[UTfit Collaboration\], JHEP [**0610**]{}, 081 (2006), presenting updated results periodically on the web site [http://www.utfit.org/]{}. V. M. Abazov [*et al.*]{} \[D0 Collaboration\], Phys. Rev. Lett.  [**97**]{}, 021802 (2006); A. Abulencia [*et al.*]{} \[CDF Collaboration\], Phys. Rev. Lett.  [**97**]{}, 242003 (2006). For a recent review see A. D. Dolgov, arXiv:hep-ph/0511213. See e.g. E. Gabrielli, A. Masiero and L. Silvestrini, Phys. Lett. B [**374**]{}, 80 (1996). This review, which is only 27 pages long (the number of Hebrew alphabet letters), includes 120 references; as a Jewish blessing says, “May you live to be 120!" It is too short to include other hundreds or thousands of relevant papers. I apologize to their many authors. M. Gronau, Phys. Rev. Lett.  [**63**]{}, 1451 (1989). H. Boos, T. Mannel and J. Reuter, Phys. Rev. D [**70**]{}, 036006 (2004). M. Ciuchini, M. Pierini and L. Silvestrini, Phys. Rev. Lett.  [**95**]{}, 221804 (2005). H. n. Li and S. Mishima, arXiv:hep-ph/0610120. B. Aubert [*et al.*]{} \[BABAR Collaboration\], arXiv:hep-ex/0607107. K. F. Chen [*et al.*]{} \[Belle Collaboration\], arXiv:hep-ex/0608039. E. Barberio [*et al.*]{} \[Heavy Flavor Averaging Group\], hep-ex/0603003; updates are available at [http://www.slac.stanford.edu/xorg/hfag/]{}. B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. D [**71**]{}, 032005 (2005); R. Itoh [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett.  [**95**]{}, 091601 (2005). P. Krokovny [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett.
[**97**]{}, 081801 (2006). B. Aubert [*et al.*]{} \[BABAR Collaboration\], arXiv:hep-ex/0607105. R. Fleischer and T. Mannel, Phys. Lett. B [**506**]{}, 311 (2001). M. Gronau and D. London., Phys. Lett. B [**253**]{}, 483 (1991). M. Gronau and D. Wyler, Phys. Lett. B [**265**]{}, 172 (1991). D. London and R. D. Peccei, Phys. Lett. B [**223**]{}, 257 (1989). B. Grinstein, Phys. Lett.  B [**229**]{}, 280 (1989). M. Gronau and D. London, Phys. Rev. D [**55**]{}, 2845 (1997). M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Phys. Rev. Lett.  [**83**]{}, 1914 (1999); Nucl. Phys. B [**606**]{}, 245 (2001); Phys. Rev. D [**72**]{}, 098501 (2005). Y. Y. Keum, H. n. Li and A. I. Sanda, Phys. Lett. B [**504**]{}, 6 (2001); Phys. Rev. D [**63**]{}, 054008 (2001). C. W. Bauer, D. Pirjol, I. Z. Rothstein and I. W. Stewart, Phys. Rev. D [**70**]{}, 054015 (2004); C. W. Bauer, D. Pirjol, I. Z. Rothstein and I. W. Stewart, Phys. Rev. D [**72**]{}, 098502 (2005). M. Ciuchini, E. Franco, G. Martinelli and L. Silvestrini, Nucl. Phys. B [**501**]{}, 271 (1997); M. Ciuchini, R. Contino, E. Franco, G. Martinelli and L. Silvestrini, Nucl. Phys. B [**512**]{}, 3 (1998) \[Erratum-ibid. B [**531**]{}, 656 (1998)\]; M. Ciuchini, E. Franco, G. Martinelli, M. Pierini and L. Silvestrini, Phys. Lett. B [**515**]{}, 33 (2001). M. Gronau and D. London, Phys. Rev. Lett.  [**65**]{}, 3381 (1990). A. J. Buras and R. Fleischer, Eur. Phys. J. C [**11**]{}, 93 (1999). M. Gronau, D. Pirjol and T. M. Yan, Phys. Rev. D [**60**]{}, 034021 (1999) \[Erratum-ibid. D [**69**]{}, 119901 (2004)\]. S. Gardner, Phys. Rev. D [**59**]{}, 077502 (1999); S. Gardner, Phys. Rev. D [**72**]{}, 034015 (2005). M. Gronau and J. Zupan, Phys. Rev. D [**71**]{}, 074017 (2005). A. F. Falk, Z. Ligeti, Y. Nir and H. Quinn, Phys. Rev. D [**69**]{}, 011502 (2004). Y. Nir and H. R. Quinn, Phys. Rev. Lett.  [**67**]{}, 541 (1991); H. J. Lipkin, Y. Nir, H. R. Quinn and A. Snyder, Phys. Rev. D [**44**]{}, 1454 (1991); M. 
---
abstract: 'Using the colored quark cluster model, we study the magnetic moments of the octet baryons. We give the values of the magnetic moments of the baryons $p$, $n$, $\Sigma^+$, $\Sigma^-$, $\Xi^0$, and $\Xi^-$. The results also show that the orbital motion has very significant effects on the spin and magnetic moments of those baryons, and that the strange component in the proton is very small.'
author:
- 'Qing-Wu Wang$^{1,3}$, Xi-Guo Lee$^{1,2}$[[^1]]{}, Shi-Zhong Hou$^{1,3}$, and Yuan-Jun Song$^{4}$'
title: Baryon magnetic moments in colored quark cluster model
---

INTRODUCTION
============

It is now clear that the quark sea in the nucleon makes a nontrivial contribution. The EMC effects indicate that only a small amount of the nucleon spin is carried by quarks, and that the strange sea quarks in the proton may contribute negatively to the nucleon spin[@spin; @Bass; @Jaffe]. Recently, both deep inelastic scattering and Drell-Yan experiments have shown that there is a flavor asymmetry of the light quarks in the nucleon sea[@drellyan; @Garvey]. Moreover, parity-violating electron-nucleon experiments indicate that the strangeness electric form factor $ G_E^s (q^2) $ is negative while the strangeness magnetic form factor $ G_M^s (q^2)$ is positive[@Spayde; @Aniol; @Mass; @Armstrong]. These findings mean that the constituent valence quark model (CVQM) cannot fully explain the complicated quark structure of baryons. Beyond the CVQM, more elaborate models of the substructure of baryons have been proposed, such as the quark-gluon hybrid model, the diquark-quark model [@diquark], various meson cloud models[@mesoncloud; @brodsky; @klambda; @Chen], and so on.
To investigate the possible strange component in the nucleon, there are many theoretical approaches, such as the lattice QCD calculation [@lattice], chiral perturbation theory and dispersion relations[@chiral], the GDP sum rule[@GDP], the $K^+ \Lambda$ meson cloud model[@klambda], and various quark models[@quark] correlating the octet baryon magnetic moments by assuming SU(3) flavor symmetry[@su3; @karl]. Most of the theoretical analyses and calculations have given a negative sign for the strange magnetic form factor of the proton. Ref.[@Hong] obtained a positive result; however, this positive contribution is believed to be automatically included in a relativistic calculation[@Chen]. More recently, a new colored quark cluster model (CQCM) has been proposed. In this model, the $qqqq\bar q$ fluctuation tends to arrange itself into energetically more favorable states. Among the strange components, there is a unique $uuds\bar s$ configuration which can give the right signs of the strange magnetic, electric and axial form factors[@zou; @riska]. In this configuration, the $\bar s$ is in the ground state and the $uuds$ subsystem in the $P$-state. This configuration has the lowest energy of all configurations under the assumption that the hyperfine interaction between the quarks is spin dependent[@helminen]. This configuration may also give an explanation for the excess of $\bar d$ over $\bar u$[@Garvey]. Besides, the five-quark components of this configuration in the $\Delta(1232)$ give a significant contribution to the $\Delta(1232)\to N\pi$ decay [@deltadecay]. The purpose of this paper is to study the sea quark contributions to the baryon magnetic moments. In Sec. II, we obtain a general formula for the magnetic moments of the octet baryons using the CQCM. In Sec. III we give our numerical results by fitting to the experimental values of the octet baryon magnetic moments.
The results show that the theoretical values of those magnetic moments from the CQCM are better than those from the CVQM, and that the strange component in the nucleon is small or zero. In Sec. IV, we compare our results with existing experiments and other theoretical analyses.

THE BARYON MAGNETIC MOMENT
==========================

We now discuss the magnetic moments of the octet baryons. In the CQCM of Ref.[@zou], positive parity demands that the four-quark subsystem be orbitally excited to the P-shell with spatial symmetry $[31]_X $. In order to give a color singlet state, the four-quark subsystem has the color state $[211]_C $, since the anti-quark is in the $[11]_C$ representation. Ref.[@zou] also indicates that the configuration $[4]_{FS} [22]_F [22]_S $ stands out, since its energy is some 140-200 MeV lower than that of any other configuration if the hyperfine interaction between the quarks is described by the flavor- and spin-dependent hyperfine interaction $-C\Sigma_{i<j}\vec \lambda {}_i \cdot \vec \lambda {}_j\vec \sigma {}_i \cdot \vec \sigma _j $, where $C$ is a constant with the value $\sim $ 20-30 MeV. This hyperfine interaction reproduces the empirical ordering of the baryon resonances[@helminen]. The total wave function of the 5-q state with spin +1/2 is written as $$\left| {B, + \frac{1}{2}} \right\rangle = A_5 \sum\limits_{abcde} {\sum\limits_{Ms_z^\prime ms_z } {C_{JM,\frac{1}{2}s_z^\prime }^{\frac{1}{2}\frac{1}{2}} C_{1m,Ss_z }^{JM} C_{[31]_a [211]_a }^{[1^4 ]} } } C_{[31]_b [FS]_c }^{[31]_a } C_{[F]_d [S]_e }^{[FS]_c }$$ $$\times[31]_{X,m} (b)[F]_d [S]_{s_z } (e)[211]_C (a)\bar \chi _{s_z^\prime } \varphi (\{ r_i \} ). \label{eq:wave}$$ Here we use Weyl tableaux to represent the flavor, spin and color state wave functions[@chenjinquan]. The capital $C$ with superscripts and subscripts denotes the Clebsch-Gordan (CG) coefficients.
The $\bar \chi _ {s_z^\prime}$ is the spin state of the anti-quark and $\varphi (\{ r_i \} )$ a symmetric function of the coordinates of the 5-q system. $A_5$ denotes the amplitude of the 5-q component. For the mixed flavor symmetry representation $[22]_F$ of the $uuds$ system, two independent flavor wave functions are written as $$\left| {[22]_{F1} } \right\rangle = \frac{1}{{\sqrt {24} }}[2\left| {uuds} \right\rangle + 2\left| {uusd} \right\rangle + 2\left| {dsuu} \right\rangle + 2\left| {sduu} \right\rangle$$ $$- \left| {duus} \right\rangle - \left| {udus} \right\rangle - \left| {sudu} \right\rangle - \left| {usdu} \right\rangle$$ $$- \left| {suud} \right\rangle - \left| {dusu} \right\rangle - \left| {usud} \right\rangle - \left| {udsu} \right\rangle ],$$ $$\left| {[22]_{F2} } \right\rangle = \frac{1}{{\sqrt 8 }}[\left| {udus} \right\rangle + \left| {sudu} \right\rangle + \left| {dusu} \right\rangle + \left| {usud} \right\rangle$$ $$- \left| {duus} \right\rangle - \left| {usdu} \right\rangle - \left| {suud} \right\rangle - \left| {udsu} \right\rangle ].$$ And the two spin functions of $[22]_S$ can be obtained by the substitutions $u \longleftrightarrow \uparrow $ and $d,s \longleftrightarrow \downarrow$ with a proper normalization factor. Using $0$ and $1$ to denote the ground-state and $P$-state wave functions for the constituent quarks, the spatial wave functions of $[31]_X$ are $$\left| {[31]_{X1} } \right\rangle = \frac{1}{{\sqrt {12} }}[3\left| {0001} \right\rangle - \left| {0010} \right\rangle - \left| {0100} \right\rangle - \left| {1000} \right\rangle ],$$ $$\left| {[31]_{X2} } \right\rangle = \frac{1}{{\sqrt 6 }}[2\left| {0010} \right\rangle - \left| {0100} \right\rangle - \left| {1000} \right\rangle ],$$ $$\left| {[31]_{X3} } \right\rangle = \frac{1}{{\sqrt 2 }}[\left| {0100} \right\rangle - \left| {1000} \right\rangle ].$$ From Ref.[@zou] we know that the $[4]_{FS} [22]_F [22]_S $ configuration does not allow $uudu\bar u$ in the proton. 
This is consistent with the observed excess of $\bar d$ over $\bar u$ [@Garvey]. The proton may also have an admixture with the flavor-spin symmetry $[4]_{FS} [31]_F [31]_S $, in which case no suppression exists. However, it is energetically less favorable. The empirical evidence for the large flavor asymmetry of the $q \bar q$ components[@Garvey] suggests that this configuration $[4]_{FS} [31]_F [31]_S $ should have a smaller probability than the favored one, $[4]_{FS} [22]_F [22]_S $. We do not consider the meson cloud contribution here, and the quark wave function of the proton may now be expressed as $$\left| p \right\rangle = A_3 \left| {uud} \right\rangle + A_{5d} \left| {uudd\bar d} \right\rangle + A_{5s} \left| {uuds\bar s} \right\rangle$$ with the normalization condition $\left| {A_3 } \right|^2 + \left| {A_{5d} } \right|^2 + \left| {A_{5s} } \right|^2 = 1$. The nonperturbative sea quark effects have also been studied in baryons other than nucleons[@susumu]. Following the same considerations as for the proton, we can write the wave functions of the baryons $p,n,\Sigma^+,\Sigma^-,\Xi^0$, and $\Xi^-$ in the general form $$\left|B \right\rangle = A_3 \left| {\alpha\alpha\beta} \right\rangle + A_{5\beta} \left| {\alpha\alpha\beta\beta\bar \beta} \right\rangle + A_{5\gamma} \left| {\alpha\alpha\beta\gamma\bar \gamma} \right\rangle,\label{eq:waveB}$$ where $\alpha,\beta$, and $\gamma$ can be taken as the $d, u$, and $s$ quarks for the neutron, $u,s$, and $d$ for $\Sigma^+$, etc. The 5-q components contain only the $[4]_{FS} [22]_F [22]_S $ configuration here, and we use this representation throughout this paper. Because the $\Sigma^0$ and $\Lambda^0$ have three different valence quarks, we do not consider them here.
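As a quick numerical aside (our own sketch, not part of the paper), the two $[22]_F$ flavor wave functions given above can be checked for normalization and mutual orthogonality by treating each quark ordering of the $uuds$ subsystem as an orthonormal basis ket:

```python
from math import sqrt

# Flavor states of the uuds subsystem as {ordering: coefficient},
# with coefficients copied from the [22]_F1 and [22]_F2 states above.
F1 = {k: 2 / sqrt(24) for k in ["uuds", "uusd", "dsuu", "sduu"]}
F1.update({k: -1 / sqrt(24) for k in [
    "duus", "udus", "sudu", "usdu", "suud", "dusu", "usud", "udsu"]})

F2 = {k: 1 / sqrt(8) for k in ["udus", "sudu", "dusu", "usud"]}
F2.update({k: -1 / sqrt(8) for k in ["duus", "usdu", "suud", "udsu"]})

def braket(a, b):
    # Inner product in the orthonormal basis of quark orderings.
    return sum(c * b.get(k, 0.0) for k, c in a.items())

# Both states are normalized and mutually orthogonal.
print(braket(F1, F1), braket(F2, F2), braket(F1, F2))
```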
In the non-relativistic quark model, the magnetic moment contribution of a quark to the proton magnetic moment is defined as the expectation value of the operator $$\hat \mu _i = \frac{{\hat Q_i }}{{2m_i }}(\hat l_i + \hat \sigma _i ), \qquad i = u,d,s.$$ Here $\hat Q_i $ is the electric charge operator, and $m_i $ the constituent quark mass. In the naive quark model the proton consists of two $u$ quarks and one $d$ quark, all in a relative S-wave. Using the $ SU_6^{\sigma f} \supset SU_2^\sigma \times SU_3^f $ symmetric wave function as an approximation, the magnetic moment contribution of the $uud$ component in units of the nuclear magneton (n.m.) is $$\left\langle {uud} \right|\sum\limits_i {\hat \mu _i } \left| {uud} \right\rangle={4 \over 3}{e_u}{{m_p } \over {m_u }} - {1 \over 3}{e_d}{{m_p } \over {m_d }},$$ where $e_u$ and $e_d$ denote the electric charges of the $u$ quark and $d$ quark respectively, and in the following $e_q$ ($e_{\bar q}$) denotes the corresponding quark (anti-quark) electric charge. $m_u$ and $m_d$ are the $u$ and $d$ quark masses, while $m_p$ is the mass of the proton. For a baryon $B = \alpha \alpha \beta $[@Brekke], $$\left\langle {\alpha\alpha\beta} \right|\sum\limits_i {\hat \mu _i } \left| {\alpha\alpha\beta} \right\rangle={4 \over 3}{e_\alpha}{{m_p } \over {m_{\alpha} }} - {1 \over 3}{e_\beta}{{m_p } \over {m_{\beta} }}.$$ Within the 5-q components, the 4-q subsystem gives no spin contribution to the magnetic moment because the symmetry $[22]_S $ gives spin zero. But every quark in this subsystem has a probability of being excited to the P-state, which gives an orbital magnetic moment $$\mu ^{(l)}_q = \left\langle {qqqq\bar q, + \frac{1}{2}} \right|\frac{{\hat Q_q }}{{2m_q }}\hat l\left| {qqqq\bar q, + \frac{1}{2}} \right\rangle={ e_q \over 6}{{m_p } \over {m_q }}P_{5q}.$$ Here $P_{5q} = \left| {A_{5q} } \right|^2 $ is the probability of the 5-q component.
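To make the $\alpha\alpha\beta$ formula concrete, the following sketch (ours, not from the paper) evaluates it for all six baryons, using the CVQM-fit masses quoted later in the text ($m_u=m_d=344.03$ MeV, $m_s=544.76$ MeV) and $m_p = 938.27$ MeV:

```python
# Naive 3-quark (CVQM) moments: mu_B = (4/3) e_a m_p/m_a - (1/3) e_b m_p/m_b,
# in units of the nuclear magneton; all masses in MeV.
M_P = 938.27
CHARGE = {"u": 2 / 3, "d": -1 / 3, "s": -1 / 3}
MASS = {"u": 344.03, "d": 344.03, "s": 544.76}   # CVQM-fit masses from the text

def mu_cvqm(alpha, beta):
    """Moment of the baryon alpha-alpha-beta in the naive quark model."""
    return (4 / 3) * CHARGE[alpha] * M_P / MASS[alpha] \
         - (1 / 3) * CHARGE[beta] * M_P / MASS[beta]

for name, (a, b) in {"p": ("u", "d"), "n": ("d", "u"),
                     "Sigma+": ("u", "s"), "Sigma-": ("d", "s"),
                     "Xi0": ("s", "u"), "Xi-": ("s", "d")}.items():
    print(f"{name:7s} {mu_cvqm(a, b):+.3f}")
```

These values reproduce the CVQM row of Table \[table1\].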
The anti-quark in its ground state gives a magnetic contribution $$\mu _{\bar q} = \left\langle {qqqq\bar q, + \frac{1}{2}} \right|\frac{{\hat Q_q }}{{2m_{\bar q} }}\hat \sigma \left| {qqqq\bar q, + \frac{1}{2}} \right\rangle=-{e_{\bar q} \over 3}{{m_p } \over {m_{\bar q} }}P_{5q}.$$ Besides these diagonal contributions from quark spin and orbital motion, the transitions, or non-diagonal matrix elements, between the $qqqq\bar q$ and $qqq$ components may contribute as well. However, these non-diagonal contributions depend both on the explicit wave function model and on the model for the $q\bar q \to \gamma$ vertices. The situation becomes even more complicated when one takes into account the confining interaction between the quarks, which leads to bound-state wave functions[@riska]. With many unconstrained parameters, the results would be very ambiguous. So, for simplicity, we neglect the contributions of these transition matrix elements in this paper. Then, by adding all the spin and orbital angular momentum contributions to the magnetic moment, the total magnetic moment of the polarized proton is $$\mu _p = P_{3}({4 \over 3}{e_u}{{m_p } \over {m_u }} - {1 \over 3}{e_d}{{m_p } \over {m_d }})+ P_{5d} (\sum\limits_{q = u,u,d,d} {e_q \over 6}{{m_p } \over {m_q }} - {e_{\bar d} \over 3}{{m_p } \over {m_{\bar d }}}) + P_{5s} (\sum\limits_{q = u,u,d,s} {{e_q \over 6}{{m_p } \over {m_q }}} - {e_{\bar s} \over 3}{{m_p } \over {m_{\bar s }}}).
\label{eq:moment}$$ From Eq. (\[eq:moment\]), we obtain the general form of the magnetic moment for the six baryons as $$\mu _B = P_{3}({4 \over 3}{e_\alpha}{{m_p } \over {m_{\alpha} }} - {1 \over 3}{e_\beta}{{m_p } \over {m_{\beta} }}) + P_{5\beta} (\sum\limits_{q = \alpha,\alpha,\beta,\beta} {{e_q \over 6}{{m_p } \over {m_q }}} - {e_{\bar \beta} \over 3}{{m_p } \over {m_{\bar \beta }}}) + P_{5\gamma} (\sum\limits_{q = \alpha,\alpha,\beta,\gamma} {{e_q \over 6}{{m_p } \over {m_q }}} - {e_{\bar \gamma} \over 3}{{m_p } \over {m_{\bar \gamma }}}), \label{eq:moment2}$$ with the normalization condition $P_{3}+P_{5 \beta}+P_{5\gamma}=1$.

THE GLOBAL FIT AND RESULTS
==========================

Eq. (\[eq:moment2\]) shows that the baryon magnetic moments depend on the quark masses and on the probabilities of the 5-q components. To reduce the number of parameters, we assume that $P_3$ is the same for the six baryons, i.e., $P^{p}_3=P^{n}_3=P^{\Sigma^+}_3=P^{\Sigma^-}_3=P^{\Xi^0}_3=P^{\Xi^-}_3$, and we make the same assumption for $P_{5\beta}$ and $P_{5\gamma}$. As in Ref.[@karl], we use a fit to determine these parameters. To reduce the number of parameters further, we take $m_u=m_d$ and $m_q=m_{\bar q}$. As a result, the six baryon magnetic moments from Eq. (\[eq:moment2\]) contain only four parameters, i.e., $m_u$, $m_s$, $P_{5\beta}$, and $P_{5\gamma}$. To obtain concrete values of these parameters, we use the relatively simple but commonly used method of minimizing the following function[@karl]: $$\chi ^2 = \sum\limits_{k = 1}^m {\frac{{(T_k - E_k )^2 }}{{\sigma _k^2 }}}, \label{eq:chi}$$ where $E_k $ is the measured value, and $T_k $ the corresponding theoretical value. $m$, the number of baryons, is six here. The error $\sigma^2 _k $ is taken to be the sum in quadrature of a theoretical error and the experimental error, as in Ref.[@karl].
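As an illustrative check (our own sketch, not from the paper), Eq. (\[eq:moment2\]) can be evaluated numerically; with the best-fit parameters quoted below ($P_{5\beta}=0.12$, $P_{5\gamma}=0$, $m_u=m_d=300.84$ MeV, $m_s=463.74$ MeV) it reproduces the CQCM proton and neutron moments of Table \[table1\]:

```python
# Eq. (eq:moment2): 3-q term plus the orbital (e_q/6) and antiquark (-e_qbar/3)
# pieces of the two 5-q components; masses in MeV, moments in n.m.
M_P = 938.27
CHARGE = {"u": 2 / 3, "d": -1 / 3, "s": -1 / 3}
MASS = {"u": 300.84, "d": 300.84, "s": 463.74}   # CQCM-fit masses from the text

def mu_cqcm(alpha, beta, gamma, p5b, p5g):
    """Moment of the baryon alpha-alpha-beta with its two 5-q admixtures."""
    p3 = 1.0 - p5b - p5g                 # normalization P3 + P5beta + P5gamma = 1
    mu3 = (4 / 3) * CHARGE[alpha] * M_P / MASS[alpha] \
        - (1 / 3) * CHARGE[beta] * M_P / MASS[beta]

    def five_q(extra):
        quarks = [alpha, alpha, beta, extra]
        orbital = sum(CHARGE[q] / 6 * M_P / MASS[q] for q in quarks)
        # antiquark spin term -(e_qbar/3) m_p/m_qbar, with e_qbar = -e_q
        antiq = CHARGE[extra] / 3 * M_P / MASS[extra]
        return orbital + antiq

    return p3 * mu3 + p5b * five_q(beta) + p5g * five_q(gamma)

print(mu_cqcm("u", "d", "s", 0.12, 0.0))   # proton
print(mu_cqcm("d", "u", "s", 0.12, 0.0))   # neutron
```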
The theoretical error comes from a comparison of the sum rule $$\mu (n) - \mu (p) + \mu (\Sigma ^ + ) - \mu (\Xi ^0 ) + \mu (\Xi ^ - ) - \mu (\Sigma ^ - ) = 0$$ with experimental data. The left-hand side of this equation is actually $ - 0.49 \pm 0.03\ n.m$. If the errors are shared equally among the six baryons, the theoretical error may be taken as $0.49/6 \sim 0.08 \ n.m$. In Table \[table1\], we list the experimental data from the PDG[@pdg]. We first examine the value of $P_{5 \gamma}$ given in Ref.[@riska]. In that work, an analysis of the preferred configuration $[4]_{FS} [22]_F [22]_S $ shows that the qualitative features of the empirical strangeness form factors may be described with a $\sim 15\% $ admixture of $uuds\bar s$ in the proton. Fixing $P_{5\gamma}$ at this value and taking only $P_{5\beta}$ as a variable, the minimum of $\chi ^2 $ occurs at $P_{5\beta}=0.13$, i.e., the probability of $uudd\bar d$ in the proton is 0.13. We denote the minimum of $\chi ^2 $ by $\chi _m ^2 $. The fitting result $\chi_m ^2 = 18.07$ is no better than that from the CVQM. The fitting results of the CVQM $(P_{5\beta}=P_{5\gamma}=0)$ are shown in Table \[table1\], with quark masses $m_u=344.03$ MeV and $m_s=544.76$ MeV. Then, we take both probabilities of the 5-q components, $P_{5\beta}$ and $P_{5 \gamma}$, as variables to minimize Eq. (\[eq:chi\]). We find that there are two regions where the minimum can occur: with either the 3-q or the 5-q components dominant in the baryons. As we know, however, the CVQM is successful in low-lying baryon spectroscopy and magnetic moments, so the probabilities of the non-perturbative sea quarks should not be very large. We therefore consider only the case in which the probabilities of the 5-q components are small. The mathematical minimum is $\chi _m ^2 = 13.36$, with $P_{5\gamma}=0$ and $P_{5\beta}=0.08$, which is inconsistent with the observed excess of $\bar d$ over $\bar u$ in the proton, $\bar d - \bar u = 0.12$ [@Garvey].
Considering this excess, the best fitting result is $\chi_m ^2 = 14.57$, with $P_{5\gamma}=0$ and $P_{5\beta}=0.12$. In this case, we obtain the magnetic moments of the baryons labeled CQCM in Table \[table1\]. The corresponding quark masses are $ m_u=300.84$ MeV and $m_s=463.74$ MeV. The $\chi_m ^2 $ as a function of $P_{5\beta}$ with $P_{5\gamma}=0$ is presented in Fig. \[fig1\]. We find that, if $P_{5\gamma} $ is nonzero, keeping $\chi_m ^2 $ below the value deduced from the naive quark model requires $P_{5\gamma} $ to be less than $ 10\% $. This means that the probability of the strange component in the proton cannot be more than $ 10\% $. In Fig. \[fig2\], we plot $\chi_m^2$ as a function of $P_{5\gamma} $ at some fixed values of $P_{5\beta} $. From both Fig. \[fig1\] and Fig. \[fig2\] we can see that the upper limit of the probability $P_{5\beta} $ is about $14\%$, i.e., the probability of the $uudd\bar d$ component in the proton may not be more than $14\%$. In Table \[table1\], the relative errors are computed as $(E-T)/E$, where $E$ is the experimental magnetic moment value, and $T$ the corresponding theoretical value. From Table \[table1\] we can see that, except for the neutron, all errors given by the CQCM are less than $9\%$, which is much better than those given by the CVQM. Besides, adding these errors in quadrature we get $\sigma_{CQCM}^2=0.046$, much smaller than $\sigma_{CVQM}^2=0.12$.
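The quality of the fit can be cross-checked in a few lines (a sketch of Eq. (\[eq:chi\]), using the rounded entries of Table \[table1\]; the small difference from the quoted $\chi_m^2=14.57$ comes from rounding):

```python
# Experimental moments, total errors (sigma_k), and CQCM fit values for
# (p, n, Sigma+, Sigma-, Xi0, Xi-), copied from Table [table1].
EXP   = [ 2.793, -1.913,  2.458, -1.160, -1.250, -0.651]
SIGMA = [ 0.08,   0.08,   0.0806, 0.0838, 0.0812, 0.08 ]
CQCM  = [ 2.745, -1.705,  2.668, -1.118, -1.261, -0.597]

chi2 = sum((t - e) ** 2 / s ** 2 for t, e, s in zip(CQCM, EXP, SIGMA))
print(f"chi^2 = {chi2:.2f}")      # ~14.6 from the rounded table values

# The sum rule used to set the theoretical error:
# mu(n) - mu(p) + mu(Sigma+) - mu(Xi0) + mu(Xi-) - mu(Sigma-)
mu = dict(zip(["p", "n", "S+", "S-", "X0", "X-"], EXP))
lhs = mu["n"] - mu["p"] + mu["S+"] - mu["X0"] + mu["X-"] - mu["S-"]
print(f"sum rule = {lhs:.2f}")    # -0.49, as quoted in the text
```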
                 p         n         $\Sigma^+$   $\Sigma^-$   $\Xi^0$   $\Xi^-$   
  ------------- --------- --------- ------------ ------------ --------- --------- --------------
  exp            2.793     -1.913    2.458        -1.160       -1.250    -0.651    -
  error          0         0         0.01         0.025        0.014     0.0025    -
  total error    0.08      0.08      0.0806       0.0838       0.0812    0.08      -
  CVQM           2.727     -1.818    2.616        -1.021       -1.372    -0.462    $\chi _m ^2$
  error          -0.0235   -0.0794   0.0641       -0.120       0.0972    -0.290    16.46
  CQCM           2.745     -1.705    2.668        -1.118       -1.261    -0.597    $\chi _m ^2$
  error          -0.0173   -0.174    0.0849       -0.0361      0.00950   -0.0837   14.574

  : Magnetic moments (in units of the nuclear magneton) of the six baryons. The total errors are given by adding to the experimental error a theoretical error of 0.08 in quadrature. The theoretical results including and not including the sea quark contributions are listed in the rows labeled CQCM and CVQM, respectively. []{data-label="table1"}

The early EMC experimental results on the quark spin contributions to the proton are very rough. The missing spin in those experiments may come from gluon polarization or from the orbital angular momentum of quarks and gluons, but how large these are is still an open question. In the non-relativistic quark model, there is no room for gluons. In the CQCM with only $12\%$ of $uudd\bar d$ in the proton, the contributions from quark spin and orbital angular momentum are $\Delta u={\textstyle{4 \over 3}}P_3=1.173$, $\Delta d=-{\textstyle{1 \over 3}}$, and $\Delta l={\textstyle{4 \over 3}}P_{5d}=0.16$. Their sum is equal to 1, which is guaranteed by the wave function, Eq. (\[eq:wave\]), to give the total spin of the proton. We see that the orbital angular momentum contributes substantially.

DISCUSSIONS AND CONCLUSION
==========================

Finding the effective degrees of freedom is the first step in studying baryon structure. The above numerical results indicate that the non-perturbative effects of the strangeness component in the nucleon are small.
The strangeness content of the nucleon is purely a sea quark effect and therefore is a clean and important window into the nucleon's internal structure and dynamics. The magnetic moment contribution of the strange quark to the nucleon is equal to the measured strange form factor at $Q^2=0$. The empirical value of the strange form factor, $ G_M^s (Q^2 = 0.1) = 0.37 \pm 0.20 \pm 0.26 \pm 0.15$, cannot give compelling evidence for nonzero strange quark effects in the proton owing to the wide uncertainties[@Spayde]. The experiment at Mainz, with the result $G_E^s + 0.106G_M^s = 0.071 \pm 0.036$ at $Q^2 = 0.108\ (GeV/c)^2$ [@Mass], and the G0 experiment at Jefferson Lab [@Armstrong] may indicate nonzero $ G_E^s$ and $ G_M^s$. Because these experiments are all carried out at particular values of $Q^2$, the results are still ambiguous when extrapolated to $Q^2=0$. For the strange electric form factor of the proton, the recent empirical value is $ G_E^s(Q^2=0.1) = - 0.038 \pm 0.042_{(stat)} \pm 0.010_{(syst)}$, which is consistent with zero[@eform]. A recent lattice result shows that the strange electric form factor is $ G_E^s(Q^2=0.1) = - 0.009 \pm 0.005 \pm 0.003\pm 0.027$ [@Leinweber]. These results may indicate that the pairs of strange and anti-strange quarks popping out of the sea cancel each other so effectively that they have almost zero contributions to the proton's magnetic moment, charge, or mass. This agrees with our numerical results. A very recent analysis of the complete world set of parity-violating electron scattering data also finds that the strange form factors are consistent with zero. Furthermore, recent experiments show that the strangeness contribution to the proton spin is very small [@smallspin]. Besides, we note that there is another difference between the CQCM and the $K^+\Lambda$ meson cloud model, apart from the sign of the strangeness magnetic moment of the proton.
In the $ K^+\Lambda $ model, the $s$ quark, normalized to the probability $P_{K^ + \Lambda }$ of the $K^ + \Lambda$ configuration, yields a fractional contribution $\Delta S_s= - {\textstyle{1 \over 3}}P_{K^ + \Lambda } $ to the proton spin. Although the $K^+$, composed of $u\bar s$ quarks in the valence quark model, is unpolarized, other mechanisms may yield an $\bar s$ quark polarized parallel to the initial proton spin[@burkardt]. These results contradict the colored quark cluster model, in which only the $\bar s$ quark gives a negative contribution to the proton spin, while the $s$ quark is unpolarized because the spin symmetry of the subsystem is $[22]_S$. Unfortunately, it is currently not possible to measure the polarization of a single quark experimentally. Deeper theoretical analyses are needed. Finally, the observed non-perturbative quark sea in the nucleon leads us to reexamine the low-lying baryon magnetic moments. We have deduced these magnetic moments in the CQCM and have discussed the possible strange component in the nucleon. We see that the origin of the anomalous moments of quarks discussed in many works [@anomalous] may be the sea quark contributions, that the probability of the strange component $uuds\bar s$ in the proton is less than $10\%$ and the probability of $uudd\bar d$ less than $14\%$, and that the orbital motion has very significant effects on the spin and magnetic moments in the non-relativistic CQCM. Whether the strange quark in the proton is polarized, and with what sign, is still under debate. Our numerical results favor no polarization of the strange sea quark in the proton. We hope future experiments will give us more clues about the sea quarks in these baryons. We are grateful to Prof. B. S. Zou for useful discussions. This work was supported by the Chinese Academy of Sciences Knowledge Innovation Project (KJCX2-SW-No16; KJCX2-SW-No2) and the National Natural Science Foundation of China (10435080; 10575123). [1]{} J.
Ashman et al., (EMC Collaboration), *Phys. Lett. B* **206**, 364 (1988); J. Ashman et al., (EMC Collaboration), *Nucl. Phys. B* **328**, 1 (1989). S. D. Bass, *Rev. Mod. Phys.* **77**, 1257 (2005). R. L. Jaffe, *arXiv:hep-ph/0102281*, v1 (2001). M. Arneodo et al., (NMC Collaboration), *Phys. Rev. D* **50**, R1 (1994); D. M. Alde et al., (E772 Collaboration), *Phys. Rev. Lett.* **64**, 2479 (1990); A. Baldit et al., (NA51 Collaboration), *Phys. Lett. B* **332**, 244 (1994); E. A. Hawker et al., (E866 Collaboration), *Phys. Rev. Lett.* **80**, 3715 (1998). G. T. Garvey and J. C. Peng, *Prog. Part. Nucl. Phys.* **47**, 203 (2001). D. T. Spayde et al., *Phys. Lett. B* **583**, 79 (2004). K. Aniol et al., *Phys. Rev. C* **69**, 065501 (2004). F. Maas et al., *Phys. Rev. Lett.* **94**, 152001 (2005). D. S. Armstrong et al., *Phys. Rev. Lett.* **95**, 092001 (2005). M. Cristoforetti, P. Faccioli, G. Ripka, and M. Traini, *Phys. Rev. D* **71**, 114010 (2005); V. Keiner, *Phys. Rev. C* **54**, 3232 (1996); M. Nzar and P. Hoodbhoy, *Phys. Rev. D* **51**, 32 (1995). A. W. Thomas, *Phys. Lett. B* **126**, 97 (1983); S. Kumano, *Phys. Rep.* **303**, 183 (1998); H. Holtmann, A. Szczurek, and J. Speth, *Nucl. Phys. A* **596**, 631 (1996); N. N. Nikolaev, W. Schaefer, A. Szczurek, and J. Speth, *Phys. Rev. D* **60**, 014004 (1999); A. W. Thomas, W. Melnitchouk, and F. M. Steffens, *Phys. Rev. Lett.* **85**, 2829 (2000). S. J. Brodsky and B. Q. Ma, *Phys. Lett. B* **381**, 317 (1996). M. Musolf and M. Burkhardt, *Z. Phys. C* **61**, 433 (1994); H. Forkel, F. S. Navarra, and M. Nielsen, *Phys. Rev. C* **61**, 055206 (2000); L. Hannelius and D. O. Riska, *Phys. Rev. C* **62**, 045204 (2000). X. S. Chen, R. G. E. Timmermans, W. M. Sun, H. S. Zong, and F. Wang, *Phys. Rev. C* **70**, 015201 (2004). R. Lewis, W. Wilcox, and R. M. Woloshyn, *Phys. Rev. D* **67**, 013003 (2003); D. B. Leinweber, *Phys. Rev. D* **69**, 014005 (2004); S. J. Dong, K. F. Liu, and A. G. Williams, *Phys. Rev.
D* **58**, 074504 (1998); H. W. Hammer, S. J. Puglia, M. J. Ramsey-Musolf, and S. L. Zhu, *Phys. Lett. B* **562**, 208 (2003). H. W. Hammer, S. J. Puglia, M. J. Ramsey-Musolf, and S. L. Zhu, *Phys. Lett. B* **562**, 208 (2003); H. W. Hammer, U.-G. Meissner, and D. Drechsel, *Phys. Lett. B* **367**, 323 (1996). S. B. Gerasimov, *arXiv:hep-ph/0208049*, v1 (2002); *Phys. Lett. B* **357**, 666 (1995). V. E. Lyubovitskij, P. Wang, T. Gutsche, and A. Faessler, *Phys. Rev. C* **66**, 055204 (2002); L. Hannelius, D. O. Riska, and L. Ya. Glozman, *Nucl. Phys. A* **665**, 353 (2000); P. Geiger and N. Isgur, *Phys. Rev. D* **55**, 299 (1997); B. Q. Ma, *Phys. Lett. B* **408**, 387 (1997). T. P. Cheng and L. F. Li, *Phys. Lett. B* **366**, 365 (1996); R. Nag, *Prog. Theor. Phys.* **91**, 409 (1994). G. Karl, *Phys. Rev. D* **45**, 247 (1992). S. T. Hong and Y. J. Park, *Phys. Rep.* **358**, 143 (2002); S. T. Hong, B. Y. Park, and D. P. Min, *Phys. Lett. B* **414**, 229 (1997). B. S. Zou and D. O. Riska, *Phys. Rev. Lett.* **95**, 072001 (2005); C. S. An, D. O. Riska, and B. S. Zou, *Phys. Rev. C* **73**, 035207 (2006). D. O. Riska and B. S. Zou, *Phys. Lett. B* **636**, 265 (2006). C. Helminen and D. O. Riska, *Nucl. Phys. A* **699**, 624 (2002). Q. B. Li and D. O. Riska, *Phys. Rev. C* **73**, 035201 (2006). J. Q. Chen, *Group Representation Theory for Physicists* (World Scientific, Singapore, 1989). K. Susumu, *Nucl. Phys. B* **526**, 445 (1998). L. Brekke and J. L. Rosner, *Comm. Nucl. Part. Phys.* **18**, 83 (1988); D. H. Perkins, *Introduction to High Energy Physics* (Addison-Wesley, Reading, MA, 1987). W.-M. Yao et al., (Particle Data Group), *J. Phys. G* **33**, 1 (2006), and relevant references therein. K. A. Aniol et al., *Phys. Rev. Lett.* **96**, 022003 (2006). D. B. Leinweber et al., *Phys. Rev. Lett.* **97**, 022001 (2006). A. Airapetian et al., (HERMES Collaboration), *Phys. Rev. D* **71**, 012003 (2005); B. W. Filippone and X. D. Ji, *Adv. Nucl.
Phys.* **26**, 1 (2001); D. de Florian, G. A. Navarro, and R. Sassot, *Phys. Rev. D* **71**, 094018 (2005). M. Burkardt and B. J. Warr, *Phys. Rev. D* **45**, 958 (1992). M. Casu and L. M. Sehgal, *Phys. Rev. C* **55**, 2644 (1997). [^1]: Email: [email protected]
--- abstract: 'Potential condensed clouds of gas in the Galactic halo are examined in the context of the recent models of cooling, fragmenting clouds building up the baryonic mass of the Galaxy. 582 high-velocity clouds (HVCs) are defined as the potential infalling, condensed clouds, and the sample’s spatial and velocity distributions are presented. With the majority of the hydrogen in the clouds ionized ($\sim 85$%), the clouds at a distribution of distances within 150 kpc, and their individual total masses below $10^{7}$ , the total mass in potentially condensed clouds is $1.1 - 1.4 \times 10^{9}$ . If the tighter distance constraint of $< 60$ kpc is adopted, this mass range drops to $4.5 - 6.1 \times 10^{8}$ . The implications for the condensing cloud models, as well as feedback and additional accretion methods, are discussed.' author: - 'M. E. Putman' title: Potential Condensed Fuel for the Milky Way --- Introduction ============ The range of stellar ages and metallicities in galaxies like the Milky Way indicates that fresh star formation fuel must fall in throughout their lives (e.g., Rocha-Pinto et al. 2000; Renda et al. 2005). The gas accretion process has traditionally been thought to proceed via shock-heated halo gas from the intergalactic medium cooling and falling in to feed the star formation process (e.g., White & Rees 1978; White & Frenk 1991). Recently, these models have been extended from all of the gas within a certain radius collapsing monolithically, to including fragmentation as the hot gas cools, forming pressure supported warm clouds (e.g., Maller & Bullock 2004, hereafter MB04; Kaufmann et al. 2005; Sommer-Larsen 2006). Models which include fragmentation do not have the “over-cooling problem” the monolithic collapse models have. In other words, all of the gas does not cool and fall in at once: a large fraction remains in the halo in a warm/hot phase, and excessive feedback is not necessary to explain the observed baryonic mass of the galaxies.
MB04 predict that at any given time during a Milky Way-like galaxy’s recent evolution, several thousand condensed clouds with a total mass on the order of $2 \times 10^{10}$  can be found within a $\sim$150 kpc radius. These clouds are pressure confined by the remaining hot gaseous halo and should currently be found in the Galactic halo as evidence of this ongoing gas accretion. Likely candidates for these condensed infalling clouds are the high-velocity clouds (HVCs) of neutral hydrogen surrounding our Galaxy. Oort (1966) was the first to propose this type of origin for HVCs, and MB04 discuss the similarities between the HVCs and the condensed clouds in their model. HVCs range in size from ultra-compact ($< 20$ arcmin$^{2}$; e.g., Brüns & Westmeier 2004) to hugely extended (1800 deg$^{2}$; Wakker & van Woerden 1997) and have typical peak column densities of approximately $10^{19}$ cm$^{-2}$ (Putman et al. 2002). Their velocities generally extend from 90 $< |V_{LSR}| <$ 450 , or $-300$ $< V_{GSR} < 300$ . Recent progress on the distances to HVCs allows origins such as the condensing cloud model to be seriously considered and constrained. The direct distance constraints involve looking for absorption lines in the spectra of halo stars at known distances and generally provide lower limits on the order of $> 5$ kpc, but also include upper limits for 3 clouds of $< 10$ kpc (summarized by Wakker 2001; Thom et al. 2006). Indirect distance constraints include deep HI observations of systems similar to the Milky Way and Local Group ($< 160$ kpc; e.g., Pisano et al. 2004; Zwaan 2001), H$\alpha$ observations indicating the HVCs are being ionized by photons escaping from the Milky Way ($< 40$ kpc for some clouds; e.g., Putman et al. 2003a), and constraints on the properties of the compact HVCs (CHVCs; $\theta < 1$) when subject to the extragalactic ionizing background radiation ($< 200$ kpc; e.g., Maloney & Putman 2003).
In addition, recent work surveying the environment of M31 does not find the M31 analogs of CHVCs beyond 60 kpc (Westmeier, Brüns & Kerp 2005). All of these distance constraints place the clouds within the extended Galactic halo and appear to be consistent with the distances expected for the condensing, infalling clouds in the models. Given the recent developments both theoretically and observationally regarding the gaseous distribution about galaxies, the time is right to address the properties of potential condensed clouds currently in the Galactic halo. This paper addresses the observational constraints on the HVC population and how this can be put together with the models to form a consistent picture of condensed, infalling clouds feeding the Milky Way’s star formation. The selection of high-velocity clouds is presented in the next section, followed by the properties of this sample in the context of the condensed cloud model. Finally, the results are discussed and summarized. Data: HVC Selection =================== Potential condensed clouds were selected from an updated version of the all-sky HVC catalog of Wakker & van Woerden (1991; hereafter WvW91). The catalog has been updated by including clouds from the catalog of HVC components by Morras et al. (2000) for declinations $< -23$ and includes clouds with $|V_{\rm LSR}| > 90$  (see Wakker 2004 for more information on the catalog properties). The updated WvW91 catalog of 626 clouds is not exactly high resolution (0.5 at best), but covers the entire sky with a detection limit of $2-3 \times 10^{18}$ cm$^{-2}$ ($\Delta$v = 25 ). The only selection criterion for the HVCs to be potentially condensed halo clouds was the exclusion of clouds associated with the Magellanic System (e.g., the Magellanic Stream and Leading Arm; Putman et al. 2003b) and the Outer Arm Complex.
The Outer Arm Complex is a large low-latitude structure that is consistent with being a warped section of the outer Galactic Disk, or a high-z spiral arm (e.g., Wakker & van Woerden 1997). The remaining 582 clouds have a mean velocity relative to the Galactic Standard of Rest of V$_{GSR}$ = -43 . The clouds in the condensing cloud models are predicted to have a range of velocities as they form and move through the halo, but should have a net infall in agreement with this mean V$_{GSR}$ (Bullock pers. comm.). The clouds have a total HI flux of 993,557 Jy . The largest contributors to this total flux are Complex C (209,590 Jy ) and then Complex H (98,040 Jy ). Results ======= The spatial distribution of the potentially infalling HVCs is shown in Figure \[spatial\] with the symbol representative of a positive (star) or negative (triangle) V$_{\rm GSR}$ cloud. The clouds have V$_{\rm GSR}$’s between approximately $+/-$ 300 . The clouds are distributed throughout the sky with positive and negative velocity clouds intermingled. The largest concentrations of clouds are towards the anti-center in the southern Galactic hemisphere and around $l = 260$  in the northern Galactic hemisphere. These clouds are thus found looking away from the majority of the Galactic disk; however, there are also a number of clouds found in both hemispheres from $l = 0-45$. The clouds with V$_{\rm GSR}$ more negative than -100   are also concentrated between $l = 0-45$  and around $l = 180$. The total mass of the entire population of potentially infalling clouds as a function of average distance is shown in Figure \[himass\]. This plot assumes the fraction of the cloud that is detectable as neutral hydrogen is 10.5% of the total mass of the cloud: 70% of the cloud is hydrogen, and 15% of that hydrogen has cooled to be detectable as HI. This 15% is justified by assuming the extragalactic ionizing radiation field is the dominant factor in ionizing the clouds (e.g., Maloney & Putman 2003; see Section 4.3).
The HI mass was calculated using M$_{HI} = 2.36 \times 10^{5}$ I (Jy) D$^{2}$(Mpc), with I = 993,557 Jy for all of the clouds selected according to the previous section. The total mass, M$_{tot}$, is then M$_{HI}/0.105$. With this constraint, Figure 2 shows that if the high velocity HI flux is at an average distance of 80 kpc, the total cloud mass is $\sim2 \times 10^{10}$ , at 40 kpc the total mass would be $\sim6 \times 10^{9}$ , and $\sim9 \times 10^{8}$  at an average distance of 20 kpc. There are several clouds that contribute significantly to the total HI flux in halo clouds of 993,557 Jy, and it is more realistic to place the clouds at a distribution of distances rather than an average distance. For instance, one of the larger clouds, which is part of Complex A, has a flux of 52,210 Jy  and is known to be between $4.0-9.9$ kpc (van Woerden et al. 1999). In addition, total individual cloud masses between $10^{5-7}$  are considered the most likely initial mass for each cloud in the models given the constraints on the formation of the clouds (i.e., the ability to fragment and the conduction limit), cloud survival (i.e., Kelvin-Helmholtz instability), and cloud motion (i.e., ambient drag) (MB04; Kaufmann et al. 2005). Placing the clouds at a random distribution of distances $<150$ kpc and ensuring that the total mass of each individual cloud remains below $10^{7}$  (again with the HI 10.5% of the total mass) results in a range of total masses in condensed clouds between $1.1 - 1.3 \times 10^{9}$ . All of the quoted mass ranges are presented at the 95% confidence level and were found after running over 100 random distributions of distances. The lower limit on the mass of individual clouds is left open, as the clouds are expected to be disrupted as they come into close proximity with the Galaxy. An example of the distribution of distances and total individual cloud masses generated is shown in Figure 3.
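The mass bookkeeping in this section is compact enough to check numerically. The following minimal sketch (function and constant names are ours, not from the paper) applies the quoted HI-mass relation and the assumed 10.5% HI-to-total mass ratio:

```python
# Total-mass estimate from the summed HI flux, following the relations
# quoted above; names and structure are illustrative only.

TOTAL_FLUX = 993_557       # summed HI flux of the 582 selected clouds (Jy km/s)
HI_FRACTION = 0.70 * 0.15  # 70% hydrogen, 15% of it neutral -> 0.105

def hi_mass(flux_jy, distance_kpc):
    """M_HI = 2.36e5 * I(Jy km/s) * D^2(Mpc), in solar masses."""
    d_mpc = distance_kpc / 1000.0
    return 2.36e5 * flux_jy * d_mpc ** 2

def total_mass(flux_jy, distance_kpc):
    """Scale the HI mass up by the assumed 10.5% HI-to-total ratio."""
    return hi_mass(flux_jy, distance_kpc) / HI_FRACTION

# Placing all of the flux at a single average distance of 20 kpc:
print(f"{total_mass(TOTAL_FLUX, 20):.2e}")  # ~9e8 solar masses
```

Evaluated at a single average distance of 20 kpc this reproduces the $\sim9 \times 10^{8}$  value quoted above; the full analysis instead draws a random distribution of distances and caps each cloud's total mass at $10^{7}$ .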
If we adopt the tighter distance constraint of $< 60$ kpc for all clouds, this range of total mass decreases to $4.5 - 5.7 \times 10^{8}$ . If all of the clouds were actually given the range of masses between $10^{5-7}$  and kept within 150 kpc, the total mass in potentially condensed clouds goes up somewhat to $1.2-1.4 \times 10^{9}$ . If all clouds are given the same mass, there is no direct correlation between the clouds’ resulting distances and their observed GSR velocities. This might be expected if the small clouds represent distant clouds not yet affected by the Galaxy’s halo medium or gravitational pull. Clouds that have not yet been detected by existing surveys will have low HI fluxes and most likely small masses, but if numerous they could significantly change the total mass in condensed clouds in the halo. The effect these as-yet-undetected clouds may have on the total mass has been tested by extrapolating the HVC flux distribution function of log N(S) $= -1.44$ log S + 3.91 (Wakker 2004), where N(S) is the number of clouds with a given flux S, measured in 10 Jy  bins. This distribution holds to approximately 25 Jy  for the updated WvW91 catalog, and the fact that it does not continue further is at least partially due to the completeness limits of the catalog, as also found in the southern HIPASS HVC catalog (Putman et al. 2002). When the HVC flux distribution function is extrapolated to continue to 1 Jy  and these low flux clouds are assigned the same random range of distances within 150 kpc, the total mass in clouds increases only slightly to $1.1 - 1.4 \times 10^{9}$ . Keeping the clouds within 60 kpc and adopting this flux distribution function results in a total mass range of $4.5 - 6.1 \times 10^{8}$ . Thus, with a fixed ionized component, the potentially existing lower flux clouds will not add significantly to the total mass in condensed clouds.
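The flux distribution function used for this extrapolation is straightforward to evaluate. A small sketch (the function name is ours) shows how steeply the per-bin cloud counts rise toward low fluxes:

```python
import math

def clouds_per_bin(flux_jy):
    """Number of HVCs per 10 Jy km/s flux bin at flux S, from
    log N(S) = -1.44 log S + 3.91 (Wakker 2004)."""
    return 10 ** (-1.44 * math.log10(flux_jy) + 3.91)

# Counts climb quickly toward the survey's low-flux completeness limit:
for s in (100, 25, 10, 1):
    print(s, round(clouds_per_bin(s)))
```

Although the extrapolated counts below 25 Jy  are large, each such cloud carries very little flux, which is why (at a fixed distance distribution and neutral fraction) the total condensed mass changes only slightly, as stated above.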
If a model is adopted in which the smallest flux clouds ($< 100$ Jy ; a flux cut that encompasses the majority of the compact HVCs; Putman et al. 2002) are at closer distances ($< 20$ kpc) and the rest of the HVC population extends to 150 kpc, the range of total mass in condensed clouds is $0.9 - 1.1 \times 10^{9}$ . If, on the other hand, the clouds with small HI fluxes ($< 100$ Jy  again) are presumed to be at distances greater than 60 kpc (but $< 150$ kpc), the total mass in condensed clouds increases to $1.3 - 1.6 \times 10^{9}$ . Finally, if all of these clouds were actually given a mass range between $10^{5-7}$  (somewhat unrealistic, as it places the clouds with fluxes of 1-2 Jy  between 150-200 kpc), the total mass reaches a similar $1.4-1.6 \times 10^{9}$ . Keeping all of the clouds within 150 kpc requires lowering the bottom of the cloud mass range to $5 \times 10^{4}$ , and the total mass in condensed clouds drops to $1.3-1.5 \times 10^{9}$ . Discussion ========== The total mass of potentially condensed clouds in the Galactic halo is limited to be at most approximately 1.6 $\times$ $10^{9}$  in all of the above cloud distributions that are consistent with the distance constraints and keep 15% of the hydrogen in each cloud neutral. This upper mass limit is set by extrapolating the flux distribution function to encompass yet undetected HI clouds, placing the clouds at a random distribution of distances below 150 kpc, and constraining their individual total mass to be below $10^{7}$ . This mass is over a factor of 2 lower if the clouds are kept within 60 kpc. The 3 main factors affecting this total mass are the distances, the limit on the total mass of individual clouds, and the percentage of the cloud that is neutral. Each of these factors is discussed here, followed by a discussion of the impact of these results on the gas accretion models.
Distances to HVCs ----------------- As discussed in the introduction, there are now a large number of constraints on the distances to HVCs. All of the constraints are consistent with placing the clouds within 150 kpc, and some of the constraints place the clouds at closer distances. If the HVCs extend only out to 60 kpc rather than 150 kpc and the clouds are kept 15% neutral, the total mass in condensed clouds changes from $1.1-1.4 \times 10^{9}$   to $4.5 - 6.1 \times 10^{8}$ . If we refer back to Figure 2, this type of total mass would place the majority of the HI flux in the halo at an average distance of approximately 21 kpc or 12 kpc, respectively. In either case, the upper limit on the total mass in condensed clouds is dependent on how many clouds are at the largest distances. This will be constrained further with ongoing distance determination programs (e.g., Thom et al. 2006). The limit on the total mass in clouds if they are within 60 kpc is consistent with the findings for M31 of $3-4 \times 10^{8}$  (assuming 10.5% HI again) in clouds within 25 kpc of this galaxy (Thilker et al. 2004). Closer distances for the clouds in the models may imply that the cooling times are longer than initially presumed and/or that the densities of the fragments in the outer Galactic halo are not high enough. Individual Cloud Masses ----------------------- The mass of individual clouds as they condense within the hot halo depends on a number of factors and is not currently tightly constrained. MB04 find that a range of masses is suitable, with $10^{5-7}$  being the most likely given the constraints of conduction, evaporation, Kelvin-Helmholtz instability, and ram pressure drag. Kaufmann et al. (2005) and Sommer-Larsen (2006) find a similar range of masses for individual clouds. Tidal disruption is a factor that could disrupt the clouds within approximately 13 kpc and lead to some small clouds that no longer have typical masses in this range (MB04).
$10^{7}$  is therefore adopted as the upper limit on the total mass of the individual clouds, and since some HVCs may lie at distances below 13 kpc and represent condensed clouds that have been disrupted, the lower limit on the mass of individual clouds is left open, as indicated in Fig. 3. If the small clouds, or lower flux clouds, are presumed to lie at distances below 20 kpc (rather than extending out to 150 kpc) while the rest of the clouds are allowed to extend out to 150 kpc (as long as their mass remains below $10^{7}$ ), the total mass in clouds drops to $0.9 - 1.1 \times 10^{9}$ . One could also argue, however, that the small clouds should be more distant, simply based on their angular size. Placing the small clouds at distances greater than 60 kpc, but within 150 kpc, increases the total mass in condensed clouds to $1.3-1.6 \times 10^{9}$. The middle ground of placing the clouds at a range of distances appears to be the best approach. In contrast to the small clouds, the largest HVC complexes are unlikely to be beyond 20 kpc, and this is maintained by constraining a cloud’s total mass to be below $10^{7}$ . Neutral Fraction ---------------- This paper uses the nominal value of 15% of the hydrogen in each cloud being neutral based on the impact of the ionizing flux from the extragalactic background light ($10^{4}$ photons cm$^{-2}$ s$^{-1}$; Maloney & Putman 2003) and the majority of the clouds lying at distances between 60-100 kpc. The possibility that a larger percentage of most clouds is ionized should be considered. A larger ionized component could be due to many of the clouds lying at closer distances and being subject to the ionizing radiation from our Galaxy (e.g., Putman et al. 2003a) or via collisional ionization as the HVCs interact with the remaining hot halo medium (e.g., Sembach et al. 2003). It is also possible some of the small clouds are more distant and a lower fraction of the cloud has cooled to be observable in HI.
If only 1% of the hydrogen in each cloud is detectable in HI, the total mass of clouds in the halo would reach approximately $1.7 - 2.1 \times 10^{10}$  with the clouds within 150 kpc and $6.8 - 9.2 \times 10^{9}$  with the clouds within 60 kpc. A 1% neutral component is possible for some clouds given the discovery of highly ionized HVCs (Sembach et al. 1999; Fox et al. 2005), but is unlikely for the majority of the HVCs given the results of pointed H$\alpha$ measurements (e.g., Tufte, Reynolds & Haffner 1998; Putman et al. 2003a). In fact, the current limited H$\alpha$ measurements generally show a larger fraction of neutral material than ionized for HI HVCs. There are some HI HVCs that show evidence for extended ionized components (e.g., Sembach et al. 2003; Haffner 2005), but others do not (Putman et al. 2003a). Though proximity to the Galaxy may result in more of a cloud being ionized, this may also be offset by the clouds closer to the Galaxy harboring higher density material. Estimating the actual fraction of ionized material relative to neutral is difficult given the limited pencil-beam sightlines used to probe the ionized component. A neutral fraction of 15% is a reasonable estimate based on the theoretical predictions and current observational constraints. Future sensitive Fabry-Perot H$\alpha$ observations should help to clarify the full extent of the ionized component of HVCs. The fraction of each cloud that is neutral will also depend on the amount of metals in the gas. As with the direct distances, there are very few HVC metallicity determinations. The metallicity determinations for the clouds in the sample presented here are almost solely limited to the giant Complex C and generally range from 0.1 - 0.3 solar (e.g., Collins et al. 2002; Tripp et al. 2003). The model of MB04 uses a metallicity of 0.1 solar, which is consistent with, though at the low end of, the limited HVC metallicity estimates.
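The sensitivity of these totals to the assumed neutral fraction is a pure rescaling, since the inferred total mass varies inversely with the HI-to-total mass ratio. A quick check (the helper name is ours):

```python
def rescale(total_mass_15pct, neutral_fraction):
    """Rescale a total mass derived with 15% neutral hydrogen
    (HI-to-total ratio 0.70 * 0.15 = 0.105) to another neutral fraction,
    keeping the 70% hydrogen fraction fixed."""
    return total_mass_15pct * (0.70 * 0.15) / (0.70 * neutral_fraction)

# Dropping from 15% to 1% neutral multiplies every estimate by 15:
print(f"{rescale(1.4e9, 0.01):.1e}")   # 1.4e9 -> 2.1e10 solar masses
print(f"{rescale(6.1e8, 0.01):.1e}")   # 6.1e8 -> ~9.2e9 solar masses
```

The factor of 15 recovers the $1.7 - 2.1 \times 10^{10}$  (within 150 kpc) and $6.8 - 9.2 \times 10^{9}$  (within 60 kpc) ranges quoted above from the corresponding 15%-neutral results.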
If the model included more metal-rich gas, the gas would cool more efficiently at lower densities and larger radii from the Galaxy. The HVCs would then be expected to lie at even larger distances ($\sim 200$ kpc), which seems unlikely given the HVC distance constraints. Higher metallicities in the MB04 model would also most likely result in an increased total mass of condensed halo clouds. Implications for Gas Accretion ------------------------------ Disk galaxy formation models generally predict the existence of a hot halo with a baryonic mass of a few $\times\ 10^{10}$   (e.g., MB04; Fukugita & Peebles 2005; Sommer-Larsen 2006). Hot halo gas has recently been detected in the vicinity of our Galaxy (Sembach et al. 2003; Rasmussen et al. 2002) and around other spirals (Pedersen et al. 2006) in possible support of these models. This hot gas gradually cools into clouds and fuels the galaxy’s star formation, but the specifics of the process vary by model. The model of MB04 predicts that cloud formation and infall balance at early times, with approximately 2 $\times\ 10^{10}$  of condensed clouds present in the halo. The analysis of the observed HI halo clouds presented here indicates the total mass in potentially condensed clouds in the halo is at least an order of magnitude below this. The $< 6 \times 10^{8}$   range found here for the halo clouds within 60 kpc is consistent with the simulations of Kaufmann et al. (2005) and Sommer-Larsen (2006). The reason for less mass in condensed halo clouds than found by MB04 may be that the clouds fall in rapidly after they are formed, and thus fewer clouds are visible in the halo at a given time. If clouds fall in rapidly, feedback from the Galaxy may need to be considered to keep most of the halo material in a warm-hot phase and not over-produce the baryonic mass of the Galaxy. A Galactic fountain is one possible feedback mechanism in which the hot gas from multiple supernovae rises into the halo (e.g., Bregman 1980).
The Galactic fountain could inject a large amount of enriched hot gas into the lower Galactic halo, which may then mix with the massive hot halo material and cool as the intermediate velocity clouds (IVCs) found around our Galaxy. IVCs are much closer to the disk (0.5 - 2 kpc) than HVCs and are also of higher metallicity (0.5-2 solar; Wakker 2001). The limited mass in condensed clouds forming and falling on to the Galaxy indicates that additional accretion methods are necessary to explain the Galaxy’s observed baryonic mass. Several of these additional accretion methods are directly evident as our Galaxy destroys smaller galactic systems such as the Sagittarius dwarf and Magellanic Clouds. The Magellanic System itself will eventually bring on the order of 10$^{9}$   of HI into the Milky Way (Putman et al. 2003b). The Magellanic System is an example of a less frequent, large accretion event and could also potentially disrupt the process of cloud formation at the current epoch. Finally, though cold accretion is unlikely to dominate at low redshifts and for galaxies as massive as the Milky Way, this method of gas accretion, in which the incoming gas is never heated to the virial temperature of the galaxy halo, may also need to be considered (e.g., Keres et al. 2005). In any case, in the model of Keres et al. the smaller galaxies currently being accreted by the Galaxy obtained the bulk of their mass via cold accretion, indicating the Galaxy is indirectly obtaining mass in this fashion. Besides the mass limits, there are several properties of the observed HI clouds in the halo that can be used to constrain the models as they are developed further. As discussed in the results section and shown in Fig. 1, most of the observed clouds are found at latitudes below 60 and there are several clusters of smaller clouds in specific directions that may represent the preferred accretion axes of the Galaxy.
The mean negative V$_{\rm GSR}$ of the cloud population (-43 ) is suggestive of a net infall, but the mix of positive and negative V$_{\rm GSR}$ clouds found throughout the sky suggests the infall is not simple. Since we are only measuring one component of the cloud’s velocity, some clouds with negative or positive V$_{\rm GSR}$’s may be moving away or towards the disk, respectively. In any case, after forming, the clouds appear to be on a variety of orbits, which, after collisions and ram pressure drag, eventually lead to infall (MB04). There is no correlation between the GSR velocity of the cloud and distance if all of the clouds are given similar masses, indicating that the clouds do not all have similar masses or reflecting the complex motions of the halo clouds. If the ongoing distance determination programs continue to place the HVCs within 60 kpc of the Galactic disk, explanations will need to be found for why the HI is found only out to this radius and for what percentage of the baryonic mass the HI represents. Future HI surveys being completed by the Galactic Arecibo L-Band Feed Array (GALFA) consortium (e.g., Stanimirovic et al. 2006) and the Galactic All-Sky Survey (GASS; McClure-Griffiths et al. 2006) will be important for characterizing the properties of the clouds and their relationship to the Galactic disk. The GALFA surveys, with their increased sensitivity and resolution, will be particularly important for examining the flux distribution function and assessing how the condensed clouds interact with the diffuse hot halo as they are assimilated into the Galactic disk. This analysis will also allow for an estimate of the mass of the elusive diffuse hot halo. Thanks to James Bullock, Ari Maller, and Jesper Sommer-Larsen for very useful discussions, to the referee for insightful comments, and to Bart Wakker for providing an updated version of the WvW91 catalog. Bregman, J.N. 1980, ApJ, 236, 577 Brüns, C. & Westmeier, T.
2004, A&A, 426, 9 Collins, J., Shull, M. & Giroux, M. 2003, ApJ, 585, 336 Fox, A.J., Wakker, B.P., Savage, B.D., Tripp, T.M., Sembach, K.R., Bland-Hawthorn, J. 2005, ApJ, 630, 332 Fraternali, F., van Moorsel, G., Sancisi, R. & Oosterloo, T. 2002, AJ, 123, 3124 Fukugita, M. & Peebles, P.J.E. 2005 (astro-ph/0508040) Haffner, L.M. 2005, in Extra-Planar Gas, ASP Conf. Proc. V. 331, ed. R. Braun, 25 Hulsbosch, A.N.M. & Wakker, B.P. 1988, A&AS, 75, 191 Kaufmann, T., Mayer, L., Wadsley, J., Stadel, J. & Moore, B. 2005, MNRAS, submitted (astro-ph/0507296) Keres, D., Katz, N., Weinberg, D.H., & Davé, R. 2005, MNRAS, 363, 2 McClure-Griffiths, N. et al. 2006, ApJ, 638, 196 Maller, A.H. & Bullock, J.S. 2004, MNRAS, 355, 694 Maloney, P.R. & Putman, M.E. 2003, ApJ, 589, 270 Morras, R., Bajaja, E., Arnal, E.M., Poppel, W.G.L. 2000, A&AS, 142, 25 Oort, J.H., 1966, Bull. Astron. Inst. Netherlands, 18, 421 Pedersen, K., Rasmussen, J., Sommer-Larsen, J., Toft, S., Benson, A.J., Bower, R.C. 2006, New Astronomy, in press (astro-ph/0511682) Pisano, D.J., Barnes, D.G., Gibson, B.K., Staveley-Smith, L., Freeman, K.C., & Kilborn, V. 2004, ApJ, 610, L17 Putman, M.E. et al. 2002, AJ, 123, 873 Putman, M.E., Bland-Hawthorn, J., Veilleux, S., Gibson, B.K., Freeman, K.C. & Maloney, P.R. 2003a, ApJ, 597, 948 Putman, M.E., Staveley-Smith, L., Freeman, K.C., Gibson, B.K., & Barnes, D.G. 2003b, ApJ, 586, 170 Renda, A., Kawata, D., Fenner, Y. & Gibson, B.K. 2005, MNRAS, 356, 1071 Rasmussen, A., Kahn, S., Paerels, F. 2003, in The IGM/Galaxy Connection, eds. J. Rosenberg & M. Putman, Kluwer, 281, 109 Rocha-Pinto, H.J., Scalo, J., Maciel, W.J., & Flynn, C. 2000, A&A, 358, 869 Sembach, K.R., Savage, B.D., Lu, L., & Murphy, E.M. 1999, ApJ, 515, 108 Sembach, K.R. et al. 2003, ApJS, 146, 165 Sommer-Larsen, J. 2006, ApJL, submitted (astro-ph/0602595) Stanimirovic, S., Putman, M.E., Heiles, C., Goldston, J. et al. 2006, ApJ, in preparation Thilker, D. et al.
2004, ApJL, 601, 39 Thom, C., Putman, M.E., Gibson, B.K., Christlieb, N., Flynn, C., Beers, T.C., Wilhelm, R., Lee, Y.S. 2006, ApJL, 638, 97 Tripp, T. et al. 2003, AJ, 125, 3122 Tufte, S.L., Haffner, L.M., & Reynolds, R.J. 1998, ApJ, 508, 200 van Woerden, H. et al. 1999, Nature, 400, 138 Wakker, B.P. 2004, in High-Velocity Clouds, eds. H. van Woerden, B.P. Wakker, Schwarz, U.J., & de Boer, K.S., Kluwer, 312, 25 Wakker, B.P. 2001, ApJS, 136, 463 Wakker, B.P. & van Woerden, H. 1997, ARA&A, 35, 217 Wakker, B.P. & van Woerden, H. 1991, A&A, 250, 509 (WvW91) Westmeier, T., Brüns, C. & Kerp, J. 2005, in Extra-Planar Gas, ASP Conf. Proc. V. 331, ed. R. Braun, 105 White, S.D.M. & Rees, M.J. 1978, MNRAS, 183, 341 White, S.D.M. & Frenk, C.S. 1991, ApJ, 379, 52 Zwaan, M. 2001, MNRAS, 325, 1142 ![Spatial distribution of the 582 potentially condensed high velocity clouds on the sky in Galactic coordinates with the negative V$_{\rm GSR}$ clouds labeled as triangles and the positive V$_{\rm GSR}$ clouds labeled as stars.\[spatial\]](f1.ps) ![The total mass of the entire population of potentially infalling clouds if they are all given the typical distance on the x axis and are assumed to be 10.5% observable neutral hydrogen (see text). The HI flux used to calculate this total mass is dominated by some of the large complexes like Complex C, so it is more realistic to place the clouds at a distribution of distances which keeps the individual cloud masses below $10^{7}$   as discussed in the text. \[himass\]](f2.ps) ![An example of the total masses of individual clouds if they are assigned a random distribution of distances below 150 kpc, are 10.5% observable neutral hydrogen, and are confined to have a total mass below $10^{7}$ . The total mass in condensed clouds with this data is typically $1.1-1.3 \times 10^{9}$ .](f3.ps)
--- author: - | Muhammad Abdullah Jamal^1^ Matthew Brown^3^ Ming-Hsuan Yang^2,3^ Liqiang Wang^1^ Boqing Gong^3^\ ^1^University of Central Florida ^2^University of California at Merced ^3^Google bibliography: - 'references.bib' title: | Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition\ from a Domain Adaptation Perspective ---
--- author: - 'L.L. Kiss' - 'A. Bódi' date: July 2017 title: Amplitude variations of modulated RV Tauri stars support the dust obscuration model of the RVb phenomenon --- Introduction ============ The RV Tauri variables constitute a small group of pulsating stars with some dozen known members in the Milky Way and a similar number of variables in the Magellanic Clouds. They are F, G and K-type supergiants that form the high-luminosity extension of Population II Cepheids in the classical instability strip (Wallerstein 2002). The rarity of RV Tau-type variables can be explained by their evolutionary state, given that these objects are post-AGB stars rapidly evolving in the Hertzsprung-Russell diagram blueward from the Asymptotic Giant Branch (AGB) on timescales of 10$^3$–10$^4$ yrs (Blöcker 1995). During this evolution, post-AGB stars cross the instability strip, where they become pulsationally unstable (Fokin 1994, Fokin et al. 2001, Aikawa 2010). The most regular, though far from being Cepheid-like, high-amplitude light curves belong to the RV Tauri stars, which are located in the cooler part of the post-AGB instability strip, between 5000 K and 6000 K (Kiss et al. 2007, Bódi et al. 2016). The most distinct feature of the RV Tau-type variables is the presence of alternating minima of the pulsations (meaning that every second light curve minimum is shallower), with typical (double) periods from 30 days to 90 days. The periodicity is not strict, as the cycle-to-cycle variations can be quite significant, and in some cases, it has been shown to originate from low-dimensional chaos (e.g. Buchler et al. 1996). In addition to the pulsations, some RV Tau stars show long-term modulation of the mean brightness, with periods of 700-2500 days. The absence or presence of the slow modulation is the basis for classifying the stars into the RVa and RVb photometric subclasses, respectively.
The RVb phenomenon has been connected to the fact that these stars are typically redder than the RVa-type variables and the role of circumstellar dust shells was presumed (Lloyd Evans 1985). More recent studies interpret the RVb phenomenon with periodic obscuration events in binary systems (Fokin 1994, Pollard et al. 1996, Van Winckel et al. 1999, Fokin et al. 2001, Maas et al. 2002, Gezer et al. 2015). In that picture all RVb stars are binaries, surrounded by a large opaque screen, relative to which the pulsating component changes its position during the orbit. The presence of a dusty disk is indeed seen in the infrared part of the Spectral Energy Distribution (see Gezer et al. 2015 for the latest analysis that used WISE data and references therein). In addition to the disk, it has also been speculated that interactions of the components may lead to changes in the pulsation amplitude (Pollard et al. 1996, Maas et al. 2002, Pollard et al. 2006), which could further complicate the models of these stars. The fact that the pulsations in the light curve appear to visually decrease in amplitude in the faint states of some RVb variables has been regularly noted in the literature. Fokin (1994) compiled data for a dozen stars and noted that all showed systematically lower primary light amplitudes in the minimum of the secondary variation. Pollard et al. (1996) discussed in detail the well-expressed pulsation amplitude decrease in the faint states of U Mon and AI Sco (i.e. around the RVb-type minima), which they assumed to be caused by some sort of dynamical interaction between the pulsating and companion stars, one that affects both the pulsations and the mass-loss processes. Pollard et al. (1996) explicitly argued that pure obscuration by dust should not decrease the amplitude of the pulsations. This line of argumentation was later adopted by Van Winckel et al. (1999) and Maas et al.
(2002), who claimed that the detected amplitude change is difficult to account for in a simple geometric picture. Later, Pollard et al. (2006) reiterated the main conclusion that enhanced mass-loss or binary interaction would be needed around the periastron of U Mon to explain the apparent amplitude damping. Most recently, Percy (2015) investigated the amplitude variability of 42 RV Tau variables and found that the pulsation amplitude can change by factors of up to 10, on median time-scales of about 22 pulsation periods. He concluded that the cause of the pulsation amplitude variations remains unknown. Despite the efforts in the past decades, high-quality photometry of RV Tau-type stars is still very rare. In the original [*Kepler*]{} field, there is only one star, DF Cygni, a moderately bright RVb-type variable, for which independent analyses were recently published by Bódi et al. (2016) and Vega et al. (2017). Both studies pointed out that DF Cygni shows very rich behaviour on all timescales. While Bódi et al. (2016) put their emphasis on detecting evidence of strong non-linear effects that are directly observable in the [ *Kepler*]{} light curve, Vega et al. (2017) used [*Kepler*]{} data to argue for binarity as the main cause of long-period variability in DF Cyg. The latter authors also noted that when the light curve is measured in fluxes, the reduction of the pulsation amplitude in the faint state of DF Cyg is exactly the same $\sim$90% as the overall fading during the RVb minimum. The exact correspondence of the two can be naturally explained by 90% obscuration of the pulsating stellar disk by a very large opaque screen during every orbit, so that the local changes around the mean (i.e. the short-period pulsations) are in fact constant when considered relative to the mean brightness of the system. This idea is the very inspiration of the present paper.
We believe the earlier studies overlooked a very simple and yet important point by sticking to the inverse logarithmic magnitude system in the variable star analyses, rather than using the physically meaningful flux units, which scale linearly with the photon counts. As we will demonstrate for the best observed RVb-type variables, a very convincing and ubiquitous correlation exists between the flux amplitudes and the mean flux levels for all RVb stars, which indicates that dust obscuration, most likely by circumbinary disks, can indeed be the universal explanation for the RVb phenomenon. Data and methods ================ Given the rarity of the RV Tau-type stars, there are not many variables with extensive observations. We have surveyed the International Variable Star Index (VSX database[^1]) and the literature for well observed and characterized targets. We ended up with three major sources of RVb photometric data. First, we checked the database of visual observations of the American Association of Variable Star Observers (The AAVSO International Database). In total, we found eight stars with several cycles of the RVb variability and a duty cycle in excess of 75%, which was found to be necessary to measure the amplitudes of individual pulsation cycles. Then we checked the online catalogue of the All Sky Automated Survey (ASAS, Pojmanski 2002) and found $V$-band data for three stars. Finally, we surveyed the database of the Optical Gravitational Lensing Experiment (OGLE) project, where the OGLE-III Catalog of Variable Stars contains Type II Cepheids (including RV Tau-type variables) in the Large Magellanic Cloud (Soszynski et al. 2008), Small Magellanic Cloud (Soszynski et al. 2010) and the Galactic Bulge (Soszynski et al. 2011, 2013). Here we found useful $I$-band data for six stars in the Bulge and one star in the LMC, all catalogued as ‘RVb’ variables by the OGLE team.
  --------------- --------------- --------------- --------------- --------------- ---------------
  Name            T$_{\rm obs}$   N$_{\rm obs}$   P$_{\rm pul}$   P$_{\rm mod}$   Source
                  (d)                             (d)             (d)
  IW Car          18120           4685            71.98           1449            AAVSO
                  3300            2179            72.2            1470            ASAS
  SX Cen          22409           1320            32.88           602             AAVSO
                  3296            1153            33.01           610             ASAS
  DF Cyg          17074           5924            49.82           780             AAVSO
                  1470            66533           49.84           786             [*Kepler*]{}
  SU Gem          14783           2228            49.92           682             AAVSO
  U Mon           46283           48019           91.48           2451            AAVSO
  AR Pup          14998           1450            76.66           1194            AAVSO
                  3299            1086            76.34           1178            ASAS
  AI Sco          19538           1408            71.64           977             AAVSO
  RV Tau          40020           14976           78.48           1210            AAVSO
  BLG-T2CEP-177   2836            742             92.44           2970            OGLE
  BLG-T2CEP-215   2829            814             55.74           958             OGLE
  BLG-T2CEP-345   2830            1344            73.64           1100            OGLE
  BLG-T2CEP-350   2776            1026            87.20           722             OGLE
  BLG-T2CEP-352   4404            978             103.78          543             OGLE
  BLG-T2CEP-354   3232            533             66.46           951             OGLE
  LMC-T2CEP-200   4494            917             69.86           850             OGLE
  --------------- --------------- --------------- --------------- --------------- ---------------

  : The studied sample of RVb stars. T$_{\rm obs}$ and N$_{\rm obs}$ are the time-span and the total number of observations. P$_{\rm pul}$ and P$_{\rm mod}$ refer to the periods of pulsation and RVb-type modulation, respectively, as measured from the analysed data. The OGLE variable names were shortened by omitting “OGLE-” in front of the shown identifiers.[]{data-label="table:stars"}

The final sample contains 19 datasets for 15 RVb stars, which is - to our knowledge - the most extensive collection of this kind in the literature. In Table \[table:stars\] we list the main characteristics of the data: the total time-span of the individual light curves, the number of measurements per star and two period values: one for the pulsations and one for the RVb modulation. The listed values were measured with standard Fourier analysis, using Period04 of Lenz & Breger (2005). Because of the alternating minima, the highest peak in the Fourier spectra always corresponded to the half-period of the RV Tau cycle, hence we doubled the adopted period of pulsations.
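This period-doubling step can be illustrated with a short, self-contained sketch; the synthetic light curve, the plain-FFT approach and all numbers below are hypothetical illustrations (the analysis above used Period04 on the real, unevenly sampled data):

```python
import numpy as np

# Hypothetical RV Tau-like light curve with alternating minima: the
# half-period harmonic dominates, so the highest spectral peak sits at
# twice the true pulsation frequency.
P_pul = 49.8                        # adopted (double) period in days, DF Cyg-like
t = np.arange(0.0, 2000.0, 1.0)     # idealised daily sampling
flux = (1.0
        + 0.10 * np.sin(2 * np.pi * t / (P_pul / 2))   # dominant half-period term
        + 0.04 * np.sin(2 * np.pi * t / P_pul))        # weaker alternation term

spec = np.abs(np.fft.rfft(flux - flux.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0)
f_peak = freqs[np.argmax(spec[1:]) + 1]   # skip the zero-frequency bin
P_adopted = 2.0 / f_peak                  # double the half-period, as done above
```

Here `P_adopted` lands within the frequency resolution of the true 49.8-day period, while the naive `1/f_peak` would return roughly half of it.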
The only exception is IW Car, whose light curve does not show any alternation in the minima, hence we adopted the single cycle period for its pulsations. The typical uncertainties in the quoted values are of the order of the last digit shown in Table \[table:stars\]. Note that in principle, the pulsation period could be determined to more decimals; however, the seemingly irregular variations in the period/phase (Bódi et al. 2016) would pose strong limitations to all future phase predictions. Our measured values are all in good agreement with previous period determinations in the literature (e.g. Pollard et al. 1996, Kiss et al. 2007, Percy 2015). ![image](aavso_all_with_OGLE.eps){width="18cm"} The visual data were first binned to average out the errors of the individual observations (typically $\pm$0.2-0.3 mag per point). Because of the short pulsation periods, the bin sizes were selected between 3 days and 5 days, depending on the pulsation period. This way we avoided strong phase smearing due to the binning, which could have led to undersampled pulsation light curves. The CCD observations were only checked for outliers, identified by visual inspection of the data. Both ASAS and OGLE have daily cadence and no binning was applied to those data. ![A short subset of the AAVSO light curve of U Mon (black dots), overplotted with the fitted linear term (blue lines) and the sum of two sine waves (red dotted lines). The damping of the pulsation amplitude during the minimum phase (JD 48,600-48,800) of the RVb variation is clearly visible.[]{data-label="umon"}](flux_variation.eps){width="9cm"} Following the idea outlined by Vega et al. (2017), we converted every light curve to flux units with the conversion formula $f=10^{-0.4\times({\rm magn.}-25)}$, adopting 25.0 as the arbitrary zero-point. To demonstrate the overall continuity and typical duty cycle of the data, in Fig.
\[aavso\] we show a gallery of the full sample in normalized flux units (unit flux corresponds to the global maximum brightness for each star). While we plotted the entire ASAS, [ *Kepler*]{} and OGLE light curves, the AAVSO data are only shown partially, in 10,000-day-long subsets. For the overwhelming majority of stars, the amplitude variability of the short-period pulsations is very much apparent, with the highest amplitudes always appearing in the bright phases of the RVb variability. Exceptions are IW Car, AR Pup (in the ASAS data) and OGLE-BLG-T2CEP-215, which seem to have more stable pulsations with RVb modulations in the same range as the pulsation amplitudes. We proceeded in the analysis by measuring the local amplitudes of each pulsation cycle. For this, each dataset was split into subsets that covered exactly one pulsation cycle (two consecutive maxima and minima of the alternating depths) and then we fitted the following function to each of the subsets: $$f(t-t_0)=f_0+k (t-t_0) + \sum_{n=1}^{2}A_n \sin \Bigg ( \frac{2\pi (t-t_0)}{n P_{\rm pul}}+\varphi_n \Bigg )$$ where $t$ and $t_0$ are time and the first time epoch of the subset, respectively, $f_0$, $k$, $A_n$ and $\varphi_n$ ($n=1,2$) are the fitted parameters (for IW Car, $n=1$ was used). The zero-point and the linear term were introduced to include the local variation of the slow RVb modulation, while the two-component Fourier-polynomial is the approximation of the light curve shape of the pulsations. The latter is admittedly a simplified description of the light curve; however, here we are more interested in the global average changes of the pulsation amplitudes than in the perfect fits of the individual cycles. We have also experimented with higher-order polynomials, but none of the results changed significantly, hence we restricted the analysis to the simplest form. An illustration of the fitting procedure is shown in Fig. \[umon\], where the red curve shows the individual fits of Eq.
1 throughout a whole ascending branch of the RVb cycle. The blue line separately indicates the linear term. The main conclusion here is that the two-component harmonic fit describes the observations very well. After fitting Eq. 1 to a given subset, we subtracted the linear term and computed the two extrema of the residual polynomial (except for the [*Kepler*]{} data of DF Cyg, where the extrema were determined from the residual of the observational points). Their difference was taken as the full pulsation amplitude in the given subset, while the median of the subset was chosen to represent the average flux of the star. The relationship found between these amplitudes and mean fluxes, and its properties, represent the main results of this paper. Results ======= ![Pulsation amplitudes as a function of the mean brightness for the [*Kepler*]{} data of DF Cyg. [*Top panel:*]{} instantaneous flux amplitudes vs. mean fluxes in linear scaling. The dotted red line shows a linear fit to the data, which was used to estimate the $f_\lambda^{\rm max}$ and $a_\lambda^{\rm max}$ parameters. The formal error bars are smaller than the symbol sizes. [*Bottom panel:*]{} the relative pulsation amplitude vs. the relative mean flux in equidistant bins. Here the vertical error bars show the standard deviations of the mean, while the horizontal error bars indicate the width of the bins (note the missing bin at 0.7 due to lack of points). The diagonal black lines show the line of equality in both panels. See text for the details.[]{data-label="kepler-dfcyg"}](flux_difference_log_df_cyg_errorbars_fit.eps "fig:"){width="9cm"} ![Pulsation amplitudes as a function of the mean brightness for the [*Kepler*]{} data of DF Cyg. [*Top panel:*]{} instantaneous flux amplitudes vs. mean fluxes in linear scaling. The dotted red line shows a linear fit to the data, which was used to estimate the $f_\lambda^{\rm max}$ and $a_\lambda^{\rm max}$ parameters.
The formal error bars are smaller than the symbol sizes. [*Bottom panel:*]{} the relative pulsation amplitude vs. the relative mean flux in equidistant bins. Here the vertical error bars show the standard deviations of the mean, while the horizontal error bars indicate the width of the bins (note the missing bin at 0.7 due to lack of points). The diagonal black lines show the line of equality in both panels. See text for the details.[]{data-label="kepler-dfcyg"}](flux_difference_normed_2_df_cyg_errorbars.eps "fig:"){width="9cm"} We treated the four types of data (visual, OGLE $I$-band, ASAS $V$-band, [ *Kepler*]{}) separately, partly because of the wavelength dependence of the pulsation amplitudes (Pollard et al. 1996), partly to avoid misleading effects from mixing heterogeneous data. ![Pulsation amplitude as a function of the mean brightness for the eight RVb stars from the AAVSO database (for each star, one point corresponds to one full pulsation cycle). Note the logarithmic scaling in the upper panel which allows easy comparison of the bright and the faint stars. The bottom panel is the same as in Fig. \[kepler-dfcyg\].[]{data-label="ampli-aavso"}](flux_difference_log_all_errorbars.eps "fig:"){width="9cm"} ![Pulsation amplitude as a function of the mean brightness for the eight RVb stars from the AAVSO database (for each star, one point corresponds to one full pulsation cycle). Note the logarithmic scaling in the upper panel which allows easy comparison of the bright and the faint stars. The bottom panel is the same as in Fig. \[kepler-dfcyg\].[]{data-label="ampli-aavso"}](flux_difference_normed_2_all_errorbars.eps "fig:"){width="9cm"} First we show the raw correlations between the mean fluxes and the corresponding full amplitudes in the upper panels of Figs. \[kepler-dfcyg\]-\[ampli-asas\]. For the sake of comparing bright and faint stars in Figs. \[ampli-aavso\]-\[ampli-asas\], we set doubly logarithmic scaling of the axes.
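As a concrete illustration of the amplitude measurement behind these figures, the following sketch fits Eq. 1 to one synthetic pulsation cycle and extracts the peak-to-peak amplitude after removing the linear term. Only the model form follows the text; the synthetic cycle, its parameters and the use of `scipy` are hypothetical choices for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

P = 91.48  # pulsation period of U Mon from Table 1, in days

def eq1(t, f0, k, A1, phi1, A2, phi2):
    # Eq. (1): local linear trend plus a two-component Fourier polynomial
    return (f0 + k * t
            + A1 * np.sin(2 * np.pi * t / P + phi1)
            + A2 * np.sin(2 * np.pi * t / (2 * P) + phi2))

# One synthetic pulsation cycle in flux units (hypothetical parameters + noise).
rng = np.random.default_rng(0)
t = np.arange(0.0, P, 1.0)
f = eq1(t, 5.0, 0.002, 0.8, 0.3, 0.25, 1.1) + rng.normal(0.0, 0.02, t.size)

popt, _ = curve_fit(eq1, t, f, p0=(f.mean(), 0.0, 0.5, 0.0, 0.1, 0.0))

# Subtract the fitted linear term; the peak-to-peak of the residual is the
# full pulsation amplitude, and the subset median is the mean flux.
dense = np.linspace(0.0, P, 2000)
resid = eq1(dense, *popt) - (popt[0] + popt[1] * dense)
amplitude = resid.max() - resid.min()
mean_flux = np.median(f)
```

Repeating this over consecutive one-cycle subsets yields the (`mean_flux`, `amplitude`) pairs plotted in the upper panels.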
To guide the eye, we have also drawn the lines of equality in these plots. While [*Kepler*]{} and visual data lie close to equality, the OGLE and ASAS points all form parallel sequences falling below the diagonal lines, meaning that the correlations are close to linear but the mean slopes are less than one. The formal Pearson’s $r$ correlation coefficients are greater than 0.8 for most of the stars except those three already mentioned in relation to Fig. \[aavso\] (IW Car: $r= 0.53$ (AAVSO), AR Pup: $r=0.21$ (AAVSO), OGLE-BLG-T2CEP-215: $r=0.34$). The well-expressed linear correlation between the amplitudes $a_\lambda$ and mean fluxes $f_\lambda$ for all types of observations led us to fit the data in the upper panels of Figs. \[kepler-dfcyg\]-\[ampli-asas\] with the simplest $a_\lambda=\alpha f_\lambda + \beta$ ($\lambda$=visual, $V$ and $I$) linear term. What we found is that the $\alpha$ slope parameters for almost all stars agreed within the error bars for each $\lambda$. Namely, the visual observations show a slope parameter around unity (e.g. AI Sco: 0.95$\pm$0.10; RV Tau: 0.89$\pm$0.08; SU Gem: 1.21$\pm$0.14; U Mon: 0.93$\pm$0.06), the OGLE $I$-band observations around 0.6 (e.g. BLG-177: 0.59$\pm$0.06; BLG-215: 0.68$\pm$0.09; LMC-200: 0.60$\pm$0.04), while the ASAS $V$-band data resulted in a meaningful correlation for SX Cen only, with a slope of 0.62$\pm$0.08. The [*Kepler*]{} data of DF Cygni show very similar linear amplitude vs. flux scaling throughout the whole RVb cycle to what the visual data imply. As we will demonstrate later, the wavelength dependence of the slope parameter follows that of the intrinsic pulsation amplitude (i.e. the smaller $I$-band slope parameter is a direct consequence of the pulsation amplitude being smaller in $I$ than in $V$). ![The same as in Fig. \[ampli-aavso\] for the seven OGLE stars.[]{data-label="ampli-ogle"}](flux_difference_ogle_log_all_errorbars.eps "fig:"){width="9cm"} ![The same as in Fig.
\[ampli-aavso\] for the seven OGLE stars.[]{data-label="ampli-ogle"}](flux_difference_ogle_normed_2_all_errorbars.eps "fig:"){width="9cm"} ![The same as in Fig. \[ampli-aavso\] for the three ASAS stars.[]{data-label="ampli-asas"}](flux_difference_asas_log_all_errorbars.eps "fig:"){width="9cm"} ![The same as in Fig. \[ampli-aavso\] for the three ASAS stars.[]{data-label="ampli-asas"}](flux_difference_asas_normed_2_all_errorbars.eps "fig:"){width="9cm"} In the next step we determined the relative flux change and the relative pulsation amplitude change as follows. For the fluxes, we defined their relative change as $(f_\lambda^{\rm max}-f_\lambda)/f_\lambda^{\rm max}$, where $f_\lambda^{\rm max}$ is the pulsation-averaged flux when the star is the brightest. This defined the full range of relative flux changes, where 0.0 corresponds to the global maximum. To calculate the relative amplitude changes, we had to take into account the intrinsic cycle-to-cycle variability of the RV Tau light curve shapes (well documented for RVa-type stars, e.g. R Sct – Buchler et al. 1996, AC Her – Kolláth et al. 1998). Because of that, it would have been misleading to take the single amplitude value that corresponds to the brightest flux point as the maximum amplitude; instead, we fitted a line to the amplitude vs. flux plot and then took the fit’s value for the largest flux as the hypothetical mean maximum amplitude $a_\lambda^{\rm max}$ (see the upper panel of Fig. \[kepler-dfcyg\] for a visualisation of these two parameters). Then the relative change of the amplitude was defined similarly to that of the flux, namely as $(a_\lambda^{\rm max}-a_\lambda)/a_\lambda^{\rm max}$. Finally, we calculated the mean relative amplitude change as a function of ten equally spaced flux bins that were selected between the minimum and maximum relative flux changes. The results are shown in the lower panels of Figs.
\[kepler-dfcyg\], \[ampli-aavso\], \[ampli-ogle\] and \[ampli-asas\] for the [*Kepler*]{}, AAVSO, OGLE and ASAS data, respectively. Here (0,0) corresponds to the brightest state with the largest amplitude. Note that the reason why there is generally no point at the origin is twofold: (i) the bins are centered on their midpoints and (ii) the definition of the maximum amplitude implies that individual points in the bright states scatter around the origin. Two major conclusions can be drawn from these plots. First, within the error bars, the relative amplitude changes scale perfectly with the relative flux changes, which means that the pulsation amplitude, in fact, remains constant throughout the RVb cycle when compared to the overall system flux. The diagonal lines in the panels are not fits but indicate the lines of equality. All but a few RVb stars show the same proportionality as the one found for DF Cygni by Vega et al. (2017), and this scaling is valid not only for the extrema (which is what those authors actually noticed) but holds throughout the whole RVb cycle. Second, the scatter in the plots is dominated by the stars rather than by the observational uncertainties. This is illustrated by the [*Kepler*]{} plots in Fig. \[kepler-dfcyg\], where the individual points in the upper panel have practically no measurement errors. It is the RV Tau-type pulsation that changes seemingly irregularly (presumably due to strong non-linear effects, Bódi et al. 2016) in the same range as the apparent scatter around the diagonal lines in Figs. \[kepler-dfcyg\]-\[ampli-asas\]. This means that the simplest explanation must invoke a mechanism that equally affects the mean brightness and the apparent amplitude. Discussion ========== Our results show a general pattern for the overwhelming majority of the sample, that is, a linear correlation between the mean brightness and the pulsation amplitude, when everything is measured in flux units.
The one-to-one correspondence of the relative changes, depicted in the lower panels of Figs. \[kepler-dfcyg\]-\[ampli-asas\], is actually rephrasing the linear correlation differently, which can be understood as follows. Let us consider the equality of the relative changes of the pulsation amplitudes and the mean fluxes $$\frac{\Delta a_\lambda}{a_\lambda}=\frac{\Delta f_\lambda}{f_\lambda}$$ as an approximation of the following differential equation $$\frac{da_\lambda}{a_\lambda}=\frac{df_\lambda}{f_\lambda}.$$ This can be easily integrated as $$\log a_\lambda=\log f_\lambda + c_\lambda,$$ where the integration constant $c_\lambda$ can be expressed as $$c_\lambda=\log \frac{a_\lambda^{\rm max}}{f_\lambda^{\rm max}}.$$ Here $a_\lambda^{\rm max}$ and $f_\lambda^{\rm max}$ are the amplitude and the mean flux values when the star is the brightest. Substituting $c_\lambda$ into Eq. 4 and rearranging leads to the following formal solution: $$a_\lambda=\frac{a_\lambda^{\rm max}}{f_\lambda^{\rm max}} f_\lambda,$$ which is exactly the same linear relationship between the amplitudes and the fluxes as we have seen in the empirical data. The $\alpha$ slope parameter turns out to be the ratio of the maximal amplitudes and fluxes, which can be determined for each star and each photometric band separately. Given that the RV Tau pulsations follow the same wavelength dependence in the photometric amplitudes as, for example, the Cepheid variables, the smaller $I$-band slope parameters found for the OGLE-stars are a direct consequence of having smaller pulsation amplitudes in $I$ than in $V$ (e.g. Pollard et al. 1996). Having established these very simple properties of the flux-amplitude relationship, one can ask about the implications. 
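The implication of Eq. 6 can be checked numerically: multiplying a constant-amplitude pulsation (in flux units) by the slowly varying transmission $T(t)$ of an opaque screen makes the measured flux amplitude scale linearly with the mean flux, while the magnitude amplitude stays nearly constant. A minimal sketch with hypothetical periods and transmission values:

```python
import numpy as np

P_pul, P_mod = 50.0, 5000.0      # hypothetical pulsation and modulation periods (days)
t = np.linspace(0.0, P_mod, 50000)
pulsation = 1.0 + 0.2 * np.sin(2 * np.pi * t / P_pul)  # intrinsic pulsation, flux units
T = 0.55 + 0.45 * np.cos(2 * np.pi * t / P_mod)        # screen transmission, 10%-100%
flux = T * pulsation
mag = 25.0 - 2.5 * np.log10(flux)                      # the same curve in magnitudes

amps, means, mag_amps = [], [], []
for lo in np.arange(0.0, P_mod, P_pul):                # one window per pulsation cycle
    w = (t >= lo) & (t < lo + P_pul)
    amps.append(flux[w].max() - flux[w].min())
    means.append(np.median(flux[w]))
    mag_amps.append(mag[w].max() - mag[w].min())

slope = np.polyfit(means, amps, 1)[0]   # recovers a_max/f_max = 0.4, cf. Eq. 6
```

The flux amplitudes follow $a_\lambda \approx 0.4\,f_\lambda$ here, whereas the magnitude amplitudes all sit near $2.5\log_{10}(1.2/0.8)\approx0.44$ mag: the inverse logarithmic magnitude system hides the correlation.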
It was noticed very early for individual stars that the pulsation amplitude shows some correlation with the long-term changes (for instance, O’Connell 1946 already noted for IW Car that the pulsations had larger amplitudes when the star was brighter). A connection between the circumstellar material and the RVb phenomenon was proposed by Lloyd Evans (1985) who argued against the role of binarity, preferring unstable, potentially pulsation-induced mass-loss that can lead to R Coronae Borealis-like obscuration events. Fokin (1994) considered the problem of the secondary variability in RVb stars and, after discussing the difficulties of modelling it with some sort of pulsations, concluded that binarity should play an important role instead, with quasi-periodic eclipses by a circumstellar dust cloud (similarly to the proposal of Waelkens & Waters 1993). Zsoldos (1996) too argued against a pulsational origin of the RVb phenomenon in RV Tau but noted that simple binarity is also excluded because of the long-term changes of the RVb cycles. Pollard et al. (1996), based on extensive long-term multicolour photometry of RV Tau stars, noted various features of the RVb phenomenon. Two of their stars, U Mon and AI Sco, exhibited well-expressed ‘amplitude damping’ during the RVb minima. While binarity was suggested for these stars, they also noted as an argument against dust obscuration that “pure obscuration by dust will give a reddening and a dimming of the obscured star but should not decrease the amplitude of the pulsations or make the deep-shallow alternations less distinct”. This line of argumentation was adopted by later authors, like Van Winckel et al. (1999) and Maas et al. (2002). We think it is clear that the recurrent reasoning in the literature against dust obscuration is not correct. It was only very recently that Vega et al.
(2017) pointed out that this kind of amplitude damping is actually what one expects for the pulsating stellar disk being occulted during the long-period minima by a very large, opaque screen, which, they argue, should be a circumbinary dusty disk around the entire binary system. Our results presented in this paper indicate that amplitude variability is ubiquitous and follows the same linear scaling with the mean flux in almost every RVb star. In other words, the pulsation amplitude remains constant (with some intrinsic small-scale variability due to the non-linear nature of the pulsations), when measured relative to the overall system brightness. One of the reasons why this has not yet been found is the general use of the magnitude system in variable star analyses: the inverse logarithmic nature hid the simple relationship between the mean brightness and the pulsation amplitude. Also, the fluctuating amplitude variability of RV Tau-type stars did not help either: one needs a very good duty cycle and a long time-span to average out the effects of the cycle-to-cycle changes that are inherent to the RV Tau-type pulsations. Finally, we briefly turn to those stars whose behaviour is only marginally similar to that of the whole sample. These are IW Car, AR Pup and OGLE-BLG-T2CEP-215. The classification of RV Tau stars has long been known to be very difficult (e.g. Zsoldos 1998), with semiregular (SR) variables frequently mistaken for other types of luminous variables. The periods, amplitudes and systematic amplitude changes of SR variables (Kiss et al. 2000) can indeed be photometrically similar, hence further information is always important. For IW Car, a rotating and expanding post-AGB nebula was recently resolved by ALMA (Bujarrabal et al. 2017), which is the latest update to its long-known post-AGB status. However, the lack of alternating minima in its light curve means that the star is more like a general pulsating post-AGB star than a classical RV Tau-like variable.
This is in line with the fact that Giridhar et al. (1994) derived a spectroscopic effective temperature of 6700 K, which is much hotter than typical RV Tau stars (Kiss et al. 2007). AR Pup and its post-AGB disk were observed interferometrically by Hillen et al. (2017), who noted that for this star, the total infrared luminosity dominates over the dereddened optical fluxes, which is indicative of a disk seen close to edge-on. Least is known about the OGLE star, for which only the very red colour ($V-I\approx3.3$ mag) is listed in the catalogues, which could indicate either high interstellar reddening or an intrinsically red colour. We speculate that these three stars may have different geometry and/or circumstellar extinction from the rest of the sample. Even for IW Car and AR Pup, there are cycles of the RVb variability when there are hints of the amplitudes following the variations of the mean flux levels. The RVb cycles are not strictly repetitive in other stars either (Zsoldos 1996), implying that the obscuring clouds are changing over the time-scale of the orbital periods of the systems. All in all, while these stars exhibit a somewhat noisier relationship than the other stars, fundamentally they still exhibit a similarly linear relation between pulsation amplitude and system flux, and therefore fit the proposed general picture. Summary ======= The main results of the paper can be summarized as follows: 1. We have compiled light curves for a sample of RVb-type variables, using visual observations, ground-based CCD photometric measurements and ultra-precise data from the [*Kepler*]{} space telescope. 2. We found a ubiquitous linear correlation between the pulsation amplitude and the mean brightness, when both are measured in flux units. There is a one-to-one correspondence between their relative changes, meaning that the pulsation amplitude actually remains constant throughout the RVb cycle, when measured relative to the system flux level. 3.
The properties of the correlation can be naturally explained by a mechanism that equally affects the mean flux and the apparent amplitude, so that the whole light curve is scaled by a time-dependent factor. Periodically variable obscuration by a large opaque screen, presumably corresponding to a circumbinary dust disk, provides the required mechanism. 4. We conclude that the light variations of RVb-type stars can be fully explained phenomenologically by the combination of time-dependent non-linear pulsations and the dust obscuration model of the RVb phenomenon. This work has been supported by the NKFIH K-115709 and the GINOP-2.3.2-15-2016-00003 grants of the Hungarian National Research, Development and Innovation Office, and the Hungarian Academy of Sciences. This research has made use of the International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission Directorate. We thank an anonymous referee for his/her useful comments and suggestions. Aikawa, T., 2010, A&A, 514, A45 Blöcker, T., 1995, A&A, 299, 755 Bódi, A., Szatmáry, K., Kiss, L.L., 2016, A&A, 596, A24 Buchler, J.R., Kolláth, Z., Serre, T., Mattei, J., 1996, ApJ, 462, 489 Bujarrabal, V., Castro-Carrizo, A., Alcolea, J., Van Winckel, H., Sánchez Contreras, C., Santander-García, M., 2017, A&A, 597, L5 Fokin, A.B., 1994, A&A, 292, 133 Fokin, A.B., Lèbre, A., Le Coroller, H., Gillet, D., 2001, A&A, 378, 546 Gezer, I., Van Winckel, H., Bozkurt, Z., De Smedt, K., Kamath, D., Hillen, M., Manick, R., 2015, MNRAS, 453, 133 Giridhar, S., Rao, N. K., Lambert, D. L., 1994, ApJ, 437, 476 Hillen, M. et al., 2017, A&A, 599, A41 Kiss, L.
L., Szatmáry, K., Szabó, Gy., Mattei, J.A., 2000, A&AS, 145, 283 Kiss, L. L., Derekas, A., Szabó, Gy. M., Bedding, T. R., Szabados, L., 2007, MNRAS, 375, 1338 Kolláth, Z., Buchler, J. R., Serre, T., Mattei, J., 1998, A&A, 329, 147 Lenz, P., Breger, M., 2005, CoAst, 146, 53 Lloyd Evans, T., 1985, MNRAS, 217, 493 Maas, T., Van Winckel, H., Waelkens, C., 2002, A&A, 386, 504 O’Connell, D., 1946, Publications of the Riverview College Observatory, 2, 46 Percy, J. R., 2015, JAAVSO, 43, 176 Pojmanski, G., 2002, AcA, 52, 397 Pollard, K. R., Cottrell, P. L., Kilmartin, P. M., Gilmore, A. C., 1996, MNRAS, 279, 949 Pollard, K. R., Cottrell, P. L., Lawson, W. A., Albrow, M. D., Tobin, W., 1997, MNRAS, 286, 1 Pollard, K. R., McSaveney, J. A., Cottrell, P. L., 2006, MemSAIt, 77, 527 Soszynski, I., et al., 2008, AcA, 58, 293 Soszynski, I., et al., 2010, AcA, 60, 91 Soszynski, I., et al., 2011, AcA, 61, 285 Soszynski, I., et al., 2013, AcA, 63, 37 Van Winckel, H., Waelkens, C., Fernie, J.D., Waters, L.B.F.M., 1999, A&A, 343, 202 Vega, L.D., Stassun, K.G., Montez, R., Jr., Boyd, P.T., Somers, G., 2017, ApJ, 839, id. 48 Wallerstein, G., 2002, PASP, 114, 689 Waelkens, C., Waters, L. B. F. M., 1993, in Sasselov D.O., ed., ASP Conf. Ser. Vol. 45, Luminous High-Latitude Stars. Astron. Soc. Pac., San Francisco, p.219 Zsoldos, E., 1996, A&AS, 119, 431 Zsoldos, E., 1998, AcA, 48, 775 [^1]: http://www.aavso.org/vsx/
--- abstract: 'In the context of the Heston model, we establish a precise link between the set of equivalent martingale measures, the ergodicity of the underlying variance process and the concept of asymptotic arbitrage proposed in Kabanov-Kramkov [@Kabanov] and in Föllmer-Schachermayer [@Fol].' address: - 'Department of Mathematics, Université Tunis El Manar' - 'Department of Mathematics, Imperial College London' author: - Fatma Haba - Antoine Jacquier title: Asymptotic arbitrage in the Heston model --- Introduction ============ The concept of arbitrage is the cornerstone of modern mathematical finance, and several versions of the so-called fundamental theorem of asset pricing have been proved over the past two decades, see for instance [@Del] for an overview. A version of it essentially states that absence of arbitrage is equivalent to the existence of an equivalent martingale measure under which discounted asset prices are true martingales. This then allows the use of ‘martingale models’ (either continuous or with jumps) as underlying dynamics for option pricing. In practice, should short-term arbitrages arise—due to some market discrepancies—they are immediately exploited by traders, and market liquidity therefore acts as an equilibrium agent, to prevent them occurring significantly. It can be argued, however, that one may generate long-term riskless profit, when the time horizon tends to infinity. This turns out to hold in most models used in practice. The existence and nature of such infinite horizon asymptotic arbitrage opportunities have been studied in a handful of papers, for example [@Du; @Ire; @Rok]. Among the plethora of models used and analysed both in practice and in theory, stochastic volatility models have proved to be very flexible and suitable for pricing and hedging. 
Due to its affine structure, the Heston model [@Heston] has gained great popularity among practitioners for equity and FX derivatives modelling; see in particular [@Gatheral; @Fou] for a detailed account of this fame. Because of the correlation between the asset price and the underlying volatility, the market is incomplete, and the Heston model admits an infinity of equivalent martingale measures. Its affine structure allows us to study precisely the existence (or absence) of asymptotic arbitrage. Specifically, we shall endeavour to understand how the parameters of the model influence the nature (such as its speed and existence) of the asymptotic arbitrage. Of particular interest will be the link between asymptotic arbitrage and the ergodicity of the underlying variance process. In [@Fol] the authors proved, under suitable regularity conditions, that price processes with a non-trivial market price of risk (see Definition \[def:MarketPrice\]) allow for asymptotic arbitrage (with linear speed). Using the theory of large deviations, we shall show that $S$ may allow for such arbitrage even if it does not admit an average squared market price of risk. The organisation of this paper is as follows: all the notations and definitions are given in Section \[sec:Notations\]. Asymptotic arbitrage in the Heston model is studied in Section \[sec:Main\]; the main contribution of this paper is Theorem \[thm:AsymptArb\], which identifies sufficient (and sometimes necessary) conditions on the set of equivalent martingale measures under which asymptotic arbitrage occurs with linear speed. These conditions are different from those in Proposition \[prop:limProbErgodicResult\], in which we study how the ergodicity of the variance process plays an important role in proving the existence of asymptotic arbitrage with slower speed.
Notations and definitions {#sec:Notations} ========================= Let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a filtered probability space where the filtration $\mathbb{F}=(\mathcal{F}_t)_{t\geq0}$ satisfies the usual conditions and let $S={\mathrm{e}}^{X}$ model a risky security under an equivalent martingale measure. Let $\mathcal{H}$ denote the class of predictable, $S$-integrable admissible processes. We define for each $t>0$ the sets $K_t:=\left\{\int_0^t H_s {\mathrm{d}}S_s: H\in \mathcal{H}\right\}$ and $\mathcal{M}_t^{e}(S):= \left\{\mathbb{Q}\sim\mathbb{P}\text{ such that }(S_u)_{0\leq u\leq t}\text{ is a local } \mathbb{Q}\text{-martingale}\right\}$. We shall always assume that $\mathcal{M}_t^{e}(S)$ is not empty for all $t\geq 0$. Furthermore, for any set $A$ in $\Omega$, we shall denote by $A^c:=\Omega\setminus A$ its complement. Asymptotic arbitrage -------------------- The following definition of a long-term arbitrage is taken from [@Fol]: \[def1\] The process $S$ admits an $(\varepsilon_1,\varepsilon_2)$-arbitrage up to time $t$ if for $(\varepsilon_1,\varepsilon_2)\in (0,1)^2$, there exists $X_t \in K_t$ such that (i) $X_t \geq-\varepsilon_2$ $\mathbb{P}$-almost surely; (ii) $\mathbb{P}(X_t\geq 1-\varepsilon_2)\geq1-\varepsilon_1$. This means that the maximal loss of the trading strategy, yielding the wealth $X_t$ at time $t$, is bounded by $\varepsilon_2$ and with probability $1-\varepsilon_1$ the terminal wealth $X_t$ equals at least $1-\varepsilon_2$. We shall be interested here in the following characterisation of long-term arbitrage, namely the notion of asymptotic exponential arbitrage with exponentially decaying failure probability, first proposed in [@Fol] and later in [@Bidima] and [@Du]. 
\[def2\] The process $S$ allows for asymptotic exponential arbitrage with exponentially decaying failure probability if there exist $t_0\in(0,\infty)$ and constants $C,\lambda_1,\lambda_2>0$ such that for all $t\geq t_0$, there is $X_t\in K_t$ satisfying (i) $X_t \geq -{\mathrm{e}}^{-\lambda_2 t}$ $\mathbb{P}$-almost surely; (ii) $\mathbb{P}(X_t \leq {\mathrm{e}}^{\lambda_2 t}) \leq C{\mathrm{e}}^{-\lambda_1 t}$. Asymptotic exponential arbitrage with exponentially decaying failure probability can be interpreted as a strong and quantitative form of long-term arbitrage. In particular, let $\lambda_1,\lambda_2>0$, $\varepsilon_2:={\mathrm{e}}^{-\lambda_2 t}$ and $\varepsilon_1:=C{\mathrm{e}}^{-\lambda_1 t}$; then Definitions \[def1\] and \[def2\] are equivalent. In [@Fol], Föllmer and Schachermayer showed that this strong form of asymptotic arbitrage was actually a consequence, under some assumptions (see Theorem 1.4 therein), of the following concept: \[def:MarketPrice\] Let $f:{\mathbb{R}}_+^*\to{\mathbb{R}}_+^*$ be a smooth function such that $\lim_{t\nearrow+\infty}f(t)=+\infty$. The process $S$ is said to have an average squared market price of risk $\gamma_i$ ($i=1,2$) above the threshold $c_i>0$ with speed $f(t)$ if $\mathbb{P}\left(f(t)^{-1}\int_{0}^{t}\gamma_i^2(s) {\mathrm{d}}s<c_i\right)$ tends to zero as $t$ tends to infinity. Stochastic volatility models ---------------------------- We consider here the Heston stochastic volatility model, namely the unique strong solution to the stochastic differential equations  below. As is well-known (see [@Ber] for example), there may not be a unique risk-neutral martingale measure for such models. The following SDEs are therefore understood under the physical measure $\mathbb{P}$, and the set of equivalent martingale measures is parameterised through the Radon-Nikodym derivatives introduced below.
$$\label{eq:SDEHeston} \begin{array}{rll} {\mathrm{d}}S_t / S_t & = \mu {\mathrm{d}}t+ \sqrt{V_t}\left(\rho {\mathrm{d}}W_1(t)+\sqrt{1-\rho^{2}}{\mathrm{d}}W_2(t)\right),\quad & S_0=1\\ {\mathrm{d}}V_t & = (a-bV_t){\mathrm{d}}t+\sqrt{2\sigma V_t}{\mathrm{d}}W_1(t),\quad & V_0>0, \end{array}$$ where $W_1$ and $W_2$ are independent $\mathbb{P}$-Brownian motions, $\mu,a,\sigma>0$, $b\in\mathbb{R}$ and $|\rho|<1$. The class of equivalent martingale measures $\mathbb{Q}$ can be considered in terms of the Radon-Nikodym derivatives $$\label{eq:Z} Z_t = \left.\frac{{\mathrm{d}}\mathbb{Q}}{{\mathrm{d}}\mathbb{P}}\right|_{\mathcal{F}_t} = \exp\left\{-\left(\int_0^t\gamma_1(s){\mathrm{d}}W_1(s)+\int_0^t\gamma_2(s) {\mathrm{d}}W_2(s)\right) -\frac{1}{2}\left(\int_0^t\gamma_1^{2}(s){\mathrm{d}}s+\int_0^t\gamma_2^{2}(s){\mathrm{d}}s\right)\right\}.$$ The condition $\mu-r=\sqrt{V_t}\left(\rho\gamma_1(t)+\sqrt{1-\rho^2}\gamma_2(t)\right)$ is necessary for an equivalent local martingale measure to exist, and ensures that the discounted stock price is a local martingale. Since $Z$ is a positive local martingale with $Z_0=1$, it is a supermartingale, and a true martingale if and only if $\mathbb{E}(Z_t)=1$. For the Heston stochastic volatility model we obtain, for any real constant $\lambda$, $$\label{eq:DefGamma} \gamma_1(t) = \lambda\sqrt{V_t} \qquad\text{and}\qquad \gamma_2(t) = \frac{1}{\sqrt{1-\rho^2}}\left(\frac{\mu-r}{\sqrt{V_t}}-\lambda \rho\sqrt{V_t}\right).$$ Main results {#sec:Main} ============ For any $(\alpha, \beta,\delta)\in{\mathbb{R}}^3$, we introduce the process $(X_t^{\alpha,\beta,\delta})_{t\geq 0}$ defined (pathwise) by $$\label{eq:XProcess} X_t^{\alpha,\beta,\delta}:=\alpha V_t+\beta\int_0^t V_s{\mathrm{d}}s+\delta\int_0^t V_s^{-1}{\mathrm{d}}s, \quad\text{for any }t\geq 0,$$ where $V$ is the Feller diffusion for the variance in .
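As a rough numerical illustration of the dynamics above and of the market prices of risk, the following plain-Python sketch simulates the two SDEs with an Euler scheme (absorbing the variance at the origin so the square roots stay real) and evaluates $\gamma_1$ and $\gamma_2$ pointwise. The parameter values are illustrative only (chosen so that the Feller condition $a>\sigma$ holds); this sketch is not part of the authors' analysis.

```python
import math
import random

def simulate_heston(mu=0.05, a=0.04, b=2.0, sigma=0.02, rho=-0.5,
                    v0=0.04, t=1.0, n_steps=1000, seed=0):
    """Euler sketch of the system: dS/S = mu dt + sqrt(V)(rho dW1 + sqrt(1-rho^2) dW2)
    and dV = (a - b V) dt + sqrt(2 sigma V) dW1; V is absorbed at 0 so the
    square roots stay real.  Returns the paths of x = log S (S0 = 1) and of V."""
    rng = random.Random(seed)
    dt = t / n_steps
    x, v = 0.0, v0
    xs, vs = [x], [v]
    for _ in range(n_steps):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        # Ito correction -V/2 turns the dS/S dynamics into d(log S)
        x += (mu - 0.5 * v) * dt + math.sqrt(v) * (rho * dw1 + math.sqrt(1.0 - rho**2) * dw2)
        v = max(v + (a - b * v) * dt + math.sqrt(2.0 * sigma * v) * dw1, 0.0)
        xs.append(x)
        vs.append(v)
    return xs, vs

def market_prices_of_risk(v, lam, mu=0.05, r=0.0, rho=-0.5):
    """gamma_1 and gamma_2 at a variance level v > 0, for a given lambda."""
    g1 = lam * math.sqrt(v)
    g2 = ((mu - r) / math.sqrt(v) - lam * rho * math.sqrt(v)) / math.sqrt(1.0 - rho**2)
    return g1, g2

xs, vs = simulate_heston()
```

For any $\lambda$, the pair returned by `market_prices_of_risk` satisfies the local-martingale condition $\mu-r=\sqrt{V_t}\left(\rho\gamma_1(t)+\sqrt{1-\rho^2}\gamma_2(t)\right)$ by construction, which is a convenient numerical consistency check.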
Define the real interval ${\mathcal{D}}_{\beta,\delta}$ by $$\label{eq:DomainD} {\mathcal{D}}_{\beta,\delta}= \left\{ \begin{array}{ll} \displaystyle \left[\frac{(a-\sigma)^2 }{4\sigma\delta}, \frac{b^2 }{4\sigma\beta}\right], & \text{if } \beta>0,\;\delta<0,\\ \displaystyle \left(-\infty,\frac{(a-\sigma)^2 }{4\sigma\delta}\wedge\frac{b^2 }{4\sigma\beta}\right], & \text{if } \beta>0,\; \delta>0,\\ \displaystyle \left[\frac{b^2 }{4\sigma\beta}, \frac{(a-\sigma)^2 }{4\sigma\delta}\right], & \text{if } \beta<0,\; \delta>0,\\ \displaystyle \left[\frac{(a-\sigma)^2}{4\sigma\delta}\vee\frac{b^2}{4\sigma\beta},+\infty\right), & \text{if } \beta<0, \;\delta<0. \end{array} \right.$$ Whenever $\beta\delta=0$, we define ${\mathcal{D}}_{\beta,\delta}$ by taking the limits of the interval (a closed bound becoming open if it becomes infinite), where we use the slight abuse of notation $"1/0=\infty"$, i.e. ${\mathcal{D}}_{\beta,\delta}=\left(-\infty,\frac{b^2}{4\sigma\beta}\right]$ if $\beta>0$ and $\delta=0$, ${\mathcal{D}}_{\beta,\delta}=\left[\frac{b^2}{4\sigma\beta} ,+\infty\right)$ if $\beta<0$ and $\delta=0$, and ${\mathcal{D}}_{\beta,\delta}={\mathbb{R}}$ if $\beta=\delta=0$. Let us further define the function $\Lambda^{\beta,\delta}:{\mathcal{D}}_{\beta,\delta}\to{\mathbb{R}}$ by $$\label{eq:Lambda} \Lambda^{\beta,\delta} (u) = \left\{ \begin{array}{ll} \displaystyle \frac{ba}{2\sigma}-\frac{1}{2\sigma}\sqrt{((a-\sigma)^2-4\sigma\delta u)(b^2-4\sigma\beta u)}-\frac{1}{2}\sqrt{b^2-4\sigma\beta u}, & \text{if }\delta\ne 0,\\ \\ \displaystyle \frac{a}{2\sigma}\left(b-\sqrt{b^2-4\sigma\beta u}\right), & \text{if }\delta=0. \end{array} \right.$$ In the case $\delta \ne 0$ above, we further impose the condition $a>\sigma$ for the definition of the function $\Lambda^{\beta,\delta}$. It may be surprising at first that the function $\Lambda^{\beta,\delta}$, related to the process $X^{\alpha,\beta,\delta}$ in a sense made precise below, does not depend on $\alpha$.
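The piecewise definition of ${\mathcal{D}}_{\beta,\delta}$, including the limiting convention for $\beta\delta=0$, can be transcribed mechanically. The sketch below (plain Python, with `math.inf` standing for the unbounded ends) is only a convenience for experimenting with the domain, not material from the paper.

```python
import math

def domain_D(a, b, sigma, beta, delta):
    """Interval D_{beta,delta} as a pair (lower, upper), with math.inf for
    unbounded ends; the beta*delta = 0 cases follow the stated limiting
    convention '1/0 = infinity'."""
    inf = math.inf
    pb = b**2 / (4.0 * sigma * beta) if beta != 0 else None
    pd = (a - sigma)**2 / (4.0 * sigma * delta) if delta != 0 else None
    if beta > 0 and delta < 0:
        return (pd, pb)
    if beta > 0 and delta > 0:
        return (-inf, min(pd, pb))
    if beta < 0 and delta > 0:
        return (pb, pd)
    if beta < 0 and delta < 0:
        return (max(pd, pb), inf)
    if beta > 0:                    # delta = 0
        return (-inf, pb)
    if beta < 0:                    # delta = 0
        return (pb, inf)
    if delta > 0:                   # beta = 0, limit of the beta > 0 case
        return (-inf, pd)
    if delta < 0:                   # beta = 0, limit of the beta > 0 case
        return (pd, inf)
    return (-inf, inf)              # beta = delta = 0
```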
This function actually describes the large-time behaviour of the process $X^{\alpha,\beta,\delta}$. Since the variance process $V$ is strictly positive almost surely (by the Feller condition imposed above), the term $\int_0^t V_s {\mathrm{d}}s$ clearly dominates $V_t$ for large $t$, which explains why $\alpha$ bears no influence on $\Lambda^{\beta,\delta}$. The condition $a>\sigma$ imposed above in the case $\delta\ne 0$ should not surprise the reader, since this is nothing else than the Feller condition, ensuring that the variance process never touches the origin almost surely. We further define the Fenchel-Legendre transform $\Lambda_{\beta,\delta}^*:{\mathbb{R}}\to{\mathbb{R}}_+$ of $\Lambda^{\beta,\delta}$ by $$\label{eq:LambdaStar} \Lambda_{\beta,\delta}^*(x):=\sup_{u\in{\mathcal{D}}_{\beta,\delta}}\{ux-\Lambda^{\beta,\delta}(u)\}.$$ Whenever $\beta=0$ or $\delta=0$, we shall drop the subscript and write respectively $\Lambda^\delta$ or $\Lambda^\beta$. The same rule will be followed for the domains and the Fenchel-Legendre transforms. In the general case, $\Lambda^*_{\beta,\delta}$ does not have a closed-form representation. In the particular case where $\delta$ is null (which shall be of interest to us) it actually does, and a straightforward computation shows that $$\label{eq:Lambda*Delta0} \Lambda^*_{\beta}(x) = \frac{(bx-a\beta )^2}{4\sigma |\beta x|}, \quad\text{for all }x\in{\mathbb{R}}^*.$$ In that case, the function $\Lambda^*_{\beta}$ is strictly convex on ${\mathbb{R}}_+^*$ (respectively on ${\mathbb{R}}_-^*$) with a unique minimum attained at $|a\beta/b|$ (resp. at $-|a\beta/b|$). In particular, if $b\beta\leq 0$ then $\Lambda^*_{\beta}$ is strictly positive on ${\mathbb{R}}_+^*$. Otherwise, if $b\beta>0$, then $\Lambda^*_{\beta}(|a\beta/b|) = 0$ and $\Lambda^*_{\beta}(x)>0$ for all $x\in{\mathbb{R}}_+^*\setminus\{|a\beta/b|\}$. Symmetric statements hold on ${\mathbb{R}}_-$.
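The closed form above and the positivity properties just listed are easy to sanity-check numerically; the following snippet does so for illustrative parameter values only.

```python
import math

def lambda_star_beta(x, a, b, sigma, beta):
    """Closed form of the transform when delta = 0:
    Lambda*_beta(x) = (b x - a beta)^2 / (4 sigma |beta x|)."""
    return (b * x - a * beta)**2 / (4.0 * sigma * abs(beta * x))

# Illustrative parameters with b*beta > 0: the unique minimum on R_+^*
# sits at x = |a beta / b| and the minimal value is zero.
a, b, sigma, beta = 1.0, 2.0, 0.5, 3.0
x_min = abs(a * beta / b)
assert lambda_star_beta(x_min, a, b, sigma, beta) == 0.0
assert all(lambda_star_beta(x, a, b, sigma, beta) > 0.0 for x in (0.1, 1.0, 2.0, 10.0))
# With b*beta <= 0 (here b < 0) the transform stays strictly positive on R_+^*:
assert all(lambda_star_beta(x, a, -b, sigma, beta) > 0.0 for x in (0.1, x_min, 10.0))
```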
The large deviations case {#sec:Limitl} ------------------------- In this section, we shall be interested in proving asymptotic arbitrage results for the stock price process when the speed is linear. We shall in particular observe that the ergodicity of the variance process plays a key role. We start, though, with the following two technical lemmas, which will be used heavily in the remainder of the paper, and whose proofs can be found in Appendix \[App:Appendix\]. \[lem:LDPV\] For any $(\alpha,\beta)\in\mathbb{R}^2$, the family $(t^{-1} X_t^{\alpha,\beta,0})_{t>0}$ satisfies a large deviations principle on ${\mathbb{R}}_+^*$ if $\beta>0$ and on ${\mathbb{R}}_-^*$ if $\beta<0$, with speed $t^{-1}$ and rate function $\Lambda^*_{\beta}$ characterised in . \[lem:triplet\] The family $(t^{-1}X_t^{\alpha,\beta,\delta})_{t\geq0}$ satisfies (i) a full LDP (on ${\mathbb{R}}$) if $\beta\delta<0$; (ii) a partial LDP on $\left(2\sqrt{\delta\beta},+\infty\right)$ if $\beta>0$ and $\delta>0$; (iii) a partial LDP on $\left(-\infty,-2\sqrt{\delta\beta}\right)$ if $\beta<0$ and $\delta<0$; (iv) a partial LDP if $\beta=0$ or $\delta=0$ on the domain given by taking the limit in (ii) or (iii). In each case, the rate function is $\Lambda^*_{\beta,\delta}$ and the (partial) LDP holds with speed $t^{-1}$. In [@Fol Theorem 1.4], Föllmer and Schachermayer proved that if the stock price process has an average market price of risk above a threshold then asymptotic arbitrage holds. Using the large deviations principle proved above, we first show that $S$ does not always admit an average market price of risk for $\gamma_1$ (Proposition \[prop:AvgSqGamma1\]) or $\gamma_2$ (Proposition \[prop:AvgSqGamma2\]) above any threshold. This is in particular so when the variance process is not ergodic ($b\leq 0$). This, however, does not preclude asymptotic arbitrage, as proved in Theorem \[thm:AsymptArb\] below. \[prop:AvgSqGamma1\] Fix $\lambda\geq 0$ and $c>0$.
The stock price process does not satisfy an average squared market price of risk $\gamma_1$ above the threshold $c$ if either (i) $b\leq 0$ or (ii) $b>0$ and $c > a\lambda^2/b$. Note first that $\lambda=0$ implies $\gamma_1\equiv 0$ and hence $\mathbb{P}\left(t^{-1}\int_0^t \gamma_1^2(s){\mathrm{d}}s< c\right)=1$ for all $t>0$, so that the proposition is trivial. Assume from now on that $\lambda\ne 0$ and let $c$ be an arbitrary strictly positive real number. The definition of $\gamma_1$ in  implies $\mathbb{P}(t^{-1}\int_0^t \gamma_1^2(s){\mathrm{d}}s\geq c) = \mathbb{P}(t^{-1}\int_0^t V_s{\mathrm{d}}s\geq c/\lambda^2) =\mathbb{P}(t^{-1}X_t^{0,1} \geq c/\lambda^2)$. From Lemma \[lem:LDPV\], the family $(t^{-1}X_t^{0,1})_{t\geq0}$ satisfies an LDP on $\mathbb{R}_+^*$ with rate function $\Lambda^*_{1,0}$. Hence $$\limsup_{t\nearrow+\infty}\frac{1}{t}\log\mathbb{P}\left(\frac{X_t^{0,1}}{t} \geq \frac{c}{\lambda^2}\right)\leq -\inf_{\left\{x\geq c/\lambda^2\right\}}\Lambda^*_{1,0}(x) = \left\{ \begin{array}{ll} -\Lambda^*_{1,0}(c/\lambda^2)<0, & \text{if } c > a\lambda^2 / |b|,\\ 0, & \text{if } c \leq a\lambda^2 / |b|. \end{array} \right.$$ When $b\leq 0$, $\Lambda^*_{1,0}(c/\lambda^2)$ is strictly positive for all $c>0$. Thus $\mathbb{P}(t^{-1}X_t^{0,1}\geq c/\lambda^2)$ converges to zero as $t$ tends to infinity, which in turn implies that $\mathbb{P}(t^{-1}\int_0^t \gamma_1^2(s){\mathrm{d}}s< c)$ converges to $1$ as $t$ tends to infinity, and statement (i) in the proposition follows. When $b>0$, consider the case $c > a\lambda^2/b$. There exists $\bar{t}>0$ such that for all $t\geq \bar{t}$, $\mathbb{P}(t^{-1}X_t^{0,1} \geq c/\lambda^2)\leq \exp(-\Lambda^*_{1,0}(c/\lambda^2)t)$, and hence $\mathbb{P}(t^{-1}X_t^{0,1}\geq c/\lambda^2)$ converges to zero as $t$ tends to infinity, which again proves statement (ii) in the proposition. \[prop:AvgSqGamma2\] Fix $\lambda\geq 0$ and let $c>0$.
The stock price process does not satisfy an average squared market price of risk $\gamma_2$ above the threshold $c$ if any of the following conditions hold: (i) $\lambda\rho(\mu-r)> 0$; (ii) $\lambda\rho(\mu-r)<0$ and $c>-4\lambda\rho(\mu-r)/(1-\rho^2)$; (iii) $\lambda\rho\ne 0$, $\mu=r$ and $b\leq 0$; (iv) $\lambda\rho\ne 0$, $\mu=r$, $b>0$ and $c>a\lambda^2\rho^2/(b(1-\rho^2))$; (v) $\lambda\rho = 0$. Note that the case of a complete market ($\rho=0$) is included in case (v) of the proposition. Let $c$ be an arbitrary strictly positive real number. Note first that if $\lambda\rho=0$ and $\mu=r$, then $\gamma_2\equiv 0$ and hence $\mathbb{P}\left(t^{-1}\int_0^t \gamma_2^2(s){\mathrm{d}}s< c\right)=1$ for all $t>0$. If $\mu\ne r$, then $$\mathbb{P}\left(\frac{1}{t}\int_0^t\gamma_2^2(s){\mathrm{d}}s\geq c\right) =\mathbb{P}\left(\frac{1}{t}\int_0^t\frac{{\mathrm{d}}s}{V_s}\geq\frac{1-\rho^2}{(\mu-r)^2}c\right) =\mathbb{P}\left(\frac{X_t^{0,0,1}}{t}\geq\frac{1-\rho^2}{(\mu-r)^2}c\right),$$ and Lemma \[lem:ratefunction\] implies that $\Lambda^*_{0,1}$ is strictly positive, so that (v) follows. Assume now that $\lambda\rho\ne 0$ and $\mu\ne r$. The definition of $\gamma_2$ in  implies that $$\begin{aligned} \mathbb{P}\left(\frac{1}{t}\int_0^t\gamma_2^2(s){\mathrm{d}}s\geq c\right) & = \mathbb{P}\left(\frac{(\mu-r)^2}{1-\rho^2}\frac{1}{t}\int_0^t\frac{{\mathrm{d}}s}{V_s} +\frac{\lambda^2\rho^2}{1-\rho^2}\frac{1}{t}\int_0^t V_s {\mathrm{d}}s \geq c+\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\right)\\ & = \mathbb{P}\left(\frac{X_t^{0,\beta,\delta}}{t}\geq c+\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\right),\end{aligned}$$ where $\beta=\frac{\lambda^2\rho^2}{1-\rho^2}>0$, $\delta=\frac{(\mu-r)^2}{1-\rho^2}>0$, and where $X^{0,\beta,\delta}$ is defined in . By Lemma \[lem:triplet\], the family $(X_t^{0,\beta,\delta}/t)_{t > 0}$ satisfies a large deviations principle on $(2\sqrt{\delta\beta}, +\infty)$ with rate function $\Lambda^*_{\beta,\delta}$, i.e.
$$\limsup_{t\nearrow+\infty}t^{-1}\log\mathbb{P}\left(\frac{1}{t}\int_0^t\gamma_2^2(s){\mathrm{d}}s\geq c\right) \leq -\inf\left\{\Lambda^*_{\beta,\delta}(x): x\geq c+\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\right\}.$$ When $\lambda\rho(\mu-r) > 0$, $\left[c+\frac{2\rho\lambda(\mu-r)}{1-\rho^2},+\infty\right)$ is a subset of $\left(2\sqrt{\beta\delta},+\infty\right)$, and (i) follows immediately from Lemma \[lem:ratefunction\]. When $\lambda\rho(\mu-r)<0$, the interval $\left[c+\frac{2\rho\lambda(\mu-r)}{1-\rho^2},+\infty\right)$ is a subset of $\left(2\sqrt{\beta\delta},+\infty\right)$ if and only if $c>-\frac{4\rho\lambda(\mu-r)}{1-\rho^2}>0$. Since $\beta\delta=\frac{\lambda^2\rho^2(\mu-r)^2}{(1-\rho^2)^2}>0$, Lemma \[lem:ratefunction\] implies that $\Lambda^*_{\beta,\delta}(x)>0$ for any $x>2\sqrt{\beta\delta}=\frac{2|\lambda\rho(\mu-r)|}{1-\rho^2}$. Therefore, $\mathbb{P}\left(X_t^{0,\beta,\delta}/t\geq c+\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\right)$ converges to zero as $t$ tends to infinity. Then $\mathbb{P}(t^{-1}\int_0^t \gamma_2^2(s){\mathrm{d}}s< c)$ converges to one as $t$ tends to infinity. Assume that $\lambda\rho\ne 0$ and $\mu=r$. The definition of $\gamma_2$ in  implies that $$\mathbb{P}\left(\frac{1}{t}\int_0^t\gamma_2^2(s){\mathrm{d}}s\geq c\right) =\mathbb{P}\left(\frac{1}{t}\int_0^t V_s{\mathrm{d}}s\geq \frac{1-\rho^2}{\lambda^2\rho^2}c\right) =\mathbb{P}\left(\frac{X_t^{0,1,0}}{t}\geq \frac{1-\rho^2}{\lambda^2\rho^2}c\right),$$ and (iii) and (iv) then follow from Proposition \[prop:AvgSqGamma1\]. We can now move on to our main theorem. \[thm:AsymptArb\] Let $\varepsilon\in(0,1)$, $\gamma>0$ and define the set $A_{\lambda,t}:=\{Z_t\geq {\mathrm{e}}^{-\gamma t}\}\in\mathcal{F}_t$.
Then $S$ allows for strong asymptotic arbitrage (with speed $t$) with exponentially decaying probability (in the sense of Definition \[def2\]) with $C=\exp\left(\lambda V_0/\sqrt{2\sigma}\right)$, $\lambda_1 = -\left(\frac{a\lambda}{\sqrt{2\sigma}} + \gamma + \Lambda^{\alpha,\beta}(1)\right)$ and $\lambda_2 = \gamma$ if (i) $\lambda\in \mathbb{R}\setminus\left(-\frac{b}{\sqrt{2\sigma}}-\frac{\gamma}{a\zeta_+}, -\frac{b}{\sqrt{2\sigma}}+\frac{\gamma}{a\zeta_-}\right)$, when $\sqrt{2\sigma}>1$; (ii) $\lambda<-\frac{b}{\sqrt{2\sigma}}-\frac{\gamma}{a\zeta_+}$, when $\sqrt{2\sigma}\leq 1$, where we define $\zeta_{\pm}:=\sqrt{2\sigma}\pm 1/\sqrt{2\sigma}$. \[rem:Lambda0\] - Note that the sufficient condition is not necessary. Consider for instance $\lambda=0$ and $\mu=r$. Then clearly $Z_t = 1$ almost surely for all $t\geq 0$, and $\mathbb{P}\left(Z_t\geq {\mathrm{e}}^{-\gamma t}\right)=1$ for any $\gamma>0$. - Let $f:{\mathbb{R}}_+\to{\mathbb{R}}_+$ be a continuous function such that $t/f(t)$ tends to infinity as $t$ tends to infinity; then for any $\gamma>0$ and $t$ large enough, ${\mathrm{e}}^{-\gamma f(t)} \geq {\mathrm{e}}^{-\gamma t}$. Therefore $\mathbb{P}\left(Z_t\geq{\mathrm{e}}^{-\gamma f(t)}\right)$ also tends to zero as $t$ tends to infinity. We cannot however conclude that Theorem \[thm:AsymptArb\] holds, i.e. that $S$ allows asymptotic arbitrage with speed $f(t)$, since this does not give us any information about the behaviour of $\mathbb{Q}\left(Z_t\geq{\mathrm{e}}^{-\gamma f(t)}\right)$. Let $\gamma>0$ and define the set $A_{\lambda,t}:=\{Z_t\geq {\mathrm{e}}^{-\gamma t}\}\in\mathcal{F}_t$. Since the processes $W_2$ and $V$ are independent, the tower property for conditional expectation implies $\mathbb{E}(Z_t)=\mathbb{E}\left({\mathrm{e}}^{-\int_0^t\gamma_1(s){\mathrm{d}}W_1(s)-\frac{1}{2}\int_0^t\gamma_1^2(s){\mathrm{d}}s}\right)$.
Markov’s inequality therefore yields $$\begin{aligned} \mathbb{P}(A_{\lambda,t}) & \leq\frac{\mathbb{E}(Z_t)}{\exp(-\gamma t)} = \frac{\mathbb{E}\left[\exp\left(-\int_0^t\gamma_1(s){\mathrm{d}}W_1(s)-\frac{1}{2}\int_0^t\gamma_1^2(s){\mathrm{d}}s\right)\right]}{{\mathrm{e}}^{-\gamma t}}\\ & =\exp\left(\frac{\lambda V_0}{\sqrt{2\sigma}}+\frac{a\lambda t}{\sqrt{2\sigma}} +\gamma t\right) \mathbb{E}\left[\exp\left(-\frac{\lambda V_t}{\sqrt{2\sigma}}-\left(\frac{b\lambda}{\sqrt{2\sigma}}+\frac{\lambda^2}{2}\right)\int_0^tV_s{\mathrm{d}}s\right)\right]\\ & =\exp\left[\frac{\lambda V_0}{\sqrt{2\sigma}}+\left(\frac{a\lambda}{\sqrt{2\sigma}}+\gamma \right)t\right] \Lambda_t^{\alpha,\beta}(t),\end{aligned}$$ where $\alpha=-\frac{\lambda}{\sqrt{2\sigma}}$ and $\beta=-\frac{b\lambda}{\sqrt{2\sigma}}-\frac{\lambda^2}{2}$. From the proof of Lemma \[lem:LDPV\], we know that $t^{-1}\log\Lambda^{\alpha,\beta}_t(t)$ converges to $\Lambda^{\alpha,\beta}(1)$. This implies that for any $\delta>0$ there exists $\tilde{t}>0$ such that for any $t>\tilde{t}$, we have $${\mathrm{e}}^{\left(\Lambda^{\alpha,\beta}(1)-\delta\right)t} \leq \Lambda_t^{\alpha,\beta}(t) \leq {\mathrm{e}}^{\left(\Lambda^{\alpha,\beta}(1)+\delta\right)t}.$$ We then deduce that for any $t>\tilde{t}$, $$\mathbb{P}(Z_t\geq {\mathrm{e}}^{-\gamma t}) \leq \exp\left[\frac{\lambda V_0}{\sqrt{2\sigma}}+\left(\frac{a\lambda}{\sqrt{2\sigma}}+\gamma + \Lambda^{\alpha,\beta}(1)+\delta\right)t\right].$$ Since $\delta$ can be chosen as small as desired, we simply need to prove that $\frac{a\lambda}{\sqrt{2\sigma}}+\gamma + \Lambda^{\alpha,\beta}(1)<0$.
Now, $$\Lambda^{\alpha,\beta}(1) = \frac{ab}{2\sigma}-a\sqrt{b^2-4\sigma\beta} = \frac{ab}{2\sigma}-a\sqrt{b^2+4\sigma\lambda\left(\frac{b}{\sqrt{2\sigma}}+\frac{\lambda}{2}\right)} = \frac{ab}{2\sigma}-a\left|\lambda\sqrt{2\sigma}+b\right|,$$ which is always well defined. Therefore, we are left to prove that $\left|\lambda\sqrt{2\sigma}+b\right|> \frac{\gamma}{a}+\frac{b}{2\sigma}+\frac{\lambda}{\sqrt{2\sigma}}$. This is a piecewise linear inequality in $\lambda$, which is clearly satisfied if and only if (i) $\lambda\in \mathbb{R}\setminus\left(-\frac{b}{\sqrt{2\sigma}}-\frac{\gamma}{a\zeta_+}, -\frac{b}{\sqrt{2\sigma}}+\frac{\gamma}{a\zeta_-}\right)$, when $\sqrt{2\sigma}>1$; (ii) $\lambda<-\frac{b}{\sqrt{2\sigma}}-\frac{\gamma}{a\zeta_+}$, when $\sqrt{2\sigma}\leq 1$. In the first case, the interval is never empty. Let $(\varepsilon_1,\varepsilon_2):=\left(\exp\left[\frac{\lambda V_0}{\sqrt{2\sigma}}+\left(\frac{a\lambda}{\sqrt{2\sigma}}+\gamma + \Lambda^{\alpha,\beta}(1)\right)t\right], {\mathrm{e}}^{-\gamma t}\right)$. Define now the probability measure $\mathbb{Q}$ via the Radon-Nikodym theorem by $\mathbb{Q}(B):=\mathbb{E}(Z_t{1\hspace{-2.1mm}{1}}_B)$, for any $B\in\mathcal{F}_t$. Clearly $\mathbb{Q}\in\mathcal{M}_t^{e}(S)$, and therefore there exists $t_0>0$ such that for any $t\geq t_0$, $A_{\lambda,t}$ satisfies $\mathbb{P}(A_{\lambda,t})\leq\varepsilon_1$ and $\mathbb{Q}(A_{\lambda,t})\geq 1-\varepsilon_2$. Proposition 2.1 in [@Fol] implies that $S$ allows for $(\varepsilon_1,\varepsilon_2)$-arbitrage in the sense of Definition \[def1\]. Arbitrage with exponentially decaying failure probability as in the theorem immediately follows.
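The piecewise linear inequality at the end of the proof, and its equivalence with conditions (i) and (ii) via $\zeta_\pm$, can be verified numerically; the snippet below does this for one illustrative parameter set with $\sqrt{2\sigma}>1$ (case (i)).

```python
import math

def g(lam, a, b, sigma, gamma):
    """Left-hand side minus right-hand side of the piecewise linear inequality
    |lam sqrt(2 sigma) + b| > gamma/a + b/(2 sigma) + lam/sqrt(2 sigma);
    the proof requires g(lam) > 0."""
    s = math.sqrt(2.0 * sigma)
    return abs(lam * s + b) - (gamma / a + b / (2.0 * sigma) + lam / s)

a, b, sigma, gamma = 1.0, 1.0, 1.0, 0.5          # sqrt(2 sigma) = sqrt(2) > 1: case (i)
s = math.sqrt(2.0 * sigma)
zeta_p, zeta_m = s + 1.0 / s, s - 1.0 / s        # zeta_+ and zeta_-
left = -b / s - gamma / (a * zeta_p)             # endpoints of the excluded interval
right = -b / s + gamma / (a * zeta_m)
for lam in (left - 0.5, right + 0.5):            # strictly outside: inequality holds
    assert g(lam, a, b, sigma, gamma) > 0.0
for lam in (left + 0.1, -b / s, right - 0.1):    # strictly inside: it fails
    assert g(lam, a, b, sigma, gamma) <= 0.0
```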
Case $t/f(t)$ tends to infinity as $t$ tends to infinity -------------------------------------------------------- Let $b>0$, in which case the variance process is ergodic and its stationary distribution $\pi$ is a Gamma law with shape parameter $a/\sigma$ and scale parameter $\sigma/b$; namely $t^{-1}\int_0^t h(V_s) {\mathrm{d}}s$ converges to $\int_{\mathbb{R}}h(x)\pi({\mathrm{d}}x)$ almost surely for any $h\in L^1(\pi)$ (see [@Craddock] and [@Yur]). In this section, we consider a continuous function $f:{\mathbb{R}}^*_{+}\rightarrow {\mathbb{R}}_{+}$ such that $t/f(t)$ tends to infinity as $t$ tends to infinity. We shall prove below that (under some conditions on the risk parameter $\lambda$) the ergodicity of the variance ensures that $S$ allows an asymptotic arbitrage with sublinear speed $f(t)$. \[prop:limProbErgodic\] The stock price process $S$ in  has an average squared market price of risk $\gamma_1$ above the threshold $a\lambda^2/b$ with speed $f(t)$. If furthermore $a>\sigma$ and $\lambda\rho(\mu-r)\leq 0$, then there exists $c_2>0$ such that $S$ has an average squared market price of risk $\gamma_2$ above the threshold $c_2$ with speed $f(t)$. \[rem:C2\] As the proof shows, we can actually be more precise regarding the threshold $c_2$: - if $\mu=r$, then $c_2 = \frac{a\lambda^2\rho^2}{b(1-\rho^2)}$; - if $\mu\ne r$ and $\rho\lambda<0$, then no further condition on $c_2$ is needed; - if $\mu\ne r$ and $\rho\lambda=0$, then $c_2=\frac{(\mu-r)^2b}{(a-\sigma)(1-\rho^2)}$. It is rather interesting to compare this result with those of Proposition \[prop:AvgSqGamma1\] and Proposition \[prop:AvgSqGamma2\]. Indeed, when $b>0$, if $f(t)\equiv t$ then the stock price process does not satisfy an average squared market price of risk $\gamma_1$ above the threshold $a\lambda^2/b$. However, when $t/f(t)$ tends to infinity, then $S$ has an average squared market price of risk $\gamma_1$ above the threshold $a\lambda^2/b$.
When $b>0$, $\lambda\rho\ne0$ and $\mu=r$, if $f(t)\equiv t$ then the stock price process does not satisfy an average squared market price of risk $\gamma_2$ above the threshold $\frac{a\lambda^2\rho^2}{b(1-\rho^2)}$, but does so above the same threshold when $t/f(t)$ tends to infinity. Finally, when $b>0$, $\lambda\rho=0$ and $\mu\ne r$, the stock price process never satisfies an average squared market price of risk $\gamma_2$ with speed $f(t)\equiv t$, but does above the threshold $\frac{b(\mu-r)^2}{(1-\rho^2)(a-\sigma)}$ whenever $t/f(t)$ tends to infinity. Let $f$ be as stated in the proposition. For $b>0$, the variance process is ergodic and its stationary distribution is a Gamma law with shape parameter $a/\sigma$ and scale parameter $\sigma/b$ (see [@Yur]). In particular, $t^{-1}\int_0^t V_s {\mathrm{d}}s$ converges in probability to $a/b$ as $t$ tends to infinity, and hence for any $c_1\in(0,a \lambda^2/b)$, $$\label{eq:Gamma1bPos} \lim_{t\nearrow+\infty}\mathbb{P}\left(\frac{1}{t}\int_0^t\gamma_1^2(s) {\mathrm{d}}s<c_1\right)=0, \qquad\text{and hence}\qquad \lim_{t\nearrow+\infty}\mathbb{P}\left(\frac{1}{f(t)}\int_0^t\gamma_1^2(s) {\mathrm{d}}s<c_1\right)=0,$$ which proves the first part of the proposition. Consider now $\gamma_2$. When $\mu=r$, the definition  implies that $\gamma_2=-\rho\gamma_1/\sqrt{1-\rho^2}$, and hence $$\lim_{t\nearrow+\infty}\mathbb{P}\left(\frac{1}{f(t)}\int_0^t\gamma_2^2(s) {\mathrm{d}}s<c_2\right) = \lim_{t\nearrow+\infty}\mathbb{P}\left(\frac{1}{f(t)}\int_0^t\gamma_1^2(s) {\mathrm{d}}s<\frac{(1-\rho^2)c_2}{\rho^2}\right)$$ is equal to zero if and only if $\left(1-\rho^2\right)c_2/\rho^2\in (0,a\lambda^2/b)$, and the proposition follows. We now assume that $\mu\ne r$. If $a>\sigma$ we further know that (see Proposition 4 in [@Ala]) $t^{-1}\int_0^tV_s^{-1}{\mathrm{d}}s$ converges in probability to $b/(a-\sigma)$ as $t$ tends to infinity.
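Both limits in probability used here, $t^{-1}\int_0^t V_s{\mathrm{d}}s\to a/b$ and $t^{-1}\int_0^t V_s^{-1}{\mathrm{d}}s\to b/(a-\sigma)$, can be observed on a crude Euler discretisation of the variance SDE. The sketch below uses illustrative parameters with $b>0$ and $a>\sigma$, and a small floor on $V$ to keep the discretised $1/V$ finite; it is a numerical illustration, not a proof.

```python
import math
import random

def time_averages(a, b, sigma, v0, t, n_steps, seed):
    """Euler sketch of dV = (a - b V) dt + sqrt(2 sigma V) dW, returning the
    time averages (1/t) int_0^t V_s ds and (1/t) int_0^t ds / V_s."""
    rng = random.Random(seed)
    dt = t / n_steps
    v = v0
    int_v = int_inv = 0.0
    for _ in range(n_steps):
        vp = max(v, 1e-6)                  # floor keeps the discretised 1/V finite
        int_v += vp * dt
        int_inv += dt / vp
        v = vp + (a - b * vp) * dt + math.sqrt(2.0 * sigma * vp) * rng.gauss(0.0, math.sqrt(dt))
    return int_v / t, int_inv / t

a, b, sigma = 1.0, 2.0, 0.25               # ergodic (b > 0) and Feller (a > sigma)
avg_v, avg_inv = time_averages(a, b, sigma, v0=a / b, t=200.0, n_steps=200_000, seed=1)
# Ergodic limits: a/b = 0.5 for the average of V, b/(a - sigma) = 8/3 for 1/V.
```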
Therefore for any $c\in(0,b/(a-\sigma))$ we have $$\label{eq:Gamma2bPos} \lim_{t\nearrow\infty}\mathbb{P}\left(\frac{1}{t}\int_0^t \frac{{\mathrm{d}}s}{V_s}<c\right)=0, \qquad\text{and hence}\qquad \lim_{t\nearrow+\infty}\mathbb{P}\left(\frac{1}{f(t)}\int_0^t \frac{{\mathrm{d}}s}{V_s}<c\right)=0.$$ Let $c_2, c'_1, c'_2$ be three strictly positive numbers such that $c_2=c'_1+c'_2$. The definition of $\gamma_2$ in  implies $$\begin{aligned} \mathbb{P}\left(\frac{1}{f(t)}\int_0^t\gamma_2^2(s){\mathrm{d}}s<c_2\right) & = \mathbb{P}\left(\frac{1}{f(t)}\frac{(\mu-r)^2}{1-\rho^2}\int_0^t\frac{{\mathrm{d}}s}{V_s} -\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\frac{t}{f(t)} +\frac{1}{f(t)}\frac{\lambda^2\rho^2}{1-\rho^2}\int_0^t V_s {\mathrm{d}}s<c_2\right)\\ & \leq \mathbb{P}\left(\frac{1}{f(t)}\frac{\lambda^2\rho^2}{1-\rho^2}\int_0^t V_s{\mathrm{d}}s<c'_1\right) + \mathbb{P}\left(\frac{1}{f(t)}\frac{(\mu-r)^2}{1-\rho^2}\int_0^t\frac{{\mathrm{d}}s}{V_s}-\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\frac{t}{f(t)} <c'_2\right)\\ & = \mathbb{P}\left(\frac{1}{f(t)}\int_0^t \gamma_1^2(s) {\mathrm{d}}s<c_1\right) + \mathbb{P}\left(\frac{1}{f(t)}\int_0^t\frac{{\mathrm{d}}s}{V_s} < \frac{1-\rho^2}{(\mu-r)^2} \left[c'_2+\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\frac{t}{f(t)}\right]\right)\end{aligned}$$ with $c'_1=\frac{\rho^2}{1-\rho^2}c_1>0$. As long as $c_1\in (0,a\lambda^2/b)$, the first probability tends to zero as $t$ tends to infinity by . Now, when $\rho\lambda(\mu-r)<0$, then since $t/f(t)$ tends to infinity, the second probability tends to zero (as $t$ tends to infinity) by  because $c'_2+\frac{2\rho\lambda(\mu-r)}{1-\rho^2}\frac{t}{f(t)}$ tends to $-\infty$ (and because the variance process is non-negative almost surely). No condition on $c'_2$ is needed here. 
When $\rho\lambda=0$, then the first line of the equation above simplifies to $$\mathbb{P}\left(\frac{1}{f(t)}\int_0^t\gamma_2^2(s){\mathrm{d}}s<c_2\right) = \mathbb{P}\left(\frac{1}{f(t)}\frac{(\mu-r)^2}{1-\rho^2}\int_0^t\frac{{\mathrm{d}}s}{V_s}<c_2\right).$$ From , it tends to zero as $t$ tends to infinity when $0< c_2 <\frac{(\mu-r)^2 b}{(1-\rho^2)(a-\sigma)}$, and hence the proposition follows from Definition \[def:MarketPrice\]. We now state and prove our final result, namely a strong asymptotic arbitrage statement for the stock price process when the speed is sublinear. However it is not as clear as in Theorem \[thm:AsymptArb\] how to prove the exponentially decaying failure probability. \[prop:limProbErgodicResult\] Assume that $a>\sigma$ and $\lambda\rho(\mu-r)\leq 0$, and let $\varepsilon\in (0,1)$, $\gamma>0$. Then $S$ satisfies a strong asymptotic arbitrage with speed $f(t)$ and with $(\varepsilon_1, \varepsilon_2) = (\varepsilon,{\mathrm{e}}^{-\gamma f(t)})$, - if and only if $\lambda\in {\mathbb{R}}\setminus\left[-\sqrt{\frac{2b\gamma(1-\rho^2)}{a\rho^2}},\sqrt{\frac{2b\gamma(1-\rho^2)}{a\rho^2}}\right]$ when $\mu = r$ and $\rho^2\leq 1/2$; - if and only if $\lambda\in{\mathbb{R}}\setminus \left[-\sqrt{2b\gamma/a},\sqrt{2b\gamma/a}\right]$ when $\mu = r$ and $\rho^2 \geq 1/2$; - if $\mu\ne r$ and $\rho\lambda<0$; - if $\mu\ne r$, $\rho\lambda=0$ and $\gamma<\frac{(\mu-r)^2 b}{2(a-\sigma)(1-\rho^2)}$. Recall that we are in the framework of Proposition \[prop:limProbErgodic\], so that $c_1>0$ and $c_2>0$ are the thresholds for $\gamma_1$ and $\gamma_2$ above which $S$ has an average squared market price of risk. In this proof, we follow steps similar to those in [@Fol]. 
For any $\varepsilon>0$, fix $0<\gamma<\bar{\gamma}<\gamma'<c_1/2=\frac{a\lambda^2}{2b}$ and $t_0>8\gamma'/[(\gamma'-\gamma+\bar{\gamma})^2\varepsilon]$ such that for any $t\geq t_0$ we have $\mathbb{P}\left(f(t)^{-1}\int_0^t\gamma_1^2(s){\mathrm{d}}s\leq 2\gamma'\right) < \varepsilon/4$. Define the stopping time $\tau_1 := t\wedge\inf\left\{s\in[0,t]: \int_0^s\gamma_1^2(u) {\mathrm{d}}u \geq 2\gamma'f(t)\right\}$. Using the fact that $\int_0^{\tau_1}\gamma_1^2(s){\mathrm{d}}s \leq 2\gamma' f(t)$, Chebyshev’s inequality implies $$\mathbb{P}\left(\left|\int_0^{\tau_1}\gamma_1(s){\mathrm{d}}W_1(s)\right| \geq (\gamma'-\gamma+\bar{\gamma})f(t)\right) \leq \frac{2\gamma'}{(\gamma'-\gamma+\bar{\gamma})^2f(t)}<\frac{\varepsilon}{4}.$$ For $Z_{\tau_1} :=\exp\left(-\int_0^{\tau_1}\gamma_1(s){\mathrm{d}}W_1(s)-\frac{1}{2}\int_0^{\tau_1}\gamma_1^2(s) {\mathrm{d}}s\right)$, we then obtain $$\begin{aligned} \mathbb{P}\left(Z_{\tau_1} \geq {\mathrm{e}}^{(\bar{\gamma}-\gamma)f(t)}\right) & = \mathbb{P}\left(-\int_0^{\tau_1}\gamma_1(s){\mathrm{d}}W_1(s)-\frac{1}{2}\int_0^{\tau_1}\gamma_1^{2}(s){\mathrm{d}}s \geq (\bar{\gamma}-\gamma)f(t)\right)\\ & \leq \mathbb{P}\left(\left|\int_0^{\tau_1}\gamma_1(s){\mathrm{d}}W_1(s)\right|\geq (\bar{\gamma}-\gamma+\gamma')f(t)\right) + \mathbb{P}\left(\frac{1}{2}\int_0^{\tau_1}\gamma_1^2(s){\mathrm{d}}s \leq \gamma'f(t)\right)\\ & \leq \frac{\varepsilon}{4}+\frac{\varepsilon}{4} = \frac{\varepsilon}{2}.\end{aligned}$$ Take now $0<\bar{\gamma}<\gamma''<c_2/2$, and $t_1>\frac{8\gamma''}{(\gamma''-\bar{\gamma})^2\varepsilon}$ such that, for $t\geq t_1$, $\mathbb{P}\left(f(t)^{-1}\int_0^t\gamma_2^2(s){\mathrm{d}}s\leq 2\gamma''\right) < \varepsilon/4$. Define the stopping time $\tau_2$ by $\tau_2 := t\wedge \inf\left\{s\in[0,t]: \int_0^s\gamma_2^2(u){\mathrm{d}}u \geq 2\gamma''f(t)\right\}$ and the random variable $Z_{\tau_2}:=\exp\left(-\int_0^{\tau_2}\gamma_2(s){\mathrm{d}}W_2(s)-\frac{1}{2}\int_0^{\tau_2}\gamma_2^2(s) {\mathrm{d}}s\right)$.
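The displayed estimate is Chebyshev’s inequality applied to a stopped stochastic integral: since $\tau_1$ caps the quadratic variation at $2\gamma' f(t)$, the second moment of the integral is at most $2\gamma' f(t)$. A toy Monte Carlo sketch of this mechanism (the constant integrand and all numerical values below are illustrative assumptions, not the paper’s model):

```python
import math
import random

# P(|I_tau| >= c) <= E<I>_tau / c^2 <= qv_cap / c^2 for a stochastic
# integral stopped once its quadratic variation reaches qv_cap.
random.seed(42)
n_paths, n_steps, T = 5000, 100, 1.0
dt = T / n_steps
qv_cap = 1.0   # plays the role of 2 * gamma' * f(t)
c = 3.0        # plays the role of (gamma' - gamma + gamma_bar) * f(t)

exceed = 0
for _ in range(n_paths):
    integral, qv = 0.0, 0.0
    for _ in range(n_steps):
        if qv >= qv_cap:   # stop once the quadratic-variation budget is spent
            break
        integral += random.gauss(0.0, math.sqrt(dt))  # integrand identically 1
        qv += dt
    if abs(integral) >= c:
        exceed += 1

# Empirical tail probability stays within the Chebyshev bound.
assert exceed / n_paths <= qv_cap / c ** 2
```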
We then have $\mathbb{P}\left(Z_{\tau_2}\geq{\mathrm{e}}^{-\bar{\gamma}f(t)}\right) \leq \varepsilon/2$. The two sets $A_t:=\left\{Z_{\tau_1}\geq {\mathrm{e}}^{(\bar{\gamma}-\gamma) f(t)}\right\}\in\mathcal{F}_t$ and $B_t:=\left\{Z_{\tau_2}\geq {\mathrm{e}}^{-\bar{\gamma} f(t)}\right\}\in\mathcal{F}_t$ clearly satisfy the following inequalities: $$\begin{array}{rll} &\mathbb{P}(A_t) \leq \varepsilon/2, \qquad & \mathbb{Q}(A_t^{c}) \leq {\mathrm{e}}^{(\bar{\gamma}-\gamma) f(t)},\\ &\mathbb{P}(B_t) \leq \varepsilon/2, \qquad & \mathbb{Q}(B_t^{c}) \leq {\mathrm{e}}^{-\bar{\gamma} f(t)}, \end{array}$$ where again we define the probability $\mathbb{Q}(B):=\mathbb{E}\left({1\hspace{-2.1mm}{1}}_{B} Z_t\right)$, for any $B\in\mathcal{F}_t$. Combining these inequalities, we conclude that there exist $t_0,t_1>0$ such that for $t\geq t_0\vee t_1$, we have $\mathbb{P}(A_t\cup B_t)\leq \varepsilon$ and $\mathbb{Q}(A_t^c\cap B_t^c)\leq {\mathrm{e}}^{-\gamma f(t)}$. Using [@Fol Proposition 2.1], we can now introduce the random variable $Y_t = -{\mathrm{e}}^{-\gamma f(t)}{1\hspace{-2.1mm}{1}}_{A_t\cup B_t} +\left(1-{\mathrm{e}}^{-\gamma f(t)}\right){1\hspace{-2.1mm}{1}}_{A_t^c\cap B_t^c}$. Clearly $Y_t \in K_t$ and satisfies $Y_t \geq-{\mathrm{e}}^{-\gamma f(t)}$ and $\mathbb{P}\left(Y_t \geq 1-{\mathrm{e}}^{-\gamma f(t)}\right) \geq 1-\varepsilon$. Letting $\bar{t}:=t_0\vee t_1$, the proposition follows. Note that the constraint $a\lambda^2/b=c_1>2\gamma$ reads $\lambda\in{\mathbb{R}}\setminus \left[-\sqrt{2b\gamma/a},\sqrt{2b\gamma/a}\right]$.
The constraints on $c_2$ depend on the sign of $\lambda\rho(\mu-r)$, as explained in Remark \[rem:C2\]:

- if $\mu = r$ and $\rho^2<1/2$, then $c_1>c_2$; then $c_2/2>\gamma$ if and only if $\lambda\in {\mathbb{R}}\setminus\left[-\sqrt{\frac{2b\gamma(1-\rho^2)}{a\rho^2}},\sqrt{\frac{2b\gamma(1-\rho^2)}{a\rho^2}}\right]$;

- if $\mu = r$ and $\rho^2 > 1/2$, then $c_1<c_2$; then $c_1/2>\gamma$ if and only if $\lambda\in{\mathbb{R}}\setminus \left[-\sqrt{2b\gamma/a},\sqrt{2b\gamma/a}\right]$;

- if $\mu\ne r$ and $\rho\lambda<0$, no further assumption on $\lambda$ is needed;

- if $\mu\ne r$ and $\rho\lambda=0$, then the following constraint has to hold: $0<\gamma<\frac{(\mu-r)^2 b}{2(a-\sigma)(1-\rho^2)}$.

Large deviations results {#App:Appendix}
========================

Recall from [@Lamberton Chapter 6, Proposition 2.5] that for any $t\geq 0$, $ \log\mathbb{E}\left({\mathrm{e}}^{X_t^{\alpha, \beta,0}}\right)= -a\phi_{-\alpha,-\beta}(t)-\psi_{-\alpha,-\beta}(t)V_0$, where $$\begin{aligned} \phi_{\alpha,\beta}(t) & := -\frac{1}{\sigma}\log\left(\frac{2\chi{\mathrm{e}}^{t(b-\chi)/2}}{2\sigma\alpha\left(1-{\mathrm{e}}^{-\chi t}\right)+(\chi-b) {\mathrm{e}}^{-\chi t}+(\chi+b)}\right),\\ \psi_{\alpha,\beta}(t) & := \frac{\alpha[(\chi+b){\mathrm{e}}^{-\chi t}+(\chi-b)]+2\beta\left(1-{\mathrm{e}}^{-\chi t}\right)}{2\sigma\alpha\left(1-{\mathrm{e}}^{-\chi t}\right)+(\chi-b) {\mathrm{e}}^{-\chi t}+(\chi+b)},\end{aligned}$$ with $\chi:=\sqrt{b^2+4\sigma\beta}$. For any $t>0$, the moment generating function of $X_t^{\alpha,\beta,0}/t$ therefore reads $\Lambda_t^{\alpha,\beta}(u) := \mathbb{E}\left({\mathrm{e}}^{uX_t^{\alpha,\beta,0}/t}\right)$, for $u\in{\mathcal{D}}_t^{\beta}$ where ${\mathcal{D}}_t^{\beta}=\left(-\infty,\frac{b^2 t}{4\sigma\beta}\right]$ if $\beta>0$ and $\left[\frac{b^2 t}{4\sigma\beta} ,+\infty\right)$ if $\beta<0$.
Straightforward computations yield $$\Lambda^{\beta}(u) := \lim_{t\nearrow+\infty}t^{-1}\log\Lambda_t^{\alpha,\beta}(ut) = \frac{a}{2\sigma}\left(b-\sqrt{b^2-4\sigma\beta u}\right),$$ as given in , for $u\in{\mathcal{D}}_{\beta}:=\lim_{t\nearrow+\infty}{\mathcal{D}}_t^{\beta}$, defined in . We further have $$\partial_u \Lambda^{\beta}(u)= \frac{\beta a}{\sqrt{b^2-4\sigma\beta u}} \qquad\text{and}\qquad \partial_{uu}\Lambda^{\beta}(u)= \frac{2\beta^2 \sigma a}{(b^2-4\sigma\beta u)^{3/2}},$$ for any $u\in{\mathcal{D}}_{\beta}^o$, and hence $\partial_u\Lambda^{\beta}\left({\mathcal{D}}^o_{\beta}\right)=\mathbb{R}_+^*$ if $\beta>0$ and $\mathbb{R}_-^*$ if $\beta<0$. Therefore $\Lambda^{\beta}$ is strictly convex on ${\mathcal{D}}_{\beta}$, and hence the Gärtner-Ellis theorem (see [@DZ]) applies, albeit only on subsets of $\partial_u\Lambda^{\beta}({\mathcal{D}}^o_{\beta})$. We now characterise the rate function $\Lambda^*_\beta$. Recall that the Fenchel-Legendre transform of $\Lambda^{\beta}$ is defined by $\Lambda^*_\beta(x):=\sup\left\{ux-\Lambda^{\beta}(u): u\in{\mathcal{D}}_{\beta}\right\}$. Let us first consider the case $\beta>0$. Since the function $\partial_u\Lambda^{\beta}$ is strictly increasing on ${\mathcal{D}}^o_{\beta}$ and $\partial_u\Lambda^{\beta}({\mathcal{D}}^o_{\beta})={\mathbb{R}}_+^*$, then for any $x>0$, the equation $\partial_u\Lambda^{\beta}(u)=x$ has a unique solution $u^*(x) = \left(b^2 x^2 -\beta^2 a^2\right)/\left(4\sigma\beta x^2\right)$, and we deduce $\Lambda^*_\beta(x) = u^*(x)x-\Lambda^{\beta}(u^*(x)) = (bx-a\beta)^2/(4\sigma\beta x)$, for any $x>0$. For $x\leq 0$, the definition of the Fenchel-Legendre transform implies $\Lambda^*_\beta(x)=+\infty$. In the case $\beta<0$, an analogous analysis holds: the Gärtner-Ellis theorem applies on subsets of $\partial_u\Lambda^{\beta}({\mathcal{D}}^o_{\beta})={\mathbb{R}}_-^*$ with rate function $\Lambda^*_\beta$ given in  on ${\mathbb{R}}_-^*$ and infinity on ${\mathbb{R}}_+$. We now turn to the case $b\neq0$.
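The closed-form limit and the rate function for the $\delta=0$ case can be sanity-checked numerically against the formula for $\phi$ quoted above from [@Lamberton]. The sketch below drops the $\psi(t)V_0/t$ term, which is $O(1/t)$, and all parameter values are illustrative assumptions:

```python
import math

# Illustrative parameters: any a, b, sigma > 0, beta > 0 and u < 0 work.
a, b, sigma, beta, alpha, u = 1.2, 0.8, 0.5, 0.6, 0.3, -0.7

def phi(al, be, t):
    # phi_{al,be}(t) as quoted above, with the log of the numerator expanded
    # analytically so that large t causes no overflow/underflow in the log.
    chi = math.sqrt(b * b + 4 * sigma * be)
    denom = (2 * sigma * al * (1 - math.exp(-chi * t))
             + (chi - b) * math.exp(-chi * t) + (chi + b))
    return -(math.log(2 * chi) + t * (b - chi) / 2 - math.log(denom)) / sigma

# t^{-1} log E(e^{u X_t}) = -a phi_{-u alpha, -u beta}(t) / t + O(1/t).
t = 1e5
limit_numeric = -a * phi(-u * alpha, -u * beta, t) / t
limit_closed = a / (2 * sigma) * (b - math.sqrt(b * b - 4 * sigma * beta * u))
assert abs(limit_numeric - limit_closed) < 1e-4

# Rate function: maximiser u*(x) and Legendre transform for x > 0, beta > 0.
x = 1.5
ustar = (b * b * x * x - beta ** 2 * a * a) / (4 * sigma * beta * x * x)
Lam = lambda v: a / (2 * sigma) * (b - math.sqrt(b * b - 4 * sigma * beta * v))
rate = ustar * x - Lam(ustar)
rate_closed = (b * x - a * beta) ** 2 / (4 * sigma * beta * x)
assert abs(rate - rate_closed) < 1e-12
```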
The moment generating function of the random variable $X_t^{\alpha,\beta,\delta}/t$ is given by (see [@Keb proposition 2]), $$\begin{aligned} \Lambda_t(u) & =\mathbb{E}\left(\exp\left(\frac{\alpha u }{t}V_t+\frac{\beta u }{t}\int_0^t V_s{\mathrm{d}}s+\frac{\delta u }{t}\int_0^t V_s^{-1}{\mathrm{d}}s\right)\right)\\ &= \frac{\Gamma(\kappa+\nu/2+1/2)}{\Gamma(\nu+1)} \exp\left\{\frac{b}{2\sigma}(at+V_0)-\frac{AV_0}{2\sigma}\coth\left(\frac{At}{2}\right)\right\}\\ & \times \left(\frac{AV_0}{2\sigma\sinh(At/2)}\right)^{\nu/2+1/2-\kappa} \left(\left(b-\frac{2\sigma\alpha u}{t}\right)\frac{\sinh(At/2)}{A}+\cosh(At/2)\right)^{-\nu/2-1/2-\kappa}\\ & \times _1F_1\left(\kappa+\frac{\nu+1}{2},\nu+1,\frac{A^2V_0}{2\sigma\sinh(At/2)\left((b-\frac{2\sigma\alpha u}{t})\sinh(At/2)+\cosh(At/2)\right)}\right)\end{aligned}$$ where $\kappa:=\frac{a}{2\sigma}$, $A:=\sqrt{b^2-\frac{4\sigma\beta u}{t}}$, $\nu:=\frac{1}{\sigma}\sqrt{(a-\sigma)^2-\frac{4\sigma\delta u}{t}}$. The confluent hypergeometric function is defined by $_1F_1(u,v,z)=\sum_{n\geq 0}\frac{u^{(n)}}{v^{(n)}}\frac{z^n}{n!}$, with $v^{(n)}$ denoting the rising factorial $v^{(n)}:=v(v+1)\ldots(v+n-1)$. 
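The truncated rising-factorial series for $_1F_1$ converges rapidly for moderate arguments; a small sketch checks a direct implementation against the elementary identities $_1F_1(v,v,z)={\mathrm{e}}^z$ and $_1F_1(1,2,z)=({\mathrm{e}}^z-1)/z$:

```python
import math

def hyp1f1(u, v, z, terms=60):
    """Truncated series 1F1(u, v, z) = sum_n (u)^(n)/(v)^(n) * z^n/n!,
    using the term ratio (u+n)/(v+n) * z/(n+1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (u + n) / (v + n) * z / (n + 1)
    return total

# Elementary special cases of the series above.
for z in (0.3, 1.0, 2.5):
    assert abs(hyp1f1(1.7, 1.7, z) - math.exp(z)) < 1e-10
    assert abs(hyp1f1(1.0, 2.0, z) - (math.exp(z) - 1.0) / z) < 1e-10
```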
As $t$ tends to infinity, $t^{-1}\log\left(\frac{\Gamma(\kappa+\nu/2+1/2)}{\Gamma(\nu+1)}\right)$ clearly tends to zero and $$\lim_{t\nearrow+\infty}\frac{1}{t} \log\left( _1F_1\left(\kappa+\frac{\nu+1}{2},\nu+1, \frac{A^2V_0}{2\sigma\sinh(At/2)\left[\left(b-\frac{2\sigma\alpha u}{t}\right)\sinh(At/2)+\cosh(At/2)\right]}\right)\right)=0.$$ Therefore, $$\begin{aligned} \Lambda^{\beta,\delta}(u) & := \lim_{t\nearrow+\infty}t^{-1}\log\Lambda_t(tu)\\ & = \lim_{t\nearrow+\infty}\frac{1}{t} \Bigg\{ \frac{b}{2\sigma}(at+V_0)-\frac{AV_0}{2\sigma}\frac{{\mathrm{e}}^{At/2}+{\mathrm{e}}^{-At/2}}{{\mathrm{e}}^{At/2}-{\mathrm{e}}^{-At/2}}+ \left(\frac{\nu+1}{2}-\kappa\right)\log\left(\frac{AV_0}{\sigma\left({\mathrm{e}}^{At/2}-{\mathrm{e}}^{-At/2}\right)}\right)\\ & - \left(\kappa+\frac{\nu+1}{2}\right)\log\left(\frac{b-2\sigma\alpha u}{A}\left(\frac{{\mathrm{e}}^{At/2}-{\mathrm{e}}^{-At/2}}{{\mathrm{e}}^{At/2}+{\mathrm{e}}^{-At/2}}\right)+\frac{{\mathrm{e}}^{At/2}+{\mathrm{e}}^{-At/2}}{2}\right)\Bigg\}\\ & = -\frac{\nu A}{2}-\frac{A}{2}+\frac{ba}{2\sigma} =\frac{ab}{2\sigma}-\frac{1}{2\sigma}\sqrt{((a-\sigma)^2-4\sigma\delta u)(b^2-4\sigma\beta u)}-\frac{1}{2}\sqrt{b^2-4\sigma\beta u},\end{aligned}$$ for $u\in{\mathcal{D}}_{\beta,\delta}$ where the interval ${\mathcal{D}}_{\beta,\delta}$ is given in . We can then immediately compute $$\partial_u\Lambda^{\beta,\delta}(u) =\frac{\sigma\beta}{\sqrt{b^2-4\sigma\beta u}}-\frac{8\sigma\delta\beta u-\beta(a-\sigma)^2-\delta b^2}{\sqrt{((a-\sigma)^2-4\sigma\delta u)(b^2-4\sigma\beta u)}}, \quad \text{for any }u\in{\mathcal{D}}^o_{\beta,\delta},$$ and hence $$\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}_{\beta,\delta}^o)= \left\{ \begin{array}{ll} \displaystyle \mathbb{R}, & \text{if } \beta\delta<0,\\ \displaystyle (2\sqrt{\delta\beta},+\infty), & \text{if } \beta>0, \delta>0,\\ \displaystyle (-\infty,-2\sqrt{\delta\beta}), & \text{if } \beta<0, \delta<0. 
\end{array} \right.$$ We also have, for any $u\in {\mathcal{D}}^o_{\beta,\delta}$, $$\partial_{uu}\Lambda^{\beta,\delta}(u) =\frac{2\sigma^2\beta^2}{(b^2-4\sigma\beta u)^{3/2}}+\frac{2\sigma(\delta b^2 -\beta (a-\sigma)^2)^2} {\left(((a-\sigma)^2-4\sigma\delta u)(b^2-4\sigma\beta u)\right)^{3/2}}.$$ Therefore $\Lambda^{\beta,\delta}$ is strictly convex on ${\mathcal{D}}_{\beta,\delta}$, and the Gärtner-Ellis theorem (see [@DZ]) only applies on subsets of $\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}^o_{\beta,\delta})$. For any $x\in\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}_{\beta,\delta}^o)$, the equation $\partial_u\Lambda^{\beta,\delta}(u)=x$ has a unique solution $u^*(x)$ and hence $\Lambda_{\beta,\delta}^*(x):=\sup_{u\in{\mathcal{D}}_{\beta,\delta}}\left\{ux-\Lambda^{\beta,\delta}(u)\right\} = u^*(x)x-\Lambda^{\beta,\delta}(u^*(x))$. We now move on to the case $b=0$. From [@Keb Corollary 1], the limiting mgf of $X_t^{\alpha,\beta,\delta}$ reads $$\Lambda^{\beta,\delta}(u) := \lim_{t\nearrow+\infty}t^{-1}\log\mathbb{E}\left({\mathrm{e}}^{uX_t^{\alpha,\beta,\delta}}\right) = -\sqrt{-\sigma\beta u}-\frac{1}{\sigma}\sqrt{-\sigma\beta u}\sqrt{(a-\sigma)^2-4\sigma\delta u},$$ for any $u\in{\mathcal{D}}_{\beta,\delta}$ where this interval now reads $${\mathcal{D}}_{\beta,\delta}= \left\{ \begin{array}{ll} \displaystyle \left[0,\frac{(a-\sigma)^2}{4\sigma\delta}\right], & \text{if } \beta<0 \text{ and } \delta>0,\\ \displaystyle \left[\frac{(a-\sigma)^2}{4\sigma\delta},0\right], & \text{if } \beta>0 \text{ and } \delta<0,\\ \displaystyle \mathbb{R}_-, & \text{if } \beta>0 \text{ and } \delta>0,\\ \displaystyle \mathbb{R}_+, & \text{if } \beta<0 \text{ and } \delta<0. 
\end{array} \right.$$ Then $$\partial_u\Lambda^{\beta,\delta}(u) =\frac{\sigma\beta}{2\sqrt{-\sigma\beta u}}+\frac{\beta \sqrt{(a-\sigma)^2-4\sigma\delta u}}{2\sqrt{-\sigma\beta u}} +\frac{2\delta\sqrt{-\sigma\beta u}}{\sqrt{(a-\sigma)^2-4\sigma\delta u}}, \quad\text{for any }u\in{\mathcal{D}}^o_{\beta,\delta},$$ and hence $$\label{eq:Dd2} \partial_u\Lambda^{\beta,\delta}({\mathcal{D}}_{\beta,\delta}^o)= \left\{ \begin{array}{ll} \displaystyle \mathbb{R}, & \text{if } \beta\delta<0,\\ \displaystyle (2\sqrt{\delta\beta},+\infty), & \text{if } \beta>0 \text{ and } \delta>0,\\ \displaystyle (-\infty,-2\sqrt{\delta\beta}), & \text{if } \beta<0 \text{ and }\delta<0. \end{array} \right.$$ We also have $$\partial_{uu}\Lambda^{\beta,\delta}(u)=\frac{\sigma^2\beta^2}{4(-\sigma\beta u)^{3/2}}-\frac{\beta (a-\sigma)^2} {4u\sqrt{(a-\sigma)^2-4\sigma\delta u}\sqrt{-\sigma\beta u}} -\frac{\sigma\beta\delta (a-\sigma)^2}{\left((a-\sigma)^2-4\sigma\delta u\right)^{3/2}\sqrt{-\sigma\beta u}}.$$ Clearly then, $\Lambda^{\beta,\delta}$ is convex on ${\mathcal{D}}_{\beta,\delta}$, and the Gärtner-Ellis theorem (see [@DZ]) only applies on subsets of $\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}^o_{\beta,\delta})$. For any $x\in\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}^o_{\beta,\delta})$, the equation $\partial_u\Lambda^{\beta,\delta}(u)=x$ has a unique solution $u^*(x)$ and hence $\Lambda_{\beta,\delta}^*(x):=\sup_{u\in{\mathcal{D}}_{\beta,\delta}}\left\{ux-\Lambda^{\beta,\delta}(u)\right\} = u^*(x)x-\Lambda_{\beta,\delta}(u^*(x))$, and the lemma follows. \[lem:ratefunction\] For any $x\in\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}_{\beta,\delta}^o)$, the equation $\partial_u\Lambda^{\beta,\delta}(u^*(x))=x$ admits a unique solution $u^*(x)\in{\mathcal{D}}^o_{\beta,\delta}$. 
The function $\Lambda_{\beta,\delta}^*$ is strictly convex and satisfies $\Lambda_{\beta,\delta}^*(x)=u^*(x)x-\Lambda^{\beta,\delta}(u^*(x))$ on $\partial_u\Lambda^{\beta,\delta}({\mathcal{D}}_{\beta,\delta}^o)$ and is (positive) infinite outside. In the case $\beta\delta\geq 0$, $\Lambda^*_{\beta,\delta}$ is strictly positive. When $\beta\delta< 0$, $\Lambda_{\beta,\delta}^*$ admits a unique minimum, which is equal to zero (and is attained at the origin) if and only if $a>\sigma$. When $\beta\delta<0$, the image of ${\mathcal{D}}_{\beta,\delta}^o$ by $\partial_u\Lambda^{\beta,\delta}$ is the whole real line, and the representation of $\Lambda_{\beta,\delta}^*$ in the lemma clearly follows. Now, suppose there exists $\bar{x}\in{\mathbb{R}}$ such that $\Lambda_{\beta,\delta}^*(\bar{x})=0$. Then there exists some (possibly non-unique) $u^*(\bar{x})\in{\mathcal{D}}_{\beta,\delta}$ such that $u^*(\bar{x})\bar{x}=\Lambda^{\beta,\delta}(u^*(\bar{x}))$, i.e. $\Lambda^{\beta,\delta}(u^*(\bar{x}))/u^*(\bar{x}) = \bar{x}$. But $u^*(\bar{x})$ also satisfies $\partial_u\Lambda^{\beta,\delta}(u^*(\bar{x}))=\bar{x}$. A straightforward analysis shows that the equality $\partial_u\Lambda^{\beta,\delta}(u) = \Lambda^{\beta,\delta}(u)/u$ is satisfied if and only if $u=0$ and $a>\sigma$. When $\beta>0$ and $\delta>0$, for any $x\leq 2\sqrt{\beta\delta}$, the map $u\mapsto ux-\Lambda^{\beta,\delta}(u)$ is strictly decreasing on ${\mathcal{D}}_{\beta,\delta}^o$, and the result follows. By definition, the function $\Lambda_{\beta,\delta}^*$ admits a (unique) minimum $\bar x$ if and only if (i) there exists $u(\bar x)\in{\mathcal{D}}_{\beta,\delta}$ such that $u(\bar x)\bar x = \Lambda^{\beta,\delta}(u(\bar x))$ and (ii) $\Lambda^{\beta,\delta}(u)>u \bar x$ for any $u\in{\mathcal{D}}_{\beta,\delta}\setminus\{u(\bar x)\}$. 
A straightforward analysis shows that the function $u\mapsto \Lambda^{\beta,\delta}(u)/u$ on ${\mathbb{R}}_-^*$ is strictly increasing and maps ${\mathbb{R}}_-^*$ to $(2\sqrt{\beta\delta}, +\infty)$. On ${\mathbb{R}}_+^*\cap{\mathcal{D}}_{\beta,\delta}$, it is strictly increasing and maps this interval to $(-\infty, -2\sqrt{\beta\delta})$. Therefore the inequality $\Lambda^{\beta,\delta}(u)>u x$ holds if and only if both (a) $\Lambda^{\beta,\delta}(u)/u>x$ for $u\in{\mathbb{R}}_+^*\cap{\mathcal{D}}_{\beta,\delta}$ and (b) $\Lambda^{\beta,\delta}(u)/u<x$ for $u<0$. Case (b) clearly only holds for $x<2\sqrt{\beta\delta}$, which is not valid. The other cases are treated analogously. The case $\beta\delta=0$ is straightforward.

[99]{}
M. Ben Alaya and A. Kebaier. Parameter estimation for the square root diffusions: ergodic and non ergodic case. *Stochastic Models*, [28]{} (4): 609-634, 2012.
M. Ben Alaya and A. Kebaier. Asymptotic behavior of the maximum likelihood estimator for ergodic and non ergodic square-root diffusions. Preprint, hal.archives-ouvertes.fr-00640053, 2011.
M.L.D. Mbele Bidima and M. Rasonyi. On long-term arbitrage opportunities in Markovian models of financial markets. *Annals of Operations Research*, [200]{} (1): 131-146, 2012.
M. Craddock and K.A. Lennox. The calculation of expectations for classes of diffusion processes by Lie symmetry methods. *The Annals of Applied Probability*, [19]{} (1): 127-157, 2009.
F. Delbaen and W. Schachermayer. The Mathematics of Arbitrage. Springer Finance, 2006.
A. Dembo and O. Zeitouni. Large deviations techniques and applications. Jones and Bartlett Publishers, Boston, 1993.
K. Du and A. Neufeld. A note on asymptotic exponential arbitrage with exponentially decaying failure probability. Forthcoming in *Journal of Applied Probability*, 2013.
H. Föllmer and W. Schachermayer. Asymptotic arbitrage and large deviations. *Mathematics and Financial Economics*, [1]{} (3-4): 213-249, 2007.
J.P. Fouque, G. Papanicolaou, R. Sircar and K. Solna. Multiscale Stochastic Volatility for Equity, Interest Rate, and Credit Derivatives. Cambridge University Press, 2011.
J. Gatheral. The Volatility Surface: a practitioner’s guide. Wiley, 2006.
S. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. *The Review of Financial Studies*, [6]{} (2): 327-343, 1993.
C.C. Heyde and B. Wong. On changes of measure in stochastic volatility models. *Journal of Applied Mathematics and Stochastic Analysis*, [2006]{}, Article ID 18130, 2006.
Y. Kabanov and D. Kramkov. Asymptotic arbitrage in large financial markets. *Finance and Stochastics*, [2]{}: 143-172, 1998.
I. Klein and W. Schachermayer. Asymptotic arbitrage in non-complete large financial markets. *Theory of Probability and its Applications*, [41]{} (4): 927-934, 1996.
Y.A. Kutoyants. Statistical inference for ergodic diffusion processes. Springer Series in Statistics, Springer-Verlag, London, 2004.
D. Lamberton and B. Lapeyre. Introduction au calcul stochastique appliqué à la finance, 2nd edition. Ellipses Édition Marketing, Paris, 1997.
R.C. Merton. The theory of rational option pricing. *Bell Journal of Economics and Management Science*, [4]{}: 141-183, 1973.
D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer-Verlag, Berlin, 1999.
D.B. Rokhlin. Asymptotic arbitrage and numéraire portfolio in large financial markets. *Finance and Stochastics*, [12]{} (2): 173-194, 2008.
--- abstract: 'A class of Riemann-Cartan Gödel-type space-times are examined in the light of the equivalence problem techniques. The conditions for local space-time homogeneity are derived, generalizing previous works on Riemannian Gödel-type space-times. The equivalence of Riemann-Cartan Gödel-type space-times of this class is studied. It is shown that they admit a five-dimensional group of affine-isometries and are characterized by three essential parameters $\,\ell, m^2, \omega$: identical triads ($\ell, m^2, \omega$) correspond to locally equivalent manifolds. The algebraic types of the irreducible parts of the curvature and torsion tensors are also presented.' author: - | J.E. [Å]{}man[^1],    J.B. Fonseca-Neto[^2],    M.A.H. MacCallum[^3],   \ &   M.J. Rebouças[^4]\ \ $^{\ast}~$Institute of Theoretical Physics, Stockholm University\ Box 6730 (Vanadisvägen 9), S-113 85   Stockholm, Sweden\ \ $~^{\dagger}~$Departamento de Física, Universidade Federal da Paraíba\ Caixa Postal 5008, 58059-900 João Pessoa – PB, Brazil\ \ $^{\ddagger}~$School of Mathematical Sciences, Queen Mary & Westfield College\ Mile End Road, London E1 4NS, U.K.\ \ $^{\S}$ Centro Brasileiro de Pesquisas Físicas\ Departamento de Relatividade e Partículas\ Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro – RJ, Brazil\ title: | **Riemann-Cartan Space-times of Gödel Type\ ** --- Introduction ============ \[intro\] Theories with non-zero torsion have been of notable interest in a few contexts. In the framework of gauge theories they have been used in the search for unification of gravity with the other fundamental interactions in physics. Space-time manifolds with nonsymmetric connection have also been considered as the appropriate arena for the formulation of a quantum gravity theory. 
At a classical level the generalization of the standard general relativity by introducing torsion into the theory has also received a good deal of attention mainly since the sixties (see Hehl [*et al.*]{} [@Hehl1976] and references therein). The geometric concept of torsion has also been used in a continuum approach to lattice defects in solids since the early fifties [@Kroener1981] – [@Balachandran1997]. For further motivation and physical consequences of studying manifolds and theories with non-zero torsion (whether dropping the metricity condition or not) as well as for a detailed list of references on theories with non-zero torsion, we refer the readers to a recent review by Hehl [*et al.*]{} [@Hehl1995]. In general relativity (GR), the space-time $M$ is a four-dimensional Riemannian manifold endowed with a locally Lorentzian metric and a metric-compatible symmetric connection, namely the Christoffel symbols $\{_{b\ c}^{\ a}\}$. However, it is well known that the metric tensor and the connection can be introduced as independent structures on a given space-time manifold $M$. In GR there is a unique torsion-free connection on $M$. In the framework of torsion theories of gravitation (TTG), on the other hand, we have Riemann-Cartan (RC) manifolds, i.e., space-time manifolds endowed with locally Lorentzian metrics and metric-compatible nonsymmetric connections ${\Gamma}^a_{\ bc}$. Thus, in TTG the connection has a metric-independent part given by the torsion, and for a characterization of the local gravitational field, one has to deal with both metric and connection. The arbitrariness in the choice of coordinates is a basic assumption in GR and in TTG. Nevertheless, in these theories it gives rise to the problem of deciding whether or not two apparently different space-time solutions of the field equations are locally the same — the equivalence problem. 
In GR this problem can be couched in terms of local isometry, whereas in TTG besides local isometry ($g_{ab} \rightarrow \tilde{g}_{ab}$) it means affine collineation ($\Gamma^{a}_{\ bc} \rightarrow \tilde{\Gamma}^{a}_{\ bc}$) of two RC manifolds. The equivalence problem in general relativity (Riemannian space-times) has been discussed by several authors and is of interest in many contexts (see, for example, Cartan [@cartan], Karlhede [@karl], MacCallum [@mm1] – [@MacCSkea] and references therein). The equivalence problem in torsion theories of gravitation (RC space-times), on the other hand, was only discussed recently [@frt]. Subsequently, an algorithm for checking the equivalence in TTG and a first working version of a computer algebra package (called [tclassi]{}) which implements this algorithm have been presented [@frm] – [@frm1]. The Gödel [@godel] solution of Einstein’s field equations is a particular case of the Gödel-type line element, defined by $$ds^{2} = [ dt + H(x)\, dy]^{2} - D^{2}(x) \, dy^{2} - dx^{2} - dz^{2}, \label{ds2}$$ in which $$H(x) = e^{m x}, \;\;\;\; D(x) = e^{m x}/ \sqrt{2}, \label{gddgod}$$ and with the energy-momentum tensor $T_{\mu \nu}$ given by $$\begin{aligned} & T_{\mu \nu}=\rho v_{\mu} v_{\nu}\,, \qquad v^{\alpha}=\delta^{\alpha}_{\ 0}\,,& \label{gdsrc1} \\ & \kappa \rho = - 2 \Lambda = m^{2} = 2\, \omega^{2}\,, \label{gsol} & \end{aligned}$$ where $\kappa$ and $\Lambda$ are, respectively, the Einstein gravitational and the cosmological constants, $\rho$ is the fluid density and $v^{\alpha}$ its four-velocity, and $\omega$ is the rotation of the matter. The Gödel model is homogeneous in space-time (hereafter called ST homogeneous). Actually it admits a five parameter group of isometries ($G_{5}$) having an isotropy subgroup of dimension one ($H_{1}$). 
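The relations (\[gsol\]) between $m$ and $\omega$ can be checked directly from (\[gddgod\]). For Gödel-type metrics the rotation scalar of the congruence $v^{\alpha}=\delta^{\alpha}_{\ 0}$ is $\omega=H'/(2D)$, and the ST homogeneity conditions require $H'/D$ and $D''/D$ to be constant; both facts are standard results quoted here rather than derived in this excerpt. A minimal numerical sketch:

```python
import math

# Godel solution functions from (gddgod): H(x) = exp(m x), D(x) = exp(m x)/sqrt(2).
def Hp(x, m):  return m * math.exp(m * x)                      # H'(x)
def D(x, m):   return math.exp(m * x) / math.sqrt(2)
def Dpp(x, m): return m * m * math.exp(m * x) / math.sqrt(2)   # D''(x)

m = 0.9
for x in (-1.0, 0.0, 2.3):
    # Rotation scalar omega = H'/(2D): a standard Godel-type formula (assumed).
    omega = Hp(x, m) / (2 * D(x, m))
    assert abs(omega - m / math.sqrt(2)) < 1e-12       # hence m^2 = 2 omega^2
    # Homogeneity ratios are independent of x:
    assert abs(Hp(x, m) / D(x, m) - math.sqrt(2) * m) < 1e-12
    assert abs(Dpp(x, m) / D(x, m) - m * m) < 1e-12
```

The first assertion reproduces $m^{2}=2\,\omega^{2}$ in (\[gsol\]); the last two exhibit the $x$-independence underlying space-time homogeneity.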
The problem of space-time homogeneity of four-dimensional Riemannian manifolds endowed with a Gödel-type metric (\[ds2\]) was considered for the first time by Raychaudhuri and Thakurta [@raytha]. They have determined the necessary conditions for space-time homogeneity. Afterwards, Rebouças and Tiomno [@rebtio] proved that the Raychaudhuri-Thakurta necessary conditions are also sufficient for ST homogeneity of Gödel-type Riemannian space-time manifolds. However, in both articles [@raytha; @rebtio] the study of ST homogeneity is limited in that only time-independent Killing vector fields were considered [@tra]. The necessary and sufficient conditions for a Gödel-type Riemannian space-time manifold to be ST homogeneous were finally rederived without assuming such a simplifying hypothesis in [@rebaman], where the equivalence problem techniques for Riemannian space-times, as formulated by Karlhede [@karl] and implemented in [classi]{} [@Aman], were used. In this article, in the light of the equivalence problem techniques for Riemann-Cartan space-times, as formulated by Fonseca-Neto [*et al.*]{} [@frt] and embodied in the suite of computer algebra programs [tclassi]{} [@frm] – [@frm1], we shall examine all Riemann-Cartan manifolds endowed with a Gödel-type metric (\[ds2\]), with a torsion polarized along the preferred direction defined by the rotation, and sharing the translational symmetries of $g_{\mu\nu}$ in (\[ds2\]). Hereafter, for the sake of brevity, we shall refer to this family of space-time manifolds as Riemann-Cartan Gödel-type manifolds. The necessary and sufficient conditions for a Riemann-Cartan Gödel-type manifold to be ST (locally) homogeneous are derived. The [Å]{}man-Rebouças results [@rebaman] for Riemannian Gödel-type space-times are generalized. The ST homogeneous Riemann-Cartan Gödel-type manifolds are shown to admit a five-dimensional group of affine-isometric motions. 
The equivalence of these Riemann-Cartan space-times is discussed: they are found to be characterized by three essential parameters $m^2$, $\omega$ and $\ell$: identical triads ($\ell, m^2, \omega$) correspond to equivalent manifolds. The algebraic classification of the nonvanishing irreducible parts of the curvature is presented. For a general triad ($\ell, m^2, \omega$), the Weyl-type spinors $\Psi_A$ and $\psi_A$ are both Petrov type D, while the non-null Ricci-type spinors $\Phi_{AB'}$ and $\phi_{AB'}$ are both Segre type \[1,1(11)\]. A few main instances for which these algebraic types can be more specialized are also studied. The classification of the irreducible parts of the torsion and the corresponding group of isotropy are also discussed. The pseudo-trace torsion spinor ${\cal P}_{AX'}$ is found to be space-like, with $SO(2,1)$ as its group of isotropy. The Lanczos spinor ${\cal L}_{ABCX'}$ is found to be invariant under one-dimensional spatial rotation. Our major aim in the next section is to present a brief summary of some important theoretical and practical results on the equivalence problem for Riemann-Cartan space-times required in Section 3, where we state, prove and discuss our main results.

Equivalence of Riemann-Cartan Space-times: Basic Results
========================================================

\[Equivalence\] Most relativists’ first approach to solving the (local) equivalence problem of Riemann-Cartan manifolds would probably be to make use of the so-called scalar polynomial invariants built from the curvature, the torsion, and their covariant derivatives [@Christensen1980]. However, this attempt cannot work since there exist [*curved*]{} plane wave RC space-times with non-zero torsion [@Adamowicz1980] for which all the scalar polynomial invariants vanish, making them indistinguishable from Minkowski space (flat and torsion-free).
This example shows that, although necessary, the scalar polynomial invariants are not sufficient to distinguish (locally) two RC space-times. To make apparent that the conditions for the local equivalence of RC manifolds follow from Cartan’s approach to the equivalence problem, we shall first recall the definition of equivalence and then proceed by pointing out how Cartan’s results [@cartan] lead to the solution of the problem found in [@frt]. The basic idea is that if two Riemann-Cartan manifolds $M$ and $\widetilde{M}$ are the same, they will define identical Lorentz frame bundles \[$L(M) \equiv L(\widetilde{M})$\]. The manifold $L(M)$ incorporates the freedom in the choice of Lorentz frames and has a uniquely-defined set of linearly independent 1-forms $\{ \Theta^{A}, \omega^{A}_{\ B} \}$, forming a basis of the cotangent space $T^{\ast}_{P}(L(M))$ at an arbitrary point $P \in L(M)$. Two RC manifolds $M$ and $\widetilde{M}$ are then said to be locally equivalent when there exists a local mapping $F$ between the Lorentz frame bundles $L(M)$ and $L(\widetilde{M})$ such that (see [@frt] and also Ehlers [@Ehlers1981]) $$\label{equidef} F^{\ast}\,\widetilde{\Theta}^{A} = \Theta^{A} \qquad \mbox{and} \qquad F^{\ast}\,\widetilde{\omega}^{A}_{\ B} = \omega^{A}_{\ B}$$ hold. Here $F^{\ast}$ is the well-known pull-back map defined by $F$. A solution to the equivalence problem for Riemann-Cartan manifolds can then be obtained by using Cartan’s results on the equivalence of sets of 1-forms (see p. 312 of the English translation of Ref. [@cartan]) together with Cartan’s equations of structure for a manifold endowed with a nonsymmetric connection. The solution can be summarized as follows [@frt; @frm1].
Two $n$-dimensional Riemann-Cartan (locally Lorentzian) manifolds $M$ and $\widetilde{M}$ are locally equivalent if there exists a local map (diffeomorphism) $F$ between their corresponding Lorentz frame bundles $L(M)$ and $L(\widetilde{M})$, such that the [*algebraic*]{} equations relating the components of the curvature and torsion tensors and their covariant derivatives: $$\begin{aligned} \label{eqvcond} T^{A}_{\ BC} & = & \widetilde{T}^{A}_{\ BC}\;, \nonumber \\ R^{A}_{\ BCD} & = & \widetilde{R}^{A}_{\ BCD}\;, \nonumber \\ T^{A}_{\ BC;M_{1}} & = & \widetilde{T}^{A}_{\ BC;M_{1}}\;, \nonumber \\ R^{A}_{\ BCD;M_{1}} & = & \widetilde{R}^{A}_{\ BCD;M_{1}}\;, \nonumber \\ T^{A}_{\ BC;M_{1}M_{2}} & = & \widetilde{T}^{A}_{\ BC;M_{1}M_{2}}\;, \\ & \vdots & \nonumber \\ R^{A}_{\ BCD;M_{1}\ldots M_{p+1}} & = & \widetilde{R}^{A}_{\ BCD;M_{1} \ldots M_{p+1}}\;,\nonumber \\ T^{A}_{\ BC ;M_{1} \ldots M_{p+2}} & = & \widetilde{T}^{A}_{\ BC;M_{1} \ldots M_{p+2}} \nonumber \end{aligned}$$ are compatible as equations in Lorentz frame bundle coordinates $\left( x^{a}, \xi^{A} \right)$. Here and in what follows we use a semicolon to denote covariant derivatives. Note that $x^{a}$ are coordinates on the manifold $M$ while $ \xi^{A}$ parametrize the group of allowed frame transformations. Reciprocally, equations (\[eqvcond\]) imply local equivalence between the space-time manifolds. The $(p+2)^{th}$ derivative of torsion and the $(p+1)^{th}$ derivative of curvature are the lowest derivatives which are functionally dependent on all the previous derivatives. It should be noticed that in the above set of [*algebraic*]{} equations [*necessary and sufficient*]{} for the local equivalence we have taken into account the Bianchi identities $R^{A}_{\ \ [\,BCD\,]} -T^{A}_{\ \ [\,BC;D\,]} = - T^{N}_{\ \ [\,BC}T^{A}_{\ D\,]\,N}$ and their differential concomitants. 
Thus, when the components of the $\,0^{th}, \ldots ,(p+1)^{th}\,$ covariant derivatives of the torsion are known, the Bianchi identities and their differential concomitants reduce to a set of linear algebraic equations, which relates (for each $p$) the $(p+1)^{th}$ covariant derivatives of curvature to the $(p+2)^{th}$ covariant derivatives of torsion. So we need the $(p+2)^{th}$ derivatives of torsion in (\[eqvcond\]), which did not appear in [@frt]. A comprehensive local description of a Riemann-Cartan manifold is, therefore, given by the set $$I_{p} = \{ T^{A}_{\ BC}\,, R^{A}_{\ BCD}\,, T^{A}_{\ BC;N_{1}}\,, R^{A}_{\ BCD;M_{1}}\,, T^{A}_{\ BC;N_{1}N_{2}}\,, \,\ldots, \, R^{A}_{\ BCD;M_{1} \ldots M_{p}}\,, T^{A}_{\ BC;N_{1} \ldots N_{p+1}} \}, \label{rcscl}$$ whose elements are called Cartan scalars, since they are scalars under coordinate transformations on the base manifold. The theoretical upper bound for the number of covariant derivatives to be calculated is $10$ for the curvature and $11$ for the torsion, which corresponds to $11$ steps (from $0$th to $10$th-order derivatives for the curvature) in the algorithm presented below. The number of steps can be thought of as being related to the six Lorentz transformation parameters $\xi^A$, the four coordinates $x^a$ on the space-time manifold and one integrability condition. A word of clarification regarding this integrability condition is in order here: when the number of derivatives needed is not the maximum possible (set by the dimension of the frame bundle) then to show that the derivative process has terminated one has to take one more derivative and show that it contains no new information by checking the functional relations between the Cartan scalars. This can be understood as if we were introducing invariantly-defined coordinates (though we cannot explicitly do that) and then had to take their derivatives in order to substitute for the differentials in the usual formula for the line element.
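This counting can be summarized in a few lines of code. The sketch below is our own bookkeeping (the function name is ours, not part of any package): one algorithm step per frame-bundle dimension (four coordinates plus six Lorentz parameters) and one final integrability check give the eleven steps quoted above.

```python
def derivative_bounds(n_coords=4, n_lorentz=6):
    """Worst-case bounds for the equivalence algorithm on a 4-dimensional
    RC manifold.  One step per frame-bundle dimension plus one final
    integrability check.  (Bookkeeping sketch; our own helper.)"""
    steps = n_coords + n_lorentz + 1   # 11 steps in the worst case
    max_curv = steps - 1               # curvature derivatives: 0th .. 10th
    max_tor = max_curv + 1             # torsion runs one order higher: 11th
    return steps, max_curv, max_tor

print(derivative_bounds())  # expected: (11, 10, 11)
```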
In practice, the coordinates and Lorentz transformation parameters are treated differently. Actually, a fixed frame (a local section of the Lorentz frame bundle) is chosen to perform the calculations so that the elements of the set $I_{p}$ coincide with the components of the curvature and torsion tensors of the space-time base manifold and their covariant derivatives; there is no explicit dependence on the Lorentz parameters. To deal with equivalence it is necessary to calculate the elements of the set $I_{p}$. However, even when the Bianchi and Ricci identities and their differential concomitants are taken into account, in the worst case one still has 11064 independent elements to calculate. Thus, an algorithmic procedure for carrying out these calculations and a computer algebra implementation are highly desirable. A practical procedure for testing equivalence of Riemann-Cartan space-times has been developed [@frm] – [@frm1], [@afmr1]. In this procedure the maximum order of derivatives is not more than $7$ for the curvature and $8$ for the torsion. The basic idea behind our procedure is separate handling of frame rotations and space-time coordinates, fixing the frame at each stage of differentiation of the curvature and torsion tensors by aligning the basis vectors as far as possible with invariantly-defined directions. The algorithm starts by setting $q=0$ and has the following steps [@frm1]: 1. Calculate the set $I_{q}$, i.e., the derivatives of the curvature up to the $q^{th}$ order and of the torsion up to the $(q+1)^{th}$ order. 2. Fix the frame, as much as possible, by putting the elements of $I_{q}$ into canonical forms. 3. Find the frame freedom given by the residual isotropy group $H_{q}$ of transformations which leave the canonical forms invariant. 4. Find the number $t_{q}$ of functionally independent functions of space-time coordinates in the elements of $I_q$, brought into the canonical forms. 5.
If the isotropy group $H_{q}$ is the same as $H_{(q-1)}$ and the number of functionally independent functions $t_{q}$ is equal to $t_{(q-1)}$, then let $q=p+1$ and stop. Otherwise, increment $q$ by 1 and go to step $1$. This procedure provides a discrete characterization of Riemann-Cartan space-times in terms of the following properties: the set of canonical forms in $I_{p}$, the isotropy groups $\{H_{0},\ldots ,H_{p}\}$ and the number of independent functions $\{t_{0}, \dots ,t_{p}\}$. Since there are $t_p$ essential space-time coordinates, clearly $4-t_p$ are ignorable, so the isotropy group will have dimension $s = \mbox{dim}\,( H_p )$, and the group of symmetries (called affine-isometry) of both metric (isometry) and torsion (affine collineation) will have dimension $r$ given by $$r = s + 4 - t_p \,, \label{gdim}$$ acting on an orbit with dimension $$d = r - s = 4 - t_p \,. \label{ddim}$$ To check the equivalence of two Riemann-Cartan space-times one first compares the above discrete properties and only when they match is it necessary to determine the compatibility of equations (\[eqvcond\]). 
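The control flow of these steps, together with the dimension counting of eqs. (\[gdim\]) and (\[ddim\]), can be sketched in a few lines of Python. All four callbacks below are hypothetical placeholders for the actual curvature/torsion computations performed by [tclassi]{}; only the loop structure and the final counting reflect the procedure described above.

```python
def classify(compute_invariants, canonicalize, isotropy_dim, n_indep_funcs):
    """Skeleton of the iterative classification loop (steps 1-5 above).
    The four callbacks are hypothetical placeholders, not a real API."""
    history = []   # records (dim H_q, t_q) at each step
    q = 0
    while True:
        Iq = compute_invariants(q)       # step 1: derivatives up to order q
        canonical = canonicalize(Iq)     # step 2: fix the frame
        s_q = isotropy_dim(canonical)    # step 3: residual isotropy group H_q
        t_q = n_indep_funcs(canonical)   # step 4: independent functions t_q
        history.append((s_q, t_q))
        # step 5: stop when neither the isotropy group nor t_q has changed
        if q > 0 and history[-1] == history[-2]:
            break
        q += 1
    s, t_p = history[-1]
    r = s + 4 - t_p    # dimension of the affine-isometry group, eq. (gdim)
    d = r - s          # dimension of its orbits, eq. (ddim)
    return r, d

# Toy run mimicking the homogeneous Goedel-type result of the next section:
# H_0 = H_1 is one-dimensional and t_0 = t_1 = 0, hence r = 5 and d = 4.
r, d = classify(lambda q: q, lambda I: I, lambda c: 1, lambda c: 0)
print(r, d)  # expected: 5 4
```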
In our implementation of the above practical procedure, rather than using the curvature and torsion tensors as such, the algorithms and computer algebra programs were devised and written in terms of spinor equivalents: (i) the irreducible parts of the Riemann-Cartan curvature spinor $$\begin{aligned} R_{ABCDG'H'} & = & \varepsilon_{G'H'} \, [ \Psi_{ABCD} +(\varepsilon_{AC}\varepsilon_{BD} + \varepsilon_{AD}\varepsilon_{BC})(\Lambda + i\Omega) + \varepsilon_{AC}\Sigma_{BD} \nonumber \\ &+& \varepsilon_{BD}\Sigma_{AC} + \varepsilon_{AD}\Sigma_{BC} + \varepsilon_{BC}\Sigma_{AD} \,] + \varepsilon_{CD}(\Phi_{ABG'H'} + i\Theta_{ABG'H'})\,, \label{spcurv}\end{aligned}$$ which, clearly, are $\Psi_{ABCD}$ ([tpsi]{}), $\Phi_{ABX'Z'}$ ([tphi]{}), $\Theta_{ABX'Z'}$ ([theta]{}), $\Sigma_{AB}$ ([sigma]{}), $\Lambda$ ([tlambd]{}) and $\Omega$ ([omega]{}); and (ii) the irreducible parts of the torsion spinor $$T_{AX'BC} = L_{X'ABC} +\frac{1}{3}\,(\varepsilon_{AB}T_{CX'} + \varepsilon_{AC}\bar{T}_{BX'}) + \frac{1}{3}\,i\,(\varepsilon_{AB}S_{CX'} + \varepsilon_{AC}\bar{S}_{BX'})\,, \label{sptor}$$ namely: ${\cal T}_{AX'}$ ([spttor]{}, [sp]{} $=$ spinor, [t]{} $=$ trace, [tor]{} $=$ torsion ), ${\cal P}_{AX'}$ ([spptor]{}, [sp]{} $=$ spinor, [p]{} $=$ pseudo-trace, [tor]{} $=$ torsion ), and ${\cal L}_{ABCX'}$ ([spltor]{}, [sp]{} $=$ spinor, [l]{} $=$ Lanczos spinor, [tor]{} $=$ torsion). Note that these irreducible parts of curvature and torsion spinors are nothing but the spinor equivalents of the curvature and torsion tensors given, respectively, by equations (B.4.3) and (B.2.5) in the appendix B of Ref. [@Hehl1995]. Note that the [tclassi]{} users’ names for the spinorial quantities have been indicated inside round brackets. 
We note that, besides the above indication for the names of the irreducible parts of the torsion spinor, the names of the irreducible parts of both the Riemann-Cartan curvature and the first covariant derivative of the torsion were generalized from the names in [classi]{} [@MacCSkea] by bearing in mind whether they have the same symmetry as the Weyl spinor (the Weyl-type spinors: $\Psi_A$ and $\psi_A$) or the symmetry of the Ricci spinor (the Ricci-type spinors: $\Phi_{AB'}$, $\phi_{AB'}$, $\Theta_{AB'}$, $\nabla {\cal T}_{AX'}$, $\nabla {\cal P}_{AY'}$). We have employed the affixes: [bv]{} for bivector, [sp]{} for spinor, [sc]{} for scalar, [a]{} for d’Alembertian; and [d]{}, [d2]{} and so on for the first, second and higher derivatives of the spinorial quantities. We have used the letters $\Sigma$, ${\cal M}$, ${\cal B}$ to denote bivectors, i.e., objects with the same symmetries as the Maxwell spinor. We have also named three basic scalars [tlambd]{} ($\Lambda$), [omega]{} ($\Omega$) and [scttor]{} (${\cal T}$). There are also names which were simply borrowed from [classi]{} with the addition of the letter [t]{} for torsion, as for example [txi]{} (see [@MacCSkea], [@frm] and [@frm1] for details). A relevant point to be taken into account when one needs to compute derivatives of the curvature and the torsion tensors is that they are interrelated by the Bianchi and Ricci identities and their differential concomitants. Thus, to cut down the number of quantities to be calculated, it is very important to find a minimal set of quantities from which the curvature and torsion tensors, and their covariant derivatives, are obtainable by algebraic operations.
For Riemann-Cartan space-time manifolds, taking into account the irreducible parts of the Bianchi and Ricci identities and their differential concomitants, a complete minimal set $C_q$ of such quantities, recursively defined in terms of totally symmetrized $q^{th}$ derivatives of the curvature spinors and $(q+1)^{th}$ derivatives of the torsion spinors, can be specified [@frm1; @fmr2] by:

1.  For $q=0$: the torsion’s irreducible parts, namely (a) ${\cal T}_{AX'}$ ([spttor]{}), (b) ${\cal P}_{AX'}$ ([spptor]{}), (c) ${\cal L}_{ABCX'}$ ([spltor]{});

2.  The totally symmetrized $q^{th}$ derivatives of

    (a) $\Psi_{ABCD}$ ([tpsi]{}) and $\psi_{ABCD} \equiv - \nabla^{N'}_{\ \ \ (A}{\cal L}^{}_{BCD)N'}$ ([psiltor]{});

    (b) $\Phi_{ABX'Z'}$ ([tphi]{}), $\Theta_{ABX'Z'}$ ([theta]{}) and $\phi_{ABX'Z'} \equiv - \frac{1}{2} ( \nabla^{N}_{\ \ (X'}{\cal L}^{}_{Z')ABN} + \nabla^{N'}_{\ \ (A}{\bar {\cal L}}^{}_{B)X'Z'N'} ) $ ([philtor]{});

    (c) $\Lambda$ ([tlambd]{}), $\Omega$ ([omega]{}) and ${\cal T} \equiv \nabla_{NN'}{\cal T}^{NN'}$ ([scttor]{});

    (d) $\Sigma_{AB}$ ([sigma]{}), ${\cal M}_{AB} \equiv \nabla^{N'}_{\ \ \ (A}{\cal T}^{}_{B)N'}$ ([bvttor]{}) and ${\cal B}_{AB} \equiv \nabla^{N'}_{\ \ \ (A}{\cal P}^{}_{B)N'}$ ([bvptor]{});

3.  The totally symmetrized $(q+1)^{th}$ derivatives of (a) ${\cal T}_{AX'}$ ([dspttor]{}), (b) ${\cal P}_{AX'}$ ([dspptor]{}), (c) ${\cal L}_{ABCX'}$ ([dspltor]{});

4.  For $q \geq 1$:

    (a) the totally symmetrized $(q-1)^{th}$ derivatives of $\Xi_{ABCX'} \equiv \nabla^{N}_{\ X'}\Psi^{}_{ABCN}\,$ ([txi]{}), ${\cal X}_{ABCX'} \equiv \nabla^{N'}_{\ (A}\Theta^{}_{BC)N'X'}\,$ ([xith]{}), ${\cal U}_{AX'} \equiv \frac{1}{2}(\nabla_{\ \ X'}^{N} \Sigma_{AN} +\nabla_{\ \ A}^{N'}{\bar \Sigma}_{X'N'})\,$ ([tsigm]{}) and ${\cal V}_{AX'} \equiv -\frac{i}{2}(\nabla_{\ \ X'}^{N} \Sigma_{AN} -\nabla_{\ \ A}^{N'}{\bar \Sigma}_{X'N'})$ ([psigm]{});

    (b) for $q=1$, the d’Alembertian of the irreducible parts of the torsion: $\Box\,{\cal T}_{AX'} \equiv \nabla^{NN'}\nabla_{NN'}\,{\cal T}_{AX'}$ ([aspttor]{}), $\Box\,{\cal P}_{AX'} \equiv \nabla^{NN'}\nabla_{NN'}\,{\cal P}_{AX'}$ ([aspptor]{}) and $\Box\,{\cal L}_{ABCX'} \equiv \nabla^{NN'}\nabla_{NN'}\,{\cal L}_{ABCX'}$ ([aspltor]{});

5.  For $q \geq 2$:

    (a) the d’Alembertian $\Box\,Q \equiv \nabla^{NN'}\nabla_{NN'}\,Q$ applied to all quantities $Q$ calculated for the derivatives of order $q-2$, i.e. in the set $C_{(q-2)}$, except the d’Alembertians of the irreducible parts of the torsion for $q=2$ (e.g., $\,\Box\, \Psi_A$ ([atpsi]{}), $\Box\, \psi_A$ ([apsiltor]{}), $\Box\, \Phi_{AB'}\,$ ([atphi]{}), and so forth);

    (b) the totally symmetrized $(q-2)^{th}$ derivatives of $ \Upsilon_{ABCD} \equiv -\nabla^{N'}_{\ \ (A}{\cal X}^{}_{BCD)N'}\,$ ([psixith]{}) and ${\cal F}_{AB} \equiv \nabla^{N'}_{\ \ \ (A}{\cal U}^{}_{B)N'}\,$ ([bvtsigm]{}).

It should be stressed that we have included in the above set the d’Alembertian of the irreducible parts of the torsion (item 4(b)), which was missed in [@frm]. Note also that the above list contains, inside parentheses, the [tclassi]{} external name (for the users) after each quantity. Finally, we remark that the above minimal set is a generalization of the corresponding set found for Riemannian space-time manifolds by MacCallum and [Å]{}man [@MacAman]. In our practical procedure the frame is fixed (as much as possible) by bringing into canonical form first the quantities with the same symmetry as the Weyl spinor (the Weyl-type spinors), i.e., $\Psi_A$ and $\psi_A$, followed by the spinors with the symmetry of the Ricci spinor (the Ricci-type spinors), namely $\Phi_{AB'}$, $\phi_{AB'}$, $\Theta_{AB'}$, $\nabla {\cal T}_{AX'}$, $\nabla {\cal P}_{AY'}$; then the bivector spinors $\Sigma_{AB}$, ${\cal M}_{AB}$ and ${\cal B}_{AB}$, and finally the vectors ${\cal T}_{AX'}$ and ${\cal P}_{AX'}$ are taken into account.
Thus, if $\Psi_{A}$ is Petrov I, for example, the frame can be fixed by demanding that the nonvanishing components of $\Psi_A$ are such that $\Psi_1 = \Psi_3 \not= 0, \Psi_2 \not= 0$. Clearly an alternative canonical frame is obtained by imposing $\Psi_0 = \Psi_4 \not= 0, \Psi_2 \not= 0$. Although the latter is implemented in [tclassi]{} as the canonical frame, in the next section we shall use the former (defined to be an acceptable alternative in [tclassi]{}) to make easier the comparison between our results and those of the corresponding Riemannian case [@rebaman]. To close this section we remark that in the [tclassi]{} implementation of the above results a notation is used in which the indices are all subscripts and components are labelled by a primed and unprimed index whose numerical values are the sum of corresponding (primed and unprimed) spinor indices. Thus, for example, one has $$\nabla\,\Psi_{20'} \equiv \Psi_{(1000;1)0'} = \nabla^{X'}_{\ \ \ (A} \Psi^{}_{BCDE)}\,\iota^A \iota^B o^C o^D o^E\, \bar{o}_{X'}\,, \nonumber$$ where the parentheses indicate symmetrization, the bar is used for complex conjugation, and the pair ($\iota^A, o^B$) constitutes an orthonormal spinor basis. Homogeneous Riemann-Cartan Gödel-type Space-times ================================================= Throughout this section we shall consider a four-dimensional Riemann-Cartan manifold $M$, endowed with a Gödel-type metric (\[ds2\]) and a torsion that shares the same translational symmetries as the metric, and is aligned with the direction singled out by the rotation vector field $w$ (called Riemann-Cartan Gödel-type space-time). So, in the coordinate system given in (\[ds2\]) the torsion tensor is given by $T^t_{\ xy} = S(x)$. For arbitrary functions $H(x)$, $D(x)$ and $S(x)$ both $\Psi_A$ and $\psi_A$ are Petrov I; this fact can be easily checked by using the package [tclassi]{}. 
Accordingly the null tetrad $\Theta^A$ which turns out to be appropriate (canonical) for our discussions is $$\begin{aligned} \Theta^{0} = \frac{1}{\sqrt{2}}(\theta^{0} + \theta^{3})\,, \quad\qquad \Theta^{1} = \frac{1}{\sqrt{2}}(\theta^{0} - \theta^{3})\,, \nonumber \\ \label{nullt} \\ \Theta^{2} = \frac{1}{\sqrt{2}}(\theta^{2} - i \theta^{1})\,, \quad\qquad \Theta^{3} = \frac{1}{\sqrt{2}}(\theta^{2} + i \theta^{1})\,, \nonumber\end{aligned}$$ where $\theta^{A}$ is a Lorentz tetrad ($\eta_{AB} = {\rm diag}\,(+1,-1,-1,-1)$) given by $$\theta^{0} = dt + H(x)\,dy\,, \quad \theta^{1} = dx\,, \quad \theta^{2} = D(x)\,dy\,, \quad \theta^{3} = dz\,. \label{lort}$$ Clearly in the null frame (\[nullt\]) the torsion tensor and the Gödel-type line element (\[ds2\]) are given by $$T^0_{\ 23} = T^{1}_{\ 23} = \frac{\sqrt{2}}{2}\,i\,S(x) \qquad \mbox{and} \qquad ds^2 = 2\,(\Theta^0\,\Theta^1 - \Theta^2\,\Theta^3)\,. \label{gtyrc}$$ It is worth mentioning that the Petrov type for $\Psi_A$ and $\psi_A$ and the canonical frame (\[nullt\]) were obtained by interaction with [tclassi]{}, starting from the Lorentz frame (\[lort\]), changing to a null tetrad frame, and making dyad transformations to bring $\Psi_A$ and $\psi_A$ into the canonical form for Petrov type I discussed in section 2. 
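One can check numerically that the null tetrad (\[nullt\]) reproduces the Lorentzian line element, since $2\,(\Theta^0\,\Theta^1 - \Theta^2\,\Theta^3) = (\theta^0)^2 - (\theta^1)^2 - (\theta^2)^2 - (\theta^3)^2$. The small sketch below is our own check (independent of [tclassi]{}), representing each 1-form by its value on an arbitrary fixed vector:

```python
import math

def null_tetrad(th0, th1, th2, th3):
    """Null tetrad of eq. (nullt) built from a Lorentz tetrad; the 1-forms
    are represented by their (real) values on a fixed vector."""
    s = 1 / math.sqrt(2)
    T0 = s * (th0 + th3)
    T1 = s * (th0 - th3)
    T2 = s * complex(th2, -th1)
    T3 = s * complex(th2, th1)
    return T0, T1, T2, T3

th = [0.7, -0.3, 1.1, 0.4]                       # arbitrary test values
T0, T1, T2, T3 = null_tetrad(*th)
lorentz = th[0]**2 - th[1]**2 - th[2]**2 - th[3]**2   # eta_AB theta^A theta^B
null = 2 * (T0 * T1 - T2 * T3)                        # 2(Th^0 Th^1 - Th^2 Th^3)
print(abs(null - lorentz) < 1e-12)  # expected: True
```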
Using the [tclassi]{} package we referred to in the previous sections one finds the following nonvanishing components of the Cartan scalars corresponding to the first step (for $q=0$) of our algorithm: $$\begin{aligned} \Psi_1 &=& \Psi_3 = \frac{1}{8}\,\left[\,S' - \left( \frac{H'}{D}\, \right)' \, \right] \,, \label{1st} \\ \Psi_2 &=& - \,\frac{S}{4}\, \left(\frac{S}{3} - \frac{H'}{D}\, \right) +\frac{1}{6}\left[\,\frac{D''}{D} - \left( \frac{H'}{D}\, \right)^2\, \,\right] \,, \\ \psi_1 &=& \psi_3 = \frac{S'}{8} \,, \\ \psi_2 &=& - \,\frac{S}{4}\,\left( S - \frac{H'}{D}\, \right) \,, \\ \Phi_{00'}&=&\Phi_{22'} = \frac{S}{4}\, \left(\frac{S}{2} -\,\frac{H'}{D}\, \right) +\frac{1}{8}\, \left(\frac{H'}{D}\,\right)^2 \,,\\ \Phi_{01'}&=&\Phi_{12'} = - \,\frac{S'}{8} + \frac{1}{8}\, \left(\frac{H'}{D}\,\right)' \,, \\ \Phi_{11'}&=& \,\frac{S}{4}\,\left(\,\frac{S}{4} - \frac{H'}{D}\, \right) + \frac{1}{4} \left[\, \frac{3}{4}\,\left(\frac{H'}{D}\,\right)^2 - \,\frac{D''}{D} \right] \,, \\ \phi_{00'}&=& \phi_{22'} = \phi_{11'} = \frac{S}{4}\, \left( S - \frac{H'}{D}\,\right) \,, \\ \phi_{01'}&=& \phi_{12'} = - \,\frac{S'}{8} \,, \\ \nabla {\cal P}_{01'} &=& -\,\nabla {\cal P}_{12'} = -\,\frac{i}{4}\, S' \,, \\ {\cal B}_0 &=& - {\cal B}_2 = - \,\frac{i}{2}\, S' \,, \\ {\cal P}_{00'}&=& - {\cal P}_{11'} = - \,\frac{\sqrt{2}}{2}\, S \,, \\ \Lambda &=& - \frac{S^2}{48}\,- \frac{1}{12} \left[\,\frac{D''}{D} - \frac{1}{4} \left(\frac{H'}{D}\,\right)^2 \right] \,, \\ {\cal L}_{10'}&=& {\cal L}_{21'} = - \,\frac{i}{6}\,\sqrt{2}\,S \,, \\ \nabla {\cal L}_{10'}&=&- \,\nabla {\cal L}_{32'}= \frac{S}{16}\, \left( S -\frac{H'}{D}\, \right)\,, \\ \nabla {\cal L}_{11'}&=&- \,\nabla {\cal L}_{31'}= -\,\frac{3}{4} \,\nabla {\cal L}_{20'}= \frac{3}{4} \,\nabla {\cal L}_{22'} = \frac{S'}{16} \,, \label{last}\end{aligned}$$ where the prime denotes derivative with respect to $x$. 
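The component labels appearing above follow the index-sum convention recalled at the end of the previous section: a component of a totally symmetric spinor is labelled by the sums of its unprimed and primed dyad indices. As an illustrative sketch (the helper below is ours, not part of the [tclassi]{} interface):

```python
def component_label(unprimed, primed):
    """Label a totally symmetric spinor component by the sums of its unprimed
    and primed dyad indices (each index is 0 or 1), following the convention
    quoted in the text.  (Illustrative helper, not the tclassi interface.)"""
    return f"{sum(unprimed)}{sum(primed)}'"

# nabla Psi_{(1000;1)0'} carries unprimed indices (1,0,0,0) plus the
# derivative index 1 and primed index 0', so it is labelled nabla Psi_{20'}:
print(component_label((1, 0, 0, 0, 1), (0,)))  # expected: 20'
# a Ricci-type component such as Phi_{01'} has unprimed sum 0, primed sum 1:
print(component_label((0, 0), (0, 1)))  # expected: 01'
```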
From equation (\[ddim\]) one finds that for ST homogeneity we must have $t_{p} = 0$, that is the number of functionally independent functions of the space-time coordinates in the set $I_{p}$ must be zero. Accordingly all the above quantities of the minimal set must be constant. Thus, from eqs. (\[1st\]) – (\[last\]) one easily concludes that for a Riemann-Cartan Gödel-type space-time (\[gtyrc\]) to be ST homogeneous it is necessary that $$\begin{aligned} S &=& \mbox{const} \equiv \ell \,, \label{torcond} \\ \frac{H'}{D} &=& \mbox{const} \equiv 2\,\omega \label{metcond1} \,, \\ \frac{D''}{D}&=& \mbox{const} \equiv m^2 \,. \label{metcond2}\end{aligned}$$ We shall now show that the above necessary conditions are also sufficient for ST homogeneity. Indeed, under the conditions (\[torcond\]) – (\[metcond2\]) the Cartan scalars corresponding to the first step (for $q=0$) of our algorithm reduce to $$\begin{aligned} \Psi_2 &=& \,\frac{\ell}{2}\, \left(\,\omega - \frac{\ell}{6} \,\right) +\frac{m^2}{6} - \frac{2}{3}\,\omega^2 \,, \label{um} \\ \psi_2 &=&-\,\frac{\ell}{4}\,\left(\ell -2\,\omega\,\right)\,,\label{dois} \\ \Phi_{00'}&=&\Phi_{22'} =\frac{\ell}{4}\,\left(\frac{\ell}{2} -2\,\omega\,\right) +\frac{\omega^2}{2} \,, \label{tres} \\ \Phi_{11'}&=& \,\frac{\ell}{4}\,\left(\,\frac{\ell}{4} - 2\,\omega\, \right) + \frac{3}{4}\,\omega^2 - \frac{m^2}{4} \,, \label{quatro} \\ \phi_{00'}&=& \phi_{22'} = \phi_{11'} = \frac{\ell}{4}\, \left( \ell - 2\,\omega\,\right) \,, \label{cinco} \\ {\cal P}_{00'}&= & -{\cal P}_{11'} = -\,\frac{\sqrt{2}}{2}\, \ell\,,\label{seis} \\ \Lambda &=& -\frac{\ell^2}{48}\,+\frac{1}{12} \left(\,\omega^2-m^2 \right)\,, \label{sete} \\ {\cal L}_{10'}&=& {\cal L}_{21'} = -\,\frac{i}{6}\,\sqrt{2}\,\ell\,,\label{oito}\\ \nabla {\cal L}_{10'}&=&- \,\nabla {\cal L}_{32'}= \frac{\ell}{16}\, \left( \ell - 2\,\omega\,\right)\,. 
\label{nove}\end{aligned}$$ Following the algorithm of the previous section, one needs to find the isotropy group which leaves the above Cartan scalars (canonical forms) invariant. Since $\ell \not= 0$ one can easily find that there are Cartan scalars invariant under the three-dimensional Lorentz group $SO(2,1)$ like, e.g., ${\cal P}_{AB'}$, or even the whole Lorentz group, like $\Lambda$. However, the whole set of Cartan scalars (\[um\]) – (\[nove\]) is invariant only under the spatial rotation $$\label{SpaRot} \left( \begin{array}{cc} e^{i\alpha} & 0 \\ 0 & e^{- i\alpha} \\ \end{array} \right) \,\,,$$ where $\alpha$ is a real parameter. So, the residual group $H_0$ which leaves the above Cartan scalars invariant is one-dimensional. We proceed by carrying out the next step of our practical procedure, i.e., by calculating the totally symmetrized covariant derivative of the Cartan scalars (\[um\]) – (\[nove\]) and the d’Alembertian of the irreducible parts of the torsion. Using [tclassi]{} one finds the following nonvanishing quantities: $$\begin{aligned} \nabla \,\Psi_{20'} &=& -\nabla \,\Psi_{31'} = \frac{i}{40}\,\sqrt{2}\,\,\ell \left(2\,m^2 + 8\,\ell\, \omega - \ell^2 - 20\,\omega^2 \,\right) \nonumber \\ && +\frac{i}{10}\,\sqrt{2}\,\omega \left( 4\,\omega^2 - m^2\right)\,, \label{dez} \\ \nabla \,\psi_{20'} &=& -\nabla \,\psi_{31'} = -\frac{i}{40}\,3\,\sqrt{2}\,\,\ell \left( 4\,\omega^2 - 4\,\ell\, \omega + \ell^2 \,\right)\,, \label{onze} \\ \Xi_{10'} & = & \Xi_{21'} = \frac{i}{16}\,\sqrt{2}\,\,\ell \left( 2\,m^2 + 8\,\ell\, \omega - \ell^2 - 20\, \omega^2\, \right) \nonumber \\ && +\frac{i}{4}\,\sqrt{2}\,\omega\left( 4\,\omega^2 - m^2 \right) \label{doze} \,, \\ \Box\,{\cal L}_{10'}&=& \,\Box\,{\cal L}_{21'}= \frac{i}{4}\,\sqrt{2}\,\,\ell \left(4\,\omega^2 - 4\,\ell\, \omega + \ell^2 \,\right)\,, \label{treze} \\ \nabla^2 {\cal L}_{10'}&=& \,\nabla^2 {\cal L}_{43'}= \frac{i}{80}\,\sqrt{2}\,\,\ell \left( 4\,\omega^2 - 4\,\ell\, \omega + \ell^2 \,\right)\,, 
\label{quatorze} \\ \nabla^2 {\cal L}_{21'}&=& \,\nabla^2 {\cal L}_{32'}= -\,\frac{i}{480}\,\sqrt{2}\,\,\ell \left(4\,\omega^2 - 4\,\ell\, \omega +\ell^2 \,\right)\,. \label{quinze} \end{aligned}$$ As no new functionally independent function arose, $t_0=t_1$. Besides, the Cartan scalars (\[dez\]) – (\[quinze\]) are invariant under the same isotropy group (\[SpaRot\]), i.e. $H_0 = H_1$. Thus no new covariant derivatives should be calculated. From eq. (\[gdim\]) one finds that the group of symmetries (affine-isometric motions) of the Riemann-Cartan Gödel-type space-time is five-dimensional — the necessary conditions (\[torcond\]) – (\[metcond2\]) are also sufficient for ST homogeneity. The above results can be collected together in the following theorems: \[HomCond\] The necessary and sufficient conditions for a Riemann-Cartan Gödel-type space-time to be ST (locally) homogeneous are those given by equations (\[torcond\]) – (\[metcond2\]). \[RelPar\] All ST locally homogeneous Riemann-Cartan Gödel-type space-times admit a five-dimensional group of affine-isometric motion and are characterized by three independent parameters $\,\ell$, $m^2$ and $\omega$: identical triads ($\ell, m^2, \omega$) specify equivalent space-times. As the parameter $\omega$ is known to be essentially the rotation in Gödel-type space-times, a question which naturally arises here is whether there is any simple geometrical interpretation for the parameters $\ell$ and $m$. As the parameter $\ell$ is a measure of the strength of the torsion it clearly has the geometrical interpretation usually associated with the torsion tensor. We have not been able to figure out a simple geometrical interpretation for the parameter $m$, though. 
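The reduction of the Cartan scalars under the homogeneity conditions (\[torcond\]) – (\[metcond2\]) can be spot-checked numerically: with $S = \ell$, $H'/D = 2\,\omega$ and $D''/D = m^2$ one has $S' = (H'/D)' = 0$, so for instance $\Psi_1 = \psi_1 = 0$ while $\Psi_2$ and $\Lambda$ collapse to eqs. (\[um\]) and (\[sete\]). The plain-Python sketch below is our own consistency check, independent of [tclassi]{}:

```python
import random

def psi2_general(S, HpD, DppD):
    # Psi_2 from the general expression, with S' = (H'/D)' = 0 already imposed
    return -S / 4 * (S / 3 - HpD) + (DppD - HpD**2) / 6

def lambda_general(S, HpD, DppD):
    # Lambda from the general expression
    return -S**2 / 48 - (DppD - HpD**2 / 4) / 12

def psi2_homog(ell, m2, w):
    # Psi_2 for the ST-homogeneous case, eq. (um)
    return ell / 2 * (w - ell / 6) + m2 / 6 - 2 * w**2 / 3

def lambda_homog(ell, m2, w):
    # Lambda for the ST-homogeneous case, eq. (sete)
    return -ell**2 / 48 + (w**2 - m2) / 12

random.seed(1)
for _ in range(100):
    ell, m2, w = (random.uniform(-2, 2) for _ in range(3))
    assert abs(psi2_general(ell, 2 * w, m2) - psi2_homog(ell, m2, w)) < 1e-12
    assert abs(lambda_general(ell, 2 * w, m2) - lambda_homog(ell, m2, w)) < 1e-12
print("Psi_2 and Lambda reductions verified")
```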
The parameter $m^2$, nevertheless, has been used to group the general class of Gödel-type metrics into three disjoint subclasses, namely: (i) the hyperbolic class ($m^2 > 0$), (ii) the circular class ($m^2 \equiv -\,\mu^2 < 0$), and (iii) the linear class ($m^2=0$) (see in this regard [@rebtio]). It is worth emphasizing that when $\ell = 0$ eqs. (\[um\]) – (\[quinze\]) reduce to the corresponding equations for Riemannian Gödel-type space-times (eqs. (3.12) – (3.15) and (3.18) – (3.21) in [@rebaman]). Therefore, the results in [@rebaman] can be reobtained as a special case of our study here. So, for example, the above theorems \[HomCond\] and \[RelPar\] generalize the corresponding theorems in [@rebaman] (theorems 1 and 2 on page 891). It should be noticed that the Riemannian ST-homogeneous Gödel-type space-times can have a group of isometries of dimension higher than five, as, e.g., when $m^2 = 4\,\omega^2$ which permits a $G_7$, and $\omega = 0$, $m \not= 0$ which allows a $G_6$. However, for Riemann-Cartan Gödel-type space-times, apart from the rather special case of flat Riemann-Cartan space-time ($\ell=m=\omega=0$), there are no relations among the relevant parameters ($\ell, m^2, \omega$) for which the dimension of the group of affine-isometric motions is higher than five. As far as the algebraic classification of the nonvanishing Weyl-type and Ricci-type spinors is concerned, from eqs. (\[um\]) – (\[cinco\]) we find that for a general triad $(\ell, m^2, \omega)$ both Weyl-type spinors $\Psi_A$ and $\psi_A$ are Petrov type D, whereas the Ricci-type spinors $\Phi_{AB'}$ and $\phi_{AB'}$ are both of Segre type \[1,1(11)\]. There exist, nevertheless, many instances for which these algebraic types can be more specialized. We mention a few: 1. When either $m= \ell/3 = 2\,\omega$ or $m^2=\ell^2/2,\;\;\omega=0$, $\Psi_A$ and $\psi_A$ are, respectively, Petrov 0 and D, while both $\Phi_{AB'}$ are Segre type \[(1,11)1\]; and both $\phi_{AB'}$ are type \[1,1(11)\]; 2.
For $\ell = 2\,\omega$ and $m \not= 0$, $\Psi_A$ is Petrov D, $\psi_A$ is Petrov 0, $\Phi_{AB'}$ is type \[(1,1)(11)\], and $\phi_{AB'}$ is Segre type 0; 3. When $\ell = 2\,\omega$ and $m=0$, both $\Psi_A$ and $\psi_A$ are Petrov 0, and $\Phi_{AB'}$ and $\phi_{AB'}$ are both Segre type 0; 4. For $m=\,\omega= 0,\;\; \ell \not=0$ (Riemannian flat space-time), $\Psi_A$ and $\psi_A$ are both Petrov type D, while $\Phi_{AB'}$ is Segre type \[1,(111)\]; and $\phi_{AB'}$ is type \[1,1(11)\]. Regarding the classification of the nonvanishing parts of the torsion spinors one can easily find that for $\ell \not=0$ the spinor ${\cal P}_{AX'}$ corresponds to a space-like vector, with $SO(2,1)$ as its group of isotropy. The Lanczos spinor ${\cal L}_{ABCX'}$ is invariant under the spatial rotation (\[SpaRot\]) (one-dimensional isotropy group). It should be noticed that the equivalence problem techniques, as formulated in Ref. [@frt] and embodied in the suite of computer algebra programs [tclassi]{} which we have used in this work, can certainly be used in more general contexts, as for example in the examination of Riemann-Cartan Gödel-type family space-times in which the torsion, although polarized along the direction of the rotation, does not share the translational symmetries of the metric [@fr]. We have chosen the case of the present article because it gives a simple illustration of our approach to the equivalence problem techniques applied to Einstein-Cartan Gödel-type solutions which have already been discussed in the literature (see Ref. [@DTT2] and references therein quoted on Gödel-type solutions with torsion). As well as specialist systems such as [sheep]{}, on which [tclassi]{} is based, all the main general-purpose computer algebra systems have some sort of facilities for calculation in general relativity. 
Indeed, extensive sets of programs useful in GR are available with [reduce]{}, [maple]{} and [macsyma]{}, and with [mathematica]{} through the [MathTensor]{} package. By contrast, as far as we are aware, the existing facilities in computer algebra systems for calculations in theories with non-zero torsion are quite limited. Actually, we only know of the [reduce]{} programs for applications to Poincaré gauge field theory written by J. Dermott McCrea in collaboration with F. W. Hehl [@McCrea] and a set of [mathematica]{} programs for calculation in RC manifolds written by H. H. Soleng and called [cartan]{} [@Soleng96] (see also [@Hehl97]). McCrea’s programs are written using the [reduce]{} package [excalc]{} [@McCrea; @Schrufer]. These programs, however, do not contain the implementation of the equivalence problem for Riemann-Cartan manifolds. The [lisp]{}-based system [tclassi]{} was devised with the equivalence problem of RC manifold in mind, and is so far the only package that incorporates the equivalence problem techniques (see also [@frm] and [@frm1]). Furthermore, also in TTG there is room for specialized systems like [tclassi]{}. The major reason for this is that they tend to be more efficient than general-purpose systems. For a comparison of CPU times for a specific metric in GR, for example, see MacCallum [@mm2; @mm3]. To conclude, we should like to emphasize that as no field equations were used to show the above results, they are valid for every Riemann-Cartan Gödel-type solution regardless of the torsion theory of gravitation one is concerned with, in particular they hold for the Riemann-Cartan Gödel-type class of solutions discussed in [@DTT2] and [@DTT1], which were found in the context of Einstein-Cartan theory. Acknowledgments {#acknowledgments .unnumbered} =============== J.B. Fonseca-Neto and M.J. Rebouças gratefully acknowledge financial assistance from CNPq. [99]{} F. W. Hehl, P. von der Heyde, G. D. Kerlick, and J. M. Nester, [*Rev. Mod. 
Phys.*]{} [**48**]{}, 393 (1976). E. Kröner, “Continuum Theory of Defects”, [*Physics of Defects*]{}, Les Houches, Session XXXV, edited by R. Balian [*et al.*]{}. North-Holland, Amsterdam (1981). E. Kröner, [*Int. J. Theor. Phys.*]{} [**29**]{}, 1219 (1990). M. O. Katanaev and I. V. Volovich, [*Ann.Phys.  (NY)*]{} [**216**]{}, 1 (1992). F. Moraes, [*Phys. Lett. A*]{} [**214**]{}, 189 (1996). A. P. Balachandran, V. John, A. Momen and F. Moraes, “Anomalous Defects and Their Quantized Transverse Conductivities”, hep-th/9612247, to appear in [*Int. J. Theor. Phys.*]{} (1997). F. W. Hehl, J. D. McCrea, E. W. Mielke, and Y. Ne’eman, [*Phys.  Rep.*]{} [**258**]{}, 1 – 171 (1995). E. Cartan, “Leçons sur la Géométrie des Éspaces de Riemann”, Gauthier-Villars, Paris (1951). English translation by J. Glazebrook, Math. Sci. Press, Brookline (1983). A. Karlhede, [*Gen. Rel. Grav.*]{} [**12**]{}, 693 (1980). M. A. H. MacCallum, “Classifying Metrics in Theory and Practice”, in [*Unified Field Theory in More Than 4 Dimensions, Including Exact Solutions*]{}, edited by V. de Sabbata and E. Schmutzer. World Scientific, Singapore (1983). M. A. H. MacCallum, “Computer-aided Classification of Exact Solutions in General Relativity”, in [*General Relativity and Gravitational Physics (9th Italian Conference)*]{}, edited by R. Cianci, R. de Ritis, M. Francaviglia, G. Marmo, C. Rubano and P. Scudellaro. World Scientific, Singapore (1991). M. A. H. MacCallum and J. E. F. Skea, “[sheep]{}: A Computer Algebra System for General Relativity”, in [*Algebraic Computing in General Relativity, Lecture Notes from the First Brazilian School on Computer Algebra*]{}, Vol. II, edited by M. J. Rebouças and W. L. Roque. Oxford U. P., Oxford (1994). J. B. Fonseca-Neto, M. J. Rebouças and A. F. F. Teixeira, [*J. Math. Phys.*]{} [**33**]{}, 2574 (1992). J. B. Fonseca-Neto, M. J. Rebouças and M. A. H. 
--- abstract: 'The $\ell_1$ tracker obtains robustness by seeking a sparse representation of the tracking object via $\ell_1$ norm minimization [@Xue_ICCV_09_Track]. However, the high computational complexity of the $\ell_1$ tracker restricts its application in real-time processing scenarios. Hence we propose Real-Time Compressed Sensing Tracking (RTCST), which exploits the signal recovery power of Compressed Sensing (CS). Dimensionality reduction and a customized Orthogonal Matching Pursuit (OMP) algorithm are adopted to accelerate the CS tracking. As a result, our algorithm achieves a real-time speed that is up to $6,000$ times faster than that of the $\ell_1$ tracker. Meanwhile, RTCST still produces competitive (sometimes even superior) tracking accuracy compared with the existing $\ell_1$ tracker. Furthermore, for a stationary camera, a further refined tracker is designed by integrating a CS-based background model (CSBM). This CSBM-equipped tracker, coined RTCST-B, outperforms most state-of-the-art trackers in terms of both accuracy and robustness. Finally, our experimental results on various video sequences, evaluated with a new metric—Tracking Success Probability (TSP), show the excellence of the proposed algorithms.' author: - 'Hanxi Li, Chunhua Shen, and Qinfeng Shi [^1] [^2]' title: 'Real-time Visual Tracking Using Sparse Representation' --- Visual tracking, compressed sensing, particle filter, linear programming, hash kernel, orthogonal matching pursuit. Introduction {#sec:intro} ============ Within the Bayesian filter framework, the representation of the likelihood model is essential. In a tracking algorithm, the object representation scheme determines how the target is represented and how that representation is updated. A promising representation scheme should accommodate noise, occlusion and illumination changes in various scenarios.
In the literature, a few representation models have been proposed to ease these difficulties [@Cootes_PAMI_01_AAM; @Comaniciu_PAMI_03_KMS; @Yilmaz_PAMI_04_Contour; @Avidan_PAMI_04_SVT; @Serby_ICPR_04_Prob; @Shen_CSVT_10_Gener]. Most tracking algorithms represent the target by a single model, typically built on extracted features such as color histograms [@Doucet_SC_00_SMC; @Shen_ICCV_05_GKMS], textures [@Cascia_PAMI_00_Head] and correspondence points [@Sha_ICCV_03_PC]. Nonetheless, these approaches are usually sensitive to variations in target appearance and illumination, and a powerful template-update method is usually needed for robustness. Other tracking algorithms train a classifier off-line [@Avidan_PAMI_04_SVT; @Williams_PAMI_05_RVMT] or on-line [@Shen_CSVT_10_Gener] based on multiple target samples. These algorithms benefit from a robust object model, which is learned from labeled data by sophisticated learning methods. Recently, Mei and Ling proposed a robust tracking algorithm using $\ell_1$ minimization [@Xue_ICCV_09_Track]. Their algorithm, referred to as the *$\ell_1$ tracker*, is designed within the Particle Filter (PF) framework [@Sanjeev_TSP_02_PF]. There, a target is expressed as a *sparse* representation over multiple predefined templates. The $\ell_1$ tracker demonstrates promising robustness compared with existing trackers [@Comaniciu_PAMI_03_Kernel; @Porikli_CVPR_2006_Cov; @Zhou_TIP_04_AAPF]. However, it has the following problems. First, the $\ell_1$ minimization in their work is slow. Second, they use an over-complete dictionary (an identity matrix) to represent the background and noise. This dictionary can, in fact, also represent any object (including the tracking objects of interest) in the video, and hence may fail to discriminate the objects from background and noise.
Although the $\ell_1$ tracker [@Xue_ICCV_09_Track] is inspired by the face recognition work using *sparse representation classification* (SRC) [@Wright_PAMI_09_Face], it does not exploit the sparse signal recovery power of Compressed Sensing (CS) used in [@Wright_PAMI_09_Face]. CS is an emerging topic originally proposed in the signal processing community [@Donoho_TIT_06_CS; @Candes_CPAM_05_Stable]. It states that, with overwhelming probability, sparse signals can be exactly recovered from fewer measurements than the Nyquist-Shannon criterion requires. It has been applied to various computer vision tasks [@Wright_PAMI_09_Face; @Volkan_ECCV_08_Background; @Ali_ICASSP_08_Shape]. Inspired by the $\ell_1$ tracker and motivated by its problems, we propose two CS-based algorithms, termed *Real-Time Compressed Sensing Tracking* (RTCST) and *Real-Time Compressed Sensing Tracking with Background Model* (RTCST-B) respectively. The new tracking algorithms are tremendously faster than the standard $\ell_1$ tracker and serve as [*better*]{} (in terms of both accuracy and robustness) alternatives to existing visual object trackers such as those in [@Sanjeev_TSP_02_PF; @Comaniciu_PAMI_03_Kernel; @Shen_CSVT_10_Gener]. The key contributions of this work can be summarized as follows. 1. We make use of the sparse signal recovery power of CS to reduce the computational complexity significantly. That is, we hash or randomly project the original features into a much lower-dimensional space to accelerate the CS signal recovery procedure for tracking. Moreover, we propose a customized *Orthogonal Matching Pursuit* (OMP) algorithm for real-time tracking. Our algorithms are up to about $6,000$ times faster than the standard $\ell_1$ tracker of [@Xue_ICCV_09_Track]. In short, [*we make the tracker real-time by using CS*]{}. 2. We propose background templates to replace the over-complete dictionary of [@Xue_ICCV_09_Track].
This further improves the robustness of the tracking, because the representations of the objects and the background are better separated. This new tracker, referred to as RTCST-B in this work, outperforms most state-of-the-art visual trackers with respect to accuracy while achieving even higher efficiency than RTCST. 3. Finally, we propose a new metric called *Tracking Success Probability* (TSP) to evaluate tracker performance. We argue that this new metric measures tracking results quantitatively and reflects the robustness of a tracker. Consequently, all empirical results in this work are assessed using TSP. For ease of exposition, the symbols used in this paper and their denotations are summarized in Table \[tab:notations\].

| Notation | Description |
|----------|-------------|
| $\s_k$ | Dynamic state vector at time $k$ |
| $\s_k^i$ | Dynamic state vector at time $k$ corresponding to the $i$th particle |
| $A$ | The measurement matrix, i.e., the collection of templates |
| $\y$ | The observed target, a.k.a. the observation |
| $\x$ | The signal to be recovered in compressed sensing; for CS-based pattern recognition or tracking, the coefficient vector of the sparse representation |
| $\Phi$ | The projection matrix, either a random matrix or a hash matrix in this work |
| $T, E, B$ | The collections of target, noise and background templates |
| $\x_t,~\x_e,~\x_b$ | The coefficient vectors associated with the target, noise and background templates respectively |
| $N_t, N_b$ | The numbers of target templates and background templates |
| $d_0, d$ | The dimensionality of the original and reduced feature space |

\[tab:notations\] The rest of the paper is organized as follows. We briefly review the related literature in the next section. In Section \[sec:rcst\], the proposed RTCST algorithm is presented.
We present the RTCST-B tracker in Section \[sec:rtcstb\]. We verify our methods by comparing them against existing visual tracking methods in Section \[sec:exp\]. Conclusions and discussion can be found in the last section. Related work ============ In this section, we briefly review the theories and algorithms closest to our work. Bayesian Tracking and Particle Filters -------------------------------------- From a Bayesian perspective, the tracking problem is to calculate the posterior probability $p(\mathbf{s}_k|\y_k)$ of state $\mathbf{s}_k$ at time $k$, where $\y_k$ is the observed measurement at time $k$ [@Sanjeev_TSP_02_PF]. In principle, the posterior PDF is obtained recursively via two stages: prediction and update. The prediction stage involves the calculation of the prior PDF: $$p(\mathbf{s}_k|\y_{k-1}) = \int p(\mathbf{s}_k|\mathbf{s}_{k-1}) p(\mathbf{s}_{k-1}|\y_{k-1}) d \s_{k-1}. \label{equ:bayes_prediction}$$ In the update stage, the prior is updated using Bayes’ rule $$p(\mathbf{s}_k|\y_k) = \frac{p(\y_k|\mathbf{s}_k) p(\mathbf{s}_k|\y_{k-1})}{p(\y_k|\y_{k-1})}. \label{equ:bayes_update}$$ These two recurrence relations form the basis of the optimal Bayesian solution. Nonetheless, the above problem cannot be solved analytically without further simplification or approximation. The Particle Filter (PF) is a Bayesian sequential importance sampling technique for estimating the posterior distribution $p(\mathbf{s}_k|\y_k)$. By introducing the so-called *importance sampling distribution* [@Doucet_SC_00_SMC]: $$\mathbf{s}_i \sim ~ q(\mathbf{s}),\; i = 1, \dots, N_s, \label{equ:important_sample}$$ the posterior density is estimated by a weighted approximation, $$p(\mathbf{s}_k|\y_k) \approx \sum_{i=1}^{N_s} w_k^i\delta(\mathbf{s}_k - \mathbf{s}_k^i). \label{equ:pf_weighted_density}$$ Here $$w_k^i \propto w_{k-1}^i \frac{p(\y_k|\mathbf{s}_k^i)p(\mathbf{s}_k^i|\mathbf{s}_{k-1}^i)}{q(\mathbf{s}_k^i|\mathbf{s}_{k-1}^i,\:\y_k)}.
\label{equ:pf_weight_update}$$ For convenience, $q(\cdot)$ is commonly chosen as $$q(\mathbf{s}_k|\mathbf{s}_{k-1}^i,\:\y_k) = p(\mathbf{s}_k|\mathbf{s}_{k-1}^i). \label{equ:q_common_choice}$$ Therefore, the weight update simplifies to $$w_k^i \propto w_{k-1}^i p(\y_k|\mathbf{s}_k^i) \label{equ:pf_weight_update_simple}$$ The weights can then be updated using only their previous values and the observation likelihood $p(\y_k|\mathbf{s}_k^i)$. Moreover, to reduce the effect of *particle degeneracy* [@Doucet_SC_00_SMC], a resampling scheme is usually implemented: $$Pr(\mathbf{s}_k^{i*} = \mathbf{s}_k^j) = w_k^j, \; j = 1, 2, \dots, N_s \label{equ:pf_resample}$$ where $\{\mathbf{s}_k^{i*}\}_{i=1}^{N_s}$ is the set of particles after resampling. Like the $\ell_1$ tracker, both the RTCST and RTCST-B trackers use the PF framework. However, they differ in how the sparse representation is sought, which consequently leads to a different estimation of the observation likelihood $p(\y_k|\mathbf{s}_k^i)$. $\ell_1$-norm Minimization-based Tracking ----------------------------------------- The underlying concept behind SRC is that, in many circumstances, an observation belonging to a certain class lies in the subspace spanned by the samples belonging to that class, and this linear representation is assumed to be sparse. Hence, reconstructing the sparse coefficients associated with the representation is crucial to identifying the observation.
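The particle-filter recursion described above (reweight by the observation likelihood, normalize, resample) can be sketched in a few lines of numpy. This is a minimal illustration only: the scalar state and the Gaussian likelihood are placeholders, not the tracker's actual observation model.

```python
import numpy as np

def pf_step(particles, weights, likelihood, rng):
    """One particle-filter update: reweight by p(y_k | s_k^i),
    normalize, then resample with Pr(s_k^{i*} = s_k^j) = w_k^j."""
    w = weights * likelihood(particles)   # w_k^i ∝ w_{k-1}^i p(y_k|s_k^i)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    resampled = particles[idx]            # the set {s_k^{i*}}
    # after resampling, weights are reset to uniform
    return resampled, np.full(len(particles), 1.0 / len(particles))

# usage: under a zero-mean Gaussian likelihood, particles near 0 survive
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 2.0, size=500)
weights = np.full(500, 1.0 / 500)
lik = lambda s: np.exp(-0.5 * s**2)
new_p, new_w = pf_step(particles, weights, lik, rng)
```

After resampling, the particle cloud concentrates where the likelihood is high, which is exactly the degeneracy remedy the text describes.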
The coefficients can be recovered by solving the relaxed problem $$\begin{split} \min_{\x} ~ \|\x\|_{1}, \; \sst ~ \|A\x & - \y\|_{2} \leq \varepsilon, \end{split} \label{equ:cs_opt_track_lasso}$$ where $\x \in \mathbb{R}^n$ is the coefficient vector of interest; $A = [\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n] \in \mathbb{R}^{d \times n}$ is sometimes dubbed the *dictionary* and is composed of pre-obtained pattern samples $\mathbf{a}_i \in \mathbb{R}^d ~\forall i$; $\y \in \mathbb{R}^d$ is the query/test observation; and $\varepsilon$ is the error tolerance. Then, the class identity $l(\y)$ is retrieved as $$l(\y) = \argmin_{j \in \{1, \cdots, C\}}{r_j(\y)}, \label{equ:cs_opt_cv_class}$$ where $r_j(\y) \doteq \|\y - A\delta_j(\x)\|_{2}$ is the reconstruction residual associated with class $j$, $C$ is the number of classes and the function $\delta_j(\x)$ sets all the coefficients of $\x$ to $0$ except those corresponding to the $j$th class [@Wright_PAMI_09_Face]. Given a target template set $T = [\t_1, \cdots, \t_{N_t}] \in \mathbb{R}^{d_0 \times N_t}$ and a noise template set $E = [I,\;-I] \in \mathbb{R}^{d_0 \times 2d_0}$, the $\ell_1$ tracker adopts a nonnegativity-constrained version of the above problem for recovering the sparse coefficients $\x$, i.e., $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \begin{split} \min ~ \|\x\|_{1}, \; \sst ~ \|A\x & - \y\|_{2} \leq \varepsilon, \, ~ \x \succeq 0. \end{split} \label{equ:cs_opt_track}$$ Here $A \doteq [T, E] \in \mathbb{R}^{d_0 \times (N_t + 2d_0)}$ is the combination of target templates and noise templates, while $\x \doteq [\x_t^\T, ~\x_e^\T]^\T \in \mathbb{R}^{N_t + 2d_0}$ denotes the associated target and noise coefficients. Note that $N_t$ denotes the number of target templates and $d_0$ is the original dimensionality of the feature space, which equals the number of pixels of the initial target.
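Assuming the sparse coefficient vector $\x$ has already been recovered, the class-assignment rule $l(\y) = \argmin_j r_j(\y)$ reduces to a few lines of numpy. The column-to-class map `class_of` and the toy dictionary below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def src_classify(A, x, y, class_of, num_classes):
    """Pick the class whose columns best reconstruct y:
    l(y) = argmin_j || y - A * delta_j(x) ||_2."""
    residuals = []
    for j in range(num_classes):
        xj = np.where(class_of == j, x, 0.0)  # delta_j(x): keep class-j coeffs
        residuals.append(np.linalg.norm(y - A @ xj))
    return int(np.argmin(residuals))

# toy dictionary: columns 0-1 belong to class 0, columns 2-3 to class 1
A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 1.]])
class_of = np.array([0, 0, 1, 1])
x = np.array([0.9, 0.8, 0.0, 0.05])       # mass concentrated on class 0
y = A @ np.array([1.0, 1.0, 0.0, 0.0])    # observation truly from class 0
label = src_classify(A, x, y, class_of, 2)  # → 0
```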
The $\ell_1$ tracker tracks the target by integrating the above sparse recovery and a template-update strategy into the PF framework. Algorithm \[alg:l1\_tracker\] illustrates the tracking procedure. In addition, there is a heuristic approach for updating the target templates and their weights in the $\ell_1$ tracker; refer to [@Xue_ICCV_09_Track] for more details. \[alg:l1\_tracker\] Compressed sensing and its application in pattern recognition ------------------------------------------------------------- CS states that an $\eta$-sparse[^3] signal $\x\in\mathbb{R}^n$ can be exactly recovered, with overwhelming probability, from only a few measurements $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} y_i = \Phi_i\x, ~~~i=1,\dots,m \ll n.$$ Intuitively, one would recover $\x$ via $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \begin{split} \min_{\x} ~& \|\x\|_{0}, \; \sst ~ \Phi \x = \y, \end{split} \label{equ:ori_cs_opt}$$ where $\Phi \in \mathbb{R}^{m \times n}$ is the measurement matrix, whose rows are the measurement vectors $\Phi_i$, and $\y = (y_1,\dots,y_m)^T$. $ \|\x\|_{0} $ is the number of non-zero elements of $ \x $. Since this problem is NP-hard [@Tropp_TIT_07_OMP], it is commonly relaxed to $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \begin{split} \min_{\x} ~& \|\x\|_{1}, \; \sst ~ \Phi \x = \y, \end{split} \label{equ:cs_opt}$$ which can be cast as a linear programming problem. As regards CS-based pattern recognition, to deal with noise, one could alternatively solve a Second-Order Cone Program: $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \begin{split} \min_{\x} ~ \|\x\|_{1}, \; \sst ~ \|\Phi\x & - \y\|_{2} \leq \varepsilon, \end{split} \label{equ:cs_opt_cv_lasso}$$ where $ \varepsilon$ is a pre-specified tolerance. Real-time compressed sensing tracking {#sec:rcst} ===================================== In this section, we present the proposed real-time CS tracking.
Dimension reduction {#subsec:dim_reduct} ------------------- The biggest problem of $\ell_1$ tracking is the extremely high dimensionality of the feature space, which leads to heavy computation. More precisely, suppose the cropped observation image is $I \in \mathbb{R}^{h \times w}$; the dimensionality $d_0 = h\cdot w$ is then typically of the order of $10^{3} \sim 10^{5}$, which prevents real-time tracking. Fortunately, in the context of compressed sensing (ignoring the non-negativity constraint on $\x$ for now), it is well known that if the measurement matrix $\Phi$ satisfies the Restricted Isometry Property (RIP) [@Candes_CPAM_05_Stable], then a sparse signal $\x$ can be recovered from $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \begin{split} \min ~ \|\x\|_{1}, \; \sst ~ \|\Phi A\x & - \Phi\y\|_{2} \leq \varepsilon. \end{split} \label{equ:cs_opt_track_cs}$$ A typical choice of such a measurement matrix is a random Gaussian matrix $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} R \in \mathbb{R}^{d \times n},\quad R_{i,j} \sim \mathcal{N}(0,\;1). \label{equ:def_rand_matrix}$$ Besides random projection, there are other means of guaranteeing the RIP. Shi et al. [@Shi_JMLR_09_Hash] proposed a hash kernel to deal with the issue of computational efficiency. Let $h_s(j,d)$ denote a hash function (i.e., the hash kernel) $h_s:\mathbb{N} \to \{1,\dots,d\}$ drawn from a distribution of pairwise-independent hash functions, where $s \in \{1,\dots,S\}$ is the seed; different seeds give different hash functions. Given $h_s(j,d)$, the hash matrix $H$ is defined as $$\begin{aligned} H_{ij}:= \left\{{\begin{array}{*{20}c} 2h_s(j,2)-3, & {h_s(j,d) = i},\forall s\in\{1,\dots,S\} \\ 0, & \text{otherwise}. \\ \end{array} } \right. \end{aligned}$$ Obviously, $H_{ij} \in \{0,\pm 1\}$. The hash kernel generates hash matrices more efficiently than conventional random matrices while maintaining similar randomness characteristics, which implies a good RIP.
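A minimal sketch of the two projection choices follows: a dense random Gaussian matrix $R$ and a hash matrix $H$ with exactly one $\pm 1$ entry per column. The concrete hash (keyed `blake2b`, with a second seed for the sign) is an assumption for illustration; any pairwise-independent family would do. With a 0-indexed hash, the paper's sign formula $2h_s(j,2)-3$ becomes $2h_s(j,2)-1$.

```python
import hashlib
import numpy as np

def h(s, j, d):
    """Hash function h_s(j, d) -> {0, ..., d-1}, seeded by s (illustrative)."""
    digest = hashlib.blake2b(str(j).encode(), key=str(s).encode()).digest()
    return int.from_bytes(digest[:8], "big") % d

def hash_matrix(d, d0, seed=0):
    """One signed entry per column: H[h_s(j,d), j] = ±1, all else 0."""
    H = np.zeros((d, d0))
    for j in range(d0):
        H[h(seed, j, d), j] = 2 * h(seed + 1, j, 2) - 1  # sign in {-1, +1}
    return H

def random_matrix(d, d0, seed=0):
    """Dense Gaussian alternative, R_ij ~ N(0, 1)."""
    return np.random.default_rng(seed).normal(size=(d, d0))

H = hash_matrix(50, 1024)
R = random_matrix(50, 1024)
```

Projecting a feature vector with $H$ costs one addition per nonzero, i.e. $O(d_0)$, versus $O(d \cdot d_0)$ for the dense $R$, which is the efficiency argument made above.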
In this work, the dimensionality of the feature space is reduced from $d_0$ to $d$, where $d \ll d_0$, by a matrix $\Phi \in \mathbb{R}^{d \times d_0}$ (either a random matrix $R$ or a hash matrix $H$). This significantly speeds up the recovery, since the complexity of the optimization depends polynomially on $d$. Customized orthogonal matching pursuit for real-time tracking {#subsec:omp} ------------------------------------------------------------- ### Orthogonal matching pursuit {#subsubsec:omp} Before the compressed sensing theory was proposed, numerous approaches had been applied to sparse approximation in the signal processing and statistics literature [@Mallat_TSP_93_MP; @Pati_ACSSC_93_OMP; @Davis_OE_94_Adap]. Orthogonal Matching Pursuit (OMP) is one of these approaches and solves the problem in a greedy fashion. Tropp and Gilbert [@Tropp_TIT_07_OMP] proved OMP’s recoverability and showed its higher efficiency compared with the linear programming adopted by the original $\ell_1$ tracker of [@Xue_ICCV_09_Track]. To be more explicit, given $A \in \mathbb{R}^{d \times n}$, the computational complexity of linear programming is around $O(d^2n^{\frac{3}{2}})$, while OMP can achieve as low as $O(dn)$[^4]. We implement the sparse recovery procedure of the proposed tracker with OMP so as to accelerate the tracking process. The number of measurements required by OMP is $O(\eta \log n)$ for $\eta$-sparse signals, which is slightly harder to achieve than that of $\ell_1$ minimization. However, this is merely a theoretical bound for signal recovery; no significant impact of OMP upon the tracking accuracy is observed in our experiments (see Section \[sec:exp\]). ### Further acceleration—OMP with early stop {#susubbsec:early_stop_omp} The OMP algorithm was proposed for recovering sparse signals exactly, and perfect recovery is guaranteed within $d$ steps [@Pati_ACSSC_93_OMP].
However, in the realm of pattern recognition, we argue that many applications do not require perfect recovery. For example, in classification problems, test accuracy is of interest and exact recovery does not necessarily translate into high classification accuracy. On the contrary, an appropriate recovery error may even improve recognition accuracy [@Wright_PAMI_09_Face]. We introduce a residual-based stopping criterion into OMP by modifying the problem as $$\begin{split} \min_{\x} ~ \|\x\|_{0}, \; \sst ~ \|A\x & - \y\|_{2} \leq \varepsilon. \end{split} \label{equ:omp_opt_residual}$$ Moreover, the OMP procedure can be accelerated remarkably if the above stopping criterion is enforced. To understand this, let us assume that OMP follows the MP algorithm [@Mallat_TSP_93_MP] with respect to the convergence rate[^5], i.e., $$r_t = \frac{K}{\sqrt{t}}, \; t < n, \label{equ:mp_convergence}$$ where $K$ is a positive constant and $r_t = \|A\x^{(t)} -\y \|_2$ is the recovery residual after $t$ steps. If we relax the stopping criterion $\varepsilon$ by a factor of $10$, $$\varepsilon' = 10\varepsilon, \label{equ:change_stop_residual}$$ then the required number of steps $t_{\rm stop}$ is reduced to $$\begin{split} t_{\rm stop}' = & ~K^{2} / {\varepsilon'}^{2} \\ = & ~10^{-2}K^{2} / \varepsilon^{2} \\ = & ~10^{-2}t_{\rm stop}. \end{split} \label{equ:change_stop_step}$$ Considering that the complexity of OMP is at least proportional to $t$, the algorithm can theoretically be accelerated $100$ times. Figure \[fig:time\_iter\] shows the empirical influence of the termination criterion upon the number of iterations and the running time. In our algorithm, we empirically set the stopping threshold $\varepsilon = 0.01$, which strikes a balance between speed and accuracy. ![ Running time and iteration count of the OMP procedure for different residual thresholds.
The result is produced from a Matlab-based experiment on the video “Cubicle”, with a feature dimension of $50$. Both the running time and the iteration count are averaged over all frames and particles. []{data-label="fig:time_iter"}](Running_Time_Iter_Num_OMP_Error){width="45.00000%"} ### Tracking with a large number of templates {#susubbsec:inf_rcst} One noticeable advantage of the SRC-based tracker is the exploitation of multiple templates obtained from different frames. However, for the $\ell_1$ tracker, the number of templates $n$ must be strictly limited because it determines the dimensionality of the optimization variable $\x$. To design a good $\ell_1$ tracker, a trade-off between $n$ and the optimization speed is always required. Fortunately, this dilemma does not exist when the tracker is equipped with OMP and a carefully selected sparsity $\eta$. The computational burden of OMP consists of two steps: selecting the most correlated column of the matrix $A \in \mathbb{R}^{d \times n}$, and solving the least-squares fitting. In step $t ~(t < d)$, the complexity of the first part is $O(dn)$ and that of the least-squares fitting is $O(d^3 + td^2 + td)$. Accordingly, the running time of OMP is dominated by solving the least-squares problem, which is independent of the number of templates $n$. In other words, [*within a certain number of iterations, the number of templates does not affect the overall running time significantly*]{}. This is an important and desirable property in the sense that we may be able to employ a large number of templates. Admittedly, a larger $n$ might lead to more iterations. However, if we impose a maximum sparsity $\eta$, the OMP procedure lasts at most $\eta$ steps in the worst case. From this perspective, a preset $\eta \ll n$ eliminates the influence of a large $n$ upon the number of iterations.
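The customized OMP with a residual-based stopping criterion and a sparsity cap $\eta$ can be sketched as follows; this is a minimal numpy version, not the paper's exact Algorithm. The `nonneg` flag mirrors the paper's heuristic of selecting atoms by the raw inner product (instead of its absolute value) to encourage nonnegative coefficients.

```python
import numpy as np

def omp(A, y, eps=0.01, eta=15, nonneg=False):
    """Greedy recovery of min ||x||_0 s.t. ||Ax - y||_2 <= eps,
    stopping after at most eta atoms (early stop + sparsity cap)."""
    d, n = A.shape
    x = np.zeros(n)
    support, r = [], y.copy()
    while len(support) < eta and np.linalg.norm(r) > eps:
        corr = A.T @ r
        k = int(np.argmax(corr if nonneg else np.abs(corr)))
        if k in support:          # no new atom improves the fit
            break
        support.append(k)
        # least-squares refit over the current support (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = y - A @ x
    return x

# usage: recover a 3-sparse vector from a 30x50 normalized Gaussian dictionary
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 50))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x_true, eps=1e-6, eta=15)
```

Note how the per-iteration cost splits exactly as the text says: the `A.T @ r` selection is $O(dn)$, while the least-squares refit does not depend on $n$ at all.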
Figure \[fig:time\] depicts how the running time changes as $n$ increases, given $d \in \{50, 75\}$ and $\eta = 15$. As can be seen, the elapsed time is only doubled when $n$ is increased by a factor of $10^2$. ![ Running time of OMP for various numbers of target templates. The experiment is carried out on the video sequence “Cubicle” with reduced dimensions $50$ and $75$. The recorded running time is the average time consumption of one OMP procedure, which calculates the observation likelihood for one particle. Note that the $x$-axis indicates only the number of target templates; trivial templates are not counted. The sparsity is $\eta = 15$. []{data-label="fig:time"}](Running_Time_OMP_Numerous_Templates){width="45.00000%"} Inspired by this valuable finding, we aggressively set the number of target templates to $100$, which is $10$ times larger than that in Mei and Ling’s paper [@Xue_ICCV_09_Track]. We harness this large set of target templates to accommodate variations in illumination, gesture and occlusion, and consequently improve the tracking accuracy. As regards the sparsity, we set $\eta = 0.5\cdot d$ for RTCST and $\eta = 15$ for RTCST-B, which is introduced in Section \[sec:rtcstb\]. We believe these numbers are sufficiently large for the representations. We summarize all the adjustments to OMP in Algorithm \[alg:customized\_omp\]. Note that here we use the inner product rather than its absolute value to assess the correlation. This heuristic makes the recovered coefficient vector approximately satisfy $\x \succeq 0$. For RTCST-B, introduced in the next section, the absolute value of the inner product is re-employed due to the absence of the nonnegativity constraint. \[alg:customized\_omp\] Minor modifications {#subsec:minor_modifications} ------------------- Besides the dimension reduction methods and OMP, further modifications to the original $\ell_1$ tracker are proposed in this section to achieve an even higher tracking accuracy.
### Update templates according to sparsity concentration index {#susubbsec:sci} In the $\ell_1$ tracker, the template set is updated when a certain similarity threshold is reached, i.e., $${\rm sim}(\y, \mathbf{a}_i) < \tau, \label{equ:cst_update_sim}$$ where $i = \argmax(x_i)$ and $ {\rm sim}(\y, \mathbf{a})$ is a function evaluating the similarity between the vectors $\y$ and $\a$; it can be the angle between the two vectors or the SSD between them. However, Wright et al. [@Wright_PAMI_09_Face] proposed a better approach to validate the representation. This approach, which utilizes the recovered $\x$ itself rather than the similarity, is termed the *Sparsity Concentration Index* (SCI). In the context of RTCST, the number of classes is $1$ if the noise is not viewed as a class, so we obtain a simplified SCI measurement for the target class, which reads $$\text{SCI}_{t}(\x) = \|\x_t\|_1 / \|\x\|_1 \in [0, 1], \label{equ:rtcst_sci}$$ where $\x_t = \x(1 : N_t)$. In the presented RTCST algorithm, $\text{SCI}_{t}$ is employed instead of the similarity test above. ### Abandoning the template weight {#susubbsec:abandon_weight} The original $\ell_1$ tracker enforces a template re-weighting scheme to distinguish templates by their importance [@Xue_ICCV_09_Track]. Nonetheless, under their scheme the weight of each target template is always smaller than that of the noise templates (see Algorithm \[alg:l1\_tracker\]), which does not make much sense. In fact, it may be intractable to design an ideal template re-weighting scheme that works in all circumstances, and a poorly designed re-weighting scheme can even deteriorate the tracking performance. We abandon the template weights because the importance of templates can easily be captured by the compressed sensing procedure. Without template weights, the tracker becomes simpler and less heuristic. The empirical results also show better tracking accuracy when the template weights are abandoned.
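The simplified SCI test above is a one-liner over the recovered coefficients; the threshold used below is illustrative, not a value from the paper.

```python
import numpy as np

def sci_target(x, n_t):
    """SCI_t(x) = ||x_t||_1 / ||x||_1, where x_t is the first N_t entries."""
    return np.abs(x[:n_t]).sum() / np.abs(x).sum()

# coefficients concentrated on the N_t = 2 target templates
x = np.array([0.6, 0.3, 0.05, 0.05])
score = sci_target(x, 2)          # 0.9: well represented by the targets
update_templates = score > 0.8    # illustrative threshold
```

A score near 1 means the observation is almost fully explained by the target templates; a score near 0 means the noise (or background) templates dominate, which is exactly the signal the update rule needs.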
### MAP and MSE {#susubbsec:mse} In Mei and Ling’s framework [@Xue_ICCV_09_Track], the new state ${\s}_k$ corresponds to the particle with the largest observation likelihood. This method is known as Maximum a Posteriori (MAP) estimation. It is also known that, within the particle filtering framework, Mean Square Error (MSE) estimation is usually more stable than MAP. As a result, we adopt MSE in our real-time tracker, namely, $${\s}_k = \frac{\sum^{N_s}_{i=1}({\s}_k^i \cdot l_i)}{\sum^{N_s}_{i=1}l_i}, \label{equ:mse}$$ where ${\s}_k^i$ is the $i$th particle at time $k$ and $l_i$ is the corresponding observation likelihood. The Algorithm {#subsec:rcst} ------------- In a nutshell, for each observation, we utilize Algorithm \[alg:customized\_omp\] to recover the coefficient vector $\x$ by solving the problem $$\min_{\x}\|\x \|_{0},~\sst\; \|\Phi A\x - \Phi \y\|_{2} \leq \varepsilon,~ \x \succeq 0 \label{equ:rtcst_opt}$$ where $\x = [\x_t, ~\x_e]$, $A = [T,~E]$. The residual is then obtained by $$r = \| \Phi \y - \Phi T\x_t\|_2. \label{equ:rtcst_residual}$$ Finally the likelihood of this observation is computed as $$l = \exp(-\lambda\cdot r),~\lambda > 0. \label{equ:rtcst_lklhd}$$ The procedure of the Real-Time Compressed Sensing Tracking algorithm is summarized in Algorithm \[alg:rtcst\_tracker\], and our template-update scheme in Algorithm \[alg:rtcst\_temp\_up\]. As can be seen, the proposed update scheme is much more concise than that of the $\ell_1$ tracker [@Xue_ICCV_09_Track], thanks to the abandonment of the template weights. The empirical performance of RTCST is verified in Section \[sec:exp\]. \[alg:rtcst\_tracker\] \[alg:rtcst\_temp\_up\] RTCST-B: More Robust and Efficient RTCST with background model {#sec:rtcstb} ============================================================== To some extent, visual tracking can be viewed as an object detection task with prior information.
Similar to object detection, which is sometimes treated as a classification problem, visual tracking also distinguishes the foreground (target) from the background. In detection applications, the background class is usually considered to have no distinct features, because it can follow any pattern. Quite the contrary, in visual tracking the background is much more limited with respect to appearance variation. In particular, for a stationary camera the background is nearly fixed. Under these assumptions, it is worthwhile to exploit the background information for tracking, and appropriate incorporation of a background model indeed improves tracking performance [@Stauffer_CVPR_99_Back; @Isard_ICCV_01_Blob; @Zhao_PAMI_08_Segmentation; @Shen_CSVT_10_Gener]. We hereby propose a novel CS-based background model (CSBM) to facilitate the tracking algorithm. The definition of the CS-based background model is quite simple. Suppose that $\Gamma_i \in \mathbb{R}^{h \times w},\;i = 1, \cdots, N_b$ is the $i$th frame in which the foreground is absent, and $h$ and $w$ are the height and width of the frame respectively; we define the background model as $$\mathbb{G} = \{\Gamma_1, \dots, \Gamma_{N_b}\} \label{equ:def_cbm}$$ or, in short, the collection of $N_b$ backgrounds. The background templates are then generated from the CSBM to cooperate with the target templates in our new tracker. Please note that our algorithm is unrelated to the background subtraction method of [@Volkan_ECCV_08_Background]. In that paper, foreground silhouettes are recovered via a CS procedure, but the background subtraction itself is still performed in the conventional way. Our CSBM and RTCST-B are entirely different from their method, in both essence and appearance. The details of the CSBM and its incorporation into RTCST are introduced below.
Building the Optimal CSBM {#subsec:cs_back} ------------------------- A good CSBM should comprise only “pure” backgrounds and contain sufficiently large appearance variation, e.g., illumination changes. Ideally, we could simply select a certain number of foreground-absent frames from the video sequence to build a CSBM. However, “pure” backgrounds are usually difficult to find, and it is even harder to obtain ones that cover the main distribution of background appearance. An intuitive way to obtain a clean background is to replace the foreground of one frame with a background patch cropped from another frame. More precisely, let $F \in \mathbb{R}^{h \times w}$ denote the frame from which the background is retrieved, and $F'\in \mathbb{R}^{h \times w}$ stand for the frame from which the background patch is cropped. Supposing that the foreground region in $F$ is $F(t : b, l : r)$[^6], the patching operation can be described as $$\Gamma_{i, j} = \left\{{\begin{array}{cc} F'_{i, j}, & t \leq i \leq b ~\& ~l \leq j \leq r\\ F_{i, j}, & \text{otherwise}. \\ \end{array} }\right. \label{equ:patch}$$ where $\Gamma$ is the retrieved background. An illustration of this operation is also available in Figure \[fig:patch\]. In practice, multiple foreground regions need to be mended for each “impure” background candidate. Furthermore, a selection approach should be conducted to form the optimal combination of the retrieved backgrounds over all the candidates. To achieve this goal, we first randomly capture $N' > N_b$ frames from the concerned video sequence. Afterwards, every foreground region of these frames is located manually. The foreground is then replaced by a clean background region cropped from the nearest frame (in terms of frame index). Finally, a $k$-median clustering algorithm is carried out to select the $N_b$ most representative backgrounds.
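The patching operation above simply copies a rectangular region from $F'$ into $F$. A minimal NumPy sketch (the function name and the inclusive-boundary convention are our own assumptions, not from the paper):

```python
import numpy as np

def patch_background(F, F_prime, t, b, l, r):
    """Replace the foreground region F[t:b, l:r] with the
    corresponding region of F_prime, mirroring the patching
    operation; the boundaries are treated as inclusive."""
    gamma = F.copy()                       # keep the source frame intact
    gamma[t:b + 1, l:r + 1] = F_prime[t:b + 1, l:r + 1]
    return gamma
```

In the full pipeline this would be applied to every manually located foreground region of each candidate frame before the $k$-median selection step.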
It is worth noting that even if some backgrounds are not perfectly retrieved, e.g., with minor foreground remnants, the CSBM can still work well, considering that CS is robust to noise in the measurements [@Candes_CPAM_05_Stable]. Equipping RTCST with CSBM {#subsec:equip_back} ------------------------ We equip RTCST with the CSBM to build a novel visual tracker, *a.k.a. the Real-Time CS-based Tracker with Background Model* (RTCST-B). In RTCST-B, the original noise templates are replaced by *background templates* which are generated from the CSBM. In the context of PF tracking, given an observation position $\Xi$ with $d_0$ pixels and a CSBM $\mathbb{G}$ defined in \[equ:def\_cbm\], the background template set $B$ is obtained by: $$\begin{split} B & = [I_1~I_2~\dots~I_{N_b}] \in \mathbb{R}^{d_0 \times N_b} \\ I_i & = \text{CV}(\Gamma_i,~ \Xi) ~\forall i = 1, \dots, N_b \end{split} \label{equ:def_back_temp}$$ where the function $\text{CV}(\cdot)$ is the *crop-vectorize* operation, which first crops the region indicated by $\Xi$ from background $\Gamma_i$ and then vectorizes it into $I_i \in \mathbb{R}^{d_0}$. Eventually, the optimization problem for RTCST-B reads: $$\min_{\x}\|\x\|_{0} ~ \sst \; \|\Phi A\x - \Phi \y\|_{2} \leq \varepsilon, \label{equ:rtcst_b_opt}$$ where $\x$ is comprised of $\x_t$ and $\x_b$, i.e., the coefficient vectors for target and background, and $A=[T,~B] \in \mathbb{R}^{d \times (N_t + N_b)}$. Although the optimization problem differs, the likelihood is still calculated as in \[equ:rtcst\_lklhd\].
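The crop-vectorize operation and the template construction above, together with the unchanged likelihood, can be sketched as follows (function names and the `(t, b, l, r)` region convention are our own assumptions):

```python
import numpy as np

def crop_vectorize(gamma, xi):
    """CV operation: crop the region xi = (t, b, l, r) from a
    background frame gamma and vectorize it (inclusive bounds)."""
    t, b, l, r = xi
    return gamma[t:b + 1, l:r + 1].ravel()

def background_templates(csbm, xi):
    """Stack one cropped-and-vectorized column per background,
    giving the background template set B."""
    return np.column_stack([crop_vectorize(g, xi) for g in csbm])

def likelihood(Phi, y, T, x_t, lam=1.0):
    """Unchanged likelihood: l = exp(-lam * ||Phi y - Phi T x_t||_2)."""
    r = np.linalg.norm(Phi @ y - Phi @ (T @ x_t))
    return np.exp(-lam * r)
```

Note that only the target part $\x_t$ of the recovered coefficients enters the residual, so replacing the noise templates by background templates leaves the likelihood computation untouched.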
To understand this, let $\x_t$ and $\x_b$ denote the coefficients associated with the target templates and background templates respectively, and let $p(\y_k | \s) = p(\y_k | \x_t) = \exp(-\lambda r)$ be the observation likelihood[^7], where $r$ is defined in \[equ:rtcst\_residual\]. Then we have: $$p(\y_k | \x_t, \x_b) = p(\y_k | \x_t) = \exp(-\lambda r) \label{equ:proof_information}$$ under the assumption that $\x_t$ and $\x_b$ determine each other, i.e., $$p(\x_b, \x_t) = p(\x_t) = p(\x_b) \label{equ:recovery_approx}$$ or, in other words, that the solution of the CS procedure is unique [@Candes_CPAM_05_Stable]. In addition, the template update scheme should be changed slightly since a new class is involved. More precisely, target templates are updated only when $$\text{SCI}_{tb}(\x) = \frac{\max\{\|\x_t\|_1, \|\x_b\|_1\}}{\|\x\|_1} \leq \tau. \label{equ:new_sci}$$ Finally, the positive constraint on $\x$ is removed in \[equ:rtcst\_b\_opt\] because background subtraction implies negative coefficients for the background templates; it is therefore reasonable not to constrain the sign of the coefficients in RTCST-B. In summary, one just needs to impose the following minor modifications on RTCST to turn it into RTCST-B. 1. Substitute the background templates for the noise templates. 2. Eliminate the positive constraint. 3. Conduct the CV operation for each observation. 4. Utilize the new SCI measurement. Apparently, the difference between RTCST and RTCST-B is not significant with respect to formulation. Nevertheless, this seemingly small change makes RTCST-B far superior to its prototype. Superiority Analysis {#subsubsec:rtcst_b_advantages} -------------------- Compared with the $\ell_1$ tracker and RTCST, RTCST-B enjoys three main advantages, which are described as follows. ### More Sparse {#subsubsec:more_strict} An underlying assumption behind the $\ell_1$ tracker and RTCST is that the background can be sparsely represented by the noise templates in $E$. This holds when the foreground dominates the observed rectangle.
More quantitatively, letting $\eta_t$ be the sparsity of the target coefficient vector $\x_t$, when $$\eta_t + \|\x_e\|_0 \leq d / 3$$ the representation based on the solution $\x$ of \[equ:rtcst\_opt\] is guaranteed to be reliable [@Wright_PAMI_09_Face]. Nonetheless, the sparse representation is no longer valid when the background covers the main part of the observation. Predictably, the incorrect representation will degrade tracking accuracy. On the other hand, after the noise templates are replaced by background templates, the aforementioned assumption usually remains true. Figure \[fig:rtcst\_b\_sparse\] gives an explicit demonstration of the sparsity of the solutions. ### More Efficient {#subsubsec:more_efficient} Compared with existing background models, the computational burden of CSBM is trivial. First of all, there is no need to conduct background subtraction or foreground connection in RTCST-B, because these two functions are implicitly integrated within the CS procedure. Secondly, if the CSBM is generated properly, i.e., it covers the main distribution of the background’s appearance, updating the model becomes unnecessary. Thirdly, the sufficient number of background templates is much smaller than that of noise templates, i.e., $$N_b \ll N_n = 2d$$ where $N_n$ is the number of noise templates. The reduction in the number of templates immediately speeds up the optimization process. Lastly, and most importantly, the required sparsity $\eta$ for RTCST-B is much smaller than that for RTCST (see Section \[susubbsec:inf\_rcst\]). This lets the OMP procedure terminate earlier in RTCST-B and hence makes it faster. In conclusion, the introduction of CSBM does not impose any further computational burden on the algorithm; quite the opposite, the tracking procedure is accelerated to some extent. ### More Robust {#subsubsec:more_robust} In RTCST and the $\ell_1$ tracker, one tries to use the noise templates $E = [I~-I]$ to represent the background.
However, the columns of $I$, which are called *standard basis vectors*, do not favor background images over targets. This characteristic makes RTCST and the $\ell_1$ tracker powerless to recognize the background and, consequently, decreases tracking accuracy. Differing from its prototypes, RTCST-B harnesses the discriminative nature of CS-based pattern recognition. Both the foreground (target) and the background are treated as typical classes with distinct features. In RTCST-B, target templates compete against background templates, which are as expressive as their competitors, to “attract” the observation. Intuitively, the more discriminative templates make RTCST-B more robust. Moreover, once the tracked region drifts away, background information will be brought into the target templates via the template update (which is almost unavoidable). In this situation, for RTCST and the $\ell_1$ tracker, some target templates could be more similar to the background than all the noise templates. This leads to serious classification ambiguity and, therefore, poor tracking performance. On the contrary, RTCST-B can draw the target back to the correct position thanks to its capacity to recognize the background. In plain words, RTCST-B always tends to locate the target in the region which does not *look like* the background. Empirical evidence for the robustness of RTCST-B is shown in Figure \[fig:rtcst\_b\_robust\]. Experiment {#sec:exp} ========== Experiment Setting ------------------ To verify the proposed tracking algorithms, we design a series of experiments examining them in terms of accuracy, efficiency and robustness. The proposed algorithms are evaluated on $10$ video sequences in comparison with the $\ell_1$ tracker, the Kernel-Mean-Shift (KMS) tracker [@Comaniciu_PAMI_03_Kernel] and a color-based PF tracker [@Sanjeev_TSP_02_PF]. The details of the selected video sequences are listed in Table \[tab:video\_sequence\].
Note that we only run the $\ell_1$ tracker on $5$ videos, namely *cubicle*, *dp*, *car11*, *pets2001\_c1* and *pets2004-2\_p1*. This is because, for the other videos, the convex optimization problem is too slow to solve (above $5$ minutes per frame). \[tab:video\_sequence\] There are two alternative dimension-reduction schemes for RTCST and RTCST-B, namely random projection and hash matrix projection. In our experiments, both of them are performed with reduced dimensions of $25$, $50$ and $100$. As regards the number of particles, we examine the proposed trackers with $100$ and $200$ particles, while the PF tracker is run with $100$, $200$ and $500$ particles. All the PF-based trackers are run $20$ times, except the $\ell_1$ tracker, which is run only $3$ times. The KMS tracker is run only once, considering that it is a deterministic method. The average values and standard errors are reported in this section. The KMS tracker, PF tracker and $\ell_1$ tracker are implemented in C++ while our CS-based trackers are implemented in Matlab. To compare efficiency with the proposed algorithms, a Matlab version of the $\ell_1$ tracker is also used. All the algorithms are run on a PC with a $2.6$ GHz quad-core CPU and $4$ GB of memory (we only use one core of it). As to the software, we use Matlab $2009a$, and the linear programming solver is called from Mosek $6.0$ [@Mosek]. It is important to emphasize that in our experiments, *no trick is used for selecting the target region in the first frame*. The initial target region is always the minimum rectangle $R = [l, r, t, b]$ which can cover the whole target[^8], where $l$, $r$, $t$, and $b$ are the left, right, top and bottom boundaries’ coordinates (horizontal or vertical) respectively. This rigid rule is followed to eliminate artificial factors in visual tracking and to make the comparison unprejudiced.
TSP — A New Metric of Tracking Robustness ----------------------------------------- A conventional way to verify tracking accuracy is the *tracking error*. Specifically, given that the centroid of the ground truth region is $\c_g$ while that of the tracked region is $\c_t$, the tracking error $\rho$ is defined as $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \rho = \|\c_g - \c_t\|_2, \label{equ:pos_error}$$ i.e., the Euclidean distance between the two centroids. However, if we take scale variation into consideration, $\rho$ is a poor measure of a tracker’s performance. See Figure \[subfig:dis\] for an example. In the image, the red rectangle indicates the ground truth for a moving car. The blue and gray rectangles, which are obtained by different tracking algorithms, share an identical centroid. Using the tracking error, the same performance is reported for both trackers despite the obvious difference in tracking accuracy. Inspired by the evaluation protocol proposed for the PASCAL database [@Everingham_PASCAL_07_Detection], we propose a new tracking accuracy measurement which is termed the *Tracking Success Probability* (TSP). To define TSP, first suppose the bounding box of the ground truth region is $R_g = [l_g, r_g, t_g, b_g]$, and the one for the tracked region is $R_t = [l_t, r_t, t_t, b_t]$. We then design a function $a(R_g, R_t) \in [-1, 1]$ to estimate the overlapping state between $R_g$ and $R_t$. Given two distance sets: $$\begin{split} \mathbb{H} & = \{r_t - l_g, r_g - l_t, r_g - l_g, r_t - l_t\} \\ \mathbb{V} & = \{b_t - t_g, b_g - t_t, b_g - t_g, b_t - t_t\} \\ \end{split}$$ and an indicator function $s_{tg}$ $$s_{tg}:= \left\{{\begin{array}{*{20}c} -1,\;\;&R_g \text{ and } R_t \text{ are separate} \\ 1,\;\;&\text{otherwise}. \\ \end{array} } \right.
\label{equ:s}$$ then $a(R_g, R_t)$ is defined as[^9] $$a(R_g, R_t) = s_{tg} \cdot \left|\frac{\min(\mathbb{H})\cdot \min(\mathbb{V})}{\max(\mathbb{H})\cdot \max(\mathbb{V})}\right|.$$ It is easy to see that when the two regions overlap each other, $a(R_g, R_t)$ is the ratio of the intersection area $R_{g\cap t}$ to the area of $R^*$, which is the minimum region covering both $R_g$ and $R_t$. See Figure \[subfig:tsl\] for an instance. Finally, TSP is formulated as $$\text{TSP}(R_g, R_t) = \frac{\exp(\nu\cdot a(R_g, R_t))}{1 + \exp(\nu\cdot a(R_g, R_t))} \in [0, 1], \label{equ:def_tsl}$$ where $\nu > 0$ is a preset parameter reflecting the worst scenario in which we can still assure that the target is located correctly. In our experiments, $\nu$ is the solution of $$\frac{\exp(0.25 \nu)}{1 + \exp(0.25\nu)} = 0.95 \Longrightarrow \nu = 11.8. \label{equ:def_nu}$$ In other words, when the overlapped region is larger than $25\%$ of region $R^*$, we are convinced (with a probability of $0.95$) that the tracking is successful. Obviously, the larger the TSP, the more confident we are that the tracking is successful. If we apply TSP to the tracking results shown in Figure \[subfig:dis\], the TSP of the blue rectangle is $0.95$, which is significantly larger than that of the gray one (with a TSP of $0.55$). The difference implies that TSP is capable of accommodating factors besides displacement, such as scale variation. Another merit of TSP is its comparability across different video sequences thanks to its fixed value range, i.e., $[0, 1]$. Considering these advantages, in the current paper all the empirical results are evaluated by TSP. As a reference, tracking error results are also provided. Tracking Accuracy {#subsec:accuracy} ----------------- Firstly, we examine the tracking accuracy of our trackers in comparison with their competitors. The average TSP for every experiment is shown in Table \[tab:accuracy\]. For each video sequence, the best accuracy is displayed in bold type.
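The tracking error and TSP defined above can be sketched as follows (a minimal NumPy sketch assuming boxes are given as $(l, r, t, b)$ tuples; the explicit interval test implementing the indicator $s_{tg}$ is our own):

```python
import numpy as np

def tracking_error(c_g, c_t):
    """rho: Euclidean distance between the two centroids."""
    return float(np.linalg.norm(np.asarray(c_g, dtype=float)
                                - np.asarray(c_t, dtype=float)))

def tsp(Rg, Rt, nu=11.8):
    """Tracking Success Probability for boxes R = (l, r, t, b)."""
    lg, rg, tg, bg = Rg
    lt, rt, tt, bt = Rt
    H = [rt - lg, rg - lt, rg - lg, rt - lt]
    V = [bt - tg, bg - tt, bg - tg, bt - tt]
    # s_tg = -1 when the boxes are separate, +1 otherwise
    separate = rt < lg or rg < lt or bt < tg or bg < tt
    s = -1.0 if separate else 1.0
    a = s * abs(min(H) * min(V) / (max(H) * max(V)))
    return float(np.exp(nu * a) / (1.0 + np.exp(nu * a)))
```

For two identical boxes $a = 1$ and TSP is close to $1$; for disjoint boxes $a < 0$ and TSP drops towards $0$.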
\[tab:accuracy\] As illustrated in Table \[tab:accuracy\], all the tracking approaches achieve similar performance on the sequences with a simple background and stable illumination (*dp* and *cubicle*). For the video sequence *fish*, the traditional methods show a higher capacity for accommodating extreme illumination variation. On the other hand, for the outdoor-scene and complex-background tasks, i.e., the other $7$ sequences, the CS-based trackers consistently outperform the PF tracker and the KMS tracker. All the best performances on these video sequences are observed with RTCST and RTCST-B. Considering that the target can be viewed as missed when the TSP is below $30\%$, the traditional trackers fail on the majority of these video sequences, e.g., the KMS tracker on *car4*, *pets2001\_c1*, *pets2002\_p1* and *pets2004-2\_p1*; the PF tracker on *pets2002\_p1* and *pets2004\_p1*. Moreover, the $\ell_1$ tracker also fails on *pets2004\_p1* and *pets2004-2\_p1* due to the unstable target appearances. Our methods, on the contrary, do much better than the competitors and handle some intractable sequences (e.g., *pets2004\_p1* and *pets2004-2\_p1*) very smoothly (with TSP $> 65\%$). In particular, for the fixed-camera scenes, RTCST-B is applied and always achieves the highest accuracy. The superiority of RTCST-B over all the other trackers confirms our assumption that higher accuracy is achieved when tracking is treated as a binary classification problem. Besides the TSP values, video frames with the tracked regions are shown in Figure \[fig:track\_frames\], while the tracking errors over the frame index are plotted in Figure \[fig:track\_error\]. In Figure \[fig:track\_frames\], only the best result (the one with the highest average TSP value) is shown for each tracker. The explicit tracking results support the statistics in Table \[tab:accuracy\].
RTCST beats the KMS tracker and the PF tracker on *cubicle*, *car4*, *pets2000\_c1* and *pets2002\_p1*, and obtains performance similar to its competitors on *dp*. Facilitated by the CSBM, RTCST-B always achieves the highest accuracy whenever it is applied. On the contrary, the traditional trackers fail in some complex scenarios, e.g., the PF tracker on *car4* and *pets2002\_p1*, and the KMS tracker on *car4* and *pets2002\_p1*. From the error curves shown in Figure \[fig:track\_error\], we can see that our methods beat the other visual tracking algorithms on most video sequences except *dp* and *fish*. Given that all the trackers perform similarly on *dp*, and that the video *fish* contains deliberately added extreme illumination variation, RTCST and RTCST-B can be considered better than their competitors in terms of accuracy. To evaluate the new measurement, the TSP curves for *cubicle* and *pets2002\_p1* are also available in Figure \[subfig:tsl\_cubicle\] and Figure \[subfig:tsl\_pets2002\]. We can see that the TSP value and the tracking error change in opposite directions, as expected. However, based on TSP, we can verify the capacity of a single tracker without any “reference tracker”, which is hard to achieve based on the tracking error. Tracking Efficiency {#subsec:efficiency} ------------------- Efficiency plays a vital role in real-time visual tracking applications. We record the elapsed time of each tracker in our experiments. The time consumption (in ms) for processing one frame is reported for each tracking algorithm in Table \[tab:run\_time\]. In the table, huge differences in tracking speed are observed. The KMS tracker shows the highest efficiency, with a worst-case running time of $83$ *ms per frame* ($83~mspf$). On the contrary, the $\ell_1$ tracker (both the C-based and Matlab-based versions) consistently takes more than $14000~mspf$ due to its high computational complexity.
Equipped with OMP and the dimension-reduction schemes, RTCST and RTCST-B are able to accelerate the original CS-based tracker by $117.3$ (*dp*) to $6271.2$ (*pets2004\_p1*) times. The speed range for RTCST is $54\sim 968~mspf$ while that for RTCST-B is $85\sim 534~mspf$. The PF tracker shows unstable efficiency across the tests: its running time varies from $37$ to $1727~mspf$ in the experiment with $500$ particles. Supposing that the speed threshold for real-time applications is $100~mspf$, most of the traditional methods and some of our methods qualify. The $\ell_1$ tracker cannot be viewed as “real-time” from any perspective. Moreover, since RTCST and RTCST-B are implemented in Matlab on a single core, their running speeds could be increased remarkably by employing C/C++ and multiple cores. Indeed, the C/C++ counterpart of the Matlab-based $\ell_1$ tracker is already $3.7$ (*pets2004\_p1*) to $8.4$ (*cubicle*) times faster, even though only one core is used. If we conservatively predict a $10$-fold speedup, both RTCST and RTCST-B will qualify for real-time application in all circumstances. \[tab:run\_time\] Tracking Robustness {#subsec:robustness} ------------------- As mentioned before, no trick is played to select the initial target region. The first region $R$ should always be the minimum bounding box that covers the whole target. Nonetheless, the bounding box can only be obtained manually, and hence approximately. In practice, selection error is unavoidable. If a visual tracker is not robust enough, a minor selection error will lead to a massive deviation in tracking performance. We design a new experiment to test the robustness of the tracking algorithms.
In every repetition of the experiment, a fluctuation vector $\boldsymbol{\delta} = [\delta_l, \delta_t, \delta_s]$ is generated randomly as $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \delta_l \sim \mathcal{N}(0,~\omega), ~ \delta_t \sim \mathcal{N}(0,~\omega), ~ \delta_s \sim \mathcal{N}(0,~\frac{\omega}{25})$$ where $\omega$ is a preset standard deviation with a small value. The original bounding box $R = [l, r, t, b]$ is then perturbed by $\boldsymbol{\delta}$ to obtain a fluctuated rectangular region $R^*$ as $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} R^* = [l^*, r^*, t^*, b^*] \label{equ:new_box}$$ where $l^*$, $r^*$, $t^*$ and $b^*$ are the new coordinates, which are defined as $$\setlength{\abovedisplayskip}{0.1cm} \setlength{\belowdisplayskip}{0.1cm} \begin{split} l^* & = l + \delta_l, \; \; t^* = t + \delta_t, \\ r^* & = (1 + \delta_s)\cdot (r - l) + l + \delta_l, \\ b^* & = (1 + \delta_s)\cdot (b - t) + t + \delta_t. \end{split}$$ The tracking is then conducted based on $R^*$. This procedure is repeated $100$ times for each tracker. Afterwards, the mean $\overline{T}$ and standard deviation $T_{std}$ of the TSP values are calculated for each frame. Finally, we plot the *TSP band*, which is a band changing along with the frame index and covering the range $[\overline{T} - T_{std}, \overline{T} + T_{std}]$, for every visual tracker. The new experiment is carried out on the video sequence *pets2000\_c1* and the *TSP bands* are demonstrated in Figure \[fig:robustness\]. ![Robustness verification for visual trackers. The semi-transparent patches stand for the TSP bands of the trackers. Note that here RTCST and RTCST-B are performed with D-$100$ features generated via random projection and $200$ particles; the PF tracker uses $500$ particles. []{data-label="fig:robustness"}](pets2000_c1_fluct_compare){width="40.00000%"} An ideal *TSP band* should have a small variance and be centered around a relatively high mean.
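The bounding-box perturbation used in this robustness test can be sketched as follows (the function name and the use of NumPy's random generator are our own assumptions; the formulas follow the definitions above):

```python
import numpy as np

def fluctuate_box(R, omega, rng=None):
    """Randomly perturb a box R = (l, r, t, b): translation noise
    with standard deviation omega, scale noise with omega / 25."""
    rng = np.random.default_rng() if rng is None else rng
    l, r, t, b = R
    dl = rng.normal(0.0, omega)
    dt = rng.normal(0.0, omega)
    ds = rng.normal(0.0, omega / 25.0)
    l_new = l + dl
    t_new = t + dt
    r_new = (1 + ds) * (r - l) + l + dl
    b_new = (1 + ds) * (b - t) + t + dt
    return (l_new, r_new, t_new, b_new)
```

With $\omega = 0$ the box is returned unchanged; small positive $\omega$ jitters the initial selection the way a human annotator would.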
We can see in Figure \[fig:robustness\] that RTCST and the KMS tracker show similar variances, but RTCST has a higher TSP mean. The PF tracker shows a smaller variance but suffers from very low accuracy. RTCST-B comes with the highest average TSP value while still achieving the smallest standard deviation. The experimental results exhibit the unstable nature of the KMS tracker with respect to the initial target position. They also confirm our conjecture that high robustness is obtained when background information is taken into consideration. Conclusion and Future Directions {#sec:conclusion} ================================ In this paper, two enhanced CS-based visual tracking algorithms, namely RTCST and RTCST-B, are proposed. A customized OMP algorithm is designed to facilitate the proposed tracking algorithms. Hash kernels and random projection are employed to reduce the feature dimension of the tracking application. In RTCST-B, a CS-based background model, termed CSBM, is utilized instead of the noise templates. The new trackers achieve significantly higher efficiency compared with their prototype, the $\ell_1$ tracker. The remarkable speedup, which is up to $6271$ times, qualifies CS-based visual trackers for real-time applications. Meanwhile, our methods also obtain higher accuracy than off-the-shelf tracking algorithms, e.g., the PF tracker and the KMS tracker. In particular, RTCST-B consistently achieves the highest accuracy and robustness thanks to the exploitation of background information. In short, the proposed RTCST and RTCST-B are sufficiently fast for real-time visual tracking and are more accurate and robust than conventional trackers. As for future work, one low-hanging fruit is to employ the trick mentioned by Tropp and Gilbert [@Tropp_TIT_07_OMP] to further accelerate the OMP procedure.
Another promising direction is to take color information into consideration, because in many scenarios color-based classification is more discriminative than intensity-based classification. A third direction of future research is to treat different parts of the target, e.g., the left-top quarter and the middle-bottom quarter, as different classes. As a result, a multi-class classification is conducted within the CS framework. The obtained likelihood for each particle then becomes a vector comprised of the confidences associated with the various target parts. Because the time consumption for binary and multi-class classification is the same when using the CS-based manner, we actually obtain more information at the same cost. If we can find a reasonable way to exploit the extra information for tracking, more accurate and robust results are likely to be obtained. [10]{} X. Mei and H. Ling, “Robust visual tracking using $\ell_1$ minimization,” in [*Proc. IEEE Int. Conf. Comp. Vis.*]{}, Kyoto, Japan, 2009, pp. 1436–1443. T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” , pp. 484–498, 1998. D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” , vol. 25, pp. 564–577, 2003. A. Yilmaz and M. Shah, “Contour-based object tracking with occlusion handling in video acquired using mobile cameras,” , vol. 26, pp. 1531–1536, 2004. S. Avidan, “Support vector tracking,” , pp. 184–191, 2001. D. Serby and L. V. Gool, “Probabilistic object tracking using multiple features,” in [*Proc. IEEE Int. Conf. Patt. Recogn.*]{}, 2004, pp. 184–187. C. Shen, J. Kim, and H. Wang, “Generalized kernel-based visual tracking,” , vol. 20, pp. 119–130, 2010. A. Doucet, S. Godsill, and C. Andrieu, “On sequential monte carlo sampling methods for bayesian filtering,” , vol. 10, no. 3, pp. 197–208, 2000. C. Shen, M. J. Brooks, and A. [van den Hengel]{}, “Fast global kernel density mode seeking: applications to localization and tracking,” , vol. 16, no. 5, pp. 1457–1469, 2007. M. L. Cascia, S.
Sclaroff, and V. Athitsos, “Fast, reliable head tracking under varying illumination: An approach based on registration of texture-mapped 3d models,” , vol. 22, pp. 322–336, 2000. K. Shafique and M. Shah, “A non-iterative greedy algorithm for multi-frame point correspondence,” in [*[IEEE]{} Trans. Pattern Anal. Mach. Intell.*]{}, 2003, pp. 51–65. O. Williams, A. Blake, and R. Cipolla, “Sparse bayesian learning for efficient visual tracking,” , vol. 27, pp. 1292–1304, 2005. M. S. Arulampalam, S. Maskell, and N. Gordon, “A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking,” , vol. 50, pp. 174–188, 2002. D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” , vol. 25, pp. 564–575, 2003. F. Porikli, O. Tuzel, and P. Meer, “Covariance tracking using model update based on lie algebra,” in [*Proc. IEEE Conf. Comp. Vis. Patt. Recogn.*]{}, 2006, vol. 1, pp. 728–735. S. Zhou, R. Chellappa, and B. Moghaddam, “Visual tracking and recognition using appearance-adaptive models in particle filters,” , vol. 13, pp. 1434–1456, 2004. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” , vol. 31, pp. 210–227, 2009. Y. Tsaig and D. L. Donoho, “Compressed sensing,” , vol. 52, pp. 1289–1306, 2006. E. Candès, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” , vol. 59, pp. 1207–1223, 2006. V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, and R. G. Baraniuk, “Compressive sensing for background subtraction,” in [*Proc. Eur. Conf. Comp. Vis.*]{}, 2008, pp. 155–168. Ali Cafer G., J. H. Mcclellan, J. Romberg, and W. R. Scott, “Compressive sensing of parameterized shapes in images,” in [*Proc. IEEE Int. Conf. Acoust., Speech., Signal Process.*]{}, 2008, pp. 1949–1952. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” , vol. 53, pp. 4655–4666, 2007. Q. Shi, J. Petterson, G. Dror, J. 
Langford, A. Smola, A. Strehl, and S. V. N. Vishwanathan, “Hash kernels,” in [*Proc. Int. Workshop Artificial Intell. & Statistics*]{}, 2009. S. Mallat and Z. Zhang, “Matching pursuit with time-frequency dictionaries,” , vol. 41, pp. 3397–3415, 1993. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition,” in [*Proceedings of the 27th Annual Asilomar Conference on Signals, Systems, and Computers*]{}, 1993, pp. 40–44. G. Davis, S. Mallat, and Z. Zhang, “Adaptive time-frequency decompositions with matching pursuits,” , vol. 33, 1994. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in [*Proc. IEEE Conf. Comp. Vis. Patt. Recogn.*]{}, 1999, vol. 2, pp. 246–252. M. Isard and J. Maccormick, “Bramble: A bayesian multiple-blob tracker,” in [*Proc. IEEE Int. Conf. Comp. Vis.*]{}, 2001, vol. 2, pp. 34–41. T. Zhao, R. Nevatia, and F. Lv, “Segmentation and tracking of multiple humans in complex situations,” , vol. 30, pp. 1198–1211, 2001. A.S. MOSEK, “[The MOSEK optimization software]{},” , 2010. M. Everingham, L. V. Gool, C.K.I. Williams, J. Winn, and A. Zisserman, “[The PASCAL visual object classes (VOC) challenge]{},” , vol. 88, no. 2, pp. 303–338, 2010. [^1]: H. Li and C. Shen are with NICTA, Canberra Research Laboratory, Canberra, ACT 2601, Australia, and also with the Australian National University, Canberra, ACT 0200, Australia (e-mail: {hanxi.li, chunhua.shen}@nicta.com.au). Q. Shi is with the University of Adelaide, Adelaide, SA 5000, Australia (e-mail: [email protected]). Correspondence should be addressed to C. Shen. [^2]: [^3]: [^4]: Here, however, we do not employ the trick that Tropp and Gilbert mentioned for the least-squares routine. As a result, the OMP’s complexity is higher than $O(dn)$ but still much lower than that of linear programming.
[^5]: Although the convergence rate of the MP algorithm is $O(1/\sqrt{t})$, the convergence rate of OMP remains unclear. [^6]: In this paper, every target or foreground is represented as a rectangular region. [^7]: It is trivial to prove that the relationship between a particle $\s$ and $\x_t$ is deterministic given a specific frame image. [^8]: Shadows are not taken into consideration. [^9]: Here, we suppose the origin of the image is at the top-left corner.
--- abstract: 'We propose a novel approach for First Impressions Recognition in terms of the Big Five personality traits from short videos. The Big Five personality traits form a model that describes human personality using five broad categories: Extraversion, Agreeableness, Conscientiousness, Neuroticism and Openness. We train two bi-modal end-to-end deep neural network architectures using temporally ordered audio and novel stochastic visual features from a few frames, without over-fitting. We empirically show that the trained models perform exceptionally well, even when trained on small sub-portions of the inputs. Our method is evaluated in the ChaLearn LAP 2016 Apparent Personality Analysis (APA) competition using the ChaLearn LAP APA2016 dataset and achieves excellent performance.' author: - | Arulkumar Subramaniam[^1], Vismay Patel, Ashish Mishra,\ Prashanth Balasubramanian, Anurag Mittal bibliography: - '0003.bib' title: 'Bi-modal First Impressions Recognition using Temporally Ordered Deep Audio and Stochastic Visual Features' --- 16SubNumber[3]{} Introduction ============ A “First Impression” is formed when a person encounters another person and builds a mental image of that person [@wikifirstimpress]. The mental image can be based on many characteristics such as facial expressions, actions, physical appearance, way of interaction, body language, etc. According to research in Psychology [@willis2006first], first impressions are formed even with a limited exposure (as little as 100 ms) to unfamiliar faces. Forming a first impression is usually cast as personality-traits recognition. Determining personality traits automatically will be helpful in human resourcing and the recruitment process. An automatic analysis of personality traits will also help people to train themselves. The problem can be represented as in Table \[inputoutput\].
A short video of a person’s interview is given as input, and the output is expected to be 5 fractional values in the range \[0, 1\] representing Extraversion, Agreeableness, Conscientiousness, Neuroticism and Openness. These five are collectively known as the “Big Five personality traits". There has not been much work in the literature on first-impressions recognition, though researchers have explored Emotion recognition [@cowie2001emotion; @cohen2000emotion; @cohen2003facial; @kim2013deep; @Kessous; @Kim], a related area in terms of the type of problem and the features (hand-crafted as well as deep) used. People express their emotions in many ways, among which facial expressions are the most useful [@cowie2001emotion; @cohen2000emotion; @cohen2003facial; @kim2013deep]. Cohen et al. [@cohen2000emotion] used HMM-based models to categorize the emotions in a video into six types: (1) happy, (2) angry, (3) surprise, (4) disgust, (5) fear, (6) sad. Their extended work [@cohen2003facial] on multilevel HMMs performed automatic segmentation and recognition from a continuous signal. Xiaowei Zhao et al. [@Kim] proposed iterative Multi-Output Random Forests for face analysis in images, combining three tasks: facial landmark detection, head pose estimation and facial expression recognition. Deep features have also been used for facial analysis. Javier G. Razuri et al. [@David] extracted features from regions around the eyes and mouth for recognizing human emotions, the idea being that information related to emotions can be captured by tracking the expressions around the eye and mouth regions. The extracted features are then fed into a feed-forward neural network trained by back-propagation to classify emotions. Although facial expressions form an important cue, they alone are not sufficient to recognize emotions effectively. Loic et al. [@Kessous] used facial expressions, gestures and acoustic analysis of speech-based features.
In their work, they used a Bayesian classifier to recognize one of eight types of emotion (Anger, Despair, Interest, Pleasure, Sadness, Irritation, Joy and Pride). They evaluated uni-modal (each of the three feature types used separately), bi-modal (two modes combined) and multi-modal (all three modes combined) classifiers, and observed that multi-modal classification yielded the best performance. We propose two end-to-end trained deep learning models that use audio features and face images for recognizing first impressions. In the first model, we propose a Volumetric (3D) convolution based deep neural network for determining personality traits. 3D convolution was also used by Ji et al. [@ji3dconv], although for the task of action recognition in videos of unconstrained settings. In the second model, we formulate an LSTM (Long Short-Term Memory) based deep neural network for learning temporal patterns in the audio and visual features. Both models concatenate the features extracted from audio and visual data at a later stage, in the spirit of observations made in earlier studies [@Kessous] that multi-modal classification yields superior performance. Our contribution in this paper is two-fold. First, we show that mining temporal patterns in audio and visual features is an important cue for recognizing first impressions effectively. Second, such patterns can be mined from a few frames selected in a stochastic manner rather than from the complete video, while still predicting first impressions with good accuracy. The proposed methods were ranked second in the ChaLearn LAP APA2016 challenge (first round) [@chalearn1stround1stimpressions]. This paper is organized as follows. In Section \[sec:methodology\], we describe the two models in detail and the steps followed to prepare the input data and features for the models.
Section \[sec:stochastic\_training\] describes the novel stochastic method of training and testing the networks. In Section \[sec:experiments\_results\], we discuss the Apparent Personality Analysis 2016: First Impressions dataset, the evaluation protocol, the implementation details and the experimental results obtained in the two phases of the competition. Section \[sec:conclusions\] concludes the paper, providing future directions for the work. -------------------------------------- --------------------------------------------- ![image](./pics/KHQJhOzdrYo_003.png) ![image](./pics/KHQJhOzdrYo_003-target.png) ![image](./pics/xgRqkTXmZko_000.png) ![image](./pics/xgRqkTXmZko_000-target.png) -------------------------------------- --------------------------------------------- : Example of input and target. The input is a raw video containing a person’s interview; the output is the predicted personality-trait values. \[inputoutput\] Methodology {#sec:methodology} =========== We propose two bi-modal deep neural network architectures that have two branches, one for encoding audio features and the other for visual features. Inputs to both the audio and visual branches of the model are generated by pre-processing the raw video data. Features extracted from both branches are fused at a later stage of the model, while the complete network is trained end-to-end. In this section, we describe the pre-processing performed on the data and the architecture of the models in detail. ![Data pre-processing pipeline, where face-aligned images are extracted from image frames and spectral audio features are extracted from audio data.\[preprocess\]](./pics/Preprocessing_pipe.png){width="90.00000%" height="0.3\textheight"} Audio data pre-processing ------------------------- Given a video, we extract its audio component and split it into N non-overlapping partitions as shown in figure \[preprocess\].
From each individual partition, we extract the mean and standard deviation of certain properties (table \[audiofeats\]) of the audio signal. We use an open-source Python audio processing library, pyAudioAnalysis [@giannakopoulos2015pyaudioanalysis; @pyaudioanalysis], for this purpose. The hand-crafted features are 68-dimensional and comprise the mean and standard deviation of the following attributes:

-------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------
Zero Crossing Rate   The rate of sign-changes of the signal during the duration of a particular frame.
Energy               The sum of squares of the signal values, normalized by the respective frame length.
Entropy of Energy    The entropy of sub-frames’ normalized energies; it can be interpreted as a measure of abrupt changes.
Spectral Centroid    The centre of gravity of the spectrum.
Spectral Spread      The second central moment of the spectrum.
Spectral Entropy     The entropy of the normalized spectral energies of a set of sub-frames.
Spectral Flux        The squared difference between the normalized magnitudes of the spectra of two successive frames.
Spectral Rolloff     The frequency below which 90% of the magnitude distribution of the spectrum is concentrated.
MFCCs                Mel Frequency Cepstral Coefficients: a cepstral representation in which the frequency bands are distributed according to the mel scale rather than linearly.
Chroma Vector        A 12-element representation of the spectral energy, whose bins represent the 12 equal-tempered pitch classes of Western music (semitone spacing).
Chroma Deviation     The standard deviation of the 12 chroma coefficients.
-------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------- : Audio features extracted using pyAudioAnalysis [@pyaudiofeatures] \[audiofeats\] Visual data pre-processing -------------------------- The visual processing branch of the model takes as input a set of N 3D-aligned, segmented face images. We segment the face images to prevent the background from affecting the predictions, which should depend only on the features of the face (gaze direction, movements of the eyes, lips, etc.). We use facial landmark detection and tracking to segment the faces. The landmark points are then aligned to fixed locations, which gives us segmented face images that have also been aligned. We use the open-source C++ library OpenFace [@baltru2016openface; @openface] for all visual pre-processing tasks. Model Architecture ------------------ We propose two models in our work, shown in figures \[multimodalconv\] and \[multimodallstm\] respectively. We divide each video into N non-overlapping partitions. From each of the N partitions, both audio and visual features are extracted (figure \[preprocess\]) and used as inputs to the models. Only the inter-partition variations are learned as temporal patterns, while the intra-partition variations are ignored; we do so to handle the redundancy of consecutive frames, especially in high-fps videos. As we can see in figures \[convpipe\] and \[lstmpipe\], the audio and visual features from each block are passed through consecutive layers of the neural network. In our first model, the temporal patterns across the N sequential partitions are learned using a 3D convolution module, while in the second model we use an LSTM to learn the temporal patterns across the partitions. The kernel sizes and stride information are available in figure \[modelarch\]. By empirical analysis, we fixed N = 6.
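The per-partition statistics described above can be sketched in a few lines. The real pipeline uses pyAudioAnalysis over all 68 feature dimensions; the snippet below is a simplified NumPy stand-in that computes only two of the listed attributes (zero crossing rate and energy), with the 400-sample frame length chosen purely for illustration.

```python
import numpy as np

def partition_stats(signal, n_partitions=6):
    """Split a 1-D audio signal into non-overlapping partitions and
    return mean/std of two short-term features (zero-crossing rate
    and energy) per partition."""
    feats = []
    for part in np.array_split(signal, n_partitions):
        # frame the partition into short windows (length assumed: 400 samples)
        n_frames = max(len(part) // 400, 1)
        frames = np.array_split(part, n_frames)
        zcr = [np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames]
        energy = [np.sum(f ** 2) / len(f) for f in frames]
        feats.append([np.mean(zcr), np.std(zcr),
                      np.mean(energy), np.std(energy)])
    return np.asarray(feats)          # shape: (n_partitions, 4)

# a toy 15 s "signal" at 8 kHz
sig = np.sin(np.linspace(0, 1000 * np.pi, 15 * 8000))
stats = partition_stats(sig)
print(stats.shape)                    # (6, 4)
```

Replacing the two toy features with the full pyAudioAnalysis short-term feature set yields the 68-dimensional vector per partition used in the paper.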
[0.5]{} [0.5]{} ### Volumetric (3D) convolution model: Our first model is inspired by the work of Ji et al. [@ji3dconv]. The architecture is shown in figure \[multimodalconv\] and the pipeline is demonstrated in figure \[convpipe\]. The visual data processing branch learns the change in facial expressions from face-aligned images using 3D convolution. First, the 6 face-aligned, temporally ordered images of size $3\times 112\times 112$ are passed through a 3D convolution layer, followed by a ReLU and a 3D max-pooling layer. Both the 3D convolution and the max-pooling operate on a volume comprising the X, Y and t dimensions. The resulting feature maps are in turn passed through a second set of similar layers of 3D convolution, ReLU and 3D max-pooling, but with different kernel sizes (refer to figure \[multimodalconv\] for details about the parameters). This is followed by another 3D convolution layer, which results in a single feature map of size $1\times21\times21$, flattened to a 441-dimensional feature vector. Simultaneously, the audio-data processing branch takes a $6 \times 68$ dimensional feature vector, which is reduced to a 100-dimensional vector using a fully connected layer. The feature vectors from the audio and visual branches are concatenated to yield a 541-dimensional feature vector (100 from audio + 441 from visual data), which is then input to a fully connected (FC) layer of 200 nodes and a ReLU layer, followed by another FC layer of 5 nodes with a sigmoid activation function. These 5 nodes represent the predicted values of the Big Five personality traits. ### LSTM based model: We designed our second model to learn the task from temporal relationships within the input. The architecture and pipeline of the model are shown in figure \[multimodallstm\] and figure \[lstmpipe\] respectively. We use LSTM units to capture the temporal patterns of the input data and predict the personality traits.
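The boundary dimensions of the 3D convolution model (441-dim visual vector, 100-dim audio vector, 541 → 200 → 5 fusion head) can be reproduced in a short PyTorch sketch. The channel counts and kernel sizes below are our own illustrative assumptions chosen to hit those dimensions; the actual values are in the paper's figure, which we do not have access to here.

```python
import torch
import torch.nn as nn

class ConvModelSketch(nn.Module):
    """Illustrative bi-modal 3D-convolution model. Only the quoted
    dimensions (441 + 100 -> 541 -> 200 -> 5) follow the text;
    kernel sizes and channel counts are assumptions."""
    def __init__(self):
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5)), nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
            nn.Conv3d(16, 8, kernel_size=(2, 5, 5)), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(8, 1, kernel_size=(1, 5, 5)),  # -> (1, 1, 21, 21)
        )
        self.audio = nn.Linear(6 * 68, 100)
        self.head = nn.Sequential(
            nn.Linear(441 + 100, 200), nn.ReLU(),
            nn.Linear(200, 5), nn.Sigmoid(),
        )

    def forward(self, frames, audio):
        v = self.visual(frames).flatten(1)           # (B, 441)
        a = self.audio(audio.flatten(1))             # (B, 100)
        return self.head(torch.cat([v, a], dim=1))   # (B, 5)

model = ConvModelSketch()
traits = model(torch.randn(2, 3, 6, 112, 112), torch.randn(2, 6, 68))
print(traits.shape)   # torch.Size([2, 5])
```

Note that the temporal axis (6 partitions) enters as the depth dimension of the 3D convolution, so temporal patterns are learned jointly with spatial ones.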
Each aligned face image is passed through a series of spatial convolution, ReLU and spatial max-pooling layers of varying kernel sizes (refer to figure \[multimodallstm\] for details about the parameters). The generated feature maps are flattened into a 1024-dimensional feature vector, which is connected to a fully connected layer of 128 nodes. Simultaneously, the audio data (6 feature vectors of 68 dimensions) is passed through a 32-node fully connected layer and reduced to 32 dimensions. The output feature vectors from the audio and visual branches are then concatenated to yield 6 feature vectors of 160 dimensions (32 from audio + 128 from visual data for each of the 6 partitions), still maintained in temporal order. These temporally ordered feature vectors are passed through an LSTM with output dimension 128: the LSTM takes the $6\times160$ dimensional input and outputs a sequence of six $128$-dimensional feature vectors. The LSTM generates an output at each time step, and each output is passed through a 5-dimensional fully connected layer with a sigmoid activation function. Thus, we get 6 predictions of the 5 personality traits. For each personality trait, we average the values predicted by the 6 LSTM output units, obtaining a single prediction value for each of the Big Five personality traits. ![Pipeline of 3D-Convolution model\[convpipe\]](./pics/Convolution_pipe.png){height="0.25\textheight"} Stochastic Training and Testing {#sec:stochastic_training} =============================== According to research in Psychology [@willis2006first], first impressions of unfamiliar faces can be formed even with exposure times as small as 100 ms. Those results suggest that predictions made with a 100-ms exposure correlate highly with judgments made in the absence of time constraints, i.e., small exposure times were sufficient for participants to form an impression.
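The LSTM head described above (six 160-dim fused vectors → LSTM with 128-dim output → per-step 5-way sigmoid, averaged over the 6 steps) can be sketched as follows. The layer sizes follow the text; everything else, including the convolutional front-end that would produce the fused vectors, is omitted or assumed.

```python
import torch
import torch.nn as nn

class LSTMHeadSketch(nn.Module):
    """Sketch of the LSTM branch: dimensions (160 -> 128 -> 5 over
    6 time steps) follow the text; the rest is illustrative."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=160, hidden_size=128,
                            batch_first=True)
        self.fc = nn.Linear(128, 5)

    def forward(self, fused):                   # fused: (B, 6, 160)
        out, _ = self.lstm(fused)               # (B, 6, 128)
        per_step = torch.sigmoid(self.fc(out))  # (B, 6, 5)
        return per_step.mean(dim=1)             # (B, 5): averaged traits

preds = LSTMHeadSketch()(torch.randn(4, 6, 160))
print(preds.shape)   # torch.Size([4, 5])
```

Averaging the per-step predictions, rather than using only the last step, lets every partition contribute directly to the final trait scores.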
On similar lines, we hypothesize that deep models can learn effective representations for recognizing first impressions from a few randomly selected frames. Stochastic Training ------------------- Training of the two proposed models is carried out using the Stochastic Gradient Descent (SGD) method. The parameters used for SGD are: learning rate = 0.05, weight decay = $5\times 10^{-4}$, momentum = 0.9, batch size = 128, learning rate decay $= 1\times 10^{-4}$. As mentioned earlier (figure \[preprocess\]), each raw video file is split into 6 non-overlapping partitions, and the audio and visual features are extracted from each partition individually. We train the models on a combined feature set consisting of a single face-aligned image from each partition together with the pre-processed audio features of each partition. In particular, since we use only 1 frame from each partition of the video data, multiple combinations of frames are available for training: with N partitions and F frames per partition, and a single frame taken from each partition, $F^N$ combinations of frames are possible per video. We take N = 6, and typically F is around $\sim 75$ (considering 30 fps and videos of 15 seconds). Training the model on all $75^6$ combinations of frames is overkill. Empirically, we found that training on only a few hundred combinations (typically $\sim 500$) is enough for the model to generalize over the whole dataset. ![Pipeline of LSTM model\[lstmpipe\]](./pics/LSTM_pipe.png){height="0.3\textheight"} Following the above, the 6 input frames for model training (a single frame from each partition) are selected randomly while keeping the temporal ordering. At every epoch, the random selection yields a new input combination for each video.
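The stochastic selection above amounts to drawing one frame index per partition, which preserves temporal order automatically because each partition owns a disjoint, increasing index range. A minimal sketch (frame counts assumed: 6 partitions of 75 frames each):

```python
import random

def sample_combination(n_partitions=6, frames_per_partition=75):
    """Pick one frame index from each partition. Partition p owns
    indices [p*F, (p+1)*F), so the result is always increasing."""
    return [p * frames_per_partition
            + random.randrange(frames_per_partition)
            for p in range(n_partitions)]

random.seed(0)  # seed chosen only for reproducibility of this demo
combo = sample_combination()
print(combo)    # six strictly increasing frame indices
# total number of distinct combinations: 75**6, of which only a few
# hundred are visited during training
```

Calling `sample_combination()` afresh at every epoch gives each video a new frame combination, which is what produces the regularizing effect described next.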
This stochastic way of training produces new samples at every epoch and effectively “regularizes” the learning, thus increasing the generalization of the model. Testing ------- Testing the model faces the same issue of an exponential number of frame combinations per video. Empirically, we choose to use only a random subset (10 combinations) of the total possible combinations and take the average of the 10 evaluations as the personality-traits recognition result. The validation and test results suggest that the model and evaluation method perform significantly better than the other submissions; the LSTM model stood in second place in the final evaluation phase of the competition. Experiments and Results {#sec:experiments_results} ======================= In this section, we first briefly describe the dataset and the evaluation protocol of our experiments. We then provide the implementation details of our method and discuss the results. Dataset: Apparent Personality Analysis (APA) - First impressions ---------------------------------------------------------------- In our validation experiment, we use the ChaLearn LAP 2016 APA dataset provided by the challenge organizers [@chalearn1stround1stimpressions]. This dataset has 6000 videos for training with ground-truth personality traits, 2000 videos for validation without ground truth (performance is revealed on submission of predictions) and 2000 videos for testing (the ground truth is not available until the competition is finished). Each video is 15 seconds long and generally has 30 frames/second. The ground truth consists of fractional scores in the range 0 to 1 for each of the Big Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism and Openness. ![Number of Epochs vs.
Mean Squared Error (MSE) for individual models during training phase \[fig:msecompare\]](./pics/cnn_lstm.png) Evaluation Protocol ------------------- The evaluation is done in terms of Mean Average Accuracy.\ The Average Accuracy of an individual personality trait is calculated as $$\text{Average Accuracy}_j = \frac{1}{N} \sum_{i=1}^N(1 - |Target_{ij}-y_{ij}|)$$ where $j = 1,\dots,5$, $N$ is the total number of videos, $Target_{ij}$ is the ground-truth value for the $i^{th}$ video and $j^{th}$ personality trait, and $y_{ij}$ is the predicted value for the $i^{th}$ video and $j^{th}$ personality trait. The Mean Average Accuracy between the predictions and the ground-truth personality-trait values is $$\text{Mean Average Accuracy} = \frac{1}{m} \sum_{j=1}^m(\text{Average accuracy}_j)$$ where $m = 5$ (the number of personality traits). Note that the maximum value of the Mean Average Accuracy, as well as of the Average Accuracy, is 1, representing the best result, and the minimum is 0, representing the worst match. Implementation details ---------------------- Both deep learning models are implemented using the Torch [@collobert2011torch7] scientific computing framework. Training the 3D convolution based model takes 30 seconds per epoch, and the LSTM based model takes 3 minutes per epoch, on a GeForce GTX Titan Black graphics card. Each individual model is trained for up to one full day. We used only the ChaLearn LAP 2016 APA dataset [@chalearn1stround1stimpressions] for training. The comparison of the mean squared error (MSE) of both models during training is shown in figure \[fig:msecompare\]. The source code of both the training and the final proposed prediction method is available in a GitHub[^2] repository.
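The two evaluation formulas above translate directly into code. A minimal NumPy implementation (the toy target/prediction arrays are ours, for illustration only):

```python
import numpy as np

def mean_average_accuracy(targets, preds):
    """targets, preds: (N, 5) arrays of trait scores in [0, 1].
    Per-trait Average Accuracy = 1 - mean absolute error over videos;
    Mean Average Accuracy = mean of the 5 per-trait accuracies."""
    avg_acc = 1.0 - np.abs(targets - preds).mean(axis=0)   # shape (5,)
    return avg_acc.mean(), avg_acc

# toy example: one video, five trait scores
t = np.array([[0.5, 0.6, 0.7, 0.4, 0.5]])
p = np.array([[0.4, 0.6, 0.9, 0.4, 0.5]])
maa, per_trait = mean_average_accuracy(t, p)
print(round(maa, 3))   # 0.94
```

Both quantities equal 1 for a perfect match and 0 for the worst possible match, as stated above.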
Development phase ----------------- In the development phase of the APA2016 competition [@chalearn1stround1stimpressions], only the training-set ground truths were released, and the methods were evaluated online by submitting predictions on the validation videos to a server. The best performance of our models during the development phase is shown in Table \[validationrankings\]. Test phase ---------- In the test phase of the APA2016 competition [@chalearn1stround1stimpressions], the testing videos were released. The testing ground truths were kept secret, and the teams were invited to submit their results on the testing videos. The organizers announced the final ranking after the test phase. The results are summarized in Table \[resultstable\]. The proposed LSTM model secured second place in the leader-board and is shown in bold font. Results and Discussion ---------------------- The performance of the CNN (3D convolution) based model and the LSTM model can be seen from the learning-phase evaluation shown in table \[validationrankings\]: ------------------- -------------- ---------- Accuracy **0.913355** 0.912473 Extraversion 0.914548 0.915650 Agreeableness 0.915749 0.916123 Conscientiousness 0.913594 0.908370 Neuroticism 0.909814 0.909931 Openness 0.913069 0.912292 ------------------- -------------- ---------- : Evaluation during learning phase on the ChaLearn LAP 2016 APA: First Impressions challenge \[validationrankings\] The test-phase leader-board standings are shown in table \[resultstable\]. ------- ---------------------------- -------------- 1 NJU-LAMDA 0.912968 **2** **evolgen (\*LSTM model)** **0.912063** 3 DCC 0.910933 4 ucas 0.909824 5 BU-NKU 0.909387 6 pandora 0.906275 7 Pilab 0.893602 8 Kaizoku 0.882571 ------- ---------------------------- -------------- : Leaderboard of the Test phase on the ChaLearn LAP 2016 APA: First Impressions challenge.
Our entry is shown in **bold**. \[resultstable\] As we notice from table \[validationrankings\], during the learning phase the LSTM based model performs better than the 3D convolution based model. This may be due to the fact that the LSTM is able to learn temporal relationships better than the 3D convolution based approach. Also, the audio features were not used to model temporal relationships in the 3D convolution based model (only 3D face-aligned images are used), whereas the LSTM model used both audio and visual features to learn the temporal correspondences, which could have made it perform better. For these reasons, we chose the LSTM model for the test phase: our method secured second place in the ChaLearn LAP 2016 APA challenge [@chalearn1stround1stimpressions]. Conclusions and Future Works {#sec:conclusions} ============================ In this work, we proposed two deep neural network based models that use audio and visual features for the task of First Impressions Recognition. These networks mine the temporal patterns that exist in a sequence of frames. It was also shown that such sequences can be small and selected in a stochastic manner respecting the temporal order. The proposed methods have been shown to yield excellent performance on the ChaLearn LAP APA2016 challenge [@chalearn1stround1stimpressions]. As deep neural networks are known for their representation and feature-extraction ability, they can be used to learn optimal representations without extensive pre-processing of the data. Appearance and pose features could also be explored to see whether they improve on the performance obtained with the proposed audio and visual features. [^1]: Authors contributed equally [^2]: refer <https://github.com/InnovArul/first-impressions> for more information
--- abstract: 'We show that any Carnot group ${\mathbb{G}}$ with sufficiently many *deformable directions* contains a measure zero set $N$ such that every Lipschitz map $f\colon {\mathbb{G}}\to {\mathbb{R}}$ is differentiable at some point of $N$. We also prove that model filiform groups satisfy this condition, extending some previous results to a class of Carnot groups of arbitrarily high step. Essential to our work is the question of whether the existence of an (almost) maximal directional derivative $Ef(x)$ in a Carnot group implies the differentiability of a Lipschitz map $f$ at $x$. We show that such an implication is valid in model filiform groups for directions that are outside a one-dimensional subspace of horizontal directions. Conversely, we show that this implication fails for every horizontal direction in the free Carnot group of step three and rank two.' address: - 'Department of Mathematics, University of Trento, Via Sommarive 14, 38050 Povo (Trento), Italy' - 'Department of Mathematical Sciences, University of Cincinnati, 2815 Commons Way, Cincinnati, OH 45221, United States' author: - Andrea Pinamonti - Gareth Speight title: Universal Differentiability Sets in Carnot Groups of Arbitrarily High Step --- Introduction {#intro} ============ Rademacher’s theorem asserts that every Lipschitz map $f\colon {\mathbb{R}}^{n}\to {\mathbb{R}}^{m}$ is differentiable almost everywhere with respect to the Lebesgue measure. This important result has been extended to many other spaces and measures [@AM14; @Che99; @LPT13; @Pan89]. It is also interesting to consider whether Rademacher’s theorem admits a converse: given a Lebesgue null set $N\subset {\mathbb{R}}^{n}$, does there exist a Lipschitz map $f\colon {\mathbb{R}}^{n}\to {\mathbb{R}}^{m}$ which is differentiable at no point of $N$? The answer to this question is yes if and only if $n\leq m$ and combines the work of several authors [@Zah46; @Pre90; @PS15; @ACP10; @CJ15].
In the case where $n>m=1$, the results in [@DM11; @DM12; @DM14] provide a stronger result: there is a compact set of Hausdorff dimension one in ${\mathbb{R}}^{n}$ which contains some point of differentiability of any Lipschitz map $f\colon {\mathbb{R}}^{n}\to {\mathbb{R}}$. Such a set may even be chosen with upper Minkowski dimension one [@DM14]. Sets containing a point of differentiability for any real-valued Lipschitz map are called *universal differentiability sets*. We refer the reader to [@PM] and the references therein for more discussion of such sets. The present paper continues the investigation of universal differentiability sets in Carnot groups which was started in [@PS16; @LPS17], see also the survey [@PSsurvey]. We recall that a Carnot group (Definition \[Carnot\]) is a simply connected Lie group whose Lie algebra $\mathfrak g$ admits a stratification, i.e. it admits a decomposition $\mathfrak g= V_1\oplus \dots \oplus V_s$ where $V_{i+1}=[V_1,V_i]$ for $i=1,\dots, s-1$. The subspace $V_1$ is called the horizontal layer while $s$ is the step of the Carnot group and to some extent indicates its complexity (Carnot groups of step one are simply Euclidean spaces). Carnot groups have a rich geometric structure adapted to the horizontal layer, including translations, dilations, Carnot-Carathéodory (CC) distance, and a Haar measure [@ABB; @CDPT07; @Gro96; @Mon02]. In the last two decades Carnot groups have been studied in connection with several different areas of mathematics, such as PDE, differential geometry, control theory, geometric measure theory, mathematical finance and robotics. Their rich structure allows one to define differentiability of maps between Carnot groups (Definition \[pansudifferentiability\]). Pansu’s theorem states that every Lipschitz map is differentiable almost everywhere with respect to the Haar measure [@Pan89]. This is a generalization of Rademacher’s theorem to Carnot groups. 
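To make the stratification condition concrete, here is the simplest non-abelian example, the first Heisenberg group; this is a standard illustration, not taken from the paper itself.

```latex
% The first Heisenberg group: a Carnot group of step s = 2.
% Its Lie algebra is stratified as
\[
  \mathfrak{h} = V_1 \oplus V_2, \qquad
  V_1 = \operatorname{span}\{X, Y\}, \qquad
  V_2 = \operatorname{span}\{Z\},
\]
% with the single non-trivial bracket
\[
  [X, Y] = Z, \qquad [X, Z] = [Y, Z] = 0,
\]
% so that V_2 = [V_1, V_1], as the stratification requires.
```

Here $V_1$ is the horizontal layer; moving in the $Z$ direction is only possible by combining horizontal motions, which is the source of the non-Euclidean geometry discussed below.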
In [@PS16], it was shown that Heisenberg groups contain measure zero universal differentiability sets. Heisenberg groups are the most frequently studied non-Euclidean Carnot groups and have step two. In [@LPS17] this result was extended to give a measure zero and Hausdorff dimension one universal differentiability set in any step two Carnot group. The present paper extends these results and the associated techniques to higher step Carnot groups satisfying a precise geometric condition, namely having sufficiently many *deformable directions* (Definition \[deform\]). This is a geometric condition expressing that, roughly speaking, horizontal lines can be nicely modified to pass through nearby points without changing their length or their direction too much. This condition applies in particular to model filiform groups (Definition \[filiform\]), which can have arbitrarily high step despite their relatively simple Lie brackets. Model filiform groups have been previously investigated in connection with non-rigidity of Carnot groups [@O08], quasiconformal mappings between Carnot groups [@War03; @Xia15] and geometric control theory [@BLU07]. Before describing more carefully the results of this paper, we briefly discuss the techniques involved in constructing universal differentiability sets. We believe these are of independent interest as they depend only on the geometry of the space involved. In [@Pre90; @PS16; @LPS17], the key technique for constructing measure zero universal differentiability sets builds upon the idea that existence of a *maximal directional derivative* for a Lipschitz map suffices for its differentiability. In Euclidean spaces, this observation takes the following form: if $f\colon {\mathbb{R}}^{n} \to {\mathbb{R}}$ is Lipschitz and $|f'(x,v)|= \mathrm{Lip}(f)$ for some direction $v\in {\mathbb{R}}^{n}$ with $|v|=1$, then $f$ is differentiable at $x$, see [@Fit84]. However, a general Lipschitz map may not have such a maximal directional derivative.
In [@Pre90] it was shown that any Lipschitz map $f\colon {\mathbb{R}}^{n} \to {\mathbb{R}}$ admits a linear perturbation that has an *almost maximal directional derivative* at some point $x$ in a direction $v$. It was also shown that almost maximality suffices for differentiability and that the point $x$ can be chosen inside a measure zero set $N$ that is independent of $f$. Combining the two facts, we have that $N$ is a universal differentiability set of measure zero. In [@PS16; @LPS17], the present authors and E. Le Donne showed that if $f\colon {\mathbb{G}}\to {\mathbb{R}}$ is a Lipschitz map on a step two Carnot group and $Ef(x)$ is a maximal directional derivative (Definition \[maximal\]), then $f$ is differentiable at $x$ in the sense of Pansu (see Definition \[pansudifferentiability\]). Moreover, in step two Carnot groups, it was also shown that almost maximality of a directional derivative suffices for differentiability. Generalizing the Euclidean techniques one can then construct a measure zero and Hausdorff dimension one universal differentiability set. Moreover, for each horizontal direction $E$ in an arbitrary Carnot group, differentiability of the CC distance at $\exp(E)$ is equivalent to validity of the following implication: maximality of $Ef(x)$ for Lipschitz $f\colon {\mathbb{G}}\to {\mathbb{R}}$ implies differentiability of $f$ at $x$ (Proposition \[equivalence\]). However, in the Engel group, which represents the simplest step 3 Carnot group, neither of the above properties holds. The counterexample is simply given by the horizontal direction $X_{2}$, since the CC distance fails to be differentiable at $\exp(X_{2})$. It is then clear that the geometry of the space impacts the differentiability of its Lipschitz maps.
The reason why ‘maximality implies differentiability’ (Proposition \[equivalence\](2)) fails for the direction $X_{2}$ in the Engel group is that horizontal lines in the direction $X_{2}$ cannot be modified to pass through nearby points without increasing their length too much. If ‘maximality implies differentiability’ fails, then so does the stronger implication ‘almost maximality implies differentiability’. This stronger implication depends upon the possibility of modifying horizontal lines with some controlled bounds on both their length *and* their direction. This stronger modification is useful because in ‘almost maximality’ the directional derivative is maximal only compared to directional derivatives coming from pairs of points and directions which satisfy estimates expressed using difference quotients of the Lipschitz map. A direction is deformable if suitable deformations of horizontal lines are possible. All horizontal directions in step two Carnot groups are deformable. This was proved in [@LPS17], though the word deformable was not used there. In the present paper we show that in model filiform groups ${\mathbb{E}}_{n}$, any horizontal direction other than $\pm X_{2}$ is deformable (Theorem \[deformFiliform\]). We also show that $\pm X_{2}$ are deformable in ${\mathbb{E}}_{n}$ if and only if $n=2$ or $n=3$ (Corollary \[pmX2\]). A set $N$ in a Carnot group ${\mathbb{G}}$ is a universal differentiability set if every Lipschitz map $f\colon {\mathbb{G}}\to {\mathbb{R}}$ is differentiable at some point of $N$ (Definition \[defUDSabstract\]). We say that a set has CC Hausdorff dimension one if it has Hausdorff dimension one with respect to the CC metric. Our main result is the following. \[maintheorem\] Let ${\mathbb{G}}\neq {\mathbb{R}}$ be any Carnot group that has a ball of uniformly deformable directions (see Assumptions \[ass\]).
Then ${\mathbb{G}}$ contains a universal differentiability set $N\subset {\mathbb{G}}$ of CC Hausdorff dimension one (in particular, of measure zero). In particular, all model filiform groups ${\mathbb{E}}_{n}$ for $n\geq 2$ contain a CC Hausdorff dimension one universal differentiability set. A ball of uniformly deformable directions is needed in Theorem \[maintheorem\] because one constructs the measure zero UDS using countably many horizontal curves which are dense in a suitable sense. To prove that ‘almost maximality implies differentiability’ (Theorem \[almostmaximalityimpliesdifferentiability\]) one needs to approximate the almost maximal direction with a sequence of deformable ones. If we do not require that the UDS has measure zero, then only one deformable direction is needed to show that almost maximality implies differentiability. Notice that Theorem \[maintheorem\] applies to the Engel group ${\mathbb{E}}_{4}$, which was the problematic group in [@LPS17]. Hence one may ask whether Theorem \[maintheorem\] holds without assuming Assumptions \[ass\]. Our second result shows that, unless one fundamentally changes the techniques used, the class of Carnot groups must indeed be restricted. \[strongnondiff\] In the free Carnot group ${\mathbb{F}}_{2,3}$ with rank two and step three, the CC distance is not differentiable at $\exp(E)$ for any horizontal direction $E\in V_{1}$. Consequently, ‘maximality implies differentiability’ fails in ${\mathbb{F}}_{2,3}$ for every horizontal direction. This substantially improves upon [@LPS17], where it was shown that ‘maximality implies differentiability’ fails for a single direction in the Engel group. It would be interesting to know whether ${\mathbb{F}}_{2,3}$ contains a measure zero UDS or whether instead the opposite result holds: for every null set $N\subset {\mathbb{F}}_{2,3}$, does there exist a Lipschitz map $f\colon {\mathbb{F}}_{2,3}\to {\mathbb{R}}$ which is differentiable at no point of $N$?
At present we do not know the answer to this question. The notion of deformability introduced in the present paper seems to share some analogy with the property of not being an abnormal curve; see [@Vit] for the definition of abnormal curves. For example, it is known that in ${\mathbb{F}}_{2,3}$ the set of all abnormal curves coincides with the set of horizontal lines [@ABB]. This phenomenon could explain why ‘maximality implies differentiability’ fails in ${\mathbb{F}}_{2,3}$ for every horizontal direction. A similar characterization holds in model filiform groups, which admit only one abnormal curve, namely the horizontal line in the $\pm X_{2}$ direction [@ABB]. However, the picture is far from clear. For example, in ${\mathbb{F}}_{3,2}$, where ‘maximality implies differentiability’ holds for every horizontal direction [@LPS17], every horizontal line is an abnormal curve and vice versa; see [@LMOPV16 Proposition 3.11, Theorem 3.14] and [@OV17]. We plan to investigate this possible relation in future work. We now describe the structure of the paper. In Section \[preliminaries\] we recall the necessary background on Carnot groups and differentiability. In Section \[CCdifferentiability\] we investigate the differentiability of the CC distance. We show that, if $E$ is a deformable direction, then the CC distance is differentiable at $\exp(E)$ (Proposition \[deformimpliesdiff\]) and that in any model filiform group ${\mathbb{E}}_{n}$, with $n\geq 4$, the CC distance is not differentiable at $\exp(\pm X_{2})$ (Proposition \[X2nogood\]). We eventually prove Theorem \[strongnondiff\]. In Section \[CurvesFiliform\] we prove Lemma \[Xn\] and Lemma \[Filiformcurve\], which allow us to construct suitable horizontal curves in model filiform groups. These are then used to show that in every model filiform group all horizontal directions other than $\pm X_{2}$ are deformable (Theorem \[deformFiliform\]).
In Section \[sectiondistanceestimate\] we prove an estimate for distances between piecewise linear curves with similar directions (Lemma \[closedirectioncloseposition\]). In Section \[sectionUDS\] we consider Carnot groups ${\mathbb{G}}$ that contain a ball of uniformly deformable directions (see Assumptions \[ass\]). With this assumption we construct a universal differentiability set (Lemma \[uds\]) and prove that almost maximality implies differentiability if the direction belongs to the given ball (Theorem \[almostmaximalityimpliesdifferentiability\]). In Section \[sectionconstruction\] we show that any Lipschitz map $f\colon {\mathbb{G}}\to {\mathbb{R}}$ admits a ${\mathbb{G}}$-linear perturbation which has an almost maximal directional derivative at some point $x$ in some horizontal direction $E$ (Proposition \[DoreMaleva\]). Moreover, the point $x$ can be found inside a given measure zero $G_{\delta}$ set and the direction $E$ can be found close to a starting direction $E_{0}$. A proof of Theorem \[maintheorem\] is then given by combining Theorem \[almostmaximalityimpliesdifferentiability\] and Proposition \[DoreMaleva\]. **Acknowledgement.** The authors thank the referees for very detailed comments which greatly improved the presentation of the paper. Part of this work was done while G. Speight was visiting the University of Trento; he thanks the institution for its hospitality. This work was supported by a grant from the Simons Foundation (\#576219, G. Speight). A. P. is a member of [*Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni*]{} (GNAMPA) of [*Istituto Nazionale di Alta Matematica*]{} (INdAM). Preliminaries ============= In this section we recall concepts which will be important throughout the paper.
Basic notions in Carnot groups ------------------------------ \[Carnot\] A *Carnot group* ${\mathbb{G}}$ of *step* $s$ is a simply connected Lie group whose Lie algebra $\mathfrak{g}$ admits a decomposition as a direct sum of subspaces of the form $$\mathfrak{g}=V_{1}\oplus V_{2}\oplus \cdots \oplus V_{s}$$ such that $V_{i}=[V_{1},V_{i-1}]$ for any $i=2, \ldots, s$, and $[V_{1},V_{s}]=0$. The subspace $V_{1}$ is called the *horizontal layer* and its elements are called *horizontal left invariant vector fields*. The *rank* of ${\mathbb{G}}$ is $\dim(V_{1})$. The exponential mapping $\exp\colon \mathfrak{g}\to {\mathbb{G}}$ is a diffeomorphism. Given a basis $X_{1},\ldots, X_{n}$ of $\mathfrak{g}$ adapted to the stratification, any $x\in {\mathbb{G}}$ can be written in a unique way as $$x=\exp(x_{1}X_{1}+\ldots +x_{n}X_{n}).$$ We identify $x$ with $(x_{1},\ldots, x_{n})\in {\mathbb{R}}^{n}$ and hence ${\mathbb{G}}$ with ${\mathbb{R}}^{n}$. These are known as *exponential coordinates of the first kind*. To compute the group law in these coordinates, one uses the equality $$\exp(X)\exp(Y)=\exp(X\diamond Y)\quad \mbox{ for all } X,Y\in\mathfrak{g}.$$ Here $\diamond$ is defined by the Baker-Campbell-Hausdorff (BCH) formula $$\begin{aligned} \label{BCH} X\diamond Y= X+Y+\frac{1}{2}[X,Y]+\frac{1}{12}([X,[X,Y]]+[Y,[Y,X]]) + \ldots,\end{aligned}$$ where the higher order terms are nested commutators of $X$ and $Y$ [@Var]; see e.g. [@BLU07 Theorem 2.2.13]. Unless otherwise stated, ${\mathbb{G}}$ will be a Carnot group of step $s$ and rank $r$ with $\dim(\mathfrak{g})=n$, represented in exponential coordinates of the first kind. We say that a curve $\gamma \colon [a,b]\to {\mathbb{G}}$ is absolutely continuous if it is absolutely continuous as a curve into ${\mathbb{R}}^{n}$. Fix a basis $X_{1}, \ldots, X_{r}$ of $V_{1}$ and an inner product norm $\omega$ on $V_{1}$ making the chosen basis orthonormal.
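As a concrete illustration of the BCH product (our own sketch, not part of the paper's formal development): in a step two group the series \[BCH\] truncates after the first bracket, so the group law in exponential coordinates is $X\diamond Y=X+Y+\frac{1}{2}[X,Y]$. The following Python snippet implements this for the Heisenberg group, i.e. the model filiform group ${\mathbb{E}}_{3}$ with the single relation $[X_{2},X_{1}]=X_{3}$, and checks associativity numerically; the function names `bracket` and `bch` are ours.

```python
# Sketch: BCH group law in the Heisenberg group E_3 (step 2),
# identified with R^3 via exponential coordinates of the first kind.
# Only non-vanishing bracket of basis elements: [X_2, X_1] = X_3.

def bracket(u, v):
    # [u, v] for u, v in the Lie algebra, in the basis X_1, X_2, X_3:
    # bilinearity gives [u, v] = (u_2 v_1 - u_1 v_2) X_3.
    return (0.0, 0.0, u[1] * v[0] - u[0] * v[1])

def bch(u, v):
    # In step 2 the BCH series truncates: u <> v = u + v + (1/2)[u, v].
    b = bracket(u, v)
    return tuple(u[i] + v[i] + 0.5 * b[i] for i in range(3))

# Associativity of the group law.
x, y, z = (1.0, 2.0, 0.5), (-1.0, 0.25, 3.0), (0.5, -2.0, 1.0)
lhs = bch(bch(x, y), z)
rhs = bch(x, bch(y, z))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

# The identity is 0 and inverses are negatives: u <> (-u) = 0.
u = (3.0, -1.0, 2.0)
assert bch(u, tuple(-c for c in u)) == (0.0, 0.0, 0.0)
```

The same pattern extends to higher step by keeping the BCH terms up to nested commutators of length $s$; for the Engel group computations later in the paper, the series terminates after the length three terms.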
\[horizontalcurve\] An absolutely continuous curve $\gamma\colon [a,b]\to {\mathbb{G}}$ is *horizontal* if there exist $u_{1}, \ldots, u_{r}\in L^{1}[a,b]$ such that $$\gamma'(t)=\sum_{j=1}^{r}u_{j}(t)X_{j}(\gamma(t)) \quad \mbox{for almost every }t\in [a,b].$$ The *length* of such a curve is $L_{{\mathbb{G}}}(\gamma):=\int_{a}^{b}|u|$. Since ${\mathbb{G}}$ is identified with ${\mathbb{R}}^{n}$ as a manifold, its tangent spaces are also naturally identified with ${\mathbb{R}}^{n}$. We say that a vector $v\in {\mathbb{R}}^n$ is *horizontal* at $p\in {\mathbb{G}}$ if $v=E(p)$ for some $E\in V_1$. Thus a curve $\gamma$ is horizontal if and only if $\gamma'(t)$ is horizontal at $\gamma(t)$ for almost every $t$. All curves of the form $t\mapsto p\exp(tE)$ for some $p\in {\mathbb{G}}$ and $E\in V_{1}$ are horizontal and will be called *horizontal lines*. Chow’s theorem [@Chow39] asserts that any two points in a Carnot group can be connected by a horizontal curve. Hence the following definition gives a metric on ${\mathbb{G}}$. \[carnotdistance\] The *Carnot-Carathéodory (CC) distance* between any two points $x, y\in {\mathbb{G}}$ is defined by $$d(x,y):=\inf \{L_{{\mathbb{G}}}(\gamma)\colon \gamma \mbox{ is a horizontal curve joining } x \mbox{ and }y\}.$$ We also use the notation $d(x):=d(x,0)$ for $x\in {\mathbb{G}}$. Left group translations preserve lengths of horizontal curves. This implies $$d(gx,gy)=d(x,y) \quad \mbox{for every }g,x,y \in {\mathbb{G}}.$$ Even though the CC distance and the Euclidean distance are not Lipschitz equivalent, they induce the same topology. Hence ${\mathbb{Q}}^{n}$ is dense in ${\mathbb{R}}^{n}$ with respect to the CC distance. The following proposition, which will be useful to compare the two distances, is from [@NSW]; see also [@BLU07 Corollary 5.2.10 and Proposition 5.15.1]. \[euclideanheisenberg\] Let ${\mathbb{G}}$ be a Carnot group of step $s$ and $K\subset {\mathbb{G}}$ be a compact set.
Then there exists a constant $C_{\mathrm{H}} \geq 1$ depending on $K$ such that $$C_{\mathrm{H}}^{-1} |x-y|\leq d(x,y)\leq C_{\mathrm{H}}|x-y|^{1/s} \qquad \mbox{for all }x, y\in K.$$ We will also need the following estimate for the CC distance [@FS Lemma 2.13]. \[conjugatedistance\] Let ${\mathbb{G}}$ be a Carnot group of step $s$. Then there is a constant $C_D\geq 1$ such that $$\begin{aligned} d(x^{-1}yx) \leq C_D\,\Big(d(y)+ d(x)^{\frac{1}{s}}d(y)^{\frac{s-1}{s}}+d(x)^{\frac{s-1}{s}}d(y)^{\frac{1}{s}}\Big)\quad \mbox{for }x,y\in{\mathbb{G}}.\end{aligned}$$ \[dilations\] For any $\lambda>0$, we define the *dilation* $\delta_{\lambda}\colon {\mathbb{G}}\to {\mathbb{G}}$ in coordinates by $$\delta_{\lambda}(x_{1}, \ldots, x_{n})=(\lambda^{\alpha_{1}}x_{1},\ldots, \lambda^{\alpha_{n}}x_{n})$$ where $\alpha_{i}\in {\mathbb{N}}$ is the homogeneity of the variable $x_{i}$, which is defined by $$\alpha_{j}=i \qquad \mbox{whenever} \qquad h_{i-1}<j\leq h_{i},$$ where $h_{i}:=\dim(V_{1}) + \cdots + \dim(V_{i})$ for $i\geq 1$ and $h_{0}:=0$. For our purposes, it will be enough to know that $\alpha_{1}=\cdots=\alpha_{r}=1$, where $r=\dim(V_{1})$. Dilations are group homomorphisms of ${\mathbb{G}}$ and they satisfy $$d(\delta_{\lambda}(x),\delta_{\lambda}(y))=\lambda d(x,y) \quad \mbox{for every }x, y \in {\mathbb{G}}\mbox{ and }\lambda>0.$$ We will also use the fact that $\delta_{\lambda}(\exp(E))=\exp (\lambda E)$ for every $\lambda >0$ and $E\in V_{1}$. Carnot groups have a Haar measure which is unique up to scalars. When ${\mathbb{G}}$ is represented in first exponential coordinates as ${\mathbb{R}}^{n}$, the Haar measure is simply the Lebesgue measure $\mathcal{L}^{n}$, which satisfies $$\mathcal{L}^{n}(gA)=\mathcal{L}^{n}(A) \qquad \mbox{and}\qquad \mathcal{L}^{n}(\delta_{\lambda}(A))=\lambda^{Q}\mathcal{L}^{n}(A)$$ for every $g\in {\mathbb{G}}$, $\lambda>0$ and $A\subset {\mathbb{G}}$ measurable.
Here $Q:=\sum_{i=1}^{s}i\dim(V_{i})$ is the *homogeneous dimension* of ${\mathbb{G}}$, which is also the Hausdorff dimension of ${\mathbb{G}}$ with respect to the CC metric. Differentiability in Carnot groups ---------------------------------- \[defdirectionalderivative\] Let $f\colon {\mathbb{G}}\to {\mathbb{R}}$ be a Lipschitz function, $x\in {\mathbb{G}}$ and $E\in V_{1}$. The *directional derivative of $f$ at $x$ in direction $E$* is defined by $$Ef(x):=\lim_{t\to 0} \frac{f(x\exp(tE))-f(x)}{t},$$ whenever the limit exists. Pansu defined the notion of differentiability in Carnot groups and proved a Rademacher theorem for maps between general Carnot groups [@Pan89]. We will only be concerned with the case where the target is ${\mathbb{R}}$. \[pansudifferentiability\] A function $L\colon {\mathbb{G}}\to {\mathbb{R}}$ is *${\mathbb{G}}$-linear* if $L(xy)=L(x)+L(y)$ and $L(\delta_{r}(x))=rL(x)$ for all $x, y\in {\mathbb{G}}$ and $r>0$. Let $f\colon {\mathbb{G}}\to {\mathbb{R}}$ and $x\in {\mathbb{G}}$. We say that $f$ is *differentiable at $x$* if there is a ${\mathbb{G}}$-linear map $L \colon {\mathbb{G}}\to {\mathbb{R}}$ such that $$\lim_{y \to x} \frac{|f(y)-f(x)-L(x^{-1}y)|}{d(x,y)}=0.$$ In this case we say that $L$ is the *Pansu differential* of $f$ at $x$. \[pansutheorem\] Every Lipschitz function $f\colon {\mathbb{G}}\to {\mathbb{R}}$ is differentiable almost everywhere with respect to the Haar measure on ${\mathbb{G}}$. Note that Theorem \[pansutheorem\] also holds for Carnot group targets [@Pan89] and even for suitable infinite dimensional targets [@MR; @MPS17]. \[defUDSabstract\] A set $N\subset {\mathbb{G}}$ is called a *universal differentiability set (UDS)* if every Lipschitz map $f\colon {\mathbb{G}}\to {\mathbb{R}}$ is differentiable at a point of $N$. Theorem \[pansutheorem\] implies that every positive measure subset of ${\mathbb{G}}$ is a UDS [@Mag01]. 
In Euclidean space ${\mathbb{R}}^{n}$ for $n>1$, where the group law is simply addition and the step is $1$, measure zero UDS exist and can be made compact and of Hausdorff and Minkowski dimension one [@Pre90; @DM12; @DM14]. All step 2 Carnot groups contain a measure zero UDS of Hausdorff dimension one with respect to the CC metric [@PS16; @LPS17]. Note that the Hausdorff dimension of any UDS must be at least one [@LPS17]. Define the horizontal projection $p\colon {\mathbb{G}}\to {\mathbb{R}}^{r}$ by $p(x):=(x_{1},\ldots, x_{r})$. We now recall some relevant results from [@LPS17]. \[distanceinequality\] Let $u=\exp(E)$ for some $E\in V_{1}$. Then $$d(uz) \geq d(u)+ \langle p(z), p(u)/d(u)\rangle \qquad \mbox{for any }z\in {\mathbb{G}}.$$ Moreover, if the CC distance $d\colon {\mathbb{G}}\to {\mathbb{R}}$ is differentiable at $u=\exp(E)$, then its Pansu differential at $u$ takes the form $z\mapsto \langle p(z), p(u)/d(u)\rangle$. Recall that $\omega$ is an inner product norm on $V_{1}$ making the basis $X_{1}, \ldots, X_{r}$ of $V_{1}$ orthonormal. We have the following connection between directional derivatives and the Lipschitz constant of a Lipschitz map. \[lipismaximal\] Let $f\colon {\mathbb{G}}\to {\mathbb{R}}$ be a Lipschitz map. Then $$\mathrm{Lip}(f)=\sup\{|Ef(x)| \colon x\in {\mathbb{G}}, \, E\in V_{1}, \, \omega(E)=1, \, Ef(x) \mbox{ exists}\}.$$ This justifies the following definition of a maximal directional derivative. \[maximal\] Let $f\colon {\mathbb{G}}\to \mathbb{R}$ be Lipschitz and let $E\in V_{1}$ with $\omega(E)=1$. We say that a directional derivative $Ef(x)$ is *maximal* if $|Ef(x)|=\mathrm{Lip}(f)$. In Euclidean spaces (and Banach spaces with a differentiable norm), maximality of a directional derivative suffices for differentiability. The following proposition from [@LPS17] gives a condition for ‘maximality implies differentiability’ using the differentiability of the CC distance. \[equivalence\] Let $E\in V_{1}$ with $\omega(E)=1$.
Then the following are equivalent. 1. The CC distance $d$ is differentiable at $\exp(E)$. 2. The following implication holds: whenever $f\colon {\mathbb{G}}\to {\mathbb{R}}$ is Lipschitz and $Ef(x)$ is maximal for some $x\in {\mathbb{G}}$, then $f$ is differentiable at $x$. Known constructions of measure zero UDS rely upon a stronger implication, namely that the existence of an *almost maximal* directional derivative suffices for differentiability. To investigate this stronger implication, deformability as defined below will be important. First recall that two horizontal curves $f_{1}\colon [a,b]\to {\mathbb{G}}$ and $f_{2}\colon [b,c]\to {\mathbb{G}}$ with $f_{1}(b)=f_{2}(b)$ can be joined to form a horizontal curve $f\colon [a,c]\to {\mathbb{G}}$ given by $f(t)=f_{1}(t)$ if $a\leq t\leq b$ and $f(t)=f_{2}(t)$ if $b\leq t\leq c$. Similarly one can join any finite number of horizontal curves provided the end of each curve agrees with the start of the subsequent curve. \[deform\] We say that $E\in V_{1}$ with $\omega(E)=1$ is *deformable* if there exist $C_{E}>0$, $N_{E}\in {\mathbb{N}}$ and a map $\Delta_{E}\colon (0,\infty) \to (0, \infty)$ such that the following condition holds. For every $0<s<1$, $\eta \in (0,\infty)$, $0<\Delta<\Delta_{E}(\eta)$ and $u\in {\mathbb{G}}$ with $d(u)\leq 1$, there is a Lipschitz horizontal curve $g\colon {\mathbb{R}}\to {\mathbb{G}}$ formed by joining at most $N_{E}$ horizontal lines such that 1. $g(t)=\exp(tE)$ for $|t|\geq s$, 2. $g(\zeta)=\delta_{\Delta s}(u)$, where $\zeta:= \langle \delta_{\Delta s}(u),E(0)\rangle$, 3. $\mathrm{Lip}_{{\mathbb{G}}}(g)\leq 1+\eta \Delta$, 4. $|(p\circ g)'(t)-p(E)|\leq C_{E}\Delta$ for all but finitely many $t\in {\mathbb{R}}$. \[deform2\] Consider the restriction of the curve $g$ from Definition \[deform\] to the interval $[-s,\zeta]$.
By applying left translations and reparameterizing, we obtain a curve $\varphi\colon [0,s+\zeta]\to {\mathbb{G}}$ with $\varphi(0)=0$, $\varphi(s+\zeta)=\exp(sE)\delta_{\Delta s}(u)$, and satisfying conditions 3 and 4 of Definition \[deform\]. Useful facts about exponential coordinates of the first kind ------------------------------------------------------------ In the first $r$ coordinates, the group operation and dilations $\delta_{\lambda}$ read as $$p(xy)=p(x)+p(y) \quad \mbox{and} \quad p(\delta_{\lambda}(x))=\lambda p(x) \quad \mbox{for }x,y\in {\mathbb{G}}\mbox{ and }\lambda>0.$$ Let $e_{1}, \ldots, e_{n}$ be the standard basis vectors of ${\mathbb{R}}^{n}$. If $1\leq j\leq r$, the basis element $X_{j}$ of $V_{1}\subset \mathfrak{g}$ can be written as $$\label{vfcoordinates} X_{j}(x)=e_{j}+\sum_{i=r+1}^{n}q_{i,j}(x)e_{i},$$ where $q_{i,j}$ are homogeneous polynomials; in particular, $q_{i,j}(0)=0$. Using \[vfcoordinates\], it follows that $\exp (E)=E(0)$ for any $E\in V_{1}$. Thus points $u=\exp (E)$ for some $E\in V_{1}$ are exactly those of the form $u=(u_{h},0)$ for some $u_{h}\in {\mathbb{R}}^{r}$, and therefore $\exp(E)=(p(\exp(E)),0)$. If $E\in V_{1}$, it follows from \[vfcoordinates\] that $p(E(x))$ is independent of $x\in {\mathbb{G}}$. Hence one can unambiguously define $p(E)\in {\mathbb{R}}^{r}$ for every $E\in V_{1}$. The inner product norm $\omega$ is equivalently given by $\omega(E)=|p(E)|$. From Definition \[horizontalcurve\] and \[vfcoordinates\] we notice that $L_{{\mathbb{G}}}(\gamma)=L_{{\mathbb{E}}}(p \circ \gamma)$, where $L_{{\mathbb{E}}}$ is the Euclidean length of a curve in ${\mathbb{R}}^{r}$. This implies that $$d(x,y)\geq |p(y)-p(x)| \quad \mbox{for all} \quad x, y\in {\mathbb{G}}.$$ Lemma \[horizontaldistances\] and Lemma \[lipschitzhorizontal\] below give useful facts about length and distance in coordinates. They can be proved exactly as in [@PS16 Lemma 2.8 and Lemma 2.9]. \[horizontaldistances\] If $E\in V_{1}$ then the following facts hold. 1. $|E(0)|=\omega(E)=d(E(0))$, 2.
$d(x,x\exp(tE))=t\omega(E)$ for any $x\in {\mathbb{G}}$ and $t\in {\mathbb{R}}$. \[lipschitzhorizontal\] Suppose that $\gamma \colon I \to {\mathbb{G}}$ is a horizontal curve. Then $$\mathrm{Lip}_{{\mathbb{G}}}(\gamma) = \mathrm{Lip}_{{\mathbb{E}}}(p \circ \gamma).$$ The following lemma gives easy facts about the simplest ${\mathbb{G}}$-linear maps. It can be easily proved, e.g. as in [@PS16 Lemma 5.2]. \[lemmascalarlip\] Suppose $E\in V_{1}$ with $\omega(E)=1$ and let $L\colon {\mathbb{G}}\to {\mathbb{R}}$ be the function defined by $L(x)=\langle x, E(0) \rangle$. Then the following facts hold. 1. $L$ is ${\mathbb{G}}$-linear and $\mathrm{Lip}_{{\mathbb{G}}}(L) = 1$, 2. for every $x\in{\mathbb{G}}$ and every $\tilde E\in V_{1}$ one has $$\tilde{E}L(x)=L(\tilde{E}(0))=\langle p(\tilde{E}), p(E) \rangle.$$ Free Carnot groups and model filiform groups -------------------------------------------- Recall that a homomorphism between Lie algebras is a linear map that preserves the Lie bracket, while isomorphisms are bijective homomorphisms. Free-nilpotent Lie algebras are then defined as follows (e.g. [@BLU07 Definition 14.1.1]). \[freeliealgebra\] Let $r\geq 2$ and $s\geq 1$ be integers. We say that $\mathcal{F}_{r,s}$ is the *free-nilpotent Lie algebra* with $r$ *generators* $x_1, \ldots, x_{r}$ of *step* $s$ if: 1. $\mathcal{F}_{r,s}$ is a Lie algebra generated by elements $x_1, \ldots, x_r$, 2. $\mathcal{F}_{r,s}$ is nilpotent of step $s$ (i.e., nested Lie brackets of length $s+1$ are $0$), 3. for every Lie algebra $\mathfrak{g}$ that is nilpotent of step $s$ and for every map $\Phi\colon \{x_1, \ldots, x_r\}\to \mathfrak{g}$, there is a unique homomorphism of Lie algebras $\tilde{\Phi}\colon \mathcal{F}_{r,s} \to \mathfrak{g}$ that extends $\Phi$. We next define free Carnot groups, e.g. [@BLU07 Definition 14.1.3]. 
\[freecarnotgroup\] The *free Carnot group* with rank $r$ and step $s$ is the Carnot group whose Lie algebra is isomorphic to the free-nilpotent Lie algebra $\mathcal{F}_{r, s}$. We denote it by ${\mathbb{F}}_{r,s}$. By saying that two Carnot groups are isomorphic we simply mean that they are isomorphic as Lie groups, with an isomorphism that preserves the stratification. Since Carnot groups are simply connected Lie groups, any homomorphism $\phi$ between their Lie algebras lifts to a Lie group homomorphism $F$ between the Carnot groups satisfying $dF=\phi$. Intuitively, model filiform groups are the Carnot groups with the simplest Lie brackets possible while still having arbitrarily large step. The formal definition is as follows. \[filiform\] Let $n\geq 2$. The *model filiform group of step $n-1$* is the Carnot group $\mathbb{E}_{n}$ whose Lie algebra $\mathcal{E}_{n}$ admits a basis $X_{1}, \ldots, X_{n}$ for which the only non-vanishing bracket relations are given by $[X_{i},X_{1}]=X_{i+1}$ for $1<i<n$. The stratification of $\mathcal{E}_{n}$ is $\mathcal{E}_{n}=V_{1}\oplus \cdots \oplus V_{n-1}$ with $V_{1}=\mathrm{Span}\{X_{1}, X_{2}\}$ and $V_{i}=\mathrm{Span}\{X_{i+1}\}$ for $1<i<n$. Throughout the paper, we will view the model filiform group ${\mathbb{E}}_{n}$ in first exponential coordinates as ${\mathbb{R}}^{n}$ with group operation obtained from the Lie brackets by the BCH formula \[BCH\]. Differentiability of the CC distance in Carnot groups {#CCdifferentiability} ===================================================== In this section we investigate the differentiability of the CC distance at endpoints of horizontal vectors. By Proposition \[equivalence\], this is equivalent to the implication ‘maximality implies differentiability’. Deformability implies differentiability of the CC distance ---------------------------------------------------------- We first observe that if $E$ is a deformable direction then the CC distance is differentiable at $\exp(E)$.
At present we do not know whether the converse holds. \[deformimpliesdiff\] Suppose ${\mathbb{G}}$ is a Carnot group and let $E\in V_{1}$ with $\omega(E)=1$ be deformable. Then the CC distance is differentiable at $\exp(E)$. First notice that Lemma \[distanceinequality\] gives $d(\exp(E)z)\geq 1 +\langle \exp(E),z\rangle$ for any $z\in \mathbb{G}$. Hence it suffices to derive a suitable upper bound for $d(\exp(E)z)$. Let $\eta>0$ and $\Delta_{E}\colon (0,\infty)\to (0,\infty)$ be as in Definition \[deform\]. Suppose $z\in {\mathbb{G}}$ satisfies $d(z)< \min(\Delta_{E}(\eta)/2, \ 1)$ and let $s=1/2$. Then we may choose $u\in {\mathbb{G}}$ with $d(u)=1$ and $0<\Delta<\Delta_{E}(\eta)$ so that $z=\delta_{\Delta s}(u)$. By applying Definition \[deform\] with this choice of $\eta, s, u, \Delta$ we find a Lipschitz horizontal curve $g\colon {\mathbb{R}}\to {\mathbb{G}}$ satisfying $\mathrm{Lip}(g)\leq 1+\eta\Delta$, $g(-1)=\exp(-E)$ and $g(\zeta)=z$, where $\zeta=\langle z, E(0)\rangle$. Since $|\zeta|\leq 1$, we now estimate as follows: $$\begin{aligned} d(\exp(E)z)=d(\exp(-E),z)&=d(g(-1),g(\zeta))\\ &\leq (1+\eta\Delta)|1+\zeta|\\ &\leq 1+\langle z, E(0)\rangle + 4\eta d(z)\\ &\leq 1+\langle z, E(0)\rangle + o(d(z)).\end{aligned}$$ Here $o(d(z))/d(z)\to 0$ as $z\to 0$, which follows because $2\eta\Delta/d(z)=2\eta\Delta/(\Delta s)=4\eta$ and $\eta$ can be made arbitrarily small by making $d(z)$ sufficiently small. A strong example of non-differentiability of the CC distance ------------------------------------------------------------ The CC distance in the Engel group (which is the model filiform group ${\mathbb{E}}_{4}$) is not differentiable at $\exp(X_{2})$ [@LPS17]. In other words, the implication ‘maximality implies differentiability’ fails for the direction $X_{2}$ in the Engel group. We now derive some consequences of this result for other Carnot groups. 
Fix two Carnot groups ${\mathbb{G}}$ and ${\mathbb{H}}$ of rank $r$ which have horizontal layers $V$ and $W$ with the following property. There exist bases $\mathbf{X}=(X_{1}, \ldots, X_{r})$ and $\mathbf{Y}=(Y_{1}, \ldots, Y_{r})$ of $V$ and $W$ respectively, together with a Lie group homomorphism $F\colon {\mathbb{G}}\to {\mathbb{H}}$ such that $F_{*}(X_{i})=Y_{i}$ for $1\leq i\leq r$. Equip ${\mathbb{G}}$ and ${\mathbb{H}}$ with the CC metrics $d_{{\mathbb{G}}}$ and $d_{{\mathbb{H}}}$ induced by the bases $\mathbf{X}$ and $\mathbf{Y}$ respectively. We view both ${\mathbb{G}}$ and ${\mathbb{H}}$ in exponential coordinates of the first kind and let $p_{{\mathbb{G}}}\colon {\mathbb{G}}\to {\mathbb{R}}^{r}$ and $p_{{\mathbb{H}}}\colon {\mathbb{H}}\to {\mathbb{R}}^{r}$ denote the horizontal projections. For any $u\in {\mathbb{G}}$ we have $p_{{\mathbb{H}}}(F(u))=p_{{\mathbb{G}}}(u)$. Also if $u=(u_{h},0)\in {\mathbb{G}}$ for some $u_{h}\in {\mathbb{R}}^{r}$, then $F(u)=(u_{h},0)\in {\mathbb{H}}$. The following proposition was proven in [@LPS17]. \[quotientdiffCC\] Suppose the CC distance in ${\mathbb{G}}$ is differentiable at $\exp(E)$ for some $E\in V_{1}$. Then the CC distance in ${\mathbb{H}}$ is differentiable at $\exp(F_{*}(E))$. Our first result about non-differentiability of the CC distance is the following. \[X2nogood\] In any model filiform group ${\mathbb{E}}_{n}$ with $n\geq 4$, the CC distance is not differentiable at $\exp(X_{2})$. Let $X_{1},\ldots, X_{n}$ be a basis of the Lie algebra $\mathcal{E}_{n}$ such that the only non-vanishing bracket relations are given by $[X_{i}, X_{1}]=X_{i+1}$ for $1<i<n$. Let $Y_{1}, \ldots, Y_{4}$ denote a similar basis for the Lie algebra $\mathcal{E}_{4}$. Define a linear map $\Phi\colon \mathcal{E}_{n}\to \mathcal{E}_{4}$ by $\Phi(X_{i})=Y_{i}$ for $1\leq i\leq 4$ and $\Phi(X_{i})=0$ for $i>4$. 
It is easy to see that $\Phi$ is a Lie algebra homomorphism, hence lifts to a Lie group homomorphism $F\colon {\mathbb{E}}_{n}\to {\mathbb{E}}_{4}$ satisfying $F_{*}(X_{i})=Y_{i}$ for $1\leq i\leq 2$. By [@LPS17 Theorem 4.2], the CC distance in ${\mathbb{E}}_{4}$ is not differentiable at $\exp(Y_{2})$. Hence, by Proposition \[quotientdiffCC\], the CC distance in ${\mathbb{E}}_{n}$ is not differentiable at $\exp(X_{2})$. Recall that ${\mathbb{E}}_{2}$ is just ${\mathbb{R}}^{2}$ and ${\mathbb{E}}_{3}$ is a Carnot group of step 2. Combining the results of [@LPS17] with Proposition \[deformimpliesdiff\] and Proposition \[X2nogood\] for $n\geq 4$, we obtain the following corollary. \[pmX2\] In the model filiform group ${\mathbb{E}}_{n}$, the directions $\pm X_{2}$ are deformable for $n=2$ and $n=3$. They are not deformable for $n\geq 4$. Our second result addressing non-differentiability of the CC distance gives an example of a Carnot group where the CC distance fails to be differentiable at the endpoint of every horizontal vector. This is Theorem \[strongnondiff\]. Fix an orthonormal basis $X_{1}, X_{2}$ of the horizontal layer of ${\mathbb{F}}_{2,3}$. It suffices to show that the CC distance in ${\mathbb{F}}_{2,3}$ is not differentiable at $\exp(E)$ whenever $E=aX_{1}+bX_{2}$ with $a^2+b^2=1$. Let $Y_{1}, Y_{2}$ be a basis of the horizontal layer in the Engel group ${\mathbb{E}}_{4}$, where the CC distance defined using $Y_{1}, Y_{2}$ is not differentiable at $\exp(Y_{2})$. Define $W_{1}=bY_{1}+aY_{2}$, $W_{2}=-aY_{1}+bY_{2}$. Notice $W_{1}, W_{2}$ are orthonormal with respect to the inner product induced by $Y_{1}, Y_{2}$. Hence the CC distance in ${\mathbb{E}}_{4}$ obtained from $W_{1}, W_{2}$ is the same as the CC distance obtained from $Y_{1}, Y_{2}$. Now define a linear map $\Phi$ from the horizontal layer of ${\mathbb{F}}_{2,3}$ to the horizontal layer of ${\mathbb{E}}_{4}$ by $\Phi(X_{i})=W_{i}$ for $i=1,2$. 
Using the definition of free Lie algebra and lifting, we obtain a Lie group homomorphism $F\colon {\mathbb{F}}_{2,3}\to {\mathbb{E}}_{4}$ such that $F_{*}(X_{i})=W_{i}$ for $i=1,2$. Since $a^2+b^2=1$, we have: $$F_{*}(aX_{1}+bX_{2})=aW_{1}+bW_{2}=Y_{2}.$$ Since the CC distance in ${\mathbb{E}}_{4}$ is not differentiable at $\exp(Y_{2})$, we deduce using Proposition \[quotientdiffCC\] that the CC distance in ${\mathbb{F}}_{2,3}$ cannot be differentiable at $\exp(aX_{1}+bX_{2})$. Deformability in model filiform groups {#CurvesFiliform} ====================================== In this section we work in ${\mathbb{E}}_{n}$ for some $n\geq 3$. Our goal is to prove that every horizontal direction in ${\mathbb{E}}_{n}$ except possibly $\pm X_{2}$ is deformable. For simplicity of notation, in this section we identify the model filiform group ${\mathbb{E}}_{n}$ with its Lie algebra $\mathcal{E}_{n}$. Hence for $E\in \mathcal{E}_{n}$ we will simply write $E$ instead of $\exp(E)$. Construction of horizontal curves in filiform groups ---------------------------------------------------- We start by proving two lemmas that show how a horizontal line can be perturbed to reach a nearby point. The first lemma shows how to reach a point whose $n$th coordinate is prescribed, with small errors in the other vertical coordinates. Given $A\in {\mathbb{R}}$, we will use the notation $$\label{EE'} E:=X_1+A X_2 \qquad \mbox{and} \qquad E':=AX_1-X_2.$$ Note that $E$ and $E'$ are orthogonal with respect to $\omega$ for any $A\in {\mathbb{R}}$. \[Xn\] For all $A\in{\mathbb{R}}$ there exist polynomials $P_{i}(x)$ for $3\leq i \leq n$ depending on $n$ such that - each $P_{i}(x)$ is divisible by $x^2$ - the coefficients of each $P_{i}(x)$ are polynomials in $A$ and the following holds.
Then, for every $\eta \in {\mathbb{R}}$ there exist $\eta_{i}\in \{\pm \eta\}$ for $1\leq i\leq 2^{n-2}$ such that $$\prod_{i=1}^{2^{n-2}} \frac{1}{2^{n-2}} (E+\eta_i E')= E+C_{n}(A^2+1)\eta X_n +\sum_{i=3}^{n} P_i(\eta) X_i,$$ where $C_{n}\neq 0$ is a constant depending on $n$ and the sign of each $\eta_{i}$ depends on $i$ but not on $\eta$ or $A$. We define products $p_{k}(\eta)$ inductively by $$\label{p1} p_{1}(\eta):=(E+\eta E')(E-\eta E')$$ and $$\label{pi} \qquad p_{k+1}(\eta):=p_{k}(\eta)p_{k}(-\eta)$$ for all $k\geq 1$. Choose $\eta_i\in \{\pm \eta\}$ for $1\leq i\leq 2^{n-2}$ such that $$p_{n-2}(\eta) = \prod_{i=1}^{2^{n-2}} (E+\eta_i E').$$ To establish the lemma, it suffices to prove there exist $C=C_{n}\neq 0$ and polynomials $P_{i}(x)$ as in the statement such that $$\label{p_{n-2}} p_{n-2}(\eta)=2^{n-2} E + 2^{n-2}C(A^2+1)\eta X_{n}+\sum_{i=3}^{n} P_i(\eta) X_i.$$ To do so we prove by induction that, for every $1\leq k\leq n-2$, $p_{k}$ has the form $$\label{pkform} p_{k}(\eta)=2^{k}E+\sum_{i=4}^{n} S_i(\eta) X_i + \eta \sum_{i=k+2}^n\lambda_i X_{i},$$ where $\lambda_i$ are constants with $\lambda_{k+2}\neq 0$ and $S_i(x)$ are polynomials divisible by $x^2$. To begin proving \[pkform\], notice that Definition \[filiform\], \[EE'\] and \[p1\] give $$\label{p1form} p_1(\eta)=2E-(A^2+1)\eta X_3 + \sum_{i=4}^{n} Q_i(\eta) X_i,$$ where the polynomials $Q_i(x)$ are divisible by $x$. Hence \[pkform\] holds for $k=1$. Next we suppose that $p_{k}$ satisfies \[pkform\] for some $1\leq k \leq n-3$; we will show that $p_{k+1}$ has the desired form too.
Firstly, the BCH formula \[BCH\] gives $$\begin{aligned} \label{pkproduct} p_{k+1}(\eta)&=p_{k}(\eta)p_{k}(-\eta)\nonumber \\ &=p_k(\eta)+p_k(-\eta)+\frac{1}{2}[p_{k}(\eta),p_{k}(-\eta)]\nonumber \\ &\qquad \qquad+\mbox{brackets of length $\geq 3$}.\end{aligned}$$ A simple computation using \[pkform\] yields $$\label{pksum} p_k(\eta)+p_k(-\eta)=2^{k+1} E+ \sum_{i=4}^{n} (S_i(\eta)+S_i(-\eta)) X_i$$ and $$\label{pkbracket} \frac{1}{2}[p_k(\eta),p_k(-\eta)]=2^{k}\eta \sum_{i=k+2}^{n-1} \lambda_i X_{i+1}+2^{k-1}\sum_{i=4}^{n-1} (S_i(\eta) -S_i(-\eta)) X_{i+1}.$$ Note that $S_i(\eta)-S_i(-\eta)$ is divisible by $\eta^2$ because each polynomial $S_{i}(x)$ is divisible by $x^2$. The coefficient of $X_{k+3}$ in the first term on the right hand side of \[pkbracket\] is $2^{k} \lambda_{k+2} \eta$, with $\lambda_{k+2}\neq 0$ coming from the induction hypothesis. The remaining terms in \[pkproduct\] are linear combinations of nested commutators of length greater than or equal to three, i.e. $$[Z_1,[Z_2,\ldots,[Z_{M-1},Z_M]]\ldots]$$ where $Z_i\in\{p_k(\eta),p_k(-\eta)\}$ and $M\geq 3$. By the definition of $Z_i$ we get that each of the previous commutators is a constant multiple of $$\begin{aligned} &[X_1,[X_1,\ldots,[Z_{M-1},Z_M]]\ldots]\\ &\qquad =2^{k+1}\eta \sum_{i=k+2}^{n+1-M}\lambda_i X_{i+M-1}+2^k\sum_{i=4}^{n+1-M} (S_i(\eta) -S_i(-\eta)) X_{i+M-1}.\end{aligned}$$ The leading term in the first sum above is a multiple of $X_{j}$ with $j\geq k+4$, while the second sum consists of terms with coefficients divisible by $\eta^2$. Combining this with \[pksum\] and \[pkbracket\] shows that $p_{k+1}$ satisfies \[pkform\] with $k$ replaced by $k+1$. It follows by induction that $p_{k}$ has the desired form for every $k$. Evaluating \[pkform\] at $k=n-2$ gives the claimed form of $p_{n-2}(\eta)$. This proves the lemma. We now use Lemma \[Xn\] and an induction argument on the dimension of the filiform group to show how a horizontal line can be perturbed to reach a nearby point, without changing its length or direction too much.
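As a sanity check on the computations above (our own illustration, not part of the proof), the Engel group ${\mathbb{E}}_{4}$ has step $3$, so the BCH series \[BCH\] terminates after the length three commutators and the products $p_{k}(\eta)$ can be computed exactly. The snippet below implements the bracket of $\mathcal{E}_{4}$ and the BCH product with exact rational arithmetic, and confirms \[p1form\] together with one inductive step: for $n=4$ one finds $p_{2}(\eta)=4E-2(A^{2}+1)\eta X_{4}+\tfrac{2}{3}A(A^{2}+1)\eta^{2}X_{4}$, i.e. the claimed form of $p_{n-2}$ with $C_{4}=-1/2\neq 0$ and a remainder divisible by $\eta^{2}$. The function names and the sample values of $A$ and $\eta$ are ours.

```python
# Sketch: exact BCH product in the Engel group E_4 (model filiform, step 3),
# basis X_1,...,X_4 with [X_i, X_1] = X_{i+1} for i = 2, 3.
from fractions import Fraction as F

def bracket(u, v):
    # Bilinearity gives [u, v] = (u2 v1 - u1 v2) X_3 + (u3 v1 - u1 v3) X_4.
    return [F(0), F(0), u[1]*v[0] - u[0]*v[1], u[2]*v[0] - u[0]*v[2]]

def bch(u, v):
    # Step 3, so the BCH series terminates:
    # u <> v = u + v + 1/2 [u,v] + 1/12 ([u,[u,v]] + [v,[v,u]]).
    b = bracket(u, v)
    t1 = bracket(u, b)
    t2 = bracket(v, bracket(v, u))
    return [u[i] + v[i] + b[i]/2 + (t1[i] + t2[i])/12 for i in range(4)]

A, eta = F(2), F(1, 3)              # arbitrary sample values
c = A*A + 1
E  = [F(1), A, F(0), F(0)]          # E  = X_1 + A X_2
Ep = [A, F(-1), F(0), F(0)]         # E' = A X_1 - X_2

def p1(e):
    # p_1(e) = (E + e E')(E - e E') as a group product.
    a = [E[i] + e*Ep[i] for i in range(4)]
    b = [E[i] - e*Ep[i] for i in range(4)]
    return bch(a, b)

# Base case: p_1(eta) = 2E - (A^2+1) eta X_3 + (A (A^2+1) eta^2 / 3) X_4.
assert p1(eta) == [F(2), 2*A, -c*eta, A*c*eta**2/3]

# One inductive step: p_2(eta) = p_1(eta) p_1(-eta)
#   = 4E - 2(A^2+1) eta X_4 + (2/3) A (A^2+1) eta^2 X_4.
p2 = bch(p1(eta), p1(-eta))
assert p2 == [F(4), 4*A, F(0), -2*c*eta + 2*A*c*eta**2/3]
```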
\[Filiformcurve\] For every $A_{0}\in {\mathbb{R}}$, there are numbers $$\varepsilon=\varepsilon(A_{0},n)>0, \qquad K=K(A_{0},n)>0, \qquad N=N(n)\in {\mathbb{N}}$$ so that the following holds for $A\in (A_{0}-\varepsilon, A_{0}+\varepsilon)$ and $a_{2}, \ldots, a_{n} \in (-\varepsilon, \varepsilon)$. There exist $\theta_{1}, \ldots, \theta_{N} \in {\mathbb{R}}$ which depend smoothly on $a_{2}, \ldots, a_{n}$ with $|\theta_{i}|\leq K\max_{j\geq 2}|a_{j}|$ for $1\leq i\leq N$ and $$\label{Filiformcurveeq} \prod_{i=1}^{N} \frac{1}{N} (E+\theta_{i}E') = E+a_{2}E'+a_{3}X_{3}+\cdots+a_{n}X_{n}.$$ We prove the lemma by induction on $n$. If $n=2$ the result is clear as ${\mathbb{E}}_{2}$ is simply Euclidean space ${\mathbb{R}}^{2}$ as a Carnot group. Suppose the statement holds in ${\mathbb{E}}_{n-1}$; we will show that it also holds for ${\mathbb{E}}_{n}$. By the induction hypothesis, there exist $$\varepsilon=\varepsilon(A_{0},n-1)>0, \qquad K=K(A_{0},n-1)>0, \qquad N=N(n-1) \in {\mathbb{N}}$$ such that the following holds in ${\mathbb{E}}_{n-1}$. For any choice of $A\in (A_{0}-\varepsilon, A_{0}+\varepsilon)$ and $a_{2}, \ldots, a_{n-1} \in (-\varepsilon, \varepsilon)$, there exist $\theta_{1}, \ldots, \theta_{N}$ which depend smoothly on $a_2, \ldots, a_{n-1}$ with $|\theta_{i}|\leq K\max_{j\geq 2}|a_{j}|$ for $1\leq i\leq N$ and $$\label{hypothesis} \prod_{i=1}^{N} \frac{1}{N} (E+\theta_{i}E') = E+a_{2}E'+a_{3}X_{3}+\cdots+a_{n-1}X_{n-1} \qquad \mbox{in }{\mathbb{E}}_{n-1}.$$ We now lift the above equation to ${\mathbb{E}}_{n}$. In other words, we consider $E$ and $E'$ as elements of the Lie algebra of ${\mathbb{E}}_{n}$ in the natural way. All calculations when computing the product using the BCH formula remain the same, except for $[X_{n-1},X_{1}]$ which will be equal to $X_{n}$ rather than $0$. An easy calculation shows $$\label{secondterm} [E+\theta_{1}E', E+\theta_{2}E']=(1+A^2)(\theta_{2}-\theta_{1})X_{3}.$$ From now on we work in ${\mathbb{E}}_{n}$. 
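For the record, the 'easy calculation' above reads, by bilinearity and antisymmetry, with $E=X_{1}+AX_{2}$, $E'=AX_{1}-X_{2}$ and the convention $[X_{2},X_{1}]=X_{3}$: $$[E+\theta_{1}E', E+\theta_{2}E']=(\theta_{2}-\theta_{1})[E,E']=(\theta_{2}-\theta_{1})\big(-[X_{1},X_{2}]+A^{2}[X_{2},X_{1}]\big)=(1+A^2)(\theta_{2}-\theta_{1})X_{3}.$$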
By the BCH formula , and : $$\begin{aligned} p_{1}&:=\prod_{i=1}^{N} \frac{1}{N} (E+\theta_{i}E') \\ &= E+a_{2}E'+a_{3}X_{3}+\cdots+a_{n-1}X_{n-1} + L(\theta_{1}, \ldots, \theta_{N})X_{n}\\ &= (1+Aa_{2})X_{1}+(A-a_{2})X_{2}+a_{3}X_{3}+\cdots+a_{n-1}X_{n-1} + L(\theta_{1}, \ldots, \theta_{N})X_{n},\end{aligned}$$ where $L(\theta_{1}, \ldots, \theta_{N})$ is a polynomial in $\theta_{1}, \ldots, \theta_{N}$ (with coefficients depending on $A$) with no constant term due to . By using Lemma \[EE’\], for any $\eta \in {\mathbb{R}}$ we can choose $\eta_{i}\in \{\pm \eta\}$ for $1\leq i\leq 2^{n-2}$ with sign depending on $i$ but not $\eta$ such that $$\begin{aligned} p_{2}&:=\prod_{i=1}^{2^{n-2}} \frac{1}{2^{n-2}} (E+\eta_i E')\\ &= E+C(A^2+1)\eta X_n +\sum_{i=3}^{n} P_i(\eta) X_i\\ &= X_{1}+AX_{2} +C(A^2+1)\eta X_n +\sum_{i=3}^{n} P_i(\eta) X_i,\end{aligned}$$ where $P_{i}(x)$ are polynomials divisible by $x^2$ and $C=C_{n}\neq 0$. We now analyze $p_{1}p_{2}$. First notice that $$\begin{aligned} p_{1}+p_{2}&= (2+Aa_{2})X_{1}+(2A-a_{2})X_{2}+a_{3}X_{3}+\ldots + a_{n-1}X_{n-1}\\ &\qquad \qquad +(L(\theta_{1}, \ldots, \theta_{N})+C(A^2+1)\eta)X_{n} + \sum_{i=3}^{n}P_{i}(\eta)X_{i}\end{aligned}$$ and $$\begin{aligned} [p_{1},p_{2}]&=-(A^2+1)a_{2}X_{3}+a_{3}X_{4}+\ldots +a_{n-1}X_{n}\\ & \qquad \qquad -\sum_{i=4}^{n} (1+Aa_{2})P_{i-1}(\eta)X_{i}.\end{aligned}$$ By using the BCH formula , the coefficients of $E, E', X_{3}, \ldots, X_{n}$ in $p_{1}p_{2}$ are given by $$\begin{aligned} &E \qquad &2\\ &E' &a_{2}\\ &X_{3} &a_{3}+F_{3}(a_{2})+Q_{3}(\eta)\\ &\ldots &\ldots\\ &X_{i} &a_{i}+F_{i}(a_{2}, \ldots, a_{i-1}) + Q_{i}(\eta, a_{2})\\ &\ldots &\ldots\\ &X_{n} &L(\theta_{1}, \ldots, \theta_{N})+C(A^2+1)\eta + F_{n}(a_{2}, \ldots, a_{n-1}) + Q_{n}(\eta, a_{2}),\end{aligned}$$ where, for $i=3,\ldots, n$, - $F_{i}(a_{2}, \ldots, a_{i-1})$ is a polynomial with no constant term whose coefficients depend smoothly on $A$, - $Q_{i}(\eta, a_{2})$ is a polynomial divisible by $\eta^{2}$ 
whose coefficients depend smoothly on $A$. Define $\Phi_{A}\colon (-\varepsilon,\varepsilon)^{n-1} \to {\mathbb{R}}^{n-1}$ to be the function of $a_{2}, \ldots, a_{n-1}, \eta$ whose coordinates are given by the coefficients of $E', X_{3}, X_{4}, \ldots, X_{n}$ in the above table. Recall that $\varepsilon=\varepsilon(A_{0}, n-1)$ was chosen using the induction hypothesis, which implies $\theta_{1}, \ldots, \theta_{N}$ depend smoothly on $a_{2}, \ldots, a_{n-1}$ whenever $(a_{2}, \ldots, a_{n-1}, \eta) \in (-\varepsilon,\varepsilon)^{n-1}$. Notice that $\Phi_{A}(0)=0$ and the equality $$\label{tosolve} \delta_{1/2}(p_{1}p_{2})= E+b_{2}E'+b_{3}X_{3}+\ldots +b_{n}X_{n}$$ is equivalent to $$\label{tosolve2} \Phi_{A}(a_{2}, \ldots, a_{n-1}, \eta)= (2b_{2}, 2^2 b_{3}, \ldots, 2^{i-1}b_{i}, \ldots, 2^{n-1}b_{n}).$$ \[claimIFT\] There exist $\tilde{\varepsilon}=\tilde{\varepsilon}(A_{0}, n)>0$ and $\tilde{K}=\tilde{K}(A_{0},n)>0$ such that for all $A\in (A_{0}-\tilde{\varepsilon}, A_{0}+\tilde{\varepsilon})$ and $b_{2}, \ldots, b_{n} \in (-\tilde{\varepsilon}, \tilde{\varepsilon})$, the equation can be solved for $a_{2}, \ldots, a_{n-1}, \eta$. One can choose the solutions so that: 1. $a_{2}, \ldots, a_{n-1}, \eta$ depend smoothly on $b_{2}, \ldots, b_{n}$, 2. $|a_{i}|, |\eta| \leq \tilde{K}\max_{j\geq 2} |b_{j}|$. Notice first that $(\partial Q_{i}/\partial \eta)(0)=0$ for each $i$, since $Q_{i}$ is divisible by $\eta^{2}$. Hence $T_{A}:=\Phi_{A}'(0)$ is a lower triangular matrix with determinant $C(A^2+1)\neq 0$. By the inverse function theorem, $\Phi_{A}$ is invertible with a $C^{1}$ inverse in a neighborhood of $0$. In other words, for each $A$ and $n$, given $b_{2}, \ldots, b_{n}$ sufficiently small, there exist $a_{2}, \ldots, a_{n-1}, \eta$ which depend smoothly on $b_{2}, \ldots, b_{n}$ such that holds.
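To see the triangular structure of $T_{A}$ concretely (a reading aid, with the variables ordered as $(a_{2}, a_{3}, \ldots, a_{n-1}, \eta)$ and the coordinates as the coefficients of $E', X_{3}, \ldots, X_{n}$), the table of coefficients gives $$T_{A}=\begin{pmatrix} 1 & & & \\ \ast & 1 & & \\ \vdots & \ddots & \ddots & \\ \ast & \cdots & \ast & C(A^{2}+1) \end{pmatrix},$$ where the entries above the diagonal vanish because each $F_{i}$ depends only on $a_{2}, \ldots, a_{i-1}$ and $(\partial Q_{i}/\partial \eta)(0)=0$; in particular $\det T_{A}=C(A^{2}+1)$.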
We must show that one can use a uniform neighborhood for all $A\in (A_{0}-\tilde{\varepsilon}, A_{0}+\tilde{\varepsilon})$, where $\tilde{\varepsilon}>0$ is sufficiently small and depends on $A_{0}$ and $n$. To establish such a neighborhood and the desired bounds on $|a_{i}|, |\eta|$, we briefly study the proof of the inverse function theorem from [@Rud76 9.2.4 Theorem]. Define $\lambda(A)>0$ by $1/\lambda(A)=2\|T_{A}^{-1}\|_{\mathrm{op}}$. As the determinant of $T_{A}$ is $C(A^2+1)$ and the entries of the adjoint of $T_{A}$ are linear combinations of products of entries of $T_{A}$, there exists $\lambda>0$ depending on $A_{0}$ and $n$ such that $\lambda(A)>\lambda$ whenever $A\in (A_{0}-\varepsilon, A_{0}+\varepsilon)$. By the mean value theorem, there is $0<\tilde{\varepsilon}<\varepsilon$ depending on $A_{0}$ and $n$ such that $\|\Phi_{A}'(x)-T_{A}\|_{\mathrm{op}}<\lambda$ whenever $\|x\|<\tilde{\varepsilon}$ and $A\in (A_{0}-\tilde{\varepsilon}, A_{0}+\tilde{\varepsilon})=:I$. It follows from [@Rud76] that for $A\in I$, the restricted map $\Phi_{A}\colon {B(0,\tilde{\varepsilon})}\to \Phi_{A}(B(0,\tilde{\varepsilon}))$ is bijective with $C^{1}$ inverse. Since the entries of $(\Phi_{A}^{-1})'(x)$ are bounded for every $A\in I$ and every $x\in B(0, \tilde{\varepsilon})$, it follows that $\Phi_{A}$ is bi-Lipschitz for $A\in I$, with bi-Lipschitz constants depending on $A_{0}$. Hence $\Phi_{A}(B(0,\tilde{\varepsilon}))$ contains a ball $B(0,\tilde{\varepsilon}/\tilde{K})$, where $\tilde{K}\geq 1$ is some constant depending on $n$ and $A_{0}$. Replacing $\tilde{\varepsilon}$ by a slightly smaller constant (still depending only on $A_{0}$ and $n$) gives the desired neighborhood for the inversion. 
Next, since $\Phi_{A}^{-1}$ is Lipschitz with some Lipschitz constant $\tilde{K}$ depending on $A_{0}$ and $n$, we have $$|\Phi_{A}^{-1}(2b_{2}, \ldots, 2^{n-1}b_{n})| \leq \tilde{K} |(2b_{2}, \ldots, 2^{n-1}b_{n})|,$$ which yields $$|(a_{2}, \ldots, a_{n-1}, \eta)| \leq 2^{n}\tilde{K} |(b_{2}, \ldots, b_{n})|.$$ This gives the desired bounds on $|a_{i}|$ and $|\eta|$. Since $\theta_{i}$ and $\eta_{i}$ depend smoothly on $a_{2}, \ldots, a_{n-1}$ and $\eta$, Claim \[claimIFT\](1) ensures that $\theta_{i}$ and $\eta_{i}$ depend smoothly on $b_{2}, \ldots, b_{n}$. Since $|\theta_{i}|\leq K\max_{j\geq 2}|a_{j}|$, Claim \[claimIFT\](2) implies that $|\theta_{i}|, |\eta|\leq K\tilde{K} \max_{j\geq 2} |b_{j}|$. To conclude, since and are equivalent, it suffices to check that $\delta_{1/2}(p_{1}p_{2})$ is of the form of the left hand side of . Indeed, since the BCH formula implies $XX=2X$ for any element $X$ of the Lie algebra, we have $$\begin{aligned} p_{1}p_{2} &= \prod_{i=1}^{N} \frac{1}{N} (E+\theta_{i}E') \prod_{i=1}^{2^{n-2}} \frac{1}{2^{n-2}} (E+\eta_i E')\\ &= \prod_{i=1}^{N2^{n-2}} \frac{1}{N2^{n-2}} (E+\tilde{\theta}_{i}E') \prod_{i=1}^{N2^{n-2}} \frac{1}{N2^{n-2}} (E+\tilde{\eta}_i E'),\end{aligned}$$ where the sequence $\tilde{\theta}_{i}$ repeats each term of $\theta_{i}$ exactly $2^{n-2}$ times and the sequence $\tilde{\eta}_i$ repeats each term of $\eta_{i}$ exactly $N$ times. Hence we can write $$\label{formproved} \delta_{1/2}(p_{1}p_{2})=\prod_{i=1}^{N2^{n-1}}\frac{1}{N2^{n-1}} (E+\Theta_{i}E'),$$ where the terms of the sequence $\Theta_{i}$ consist of the terms of $\tilde{\theta}_{i}$ followed by the terms of $\tilde{\eta}_i$. Notice that is of the form given in . This completes the proof. Deformability in filiform groups -------------------------------- We now use the horizontal curves built in Lemma \[Filiformcurve\] to show that every horizontal direction in ${\mathbb{E}}_{n}$, except possibly $\pm X_{2}$, is deformable.
Recall that by Corollary \[pmX2\], the directions $\pm X_{2}$ are deformable if and only if $n=2$ or $n=3$. \[deformFiliform\] For $n\geq 3$, every $E\in \mathcal{E}_{n}$ with $\omega(E)=1$, except for possibly $\pm X_{2}$, is deformable. Moreover, the parameters $C_{E}, N_{E}, \Delta_{E}$ related to the deformability of $E$ can be chosen so that any $\tilde{E}$ sufficiently close to $E$ is also deformable with the same parameters $$C_{\tilde{E}}=C_{E}, \qquad N_{\tilde{E}}=N_{E}, \qquad \Delta_{\tilde{E}}(\eta)=\Delta_{E}(\eta).$$ Fix $E\in \mathcal{E}_{n}$ with $\omega(E)=1$ and $E\neq \pm X_{2}$. We need to show that for some $C_{E}, N_{E}$ and $\Delta_{E}\colon (0,\infty)\to (0,\infty)$ the following holds. Given $0<s<1$, $\eta\in (0,\infty)$, $u\in {\mathbb{E}}_{n}$ with $d(u)\leq 1$ and $0<\Delta<\Delta_{E}(\eta)$, there is a Lipschitz horizontal curve $g\colon {\mathbb{R}}\to {\mathbb{E}}_{n}$, formed by joining $N_{E}$ horizontal lines, such that 1. $g(t)=\exp(tE)$ for $|t|\geq s$, 2. $g(\zeta)=\delta_{\Delta s}(u)$, where $\zeta:= \langle \delta_{\Delta s}(u),E(0)\rangle$, 3. $\mathrm{Lip}(g)\leq 1+\eta \Delta$, 4. $|(p\circ g)'(t)-p(E)|\leq C_{E}\Delta$ for all but finitely many $t\in {\mathbb{R}}$. Moreover, the same parameters $C_{E}, N_{E}, \Delta_{E}$ should work for any direction sufficiently close to $E$. Notice that for $|t|\geq s$ the curve is explicitly defined by (1) and satisfies (3) and (4). Hence our task is to extend $g(t)$ for $-s<t<\zeta$ and $\zeta<t<s$. Since the two cases are similar, we show how to handle $-s<t<\zeta$. Up to left translations and reparameterizations of the curve, it suffices to verify the following equivalent claim. \[bigclaim\] There exist $C_{E}, N_{E}$ and $\Delta_{E}\colon (0,\infty) \to (0,\infty)$ such that the following holds. 
Given any $0<s<1$, $\eta>0$, $u\in {\mathbb{E}}_{n}$ with $d(u)\leq 1$ and $0<\Delta<\Delta_{E}(\eta)$, there is a Lipschitz horizontal curve $\varphi\colon [0,s+\zeta]\to {\mathbb{E}}_{n}$, where $\zeta:= \langle \delta_{\Delta s}(u),E(0)\rangle$, formed by joining at most $N_{E}$ horizontal lines, such that $\varphi(0)=0$, $\varphi(s+\zeta)=\exp(sE)\delta_{\Delta s}(u)$, and 1. $\mathrm{Lip}(\varphi)\leq 1+\eta \Delta$, 2. $|(p\circ \varphi)'(t)-p(E)|\leq C_{E}\Delta$ for all but finitely many $t\in {\mathbb{R}}$. Moreover, the same parameters $C_{E}, N_{E}, \Delta_{E}(\eta)$ work for any direction sufficiently close to $E$. Since $E\neq \pm X_{2}$, we can write $E=aX_{1}+bX_{2}$, where $a^2+b^2=1$ and $a\neq 0$. Without loss of generality, up to changing the direction of the curve, we can assume $a>0$. Let $u:=u_{1}X_{1}+u_{2}X_{2}+\cdots + u_{n}X_{n}$. Identifying ${\mathbb{E}}_{n}$ and $\mathcal{E}_{n}$, since $E\in V_{1}$ we have $$(sE)\delta_{\Delta s}(u)=\delta_{s}(E\delta_{\Delta}(u)),$$ and a simple computation gives $$E+\delta_{\Delta}(u)=(a+\Delta u_{1})X_{1} + (b+\Delta u_{2})X_{2} + \Delta^{2}u_{3}X_{3} + \cdots + \Delta^{n-1}u_{n}X_{n},$$ $$[E, \delta_{\Delta}(u)]=\Delta(bu_{1}-au_{2})X_{3} - a\Delta^{2}u_{3}X_{4} - \cdots - a\Delta^{n-2}u_{n-1}X_{n}.$$ By the BCH formula , it is then clear that $E \delta_{\Delta}(u)$ has the form $$E \delta_{\Delta}(u)=(a+\Delta u_{1})X_{1} + (b+\Delta u_{2})X_{2} + \sum_{i=3}^{n} \eta_{i}X_{i},$$ where $\eta_{i}$ satisfy $|\eta_{i}|\leq \tilde{Q}\Delta$ for a constant $\tilde{Q}$ depending only on $n$. 
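The two displays above are elementary but index-heavy, and can be verified numerically in the case $n=4$. The following sketch (again outside the proof, assuming the bracket convention $[X_{i},X_{1}]=X_{i+1}$; the rational values of $a$, $b$, $\Delta$ and the coordinates of $u$ are arbitrary test data) checks both formulas exactly:

```python
from fractions import Fraction

def bracket(u, v):
    # bracket in E_4 with [X_i, X_1] = X_{i+1} (all other brackets zero)
    return (0, 0, u[1]*v[0] - u[0]*v[1], u[2]*v[0] - u[0]*v[2])

a, b = Fraction(3, 5), Fraction(4, 5)          # a^2 + b^2 = 1, a != 0
D = Fraction(1, 100)                           # Delta
u1, u2, u3, u4 = map(Fraction, (1, -2, 3, 5))  # coordinates of u

E = (a, b, 0, 0)
# delta_Delta(u): X_1, X_2 scale by Delta; X_i scales by Delta^(i-1)
du = (D*u1, D*u2, D**2*u3, D**3*u4)

# E + delta_Delta(u) = (a + D u1) X_1 + (b + D u2) X_2 + D^2 u3 X_3 + D^3 u4 X_4
s = tuple(E[i] + du[i] for i in range(4))
assert s == (a + D*u1, b + D*u2, D**2*u3, D**3*u4)

# [E, delta_Delta(u)] = D (b u1 - a u2) X_3 - a D^2 u3 X_4   (case n = 4)
w = bracket(E, du)
assert w == (0, 0, D*(b*u1 - a*u2), -a*D**2*u3)
print("bracket formulas confirmed")
```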
Next we write $$E \delta_{\Delta}(u) = \delta_{a+\Delta u_{1}} \left( X_{1} + \frac{b+\Delta u_{2}}{a+\Delta u_{1}}X_{2} + \sum_{i=3}^{n} \frac{\eta_{i}}{(a+\Delta u_{1})^{i-1}}X_{i} \right),$$ and we define $$A:=\frac{b+\Delta u_{2}}{a+\Delta u_{1}}, \qquad a_{2}:=0, \qquad a_{i}:=\frac{\eta_{i}}{(a+\Delta u_{1})^{i-1}} \, \mbox{ for }3\leq i \leq n.$$ Let $A_{0}:=b/a$ and let $\varepsilon=\varepsilon(A_{0},n)$, $K=K(A_{0},n)$ and $N=N(n)$ be defined according to Lemma \[Filiformcurve\]. By making $\Delta$ sufficiently small, we can ensure that $A\in (A_{0}-\varepsilon, A_{0}+\varepsilon)$ and $|a_{j}|<\varepsilon$. If we consider a direction $\tilde{E}=\tilde{a}X_{1}+\tilde{b}X_{2}$ with $\tilde{a}$ and $\tilde{b}$ sufficiently close to $a$ and $b$ (bound depending on $a$, $b$, $\varepsilon$), then we can ensure that if $$\tilde{A}:=\frac{\tilde{b}+\Delta u_{2}}{\tilde{a}+\Delta u_{1}}, \qquad \tilde{a}_{2}:=0, \qquad \tilde{a}_{i}:=\frac{\eta_{i}}{(a+\Delta u_{1})^{i-1}} \, \mbox{ for }3\leq i \leq n,$$ then $\tilde{A}\in (A_{0}-\varepsilon, A_{0}+\varepsilon)$ and $|\tilde{a}_{j}|<\varepsilon$. Hence, in what follows, everything which applies to the direction $E$ will also apply to every direction $\tilde{E}$ sufficiently close to $E$ with the same parameters. Applying Lemma \[Filiformcurve\] with $E_{0}=X_{1}+AX_{2}$ and $E_{0}'=AX_{1}-X_{2}$ gives smooth functions $\theta_{1}, \ldots, \theta_{N}$ satisfying $|\theta_{i}|\leq K\max_{j\geq 2}|a_{j}|$ such that $$\label{eqfromFiliformcurve} \prod_{i=1}^{N} \frac{1}{N} (E_{0}+\theta_{i}E_{0}') = E_{0}+a_{2}E_{0}'+a_{3}X_{3}+\cdots+a_{n}X_{n}.$$ By definition of $a_{j}$, it follows that $|\theta_{i}|\leq Q\Delta$ for some constant $Q$ depending on $E$, provided that $\Delta$ is small compared to $a$. 
Using the definitions of $E_{0}$, $E_{0}'$ and dilating both sides of by $a+\Delta u_{1}$ gives $$\begin{aligned} &\prod_{i=1}^{N} \frac{1}{N} \Big( (a+\Delta u_{1})X_{1} + (b+\Delta u_{2})X_{2} + \theta_{i}( (b+\Delta u_{2})X_{1} - (a+\Delta u_{1})X_{2}) \Big)\\ &\qquad \qquad = (a+\Delta u_{1})X_{1} + (b+\Delta u_{2})X_{2} + \sum_{i=3}^{n} \eta_{i}X_{i}\\ &\qquad \qquad = E \delta_{\Delta}(u).\end{aligned}$$ Then, dilating both sides by $s$ and recalling that $$\zeta= \langle \delta_{\Delta s}(u),E(0)\rangle=\Delta s(au_{1}+bu_{2}),$$ we get $$\begin{aligned} &\prod_{i=1}^{N} \frac{s+\zeta}{N}\frac{s}{s+\zeta} \Big( (a+\Delta u_{1})X_{1} + (b+\Delta u_{2})X_{2} + \theta_{i}( (b+\Delta u_{2})X_{1} - (a+\Delta u_{1})X_{2}) \Big)\\ &\qquad \qquad = (sE)\delta_{\Delta s}(u).\end{aligned}$$ Define the horizontal curve $\varphi\colon [0,s+\zeta] \to {\mathbb{E}}_{n}$ by $\varphi(0)=0$ and $$\varphi'(t)=\frac{s}{s+\zeta} \Big( (a+\Delta u_{1})X_{1} + (b+\Delta u_{2})X_{2} + \theta_{i}( (b+\Delta u_{2})X_{1} - (a+\Delta u_{1})X_{2}) \Big)$$ whenever $$t\in I_{i}:=\left( \frac{(i-1)(s+\zeta)}{N},\, \frac{i(s+\zeta)}{N}\right), \qquad 1\leq i \leq N.$$ Then $\varphi(0)=0, \varphi(s+\zeta)=(sE)\delta_{\Delta s}(u)$ and $\varphi$ is a Lipschitz horizontal curve formed from joining $N$ horizontal lines. It remains to check that conditions (A) and (B) hold. To verify (A), notice that by Lemma \[lipschitzhorizontal\] it suffices to bound $|(p\circ \varphi)'|$. Recall that $|\theta_{i}|\leq Q\Delta$ for $1\leq i \leq N$ and $(1+x)^{1/2}\leq 1+x/2$ for $x\geq -1$. 
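Two elementary identities drive the estimate of $|(p\circ \varphi)'|$ (writing $\alpha:=a+\Delta u_{1}$ and $\beta:=b+\Delta u_{2}$; this is only a reading aid): $$|(\alpha+\theta\beta,\, \beta-\theta\alpha)|^{2}=(\alpha^{2}+\beta^{2})(1+\theta^{2}), \qquad \alpha^{2}+\beta^{2}=1+2\Delta(au_{1}+bu_{2})+\Delta^{2}(u_{1}^{2}+u_{2}^{2}),$$ the second identity using $a^{2}+b^{2}=1$.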
For any $t \in I_{i}$ and any sufficiently small $\Delta$ one has $$\begin{aligned} |(p\circ \varphi)'| &= \frac{s}{s+\zeta}\Big|\Big( (a+\Delta u_{1})+\theta_{i}(b+\Delta u_{2}),\, (b+\Delta u_{2})-\theta_{i}(a+\Delta u_{1})\Big)\Big|\\ & =\frac{1}{1+\Delta(au_{1}+bu_{2})} \Big(1+2\Delta(au_{1}+bu_{2})+\Delta^{2}(u_{1}^2+u_{2}^2)\Big)^{1/2}\Big(1+\theta_{i}^2\Big)^{1/2}\\ & \leq \frac{1}{1+\Delta(au_{1}+bu_{2})} \Big( 1 + \Delta(au_{1}+bu_{2}) + \Delta^{2}(u_{1}^2+u_{2}^2)/2 \Big)\Big(1+\theta_{i}^2/2 \Big)\\ & \leq 1+\eta\Delta,\end{aligned}$$ where in the last inequality we used that $|\theta_{i}|\leq Q\Delta$ and made $\Delta$ sufficiently small relative to $\eta$. This proves (A). To verify (B), we estimate as follows. For any $t\in I_{i}$ and any sufficiently small $\Delta$ one has $$\begin{aligned} |(p\circ \varphi)'(t)-p(E)|&\leq C\Delta + \left| \frac{s}{s+\zeta}-1 \right|\\ &= C\Delta + \left| \frac{\zeta}{s+\zeta} \right|\\ &= C\Delta + \left| \frac{\Delta(au_{1}+bu_{2})}{1+\Delta(au_{1}+bu_{2})} \right| \\ &\leq (C+2)\Delta,\end{aligned}$$ using $|au_{1}+bu_{2}|\leq 1$ in the last step. This shows that (B) holds, after enlarging the constant $C_{E}$ so that it absorbs $C+2$. Since our conclusions also hold for any direction sufficiently close to $E$, the claim is proved. As described earlier, Claim \[bigclaim\] suffices to prove the theorem. Since deformability implies differentiability of the CC distance by Proposition \[deformimpliesdiff\], combining Proposition \[X2nogood\] and Theorem \[deformFiliform\] shows that a horizontal direction in a model filiform group ${\mathbb{E}}_{n}$ for $n\geq 4$ is deformable if and only if it is not $ \pm X_{2}$. Distances between piecewise linear curves {#sectiondistanceestimate} ========================================= In this section we prove a simple estimate for the distance between piecewise linear curves with similar directions in a general Carnot group ${\mathbb{G}}$. This will be needed to prove ‘almost maximality’ implies differentiability. We first recall the following useful lemma [@Mag13 Lemma 3.7].
\[Magnani\] Let ${\mathbb{G}}$ be a Carnot group of step $s$. Given any $\nu>0$, there exists a constant $K_{\nu}>0$ with the following property. If $N\in {\mathbb{N}}$ and $A_{j}, B_{j} \in {\mathbb{G}}$ defined for $j=1,\ldots,N$ satisfy $d(B_{j}B_{j+1}\cdots B_{N})\leq \nu$ and $d(A_{j},B_{j})\leq \nu$ for $j=1,\ldots,N$, then it holds that $$d(A_{1}A_{2}\cdots A_{N}, B_{1}B_{2}\cdots B_{N}) \leq K_{\nu} \sum_{j=1}^{N} d(A_{j}, B_{j})^{1/s}.$$ Our estimate for the distance between curves is given by the following lemma. \[closedirectioncloseposition\] Let ${\mathbb{G}}$ be a Carnot group of step $s$. Then there is a constant $C_{a}\geq 1$ depending on ${\mathbb{G}}$ for which the following is true. Suppose $E\in V_{1}$ with $\omega(E)\leq 1$, $0\leq D\leq 1$ and $N\in {\mathbb{N}}$. Let $g\colon (-R,R) \to {\mathbb{G}}$ be a Lipschitz horizontal curve with $g(0)=0$ satisfying the following conditions: 1. $g$ is formed by joining at most $N$ horizontal lines, 2. $|(p\circ g)'(t)-p(E)|\leq D$ whenever $(p\circ g)'(t)$ exists. Then $d(g(t),\exp(tE)) \leq C_{a}ND^{1/s^2}|t|$ for every $t\in (-R,R)$. Fix $t\geq 0$ without loss of generality. We may write $$g(t)=\exp(t_{1}E_{1})\exp(t_{2}E_{2})\cdots \exp(t_{N}E_{N}),$$ where $t=t_{1}+t_{2}+\ldots +t_{N}$ with $t_{i}\geq 0$ and $E_{j}\in V_{1}$ with $|p(E_{j})-p(E)|\leq D$ for $1\leq j\leq N$.
We intend to apply Lemma \[Magnani\] to estimate $$\frac{d(g(t),\exp(tE))}{|t|} = d\left(\exp\Big(\frac{t_{1}E_{1}}{t}\Big)\cdots \exp\Big(\frac{t_{N}E_{N}}{t}\Big),\, \exp\Big(\frac{t_{1}E}{t}\Big)\cdots \exp\Big(\frac{t_{N}E}{t}\Big)\right).$$ We first check that the hypotheses of Lemma \[Magnani\] hold with the choice $$A_{j}:=\exp ((t_{j}/t)E_{j}), \qquad B_{j}:=\exp((t_{j}/t)E), \qquad \nu:=3.$$ First, notice that $$\begin{aligned} d(B_{j}B_{j+1}\cdots B_{N}) &= d\left( \exp\left( \frac{(t_{j}+\ldots + t_{N})E}{t}\right) \right)\\ &= \frac{(t_{j}+\ldots + t_{N})}{t}d(\exp(E))\\ &\leq 1.\end{aligned}$$ Second, we can estimate as follows $$\begin{aligned} d(A_{j},B_{j})&=(t_{j}/t)d(\exp(E_{j}),\exp(E))\\ &\leq d(\exp(E_{j}))+d(\exp(E))\\ &\leq 3.\end{aligned}$$ We can then combine Lemma \[Magnani\] with Proposition \[euclideanheisenberg\] to get $$\begin{aligned} \frac{d(g(t),\exp(tE))}{|t|} &\leq K_{3} \sum_{j=1}^{N} d(\exp((t_{j}/t)E_{j}),\, \exp( (t_{j}/t)E))^{1/s}\\ & \leq K_{3} \sum_{j=1}^{N} d(\exp(E_{j}),\, \exp(E))^{1/s}\\ & \leq C \sum_{j=1}^{N} |\exp(E_{j})-\exp(E)|^{1/s^2}\\ & \leq C \sum_{j=1}^{N} |p(E_{j})-p(E)|^{1/s^2}\\ & \leq C ND^{1/s^2}.\end{aligned}$$ The proof is complete, noting that $C\geq 1$ is a constant depending only on ${\mathbb{G}}$. Almost maximal directional derivatives and the UDS {#sectionUDS} ================================================== In this section we fix a Carnot group ${\mathbb{G}}$ satisfying the following condition. \[ass\] Assume ${\mathbb{G}}\neq {\mathbb{R}}$. We say that ${\mathbb{G}}$ admits a ball of uniformly deformable directions if there exists an open ball $B\subset V_{1}$ of directions such that every $E\in B$ is deformable with the same parameters $C_{B}, N_{B}$ and $\Delta_{B}$.
We will show that every Carnot group ${\mathbb{G}}$ which admits a ball of uniformly deformable directions contains a CC-Hausdorff dimension one (hence measure zero) set $N$ so that almost maximality of a directional derivative $Ef(x)$ implies differentiability if $x\in N$ and $E\in B$. Combining this with Proposition \[DoreMaleva\] will lead to a proof of Theorem \[maintheorem\]. By Theorem \[deformFiliform\], all model filiform groups ${\mathbb{E}}_{n}$ with $n\geq 2$ admit a ball of uniformly deformable directions. In particular, Theorem \[maintheorem\] applies to Carnot groups of arbitrarily high step. The Carnot group ${\mathbb{G}}$ will be identified with ${\mathbb{R}}^{n}$ by means of exponential coordinates of the first kind. Let $B_{{\mathbb{Q}}}$ denote the set of $E\in B$ with $\omega(E)=1$ which are a rational linear combination of the basis vectors $X_{1}, \ldots, X_{r}$ of $V_{1}$. Note that $B_{{\mathbb{Q}}}$ is dense in $B$ since the Euclidean sphere contains a dense set of points with rational coordinates. The construction of our universal differentiability set is given by the following lemma. \[uds\] For each choice of $E \in B_{{\mathbb{Q}}}$, $u\in {\mathbb{Q}}^{n}$ with $d(u)\leq 1$ and rationals $$0<s<1, \qquad \eta>0, \qquad 0 < \Delta < \Delta_{B}(\eta),$$ let $\gamma_{E, u, s, \Delta, \eta}$ denote a curve granted by Definition \[deform\] with parameters $C_{B}, N_{B}, \Delta_{B}(\eta)$. Let $L$ be the countable union of images of all translated curves $x\gamma_{E, u, s, \Delta, \eta}$, where $x\in {\mathbb{Q}}^{n}$ and $E, u, s, \Delta, \eta$ are as above. Then there is a $G_{\delta}$ set $N\subset {\mathbb{G}}$ containing $L$ which has Hausdorff dimension one with respect to the CC metric. The proof of Lemma \[uds\] is essentially the same as that of [@LPS17 Lemma 5.4]. We also recall the following mean value type lemma for future use [@Pre90 Lemma 3.4].
\[preissmeanvalue\] Suppose $|\zeta|<s<\rho$, $0<v<1/32$, $\sigma>0$ and $L>0$ are real numbers and let $\varphi, \psi\colon \mathbb{R} \to \mathbb{R}$ be Lipschitz maps satisfying $\mathrm{Lip}_{\mathbb{E}}(\varphi)+\mathrm{Lip}_{\mathbb{E}}(\psi)\leq L$, $\varphi(t)=\psi(t)$ for $|t|\geq s$ and $\varphi(\zeta)\neq \psi(\zeta)$. Suppose, moreover, that $\psi'(0)$ exists and that $$|\psi(t)-\psi(0)-t\psi'(0)|\leq \sigma L|t|$$ whenever $|t|\leq \rho$, $$\rho\geq s\sqrt{(sL)/(v|\varphi(\zeta)-\psi(\zeta)|)},$$ and $$\sigma \leq v^{3}\Big( \frac{\varphi(\zeta)-\psi(\zeta)}{sL} \Big)^{2}.$$ Then there is $\tau\in (-s,s)\setminus \{\zeta\}$ such that $\varphi'(\tau)$ exists, $$\varphi'(\tau)\geq \psi'(0)+v|\varphi(\zeta)-\psi(\zeta)|/s,$$ and $$|(\varphi(\tau+t)-\varphi(\tau))-(\psi(t)-\psi(0))|\leq 4(1+20v)\sqrt{(\varphi'(\tau)-\psi'(0))L}|t|$$ for every $t\in \mathbb{R}$. \[meanvalueremark\] By examining the proof of Lemma \[preissmeanvalue\] in [@Pre90], one can see that $\tau$ can additionally be chosen outside a given Lebesgue measure zero subset of $\mathbb{R}$. From now on we fix a set $N\subset {\mathbb{G}}$ as given by Lemma \[uds\]. \[D\^f\] For any Lipschitz function $f:{\mathbb{G}}\to {\mathbb{R}}$, define: $$D^{f}:=\{ (x,E) \in N\times V_{1} \colon \omega(E)=1,\, Ef(x) \mbox{ exists}\}.$$ \[almostmaximalityimpliesdifferentiability\] Let ${\mathbb{G}}$ be a Carnot group of step $s$ which admits a ball of uniformly deformable directions (Assumptions \[ass\]). Let $f\colon {\mathbb{G}}\to {\mathbb{R}}$ be Lipschitz with $\mathrm{Lip}_{{\mathbb{G}}}(f) \leq 1/2$ and suppose $(x_{\ast}, E_{\ast})\in D^{f}$ with $E_{\ast}\in B$.
Let $M$ denote the set of pairs $(x,E)\in D^{f}$ such that $Ef(x)\geq E_{\ast}f(x_{\ast})$ and for every $t\in (-1,1)$: $$\begin{aligned} & |(f(x\exp(tE_{\ast}))-f(x)) - (f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))| \\ & \qquad \leq 6|t| ( (Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f))^{1/2s^{2}}.\end{aligned}$$ If $$\lim_{\delta \downarrow 0} \sup \{Ef(x)\colon (x,E)\in M \mbox{ and }d(x,x_{\ast})\leq \delta\}\leq E_{\ast}f(x_{\ast}),$$ then $f$ is differentiable at $x_{\ast}$ and its Pansu differential is given by $$L(x):=E_{\ast}f(x_{\ast})\langle x , E_{\ast}(0) \rangle=E_{\ast}f(x_{\ast})\langle p(x) , p(E_{\ast}) \rangle.$$ We assume $\mathrm{Lip}_{{\mathbb{G}}}(f)>0$, since otherwise the statement is trivial. Fix the following parameters: 1. $\varepsilon>0$ rational, 2. $0< v<1/32$ rational such that $4(1+20v)\sqrt{(2+v)/(1-v)}+v < 6$, 3. $\eta=\varepsilon v^{3}/3200$, 4. $\Delta_{B}(\eta/2)$, $C_{B}$ and $C_{a}$ according to Lemma \[closedirectioncloseposition\] and Assumptions \[ass\], 5. rational $0< \Delta < \min \{\eta v^2,\, \Delta_{B}(\eta/2),\, \Upsilon \}$, where $$\Upsilon := \frac{\varepsilon v^{2s^{2}+1}}{8C_{B}^{2}C_{a}^{2s^{2}}N_{B}^{2s^2}\mathrm{Lip}_{\mathbb{G}}(f)^{2s^{2}-1}},$$ 6. $\sigma=9\varepsilon^{2}v^{5}\Delta^2/256$, 7. $0<\rho<1$ such that $$\label{directionaldifferentiability} |f(x_{\ast}\exp(tE_{\ast})) - f(x_{\ast})-tE_{\ast}f(x_{\ast})|\leq \sigma \mathrm{Lip}_{\mathbb{G}}(f)|t| \quad \mbox{for }|t|\leq \rho,$$ 8. $0<\delta < \rho \sqrt{3\varepsilon v\Delta^{3}}/4$ such that $$Ef(x)<E_{\ast}f(x_{\ast})+\varepsilon v\Delta/2$$ whenever $(x,E)\in M$ and $d(x,x_{\ast})\leq 4\delta(1+1/\Delta)$. To prove Pansu differentiability of $f$ at $x_{\ast}$, we will show that $$|f(x_{\ast}\delta_{t}(u))-f(x_{\ast})-tE_{\ast}f(x_{\ast})\langle u, E_{\ast}(0) \rangle |\leq \varepsilon t \qquad \mbox{for }d(u)\leq 1,\, 0<t<\delta.$$ Suppose this is not true.
Then there exist $u\in \mathbb{Q}^{n}$ with $d(u) \leq 1$ and rational $0<r<\delta$ such that $$\label{badpoint} |f(x_{\ast}\delta_{r}(u))-f(x_{\ast})-rE_{\ast}f(x_{\ast})\langle u, E_{\ast}(0) \rangle|> \varepsilon r.$$ Let ${\rm{s}}=r/ \Delta \in \mathbb{Q}$. To contradict , we first construct Lipschitz horizontal curves $g$ and $h$ for which we can apply Lemma \[preissmeanvalue\] with $\varphi:=f\circ g$ and $\psi:=f\circ h$. *Construction of $g$.* To ensure that the image of $g$ is a subset of the set $N$, we first introduce rational approximations to $x_{\ast}$ and $E_{\ast}$. Let $$\label{A1} A_{1}:=\left( \frac{\eta \Delta}{C_{a}(N_{B}+2)}\right)^{s^2}$$ and $$\label{A2} A_{2}:=\Big(6- \Big( 4(1+20v) \Big( \frac{2+v}{1-v} \Big)^{1/2}+v\Big)\Big)^{s^{2}} \frac{( \varepsilon v\Delta \mathrm{Lip}_{\mathbb{G}}(f)/2)^{1/2}}{C_{a}^{s^{2}}(N_{B}+2)^{s^2}\mathrm{Lip}_{\mathbb{G}}(f)^{s^{2}}}.$$ Notice that $A_{1}, A_{2}>0$. We choose $\tilde{x}_{\ast}\in \mathbb{Q}^{n}$ and $\tilde{E}_{\ast}\in B_{{\mathbb{Q}}}$ sufficiently close to $x_{\ast}$ and $E_{\ast}$ respectively to ensure: $$\label{nowlistingthese} d(\tilde{x}_{\ast}\delta_{r}(u),x_{\ast})\leq 2r,$$ $$\label{lista} d(\tilde{x}_{\ast}\delta_{r}(u), x_{\ast}\delta_{r}(u))\leq \sigma r,$$ $$\label{listvector1} \omega(\tilde{E}_{\ast}-E_{\ast})\leq \min \{ \sigma, \, C_{B}\Delta,\, A_{1},\, A_{2} \}.$$ Recall that $0<r<\Delta$ and ${{\rm{s}}}=r/\Delta$ are rational and that $0<{\rm{s}}<1$. To construct $g$ we first apply Definition \[deform\] with $E=\tilde{E}_{\ast}$ and parameters $\eta, {\rm{s}}, \Delta$, $\delta_{r}(u)$ and $u$ as defined above in . We then left translate this curve by $\tilde{x}_{\ast}$. 
This gives a Lipschitz horizontal curve $g\colon {\mathbb{R}}\to {\mathbb{G}}$ which is formed by joining at most $N_{B}$ horizontal lines such that - $g(t)=\tilde{x}_{\ast}\exp(t\tilde{E}_{\ast})$ for $|t|\geq {\rm{s}}$, - $g(\zeta)=\tilde{x}_{\ast}\delta_{r}(u)$, where $\zeta := r\langle u,\tilde{E}_{\ast}(0)\rangle$, - $\mathrm{Lip}_{\mathbb{G}}(g)\leq 1+\eta\Delta$, - for all but finitely many $t\in \mathbb{R}$, $g'(t)$ exists and $|(p\circ g)'(t) - p(\tilde{E}_{\ast})| \leq C_{B}\Delta$. Since all the relevant quantities are chosen to be rational and $N$ is built according to Lemma \[uds\], it follows that the image of $g$ is contained in $N$. *Construction of $h$.* There exists a Lipschitz horizontal curve $h\colon \mathbb{R}\to {\mathbb{G}}$ such that $$h(t)= \begin{cases} \tilde{x}_{\ast}\exp(t\tilde{E}_{\ast}) &\mbox{if }|t|\geq {\rm{s}},\\ x_{\ast}\exp(tE_{\ast}) &\mbox{if }|t|\leq {\rm{s}}/2, \end{cases}$$ and in each of the regions $({\rm{s}}/2, {\rm{s}})$ and $(-{\rm{s}},-{\rm{s}}/2)$, $h$ is formed by joining at most $N_{B}$ horizontal lines. Moreover - $\mathrm{Lip}_{\mathbb{G}}(h)\leq 1+\eta\Delta/2,$ - for all but finitely many $t\in \mathbb{R}$, $h'(t)$ exists and satisfies the bound $|(p\circ h)'(t)-p(E_{\ast})|\leq \min \{A_{1}, A_{2}\}$. Up to a left translation, we may start by assuming that $x_{\ast}=0$. Clearly $h(t)$ is defined explicitly and satisfies the required conditions for $|t|\leq {\rm{s}}/2$ and $|t|\geq {\rm{s}}$. We now show how to extend $h$ in $({\rm{s}}/2,{\rm{s}})$. The extension to $(-{\rm{s}},-{\rm{s}}/2)$ is essentially the same. Recall $\Delta_{B}(1)$ and $C_{B}$ from Assumptions \[ass\]. Choose $0<\Gamma<\Delta_{B}(1)$ satisfying $$(1+\Gamma)^{2}\leq 1+\eta\Delta/2$$ and $$C_{B}\Gamma(1+\Gamma)+\Gamma \leq \min \{A_{1}, A_{2}\}.$$ Define $\lambda={\rm{s}}\Gamma/2<\Gamma$.
Choose $v\in {\mathbb{G}}$ with $d(v)\leq 1$ such that $$\delta_{\lambda}(v)=\exp(-{\rm{s}}E_{\ast})\tilde{x}_{\ast}\exp({\rm{s}}\tilde{E}_{\ast}).$$ This is possible if the rational approximation introduced earlier is chosen correctly; note that the rational approximation was introduced after all quantities upon which $\lambda$ depends. We now apply Remark \[deform2\] with - $\eta=1$ and $\Delta$ replaced by $\Gamma$, - ${\rm{s}}$ replaced by ${\rm{s}}/2$, - $u$ replaced by $v$, - $\zeta$ replaced by $\tilde{\zeta}: = \langle \delta_{{\rm{s}}\Gamma /2}(v),E_{\ast}(0)\rangle$. We obtain a Lipschitz horizontal curve $\varphi: [0,{\rm{s}}/2+\tilde{\zeta}]\to {\mathbb{G}}$ that is formed by joining at most $N_{B}$ horizontal lines such that - $\varphi(0)=0$, - $\varphi({\rm{s}}/2+\tilde{\zeta})=\exp(-({\rm{s}}/2)E_{\ast})\tilde{x}_{\ast}\exp({\rm{s}}\tilde{E}_{\ast})$, - $\mathrm{Lip}_{{\mathbb{G}}}(\varphi)\leq 1+\Gamma$, - $\varphi'(t)$ exists and $|(p\circ \varphi)'(t)-p(E_{\ast})|\leq C_{B}\Gamma$ for all except finitely many $t\in [0,{\rm{s}}/2+\tilde{\zeta}]$. Then $\tilde{\varphi}:[0,1]\to {\mathbb{G}}$ defined by $\tilde{\varphi}(t)=\varphi(({\rm{s}}/2+\tilde{\zeta})t)$ is a Lipschitz horizontal curve such that - $\tilde{\varphi}(0)=0$, - $\tilde{\varphi}(1)=\exp(-({\rm{s}}/2)E_{\ast})\tilde{x}_{\ast}\exp({\rm{s}}\tilde{E}_{\ast})$, - $\mathrm{Lip}_{{\mathbb{G}}}(\tilde{\varphi})\leq (1+\Gamma)({\rm{s}}/2+\tilde{\zeta})$, - $\tilde{\varphi}'(t)$ exists and $|(p\circ \tilde{\varphi})'(t)-({\rm{s}}/2+\tilde{\zeta})p(E_{\ast})|\leq C_{B}\Gamma({\rm{s}}/2+\tilde{\zeta})$ for all but finitely many $t\in [0,1]$. Define $h_1 :[{\rm{s}}/2,{\rm{s}}]\to {\mathbb{G}}$ by $$h_1(t)=\exp(({\rm{s}}/2)E_{\ast}) \tilde{\varphi}((2/{\rm{s}})(t-{\rm{s}}/2)).$$ Then $h_1$ is a Lipschitz horizontal curve which satisfies $h_{1}({\rm{s}}/2)=\exp(({\rm{s}}/2)E_{\ast})$ and $h_{1}({\rm{s}})=\tilde{x}_{\ast}\exp({\rm{s}}\tilde{E}_{\ast})$. 
Note that $|p(v)|\leq d(v)\leq 1$ implies $|\tilde{\zeta}|\leq \lambda$. Hence we have $$\begin{aligned} \mathrm{Lip}_{{\mathbb{G}}}(h_1)&\leq \frac{2(1+\Gamma)({\rm{s}}/2+\tilde{\zeta})}{{\rm{s}}}\\ & \leq \frac{2(1+\Gamma)({\rm{s}}/2+\lambda)}{{\rm{s}}}\\ &\leq (1+\Gamma)^{2}\\ &\leq 1+\eta\Delta/2.\end{aligned}$$ Then, for all but finitely many $t\in [{\rm{s}}/2,{\rm{s}}]$ $$|(p\circ h_1)'(t)-(1+2\tilde{\zeta}/{\rm{s}})p(E_{\ast})| \leq C_{B}\Gamma (1+2\tilde{\zeta}/{\rm{s}}),$$ and this implies $$\begin{aligned} |(p\circ h_1)'(t)-p(E_{\ast})| &\leq C_{B}\Gamma(1+2\tilde{\zeta}/{\rm{s}}) + 2|\tilde{\zeta}|/{\rm{s}}\\ &\leq C_{B}\Gamma(1+\Gamma)+\Gamma\\ &\leq \min \{A_{1}, A_{2}\}.\end{aligned}$$ Defining $h(t):=h_{1}(t)$ for any $t\in [{\rm{s}}/2,{\rm{s}}]$ we obtain the desired properties. The extension of $h$ in $[-{\rm{s}},-{\rm{s}}/2]$ is similar. *Application of Lemma \[preissmeanvalue\].* We now prove that the hypotheses of Lemma \[preissmeanvalue\] hold with $L:=(2+\eta \Delta)\mathrm{Lip}_{{\mathbb{G}}}(f)$, $\varphi:=f\circ g$ and $\psi:=f\circ h$. The inequalities $|\zeta|<{\rm{s}}<\rho$, $0<v<1/32$ and the equality $\varphi(t)=\psi(t)$ for $|t|\geq {\rm{s}}$ are clear. Since $\mathrm{Lip}_{{\mathbb{G}}}(g), \mathrm{Lip}_{{\mathbb{G}}}(h) \leq 1+\eta\Delta/2$, we have $\mathrm{Lip}_{\mathbb{E}}(\varphi)+\mathrm{Lip}_{\mathbb{E}}(\psi)\leq L$. Notice that implies $$|f(\tilde{x}_{\ast}\delta_{r}(u)) - f(x_{\ast}\delta_{r}(u))| \leq \sigma r \mathrm{Lip}_{{\mathbb{G}}}(f).$$ Since $|\zeta|\leq r\leq \rho$, we may evaluate at $t=\zeta$ to obtain $$\begin{aligned} |f(x_{\ast}\exp(\zeta E_{\ast}))-f(x_{\ast})-\zeta E_{\ast}f(x_{\ast})| &\leq \sigma \mathrm{Lip}_{{\mathbb{G}}}(f)|\zeta| \\ &\leq \sigma r\mathrm{Lip}_{{\mathbb{G}}}(f).\end{aligned}$$ Next, note that implies $|\tilde{E}_{\ast}(0)-E_{\ast}(0)|\leq \sigma$. 
Recalling that $\zeta=r\langle u, \tilde{E}_{\ast}(0)\rangle$ we can estimate as follows: $$\begin{aligned} |\zeta E_{\ast}f(x_{\ast}) - r\langle u,E_{\ast}(0)\rangle E_{\ast}f(x_{\ast})| & = r|E_{\ast}f(x_{\ast})||\langle u, \tilde{E}_{\ast}(0) -E_{\ast}(0)\rangle|\\ &\leq r\mathrm{Lip}_{{\mathbb{G}}}(f)|\tilde{E}_{\ast}(0)-E_{\ast}(0)|\\ &\leq \sigma r \mathrm{Lip}_{{\mathbb{G}}}(f).\end{aligned}$$ Hence we obtain, $$\label{yetanother}|f(x_{\ast}\exp(\zeta E_{\ast}))-f(x_{\ast})-r\langle u,E_{\ast}(0)\rangle E_{\ast}f(x_{\ast})| \leq 2\sigma r\mathrm{Lip}_{{\mathbb{G}}}(f).$$ Since $|\zeta|\leq r=\Delta {\rm{s}}\leq {\rm{s}}/2$ we have $h(\zeta)=x_{\ast}\exp(\zeta E_{\ast})$. The definition of $g$ gives $g(\zeta)=\tilde{x}_{\ast}\delta_{r}(u)$. Using also and , we can estimate as follows: $$\begin{aligned} \label{differenceofcomposition} |\varphi(\zeta)-\psi(\zeta)|&= |f(g(\zeta)) - f(h(\zeta))| \nonumber \\ &= |f(\tilde{x}_{\ast}\delta_{r}(u)) - f(x_{\ast}\exp(\zeta E_{\ast}))| \nonumber\\ & \geq |f(x_{\ast}\delta_{r}(u)) - f(x_{\ast}\exp(\zeta E_{\ast}))| - |f(\tilde{x}_{\ast}\delta_{r}(u)) - f(x_{\ast}\delta_{r}(u))|\nonumber \\ & \geq |f(x_{\ast}\delta_{r}(u))-f(x_{\ast})-rE_{\ast}f(x_{\ast})\langle u, E_{\ast}(0) \rangle | \nonumber \\ & \quad -|f(x_{\ast}\exp(\zeta E_{\ast})) - f(x_{\ast}) - r E_{\ast}f(x_{\ast})\langle u, E_{\ast}(0) \rangle| \nonumber \\ & \quad - \sigma r\mathrm{Lip}_{{\mathbb{G}}}(f) \nonumber\\ &\geq \varepsilon r - 2\sigma r\mathrm{Lip}_{{\mathbb{G}}}(f) - \sigma r\mathrm{Lip}_{{\mathbb{G}}}(f)\nonumber \\ &= \varepsilon r - 3\sigma r\mathrm{Lip}_{{\mathbb{G}}}(f) \nonumber \\ &\geq 3\varepsilon r/4.\end{aligned}$$ In particular, $\varphi(\zeta)\neq \psi(\zeta)$. The derivative $\psi'(0)$ exists and equals $E_{\ast}f(x_{\ast})$, since $\psi(t)=f(x_{\ast}\exp(tE_{\ast}))$ for every $|t|\leq {\rm{s}}/2$. 
We next check that $$\label{psiprime} |\psi(t)-\psi(0)-t\psi'(0)| \leq \sigma L|t| \quad \mbox{for }|t|\leq \rho.$$ Recall that $h(0)=x_{\ast}$, $|(p\circ h)'-p(E_{\ast})|\leq A_{1}$ (see for the definition of $A_{1}$) and $h$ is formed by joining at most $N_{B}+2$ horizontal lines. Hence Lemma \[closedirectioncloseposition\] implies that $$d(x_{\ast}\exp(tE_{\ast}),h(t))\leq C_{a}(N_{B}+2)A_{1}^{1/s^2}|t|\leq \eta\Delta |t| \quad \mbox{ for every }t\in {\mathbb{R}}.$$ Hence, using also and $L=(2+\eta\Delta)\mathrm{Lip}_{{\mathbb{G}}}(f)$, one has $$\begin{aligned} |\psi(t)-\psi(0)-t\psi'(0)| &\leq |f(x_{\ast}\exp(tE_{\ast})) - f(x_{\ast})-tE_{\ast}f(x_{\ast})|\\ & \qquad + |f(x_{\ast}\exp(tE_{\ast})) - f(h(t))|\\ &\leq \sigma \mathrm{Lip}_{{\mathbb{G}}}(f)|t| + \mathrm{Lip}_{{\mathbb{G}}}(f)\eta\Delta|t|\\ &\leq \sigma L |t| \quad \mbox{ for }|t|\leq \rho.\end{aligned}$$ Since $\mathrm{Lip}_{{\mathbb{G}}}(f)\leq 1/2$ we have $L\leq 4$. By using $r< \delta$, ${\rm{s}}=r/\Delta$, and the definition of $r, \delta, \Delta$ and ${\rm{s}}$ we deduce $$\begin{aligned} {\rm{s}}\sqrt{ {\rm{s}}L/(v|\varphi(\zeta)-\psi(\zeta)|)} &\leq 4{\rm{s}}\sqrt{{\rm{s}}/(3\varepsilon rv)}\\ &= 4r/\sqrt{3\varepsilon v\Delta^3}\\ &\leq 4\delta/ \sqrt{3\varepsilon v\Delta^{3}}\\ &\leq \rho.\end{aligned}$$ Finally we use , $L\leq 4$ and the definition of $\sigma$ to get $$\begin{aligned} v^3 (|\varphi(\zeta)-\psi(\zeta)|/({\rm{s}}L))^2&\geq v^3(3\varepsilon r / 16{\rm{s}})^2\\ &= 9\varepsilon^2 v^3 \Delta^2 /256\\ & \geq \sigma.\end{aligned}$$ We can now apply Lemma \[preissmeanvalue\]. 
We obtain $\tau \in (-{\rm{s}},{\rm{s}})\setminus \{\zeta \}$ such that $\varphi'(\tau)$ exists and satisfies $$\label{bigderivative} \varphi'(\tau)\geq \psi'(0)+v|\varphi(\zeta)-\psi(\zeta)|/{\rm{s}},$$ and for every $t\in \mathbb{R}$: $$\label{incrementsbound} |(\varphi(\tau+t)-\varphi(\tau))-(\psi(t)-\psi(0))|\leq 4(1+20v)\sqrt{(\varphi'(\tau)-\psi'(0))L}|t|.$$ Since $g$ is a horizontal curve, we may use Remark \[meanvalueremark\] to additionally choose $\tau$ such that $g'(\tau)$ exists and is in $\mathrm{Span}\{X_{i}(g(\tau))\colon 1\leq i\leq r\}$. *Conclusion.* Let $x:=g(\tau)\in N$ and choose $E\in V_{1}$ with $E(g(\tau))=g'(\tau)/|p(g'(\tau))|$, which implies that $\omega(E)=1$. From and we will obtain $$\label{betterpair1} Ef(x)\geq E_{\ast}f(x_{\ast}) + \varepsilon v\Delta/2,$$ $$\label{betterpair2} (x,E)\in M.$$ We first observe that this suffices to conclude the proof. Indeed, by and since $g(\zeta)=\tilde{x}_{\ast}\delta_{r}(u)$ one has $$\begin{aligned} d(x,x_{\ast}) &\leq d(g(\tau),g(\zeta))+d(\tilde{x}_{\ast}\delta_{r}(u),x_{\ast})\\ &\leq \mathrm{Lip}_{{\mathbb{G}}}(g)|\tau - \zeta| +2r\\ &\leq 4({\rm{s}}+r)\\ &= 4r(1+1/\Delta)\\ &\leq 4\delta(1+1/\Delta).\end{aligned}$$ Since $x\in N$, combining this with and contradicts the choice of $\delta$. This forces us to conclude that is false, finishing the proof. *Proof of .* Using and we have that $$\label{stanco} \varphi'(\tau)-\psi'(0)\geq 3\varepsilon vr/4{\rm{s}}=3\varepsilon v\Delta/4.$$ Notice that, by the definition of $E$, by Definition \[defdirectionalderivative\], and the fact that $g$ is a concatenation of horizontal lines, we have $\varphi'(\tau)=Ef(x)|p(g'(\tau))|$. Since $\omega(E)=1$, we deduce that $|\varphi'(\tau)|/|p(g'(\tau))|\leq \mathrm{Lip}_{{\mathbb{G}}}(f)$. Similarly $|p(g'(\tau))| \leq \mathrm{Lip}_{{\mathbb{G}}}(g)\leq 1+\eta \Delta$.
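Since $\varphi'(\tau)=Ef(x)|p(g'(\tau))|$ and $\psi'(0)=E_{\ast}f(x_{\ast})$, it may help to record the elementary rearrangement on which the computation below rests: solving for $Ef(x)$ gives $$Ef(x)-E_{\ast}f(x_{\ast})=\frac{\varphi'(\tau)}{|p(g'(\tau))|}-\psi'(0) =\big(\varphi'(\tau)-\psi'(0)\big)+\frac{(1-|p(g'(\tau))|)\,\varphi'(\tau)}{|p(g'(\tau))|}.$$ The first term is controlled from below by the mean value estimate, the second by $|p(g'(\tau))|\leq 1+\eta\Delta$ and $|\varphi'(\tau)|/|p(g'(\tau))|\leq \mathrm{Lip}_{{\mathbb{G}}}(f)$.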
Since $\psi'(0)=E_{\ast}f(x_{\ast})$, by we have $$\begin{aligned} & Ef(x)-E_{\ast}f(x_{\ast})-(1-v)(\varphi'(\tau)-\psi'(0))\\ &\qquad = v(\varphi'(\tau)-\psi'(0)) + (1-|p(g'(\tau))|)\varphi'(\tau)/|p(g'(\tau))|\\ &\qquad \geq 3\varepsilon v^2\Delta/4 - \eta\Delta \mathrm{Lip}_{{\mathbb{G}}}(f)\\ &\qquad \geq 0.\end{aligned}$$ In the last inequality we used $\mathrm{Lip}_{{\mathbb{G}}}(f)\leq 1/2$ and $\eta\leq 3\varepsilon v^2 /2$. From this we use $0<v<1/32$ and again to deduce $$\label{noimagination} Ef(x)-E_{\ast}f(x_{\ast})\geq (1-v)(\varphi'(\tau)-\psi'(0))\geq \varepsilon v\Delta /2,$$ which proves . *Proof of .* Recall that $|(p\circ g)'(t)-p(\tilde{E}_{\ast})| \leq C_{B}\Delta$ for all but finitely many $t$. Using , this implies $|(p\circ g)'(t)- p(E_{\ast})|\leq 2C_{B}\Delta$ for all but finitely many $t$. Since $x=g(\tau)$ and $g$ is formed by joining at most $N_{B}$ horizontal lines, we can apply Lemma \[closedirectioncloseposition\] to obtain $$d(g(\tau+t),x\exp(tE_{\ast}))\leq C_{a}N_{B}(2C_{B}\Delta)^{1/s^{2}}|t| \qquad \mbox{for every }t\in {\mathbb{R}}.$$ By we have $\Delta \leq 2(Ef(x)-E_{\ast}f(x_{\ast}))/(\varepsilon v)$. 
Combining this fact with the definition of $\Delta$, we deduce that $$\begin{aligned} \label{add1} &|(f(x\exp(tE_{\ast}))-f(x))-(f(g(\tau+t))-f(g(\tau)))|\nonumber \\ &\qquad = |f(x\exp(tE_{\ast}))-f(g(\tau+t))|\nonumber \\ &\qquad \leq \mathrm{Lip}_{{\mathbb{G}}}(f) d(g(\tau+t), x\exp(tE_{\ast}))\nonumber \\ &\qquad \leq C_{a}N_{B}(2C_{B}\Delta)^{1/s^{2}} \mathrm{Lip}_{{\mathbb{G}}}(f)|t| \nonumber \\ &\qquad \leq C_{a}N_{B}(2C_{B}\sqrt{\Delta})^{1/s^{2}}\mathrm{Lip}_{{\mathbb{G}}}(f)|t| \Big( \frac{2(Ef(x)-E_{\ast}f(x_{\ast}))}{\varepsilon v} \Big)^{\frac{1}{2s^{2}}}\nonumber \\ &\qquad \leq v|t|\big((Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f) \big)^{\frac{1}{2s^{2}}} \Big(\frac{8C_{B}^{2}C_{a}^{2s^{2}}N_{B}^{2s^{2}}\Delta\mathrm{Lip}_{{\mathbb{G}}}(f)^{2s^{2}-1}}{\varepsilon v^{2s^{2}+1}} \Big)^{\frac{1}{2s^{2}}} \nonumber \\ &\qquad \leq v|t|\big((Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f) \big)^{\frac{1}{2s^{2}}} \quad \mbox{ for }t\in {\mathbb{R}}.\end{aligned}$$ Combining , and $L=(2+\eta \Delta)\mathrm{Lip}_{{\mathbb{G}}}(f)\leq (2+v)\mathrm{Lip}_{{\mathbb{G}}}(f)$ gives $$\begin{aligned} \label{add2} &|(\varphi(\tau+t)-\varphi(\tau))-(\psi(t)-\psi(0))| \nonumber \\ &\qquad \leq 4(1+20v)|t| \Big( \frac{(2+v)\mathrm{Lip}_{{\mathbb{G}}}(f)(Ef(x)-E_{\ast}f(x_{\ast}))}{1-v} \Big)^{\frac{1}{2}} \quad \mbox{ for }t\in {\mathbb{R}}.\end{aligned}$$ Since $\mathrm{Lip}_{{\mathbb{G}}}(f)\leq 1/2$, we easily get $$((Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f))^{\frac{1}{2}} \leq ((Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f))^{\frac{1}{2s^{2}}}$$ since both sides are less than $1$. 
Hence adding and and using the definition $\varphi=f\circ g$ gives for $t\in {\mathbb{R}}$: $$\begin{aligned} \label{add3} & |(f(x\exp(tE_{\ast}))-f(x))-(\psi(t)-\psi(0))|\nonumber \\ &\qquad \leq \Big( 4(1+20v) \Big( \frac{2+v}{1-v} \Big)^{\frac{1}{2}}+v\Big) |t|( (Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f))^{\frac{1}{2s^{2}}}.\end{aligned}$$ Recall that $\psi = f\circ h$ and that $h$ is a concatenation of at most $N_B+2$ horizontal lines such that $h(0)=x_{\ast}$ and the inequality $|(p\circ h)'-p(E_{\ast})|\leq A_{2}$ holds for all but finitely many $t\in {\mathbb{R}}$. Then, by Lemma \[closedirectioncloseposition\], and , we have $$\begin{aligned} \label{add4} &|(\psi(t)-\psi(0))-(f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))| \nonumber \\ &\quad =|f(h(t))-f(x_{\ast}\exp(tE_{\ast}))| \nonumber \\ &\quad \leq \mathrm{Lip}_{{\mathbb{G}}}(f)d(h(t), x_{\ast}\exp(tE_{\ast})) \nonumber \\ &\quad \leq \mathrm{Lip}_{{\mathbb{G}}}(f)C_{a}(N_{B}+2)A_{2}^{1/s^{2}}|t| \nonumber \\ &\quad = \Big(6- \Big( 4(1+20v) \Big( \frac{2+v}{1-v} \Big)^{\frac{1}{2}}+v\Big)\Big)|t|((\varepsilon v \Delta/2)\mathrm{Lip}_{{\mathbb{G}}}(f))^{\frac{1}{2s^{2}}} \nonumber \\ &\quad \leq \Big(6- \Big( 4(1+20v) \Big( \frac{2+v}{1-v} \Big)^{\frac{1}{2}}+v\Big)\Big)|t|((Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f))^{\frac{1}{2s^{2}}} \quad \mbox{ for }t\in \mathbb{R}.\end{aligned}$$ Adding and gives for every $t\in {\mathbb{R}}$ $$\begin{aligned} & |(f(x\exp(tE_{\ast}))-f(x)) - (f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))| \\ & \qquad \leq 6|t| \big( (Ef(x)-E_{\ast}f(x_{\ast}))\mathrm{Lip}_{{\mathbb{G}}}(f) \big)^{\frac{1}{2s^{2}}}.\end{aligned}$$ This implies , hence proving the theorem. Construction of an almost maximal directional derivative {#sectionconstruction} ======================================================== Assume $\mathbb{G}$ is a Carnot group of step $s$, rank $r$ and topological dimension $n$. Fix a $G_{\delta}$ set $N\subset \mathbb{G}$.
The main result of this section is Proposition \[DoreMaleva\], which is an adaptation of [@DM11 Theorem 3.1] and of [@PS16 Theorem 6.1] to $\mathbb{G}$. It shows that given a Lipschitz function $f_{0}\colon \mathbb{G} \to \mathbb{R}$, there is a Lipschitz function $f\colon \mathbb{G} \to \mathbb{R}$ such that $f-f_{0}$ is $\mathbb{G}$-linear and $f$ has an almost locally maximal horizontal directional derivative at a point of $N$. \[D\] For any Lipschitz function $f:{\mathbb{G}}\to {\mathbb{R}}$, define $$D^{f}:=\{ (x,E) \in N\times V_{1} \colon \omega(E)=1,\, Ef(x) \mbox{ exists}\}.$$ Note that if $f-f_{0}$ is $\mathbb{G}$-linear then $D^{f}=D^{f_{0}}$ and also the functions $f$ and $f_{0}$ have the same points of Pansu differentiability. \[DoreMaleva\] Suppose $f_0:\mathbb{G}\to \mathbb{R}$ is a Lipschitz function, $(x_0,E_0)\in D^{f_0}$ and $\delta_0, \mu, \tau, K>0$. Then there is a Lipschitz function $f:\mathbb{G}\to \mathbb{R}$ such that $f-f_0$ is $\mathbb{G}$-linear with $\mathrm{Lip}_{\mathbb{G}}(f-f_{0})\leq \mu$, and a pair $(x_{\ast},E_{\ast})\in D^{f}$ with $d(x_{\ast},x_0)\leq \delta_0$ and $\omega(E_{\ast}-E_0)\leq \tau$ such that $E_{\ast}f(x_{\ast})>0$ is almost locally maximal in the following sense. For any $\varepsilon>0$ there is $\delta_{\varepsilon}>0$ such that, whenever $(x,E)\in D^{f}$ satisfies both: 1. $d(x,x_{\ast})\leq \delta_{\varepsilon}$, $Ef(x)\geq E_{\ast}f(x_{\ast})$ and 2. for any $t\in (-1,1)$ $$\begin{aligned} &|(f(x\exp(tE_{\ast}))-f(x))-(f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))|\\ & \qquad \leq K|t| ( Ef(x)-E_{\ast}f(x_{\ast}))^{\frac{1}{2s^2}},\end{aligned}$$ then $$Ef(x)<E_{\ast}f(x_{\ast})+\varepsilon.$$ We use the remainder of this section to prove Proposition \[DoreMaleva\]. We recall the following constants: - $C_{\mathrm{a}}\geq 1$ chosen as in Lemma \[closedirectioncloseposition\], - $C_D \geq 1$ as in \[conjugatedistance\]. 
Fix $f_{0}, x_0, E_0, \delta_0, \tau, \mu, K$ as given in the statement of Proposition \[DoreMaleva\] and define $t_0:=\min \{1/4,\, \mu/2\}$. \[Ass\] Without loss of generality, we make the following assumptions: - $K\geq 4s^2$, since increasing $K$ makes the statement of Proposition \[DoreMaleva\] stronger, - $\mathrm{Lip}_{\mathbb{G}}(f_0)\leq \min\{1/2, t_0 \tau^2 / 32\}$, after multiplying $f_0$ by a positive constant and possibly increasing $K$, - $E_0f_0(x_0)\geq 0$, by replacing $E_0$ by $-E_0$ if necessary. We prove Proposition \[DoreMaleva\] using a technique similar to the one implemented in [@PS16 Theorem 6.1], namely by using Algorithm \[alg\] below to construct a sequence of Lipschitz functions $(f_m)$ and a sequence of pairs $(x_m,E_m)$ in $D^{f_m}$ so that $E_{m}f_m(x_m)$ converges to an almost locally maximal directional derivative for $f$. More precisely, we show that the limits $(x_{\ast},E_{\ast})$ and $f$ have the properties stated in Proposition \[DoreMaleva\]. \[comparison\] Suppose $h:\mathbb{G}\to\mathbb{R}$ is Lipschitz, the pairs $(x,E)$ and $(x',E')$ belong to $D^h$, and $\sigma \geq 0$. We write $$(x,E)\leq_{(h,\sigma)} (x',E')$$ if $E h(x)\leq E' h(x')$ and for all $t\in (-1,1)$ $$\begin{aligned} &|(h(x'\exp(tE))-h(x'))-(h(x\exp(tE))-h(x))|\\ &\qquad \leq K (\sigma+ (E'h(x')-Eh(x))^{\frac{1}{2s^2}})|t|.\end{aligned}$$ In the language of Notation \[comparison\], Proposition \[DoreMaleva\](2) means $(x_{\ast},E_{\ast})\leq_{(f,0)} (x,E)$. Since $N$ is $G_{\delta}$ we can fix open sets $U_k\subset \mathbb{G}$ such that $N=\cap_{k=0}^{\infty} U_k$. We may assume that $U_{0}=\mathbb{G}$. We point out that, in Algorithm \[alg\] below, the order in which the parameters are chosen plays a crucial role in what follows. \[alg\] Let $f_0, x_0, E_0, \tau$ and $\delta_0$ be as in the assumptions of Proposition \[DoreMaleva\]. Let $\sigma_0:=2$ and $t_0:=\min \{1/4,\, \mu/2\}$. Then we can recursively define 1.
$f_m(x):=f_{m-1}(x)+t_{m-1} \langle x, E_{m-1}(0) \rangle$, 2. $\sigma_m\in (0, \sigma_{m-1}/4)$, 3. $t_m\in (0, \min\{t_{m-1}/2,\, \sigma_{m-1}/(s^2 m)\})$, 4. $\lambda_m\in (0, \min\{t_m\sigma_m^{2s^2}/(2C_{\mathrm{a}}^{2s^2}),\, t_m\tau^2/2^{2m+3}\})$, 5. $D_m$ to be the set of pairs $(x,E)\in D^{f_m}=D^{f_0}$ such that $d(x,x_{m-1})<\delta_{m-1}$ and $$(x_{m-1}, E_{m-1})\leq_{(f_m,\sigma_{m-1}-\varepsilon)} (x,E)$$ for some $\varepsilon\in (0,\sigma_{m-1})$, 6. $(x_m,E_m)\in D_m$ such that $Ef_m(x)\leq E_mf_m(x_m)+\lambda_m$ for every pair $(x,E)\in D_m$, 7. $\varepsilon_m\in (0,\sigma_{m-1})$ such that $(x_{m-1}, E_{m-1})\leq_{(f_m,\sigma_{m-1}-\varepsilon_m)} (x_m, E_m)$, 8. $\delta_m\in (0, (\delta_{m-1}-d(x_m,x_{m-1}))/2)$ such that $\overline{B_{\mathbb{G}}(x_m,\delta_m)}\subset U_m$ and for all $|t|<3C_D\delta_m^{\frac{1}{s}}/\varepsilon_m$ $$\begin{aligned} &|(f_m(x_m\exp(tE_{m}))-f_m(x_m))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \\ &\qquad \leq( E_mf_m(x_m)-E_{m-1}f_m(x_{m-1})+\sigma_{m-1}) |t|.\end{aligned}$$ Clearly one can make choices satisfying (1)–(5). For (6)–(8) we can proceed exactly as in [@PS16 Proof of Algorithm 6.4] using Lemma \[lemmascalarlip\] and Lemma \[lipismaximal\] instead of [@PS16 Lemma 5.2] and [@PS16 Lemma 3.3], respectively. We omit the proof of the following lemma since it is exactly the same as that of [@PS16 Lemma 6.5] for the Heisenberg group. \[inclusionballs\] The sequences $\sigma_m, t_m, \lambda_m, \delta_m, \varepsilon_m$ converge to $0$, and for every $m\geq 1$ the inclusion $$\overline{B_{\mathbb{G}}(x_m,\delta_m)}\subset B_{\mathbb{G}}(x_{m-1},\delta_{m-1})$$ holds. We record for later use that $\mathrm{Lip}_{\mathbb{G}}(f_m)\leq 1$ for all $m\geq 1$ and we define $\varepsilon'_m>0$ by letting $$\label{defepsprimo} \varepsilon'_m:=\min\{\varepsilon_m/2,\, \sigma_{m-1}/2\}.$$ We next show that the sets $D_{m}$ form a decreasing sequence. This is an adaptation of [@DM11 Lemma 3.3].
\[lemmachiave\] The following statements hold. 1. If $m\geq 1$ and $(x,E)\in D_{m+1}$, then $$(x_{m-1}, E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon'_m)} (x,E).$$ 2. If $m\geq 1$, then $D_{m+1}\subset D_m$. 3. If $m\geq 0$ and $(x,E)\in D_{m+1}$, then $ d(E(0),E_m(0))\leq \sigma_m$. If $m=0$ then (3) holds since $$d(E(0),E_0(0))\leq d(E(0),0)+d(0,E_0(0))\leq 2=\sigma_0.$$ It is enough to check that, whenever $m\geq 1$ and (3) holds for $m-1$, then (1), (2) and (3) hold for $m$. Fix $m\geq 1$ and assume that (3) holds for $m-1$, i.e. $$d(E(0),E_{m-1}(0)) \leq \sigma_{m-1}\quad \mbox{for all}\quad (x, E)\in D_m.$$ *Proof of (1).* Algorithm \[alg\](6) states that $(x_m,E_m)\in D_m$ and hence $$\label{stimavett} d(E_m(0),E_{m-1}(0))\leq \sigma_{m-1}.$$ Let $(x, E)\in D_{m+1}$. In particular, by Algorithm \[alg\](5) we have $Ef_{m+1}(x)\geq E_{m}f_{m+1}(x_{m})$. Notice that, since $\omega(E_{m})=\omega(E)=1$, we have $\langle E_m(0),E_m(0)\rangle=1$ and $\langle E(0), E_m(0) \rangle\leq 1$. Let $A:=Ef_m(x)-E_mf_m(x_m)$. Lemma \[lemmascalarlip\] and the inequality $Ef_{m+1}(x)\geq E_{m}f_{m+1}(x_{m})$ give $$Ef_{m+1}(x)-E_mf_{m+1}(x_m)-t_m\langle E(0), E_m(0) \rangle+t_m\geq 0.$$ Combining Algorithm \[alg\](5) again with the above inequality gives $$Ef_m(x)\geq E_mf_m(x_m)\geq E_{m-1} f_m(x_{m-1}).$$ In particular, $Ef_{m}(x)\geq E_{m-1}f_{m}(x_{m-1})$, which is the first requirement for $(x_{m-1}, E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon'_m)} (x,E)$. Let $B:=E f_m(x)-E_{m-1}f_m(x_{m-1})\geq 0$. Lemma \[lipismaximal\] and $\mathrm{Lip}_{\mathbb{G}}(f_m)\leq 1$ imply that $0\leq A\leq B\leq 2$.
Using these inequalities and $K\geq 4s^2$ gives $$\begin{aligned} \label{factorize} K(B^{\frac{1}{2s^2}}-A^{\frac{1}{2s^2}})&\geq (B^{\frac{2s^2-1}{2s^2}}+B^{\frac{2s^2-2}{2s^2}}A^{\frac{1}{2s^2}}+\ldots+B^{\frac{1}{2s^2}}A^{\frac{2s^2-2}{2s^2}}+A^{\frac{2s^2-1}{2s^2}})(B^{\frac{1}{2s^2}}-A^{\frac{1}{2s^2}})\nonumber \\ &=B-A\nonumber \\ &=E_mf_m(x_m)-E_{m-1}f_m(x_{m-1}).\end{aligned}$$ Since $A\geq Ef_{m+1}(x)-E_mf_{m+1}(x_m)$, implies that $$\begin{aligned} \label{estimate3} & E_mf_m(x_m)-E_{m-1}f_m(x_{m-1})+K( Ef_{m+1}(x)-E_mf_{m+1}(x_m))^{\frac{1}{2s^2}}\nonumber \\ & \qquad \leq K B^{\frac{1}{2s^2}}.\end{aligned}$$ To prove the second requirement of $(x_{m-1}, E_{m-1})\leq_{(f_m, \sigma_m-\varepsilon'_m)} (x,E)$ we need to estimate $$\label{thingtoestimate} |(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|.$$ We consider two cases, depending on whether $t$ is small or large. *Suppose $|t|<3C_D\delta_m^{\frac{1}{s}}/\varepsilon_m$.* Estimate as follows $$\begin{aligned} \label{estimate} &|(f_m(x\exp(tE_{m-1})) - f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\nonumber \\ &\qquad \leq |(f_m(x\exp(tE_m))-f_m(x)) - (f_m(x_m\exp(tE_m))-f_m(x_m))| \nonumber \\ &\qquad \quad + |(f_m(x_m\exp(tE_m))-f_m(x_m))\nonumber \\ &\qquad \quad \qquad -(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \nonumber \\ &\qquad \quad +| f_m(x\exp(tE_{m-1}))-f_m(x\exp(tE_m))|.\end{aligned}$$ We consider the three terms on the right hand side of separately. 
Firstly, Algorithm \[alg\](1) and Lemma \[lemmascalarlip\] give $$\begin{aligned} \label{eqz1} &(f_m(x\exp(tE_m))-f_m(x))-(f_m (x_{m}\exp(tE_{m}))- f_m(x_{m}))\\ &\qquad =(f_{m+1}(x\exp(tE_m))-f_{m+1}(x)) - ( f_{m+1}(x_m\exp(tE_m)) - f_{m+1}(x_m)) \nonumber \\ &\qquad \quad -t_m\langle x\exp(tE_m), E_m(0)\rangle +t_m\langle x,E_m(0)\rangle \nonumber \\ &\qquad \quad+t_m\langle x_m\exp(tE_m), E_m(0)\rangle- t_m\langle x_m, E_m(0)\rangle \nonumber \\ &\qquad=(f_{m+1}(x\exp(tE_m))-f_{m+1}(x)) - ( f_{m+1}(x_m\exp(tE_m)) - f_{m+1}(x_m))\nonumber .\end{aligned}$$ Since $(x,E)\in D_{m+1}$, using gives $$\begin{aligned} \label{estimate2} &|(f_m(x\exp(tE_{m}))-f_m(x))-(f_m(x_{m}\exp(tE_{m}))-f_m(x_{m}))| \nonumber \\ &\qquad \leq K(\sigma_m+(Ef_{m+1}(x)-E_mf_{m+1}(x_m))^{\frac{1}{2s^2}})|t|.\end{aligned}$$ For the second term in we recall that, for the values of $t$ we are considering, Algorithm \[alg\](8) states that $$\begin{aligned} &|(f_m(x_m\exp(tE_m))-f_m(x_m))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \label{feb22a}\\ &\qquad \leq( E_mf_m(x_m)-E_{m-1}f_m(x_{m-1})+\sigma_{m-1}) |t|.\nonumber\end{aligned}$$ The final term in is estimated using $\mathrm{Lip}_{\mathbb{G}}(f_{m})\leq 1$ and : $$\begin{aligned} |f_m(x\exp(tE_{m-1}))-f_m(x\exp(tE_m))| &\leq d(x\exp(tE_{m-1}), x\exp(tE_m)) \label{feb22b}\\ &= d(tE_{m-1}(0),tE_{m}(0)) \nonumber \\ & \leq \sigma_{m-1}|t|.\nonumber \end{aligned}$$ Adding , and , then using , and Algorithm \[alg\](2), gives $$\begin{aligned} &|(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \\ & \qquad \leq K(\sigma_m + (Ef_{m+1}(x)-E_mf_{m+1}(x_m))^{\frac{1}{2s^2}})|t| \\ &\qquad \quad + ( E_mf_m(x_m)-E_{m-1}f_m(x_{m-1})+\sigma_{m-1}) |t|\\ &\qquad \quad + \sigma_{m-1}|t|\\ & \qquad \leq K(\sigma_{m-1}-\varepsilon_{m}' + (Ef_{m}(x)-E_{m-1}f_{m}(x_{m-1}))^{\frac{1}{2s^2}})|t|,\end{aligned}$$ which gives the required estimate of for all $t$ with $|t|<3C_D\delta_m^{\frac{1}{s}}/\varepsilon_m$. 
*Suppose $3C_D\delta_m^{\frac{1}{s}}/\varepsilon_m \leq |t| < 1$.* In particular, this implies $$\label{refest} \delta_{m} \leq \varepsilon_{m}^{s}t^{s}/3C_D \leq \varepsilon_{m}|t|,$$ where in the last inequality above we used that $$\varepsilon_{m}|t|/3C_D\leq \varepsilon_{m}/3C_D\leq 1,$$ which follows from $\varepsilon_{m}\leq 2$ and $C_D\geq 1$. We estimate as follows: $$\begin{aligned} &|(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \leq |(f_m(x_m\exp(tE_{m-1}))-f_m(x_m))\\ &\qquad \quad \quad -(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \quad + |f_{m}(x) - f_{m}(x_{m})|\\ &\qquad \quad + |f_m(x\exp(tE_{m-1})) - f_m(x_m\exp(tE_{m-1}))|,\end{aligned}$$ and again we separately consider the three terms on the right hand side. By Algorithm \[alg\](7) we have $$(x_{m-1}, E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon_m)} (x_m, E_m),$$ which gives $$\begin{aligned} &|(f_m(x_m\exp(tE_{m-1}))-f_m(x_m))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\nonumber \\ &\qquad \leq K(\sigma_{m-1}-\varepsilon_{m} + (E_{m}f_{m}(x_{m})-E_{m-1}f_{m}(x_{m-1}))^{\frac{1}{2s^2}})|t|.\label{feb221}\end{aligned}$$ For the estimate of the second term we use $\mathrm{Lip}_{\mathbb{G}}(f_{m})\leq 1$ and to get $$\label{feb222} |f_m(x)-f_m(x_m)|\leq d(x,x_{m}) \leq \delta_m\leq \varepsilon_m |t|\leq K\varepsilon_m |t|/(4s^2).$$ Notice that $x\exp(tE_{m-1})$ and $x_{m}\exp(tE_{m-1})$ belong to $\overline{B_{\mathbb{G}}(x_{0},2+\delta_{0})}$. 
Using Proposition \[euclideanheisenberg\] and Proposition \[conjugatedistance\], recalling that $\delta_m <1$, we get $$\begin{aligned} &|f_m(x\exp(tE_{m-1}))-f_m(x_m\exp(tE_{m-1}))|\nonumber \\ &\qquad \leq d(x\exp(tE_{m-1}),x_{m}\exp(tE_{m-1}))\nonumber \\ &\qquad = d(\exp(tE_{m-1})^{-1}x_m^{-1} x\exp(tE_{m-1}))\nonumber \\ &\qquad \leq C_D(d(x_m, x)+ t^{\frac{1}{s}} d(x_m, x)^{\frac{s-1}{s}}+t^{\frac{s-1}{s}} d(x_m, x)^{\frac{1}{s}})\nonumber \\ &\qquad \leq C_D(\delta_m+ t^{\frac{1}{s}} \delta_m^{\frac{s-1}{s}}+t^{\frac{s-1}{s}} \delta_m^{\frac{1}{s}}) \nonumber \\ &\qquad \leq 3C_D \delta_m^{\frac{1}{s}} \leq \varepsilon_{m}|t| \leq K\varepsilon_{m} |t|/(4s^2). \label{feb223}\end{aligned}$$ Combine , and to obtain $$\begin{aligned} &|(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \leq K(\sigma_{m-1}-\varepsilon_m/2+(E_mf_m(x_m)-E_{m-1}f_m(x_{m-1}))^{\frac{1}{2s^2}})|t|\\ &\qquad \leq K(\sigma_{m-1}-\varepsilon'_m+(Ef_m(x)-E_{m-1}f_m(x_{m-1}))^{\frac{1}{2s^2}})|t|,\end{aligned}$$ which gives the required estimate of for all $t$ satisfying $3C_D\delta_m^{\frac{1}{s}}/\varepsilon_m \leq |t| < 1$. This completes the proof of (1). *Proof of (2).* Suppose $(x,E)\in D_{m+1}$. Then $(x,E)\in D^{f_{m+1}}=D^{f_{m}}$ and Lemma \[inclusionballs\] implies that $d(x,x_{m-1})<\delta_{m-1}$. Combining this with (1) gives $(x,E)\in D_{m}$. This completes the proof of (2). *Proof of (3).* Suppose $(x,E)\in D_{m+1}$. Then by Algorithm \[alg\](5) we have $E_m f_{m+1}(x_m)\leq E f_{m+1}(x)$.
Moreover, by Algorithm \[alg\](1) we have $$\label{aaa} E_m f_{m}(x_m)+t_m\langle E_{m}(0),E_{m}(0) \rangle \leq E f_{m}(x)+t_m\langle E(0),E_{m}(0) \rangle.$$ By (2), we have also that $(x,E)\in D_m$, so Algorithm \[alg\](6) implies $$\label{bbb} Ef_m(x)\leq E_m f_m(x_m)+\lambda_m.$$ Combining and gives $t_m\leq t_m\langle E(0),E_m(0)\rangle+\lambda_m$, which, after rearranging, implies $$\langle E(0),E_{m}(0) \rangle\geq 1-\lambda_m/t_m.$$ Therefore one has $$\label{stimam} |p(E)-p(E_m)|=|E(0)-E_m(0)|=(2-2\langle E(0), E_m(0)\rangle)^{\frac{1}{2}}\leq (2\lambda_m/t_m)^{\frac{1}{2}}.$$ Combining Algorithm \[alg\](5) and Lemma \[closedirectioncloseposition\] with $g(t):=\exp(tE_m)$, $N=1$ and $D=(2\lambda_m/t_m)^{\frac{1}{2}}$, we get $$\begin{aligned} d(E(0),E_m(0))=d(\exp(E),\exp(E_m)) &\leq C_{\mathrm{a}}(2\lambda_m/t_m)^{\frac{1}{2s^2}}\\& \leq \sigma_m,\end{aligned}$$ which proves (3). We next study the convergence of $(x_{m}, E_{m})$ and $f_{m}$. We show that the directional derivatives converge to a directional derivative of the limiting function, and that the limit of $(x_{m},E_{m})$ belongs to $D_{m}$ for every $m$. This is an adaptation of [@DM11 Lemma 3.4]. \[lemmaquasifinale\] The following statements hold: 1. $f_{m}\to f$ pointwise, where $f:\mathbb{G}\to \mathbb{R}$ is Lipschitz and $\mathrm{Lip}_{\mathbb{G}}(f)\leq 1$, 2. $f-f_m$ is $\mathbb{G}$-linear and $\mathrm{Lip}_{\mathbb{G}}(f-f_m) \leq 2t_m$ for $m\geq 0$, 3. There exist $x_{\ast}\in N$ and $E_{\ast} \in V_1$ with $\omega(E_{\ast})=1$ such that for $m \geq 0$ we have $$d(x_{\ast},x_m)< \delta_m, \quad\mbox{and}\quad d(E_{\ast}(0),E_m(0))\leq \sigma_m.$$ 4. $E_{\ast}f(x_{\ast})$ exists, is strictly positive and $E_mf_m(x_m)\uparrow E_{\ast}f(x_{\ast})$, 5. $(x_{m-1},E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon'_m)} (x_{\ast},E_{\ast})$ for $m\geq 1$, 6. $(x_{\ast},E_{\ast})\in D_m$ for $m\geq 1$, 7. $\omega(E_{\ast}-E_0)<\tau$. We prove each statement individually.
*Proof of (1).* Algorithm \[alg\](1) gives $f_m(x)=f_0(x)+\langle x,\sum_{k=0}^{m-1}t_kE_{k}(0)\rangle$. Define $f:\mathbb{G}\to\mathbb{R}$ by $$\label{deff} f(x):=f_0(x)+\Big\langle x,\sum_{k=0}^{\infty}t_k E_{k}(0)\Big\rangle.$$ Notice $|f(x)-f_m(x)|\leq | x | \sum_{k=m}^{\infty} t_k |E_{k}(0)|$. Hence by Algorithm \[alg\](3) $f_m\to f$ pointwise and, since $\mathrm{Lip}_{\mathbb{G}}(f_m)\leq 1$, we deduce $\mathrm{Lip}_{\mathbb{G}}(f)\leq 1$. *Proof of (2).* Lemma \[lemmascalarlip\] shows that $f-f_{m}$ is $\mathbb{G}$-linear. Moreover, by Algorithm \[alg\](3), we have that for every $m\geq 0$ $$\mathrm{Lip}_{\mathbb{G}}(f-f_m) \leq \sum_{k=m}^{\infty} t_k \leq t_m \sum_{k=m}^{\infty} \frac{1}{2^{k-m}} \leq 2t_m.$$ *Proof of (3).* Let $q\geq m\geq 0$. The definition of $D_{q+1}$ in Algorithm \[alg\](5) shows that $(x_q,E_q)\in D_{q+1}$. Hence points 2 and 3 of Lemma \[lemmachiave\] imply that $(x_q,E_q)\in D_{m+1}$, and consequently $$\label{Cauchy1} d(E_q(0),E_m(0)) \leq \sigma_m.$$ Since $(x_q,E_q)\in D_{m+1}$, Algorithm \[alg\](5) implies $$\label{Cauchy2} d(x_q,x_m)< \delta_m.$$ Since, by Lemma \[inclusionballs\], $\sigma_m, \delta_m \to 0$ the sequences $(x_m)_{m=1}^{\infty}$ and $(E_{m}(0))_{m=1}^{\infty}$ are Cauchy, and therefore they converge to some $x_{\ast}\in\mathbb{G}$ and $v\in \mathbb{G}$, respectively. Since $E_{m}\in V_1$ and $\omega(E_{m})=1$, we know that $|p(v)|=1$ and $v=(p(v),0)$. Using group translations, we can extend $v$ to a vector field $E_{\ast}\in V_1$ with $\omega(E_{\ast})=1$ and $E_{\ast}(0)=v$. Letting $q\to \infty$ in and implies that $d(E_{\ast}(0),E_m(0)) \leq \sigma_m$ and $d(x_{\ast},x_m)\leq \delta_m$. Combining Lemma \[inclusionballs\] and the fact that $\delta_m<\delta_{m-1}/2$, we have the strict inequality $d(x_{\ast},x_m)< \delta_m$. We now know that $x_{\ast}\in \overline{B_{\mathbb{G}}(x_m,\delta_m)}$ for every $m\geq 1$. 
Recall that $N=\cap_{m=0}^{\infty} U_m$ for open sets $U_m \subset \mathbb{G}$, and Algorithm \[alg\](8) states that $\overline{B_{\mathbb{G}}(x_{m},\delta_{m})}\subset U_{m}$. Hence $x_{\ast}\in N$. *Proof of (4).* As in the proof of (3) we have $(x_q,E_q)\in D_{m+1}$ for every $q\geq m\geq 0$. Therefore, by Lemma \[lemmachiave\](1), for every $q\geq m\geq 1$ we have $$\label{bla} (x_{m-1}, E_{m-1})\leq_{(f_m,\sigma_{m-1}-\varepsilon'_m)} (x_q,E_q).$$ Algorithm \[alg\](1) and (with $m$ and $q$ replaced by $q+1$) give $$\label{stima} E_qf_q(x_q)< E_q f_{q+1}(x_q)\leq E_{q+1}f_{q+1}(x_{q+1}) \quad \mbox{for every}\quad q \geq 0.$$ Hence, since $E_0f_0(x_0)\geq 0$, the sequence $(E_q f_q(x_q))_{q=0}^{\infty}$ is strictly increasing and positive. Since $\mathrm{Lip}_{\mathbb{G}}(f_q)\leq 1$ for every $q\geq 1$, by Lemma \[lipismaximal\], the sequence $(E_qf_q(x_q))_{q=1}^{\infty}$ is bounded above by $1$. Consequently, $E_qf_q(x_q)\to L$ for some $0<L\leq 1$. Inequality implies that also $E_q f_{q+1}(x_q) \to L$, and, moreover, one has $$E_q f(x_q)=E_q f_q(x_q)+E_q (f-f_q)(x_q)$$ and $|E_q (f-f_q)(x_q)|\leq \mathrm{Lip}_{\mathbb{G}}(f-f_q) \leq 2t_{q} \to 0$. Hence also $E_qf(x_q) \to L$. Let $q\geq m\geq 0$ and consider $$s_{m,q}:=E_qf_m(x_q)-E_{m-1}f_m(x_{m-1}).$$ By we have that $s_{m,q}\geq 0$. Letting $q\to \infty$, writing $f_{m}=f+(f_{m}-f)$, and using the $\mathbb{G}$-linearity of $f_{m}-f$ one gets $$\label{defiC} s_{m,q}\to s_{m}:=(f_m-f)(E_{\ast}(0))+L-E_{m-1}f_m(x_{m-1})\geq 0.$$ Since $\mathrm{Lip}_{\mathbb{G}}(f_{m}-f)\leq 2t_{m}$ and $E_{m-1}f_{m}(x_{m-1})\to L$, also $s_{m} \to 0$ as $m\to \infty$. 
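Spelled out, the convergence $s_{m}\to 0$ is just the triangle inequality applied to the two contributions in the definition of $s_{m}$: since $f_{m}-f$ is $\mathbb{G}$-linear with $\mathrm{Lip}_{\mathbb{G}}(f_{m}-f)\leq 2t_{m}$ and $\omega(E_{\ast})=1$, $$|s_{m}| \leq |(f_m-f)(E_{\ast}(0))| + |L-E_{m-1}f_m(x_{m-1})| \leq 2t_{m} + |L-E_{m-1}f_m(x_{m-1})|,$$ and both terms on the right tend to $0$ because $t_{m}\to 0$ and $E_{m-1}f_{m}(x_{m-1})\to L$.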
implies that $$\begin{aligned} \label{bla2} &|(f_m(x_q\exp(tE_{m-1})) - f_m(x_q))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\nonumber \\ &\qquad \leq K (\sigma_{m-1}-\varepsilon'_m+(s_{m,q})^{\frac{1}{2s^2}}) |t| \quad \mbox{ for }t\in (-1,1).\end{aligned}$$ Letting $q\to \infty$ in shows that $$\begin{aligned} \label{eqncruc} &|(f_m(x_{\ast}\exp(tE_{m-1}))-f_m(x_{\ast}))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\nonumber \\ &\qquad \leq K(\sigma_{m-1}-\varepsilon'_m+(s_{m})^{\frac{1}{2s^2}})|t| \quad \mbox{ for }t\in (-1,1).\end{aligned}$$ Since $\mathrm{Lip}_{\mathbb{G}}(f)\leq 1$ and $d(E_{\ast}(0),E_{m-1}(0))\leq \sigma_{m-1}$, we obtain $$\begin{aligned} |f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}\exp(tE_{m-1}))|&\leq d(x_{\ast}\exp(tE_{\ast}),x_{\ast}\exp(tE_{m-1}))\nonumber \\ &\leq \sigma_{m-1}|t|. \label{yyy}\end{aligned}$$ Since $f-f_{m}$ is $\mathbb{G}$-linear and $\mathrm{Lip}_{\mathbb{G}}(f-f_m) \leq 2t_m$ we can estimate $$\begin{aligned} |(f-f_m)(x_{\ast}\exp(tE_{m-1}))-(f-f_m)(x_{\ast})| &= |(f-f_{m})(\exp(tE_{m-1}))|\nonumber \\ &= |(f-f_{m})(\delta_t(\exp(E_{m-1})))|\nonumber \\ &\leq |t|\mathrm{Lip}_{\mathbb{G}}(f-f_{m})\nonumber \\ &\leq 2t_{m}|t|.\label{zzz}\end{aligned}$$ Combining , and shows that for $t\in (-1,1)$: $$\begin{aligned} &|(f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \leq |(f_m(x_{\ast}\exp(tE_{m-1}))-f_m(x_{\ast}))\\ &\qquad \quad \quad -(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \quad +|f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}\exp(tE_{m-1}))|\\ &\qquad \quad +|(f-f_m)(x_{\ast}\exp(tE_{m-1}))-(f-f_m)(x_{\ast})|\\ &\qquad \leq (K(\sigma_{m-1}-\varepsilon'_m+(s_{m})^{\frac{1}{2s^2}})+\sigma_{m-1}+2t_m)|t|.\end{aligned}$$ Fix $\varepsilon>0$ and choose $m\geq 1$ such that $$K(\sigma_{m-1}-\varepsilon'_m+(s_{m})^{\frac{1}{2s^2}})+\sigma_{m-1}+2t_m\leq \varepsilon/3$$ and $$|E_{m-1}f_m(x_{m-1})-L|\leq \varepsilon/3.$$ Using the definition of $E_{m-1}f_{m}(x_{m-1})$, we find $0<\delta<1$
such that for every $|t|< \delta$ $$|f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1})-tE_{m-1}f_m(x_{m-1})|\leq \varepsilon|t|/3.$$ Hence, for every $|t|< \delta$ $$\begin{aligned} &|f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast})-tL|\\ &\qquad \leq|(f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \quad +|f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1})-tE_{m-1}f_m(x_{m-1})|\\ &\qquad \quad +|E_{m-1}f_m(x_{m-1})-L| |t|\\ &\qquad \leq \varepsilon |t|.\end{aligned}$$ This proves that $E_{\ast}f(x_{\ast})$ exists and is equal to $L$. We have already seen that $(E_qf_q(x_q))_{q=1}^{\infty}$ is a strictly increasing sequence of positive numbers. This proves (4). *Proof of (5).* The definition of $L$ and Lemma \[lemmascalarlip\] imply $$E_{\ast}f_m(x_{\ast})=L+E_{\ast}(f_{m}-f)(x_{\ast})=L+(f_m-f)(E_{\ast}(0)).$$ Using shows $s_{m}=E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1})\geq 0$. Substituting this in gives (5). *Proof of (6).* Property (6) is a consequence of (3), (4) and (5). *Proof of (7).* We start by estimating $\omega(E_1-E_0)$. 
By Algorithm \[alg\](6), for every $(x,E)\in D_1$ we have $$Ef_1(x)\leq E_1f_1(x_1)+\lambda_1,$$ where $$\begin{aligned} \label{defF} f_1(x)=f_0(x)+t_0\left\langle x, E_0(0)\right\rangle.\end{aligned}$$ Clearly $$E_{0}f_{1}(x_{0})=E_{0}f_{0}(x_{0})+t_{0}$$ and $$E_1 f_1(x_1)=E_{1}f_{0}(x_{1})+t_{0}\langle E_1(0), E_0(0)\rangle.$$ By Algorithm \[alg\](5), $(x_0,E_0)\in D_1$ and therefore $$\begin{aligned} \label{disE} E_0f_1(x_0)\leq E_1f_1(x_1)+\lambda_1.\end{aligned}$$ A simple calculation using then gives $$\omega(E_1,E_0)=\left\langle E_1(0), E_0(0)\right\rangle\geq 1-\frac{\lambda_1}{t_0}+\frac{E_0f_0(x_0)-E_1f_0(x_1)}{t_0}.$$ Since $\omega$ is an inner product norm we can estimate as follows: $$\begin{aligned} \omega(E_1-E_0)&=\left(\omega(E_1)^2+\omega(E_0)^2-2\omega(E_1,E_0)\right)^{\frac{1}{2}}\\ &=\left(2-2\omega(E_1,E_0)\right)^{\frac{1}{2}}\\ &\leq \left(\frac{2\lambda_1}{t_0}+\frac{2|E_0f_0(x_0)-E_1f_0(x_1)|}{t_0}\right)^{\frac{1}{2}}\\ &\leq \left(\frac{2\lambda_1}{t_0}+\frac{4\mathrm{Lip}_{{\mathbb{G}}}(f_0)}{t_0}\right)^{\frac{1}{2}}\\ &\leq \frac{\tau}{2},\end{aligned}$$ where in the last inequality above we used the estimate on $\mathrm{Lip}_{{\mathbb{G}}}(f_0)$ in Assumption \[Ass\] and the estimate on $\lambda_{1}$ in Algorithm \[alg\](3). Next, as proved in , for every $m\geq 1$ and $(x, E)\in D_{m+1}$ we have $$\omega(E-E_{m})\leq \left(2\lambda_m/t_m\right)^{\frac{1}{2}}.$$ Using the estimate in Algorithm \[alg\](4), this implies that for every $m\geq 1$: $$\begin{aligned} \omega(E_{m+1}-E_{m})< \frac{\tau}{2^{m+1}}.\end{aligned}$$ Therefore, $$\begin{aligned} \omega(E_{\ast}-E_0)=\lim_{m\to \infty} \omega(E_m-E_0)&\leq \sum_{m=2}^{\infty}\omega(E_m-E_{m-1})+\omega(E_1-E_0)\\ &< \tau \sum_{m=2}^{\infty}\frac{1}{2^{m}}+ \frac{\tau}{2}\\ &=\tau.\end{aligned}$$ We now prove that the limit directional derivative $E_{\ast}f(x_{\ast})$ is almost locally maximal in horizontal directions. This is an adaptation of [@DM11 Lemma 3.5]. 
\[almostlocmax\] For all $\varepsilon>0$ there is $\delta_{\varepsilon}>0$ such that if $(x,E)\in D^f$ satisfies $d(x_{\ast},x)\leq \delta_{\varepsilon}$ and $(x_{\ast},E_{\ast})\leq_{(f,0)}(x,E)$, then $$Ef(x)<E_{\ast}f(x_{\ast})+\varepsilon.$$ Fix $\varepsilon>0$. By Lemma \[inclusionballs\] we choose $m\geq 1$ such that $$\label{param} m\geq 4/\varepsilon^{\frac{2s^{2}-1}{2s^{2}}}\quad \mbox{and}\quad \lambda_m,t_m\leq \varepsilon/4.$$ Recall that $\varepsilon'_m=\min\{\varepsilon_m/2,\, \sigma_{m-1}/2\}$. Using Lemma \[lemmaquasifinale\](3) and \[lemmaquasifinale\](6), fix $\delta_{\varepsilon}>0$ with $$\delta_{\varepsilon}< \delta_{m-1}-d(x_{\ast},x_{m-1})$$ such that for every $|t|< 3C_D\delta_{\varepsilon}^{\frac{1}{s}}/\varepsilon_{m}'$ $$\begin{aligned} \label{estimated2} &|(f_m(x_{\ast}\exp(tE_{\ast}))-f_m(x_{\ast}))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\nonumber \\ &\qquad \leq (E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1})+\sigma_{m-1})|t|.\end{aligned}$$ Such $\delta_{\varepsilon}$ exists since, by Lemma \[lemmaquasifinale\](5), we have $E_{\ast}f_m(x_{\ast})\geq E_{m-1}f_m(x_{m-1})$. We argue by contradiction and suppose that $(x,E)\in D^f$ satisfies $d(x_{\ast},x)\leq \delta_{\varepsilon}$, $(x_{\ast},E_{\ast})\leq_{(f,0)} (x,E)$ and $Ef(x)\geq E_{\ast}f(x_{\ast})+\varepsilon$. We plan to show that $(x,E)\in D_m$. We first observe that this gives a contradiction. Indeed, Algorithm \[alg\](6) and the monotone convergence $E_mf_m(x_m)\uparrow E_{\ast}f(x_{\ast})$ would then imply $$Ef_m(x)\leq E_mf_m(x_m)+\lambda_m\leq E_{\ast}f(x_{\ast})+\lambda_m.$$ By Lemma \[lemmaquasifinale\](2) and \eqref{param} we would deduce that $$\begin{aligned} Ef(x)-E_{\ast}f(x_{\ast})&=(Ef_m(x)-E_{\ast}f(x_{\ast}))+E(f-f_m)(x)\\ &\leq \lambda_m+2t_m\\ &\leq 3\varepsilon /4,\end{aligned}$$ which contradicts the assumption that $Ef(x)\geq E_{\ast}f(x_{\ast})+\varepsilon$.
*Proof that $(x,E)\in D_m$.* Since $f-f_{m}$ is $\mathbb{G}$-linear we have $D^f=D^{f_{m}}$ and therefore $(x,E)\in D^{f_{m}}$. Next observe that $$d(x,x_{m-1})\leq d(x,x_{\ast})+d(x_{\ast},x_{m-1}) < \delta_{m-1}.$$ Hence, it suffices to show that $(x_{m-1},E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon'_m/2)} (x,E)$. Lemma \[lipismaximal\] implies $$|E(f-f_{m})(x)|,\, |E_{\ast}(f-f_{m})(x_{\ast})|\leq \mathrm{Lip}_{\mathbb{G}}(f-f_{m}).$$ Hence by definition of $(x,E)$ and by we have $$\begin{aligned} Ef_m(x)-E_{\ast}f_m(x_{\ast})&\geq Ef(x)-E_{\ast}f(x_{\ast})-2\mathrm{Lip}_{\mathbb{G}}(f_m-f)\\ &\geq \varepsilon -4t_m\geq 0.\end{aligned}$$ Lemma \[lemmaquasifinale\](6) states that $(x_{\ast}, E_{\ast})\in D_{m}$, which implies $E_{m-1}f_m(x_{m-1})\leq E_{\ast}f_m(x_{\ast})$ and hence $$Ef_m(x)\geq E_{\ast}f_m(x_{\ast})\geq E_{m-1}f_m(x_{m-1}).$$ In particular, the inequality $Ef_{m}(x)\geq E_{m-1}f_m(x_{m-1})$ proves the first requirement of $(x_{m-1},E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon'_m/2)} (x,E)$. We next deduce several inequalities from our hypotheses. Let - $A:=Ef(x)-E_{\ast}f(x_{\ast})$, - $B:=Ef_m(x)-E_{\ast}f_m(x_{\ast})$, - $C:=Ef_m(x)-E_{m-1}f_m(x_{m-1})$. By definition of $(x,E)$ we have $A\geq \varepsilon$, while the inequalities above give $0\leq B\leq C$. By Lemma \[lipismaximal\] we have that $A,\, B,\, C\leq 2$. 
Recalling the factorization $$\label{factorizerepeat} A-B=\left(B^{\frac{2s^2-1}{2s^2}}+B^{\frac{s^2-1}{s^2}}A^{\frac{1}{2s^2}}+\ldots+B^{\frac{1}{2s^2}}A^{\frac{s^2-1}{s^2}}+A^{\frac{2s^2-1}{2s^2}}\right)\left(A^{\frac{1}{2s^2}}-B^{\frac{1}{2s^2}}\right)$$ and using Lemma \[lemmaquasifinale\](2), \eqref{param} and Algorithm \[alg\](3), we obtain $$\begin{aligned} A^{\frac{1}{2s^2}}-B^{\frac{1}{2s^2}} &\leq (A-B)/\varepsilon^{\frac{2s^2-1}{2s^2}}\\ &=(E(f-f_m)(x)-E_{\ast}(f-f_m)(x_{\ast}))/\varepsilon^{\frac{2s^2-1}{2s^2}}\\ &\leq 4t_m /\varepsilon^{\frac{2s^2-1}{2s^2}}\\ & \leq mt_m\\ &\leq \sigma_{m-1}/s^2.\end{aligned}$$ Since $B,\, C\leq 2$ and $K\geq 4s^2$ we have $$B^{\frac{2s^2-1}{2s^2}}+B^{\frac{s^2-1}{s^2}}A^{\frac{1}{2s^2}}+\ldots+B^{\frac{1}{2s^2}}A^{\frac{s^2-1}{s^2}}+A^{\frac{2s^2-1}{2s^2}}\leq 4s^2 \leq K.$$ Hence using \eqref{factorizerepeat} with $A$ replaced by $C$ gives $$KC^{\frac{1}{2s^2}}-KB^{\frac{1}{2s^2}}\geq C-B=E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1}).$$ Combining our estimates we eventually find $$\begin{aligned} \label{stima32} &E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1})+K(Ef(x)-E_{\ast}f(x_{\ast}))^{\frac{1}{2s^2}}\nonumber \\ &\qquad =E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1})+KA^{\frac{1}{2s^2}}\nonumber\\ &\qquad \leq KC^{\frac{1}{2s^2}}-KB^{\frac{1}{2s^2}}+K(B^{\frac{1}{2s^2}}+\sigma_{m-1}/s^2) \nonumber\\ &\qquad = K((Ef_m(x)-E_{m-1}f_m(x_{m-1}))^{\frac{1}{2s^2}}+\sigma_{m-1}/s^2).\end{aligned}$$ We now prove the second requirement of $(x_{m-1},E_{m-1})\leq_{(f_m, \sigma_{m-1}-\varepsilon'_m/2)} (x,E)$. We need to estimate $$\label{incases} |(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|.$$ We consider two cases, depending on whether $t$ is small or large.
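Before treating the two cases, we record why the first inequality in the chain of estimates above holds; this is elementary but worth making explicit. The identity \eqref{factorizerepeat} is the factorization $a^{n}-b^{n}=(a-b)\left(a^{n-1}+a^{n-2}b+\ldots+b^{n-1}\right)$ applied with $a=A^{\frac{1}{2s^2}}$, $b=B^{\frac{1}{2s^2}}$ and $n=2s^{2}$; in the simplest case $s=1$ it reduces to the difference of squares $$A-B=\left(A^{\frac{1}{2}}+B^{\frac{1}{2}}\right)\left(A^{\frac{1}{2}}-B^{\frac{1}{2}}\right).$$ If $A\geq B$, then since $A\geq \varepsilon$ the first factor in \eqref{factorizerepeat} is at least $A^{\frac{2s^2-1}{2s^2}}\geq \varepsilon^{\frac{2s^2-1}{2s^2}}$, and dividing through gives $A^{\frac{1}{2s^2}}-B^{\frac{1}{2s^2}}\leq (A-B)/\varepsilon^{\frac{2s^2-1}{2s^2}}$; if $A<B$, the left-hand side is negative and the subsequent bound by $4t_m/\varepsilon^{\frac{2s^2-1}{2s^2}}\geq 0$ holds trivially.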
*Suppose $|t|\leq 3C_D\delta_{\varepsilon}^{\frac{1}{s}}/\varepsilon_{m}'$.* To estimate \eqref{incases} we use the inequality $$\begin{aligned} \label{toestimate} &|(f_m(x\exp(tE_{m-1})) - f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1})) - f_m(x_{m-1}))|\nonumber \\ &\qquad \leq |(f_m(x\exp(tE_{\ast})) - f_m(x))-(f_m(x_{\ast}\exp(tE_{\ast})) - f_m(x_{\ast}))| \nonumber \\ &\qquad \quad + |(f_m(x_{\ast}\exp(tE_{\ast})) - f_m(x_{\ast}))\nonumber \\ &\qquad \quad \qquad -(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \nonumber \\ &\qquad \quad +|f_m(x\exp(tE_{m-1}))-f_m(x\exp(tE_{\ast}))|.\end{aligned}$$ Since $(x_{\ast},E_{\ast})\leq_{(f,0)} (x,E)$, by Lemma \[lemmascalarlip\] and $\mathbb{G}$-linearity of $f_{m}-f$ we can estimate the first term in \eqref{toestimate} by $$\begin{aligned} \label{estimated1} &|(f_m(x\exp(tE_{\ast}))-f_m(x))-(f_m(x_{\ast}\exp(tE_{\ast}))-f_m(x_{\ast}))|\nonumber \\ &\qquad \leq |(f(x\exp(tE_{\ast}))-f(x))-(f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))| \nonumber \\ &\qquad \leq K(Ef(x)-E_{\ast}f(x_{\ast}))^{\frac{1}{2s^2}}|t|.\end{aligned}$$ Since $t$ is small, by \eqref{estimated2} we get $$\begin{aligned} &|(f_m(x_{\ast}\exp(tE_{\ast})) - f_m(x_{\ast})) -(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \leq (E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1})+\sigma_{m-1})|t|.\end{aligned}$$ Lemma \[lemmaquasifinale\] implies that the third term of \eqref{toestimate} is bounded above by $\sigma_{m-1}|t|$. By combining the estimates of each term and using \eqref{stima32} we get $$\begin{aligned} \label{hi} &|(f_m(x\exp(tE_{m-1}))-f_m(x)) - (f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \nonumber \\ &\qquad \leq (K(Ef(x)-E_{\ast}f(x_{\ast}))^{\frac{1}{2s^2}} + E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1})+2\sigma_{m-1})|t| \nonumber \\ &\qquad \leq (K((Ef_{m}(x)-E_{m-1}f_{m}(x_{m-1}))^{\frac{1}{2s^2}}+\sigma_{m-1}/s^2) + 2\sigma_{m-1})|t|\nonumber \\ &\qquad \leq K(\sigma_{m-1}-\varepsilon'_m/2+(Ef_m(x)-E_{m-1}f_m(x_{m-1}))^{\frac{1}{2s^2}})|t|,\end{aligned}$$ where we have used $\varepsilon'_m\leq \sigma_{m-1}/2$ and $K\geq 4s^2$ in the final line.
This gives the required estimate of \eqref{incases} for small $t$. *Suppose $3C_D\delta_{\varepsilon}^{\frac{1}{s}}/\varepsilon_{m}' \leq |t|\leq 1$.* To estimate \eqref{incases} we use the inequality $$\begin{aligned} &|(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \leq |(f_m(x_{\ast}\exp(tE_{m-1}))-f_m(x_{\ast}))\\ &\qquad \quad \quad -(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))|\\ &\qquad \quad +|f_m(x_{\ast}) - f_m(x)| + |f_m(x\exp(tE_{m-1})) - f_m(x_{\ast}\exp(tE_{m-1}))|.\end{aligned}$$ Lemma \[lemmaquasifinale\](5) shows that the first term on the right hand side is bounded above by $$K(\sigma_{m-1}-\varepsilon'_m+( E_{\ast}f_m(x_{\ast})-E_{m-1}f_m(x_{m-1}))^{\frac{1}{2s^2}})|t|.$$ The second term is bounded by $d(x_{\ast},x)\leq \delta_{\varepsilon} \leq \varepsilon_{m}'|t|\leq K\varepsilon_{m}'|t|/s^2$. For the third term, we use Lemma \[conjugatedistance\] with $x=\exp(t E_{m-1})$ and $y=x_{\ast}^{-1}x$ to get $$\begin{aligned} &|f_m(x\exp(tE_{m-1})) - f_m(x_{\ast}\exp(tE_{m-1}))|\\ &\qquad \leq d(x\exp(tE_{m-1}), x_{\ast}\exp(tE_{m-1}))\\ &\qquad \leq C_D\left( d(x_{\ast}, x) + t^{\frac{1}{s}}d(x_{\ast}, x)^{\frac{s-1}{s}}+ t^{\frac{s-1}{s}}d(x_{\ast}, x)^{\frac{1}{s}}\right)\\ &\qquad \leq C_D(\delta_{\varepsilon}+\delta_{\varepsilon}^{\frac{s-1}{s}}+\delta_{\varepsilon}^{\frac{1}{s}})\\ &\qquad \leq 3C_D \delta_{\varepsilon}^{\frac{1}{s}}\\ &\qquad \leq \varepsilon_{m}'|t|\\ &\qquad \leq K\varepsilon_{m}' |t|/(4s^2).\end{aligned}$$ Combining the three estimates and using $E_{\ast}f_m(x_{\ast})\leq Ef_m(x)$ gives $$\begin{aligned} &|(f_m(x\exp(tE_{m-1}))-f_m(x))-(f_m(x_{m-1}\exp(tE_{m-1}))-f_m(x_{m-1}))| \\ &\qquad \leq K(\sigma_{m-1}-\varepsilon'_m/2+( Ef_m(x)-E_{m-1}f_m(x_{m-1}))^{\frac{1}{2s^2}})|t|.\end{aligned}$$ This gives the required estimate of \eqref{incases} for large $t$ and therefore $$(x_{m-1},E_{m-1})\leq_{(f_m,\sigma_{m-1}-\varepsilon'_m/2)}(x,E),$$ which concludes the proof. We conclude this section by proving Proposition \[DoreMaleva\] and Theorem \[maintheorem\].
Proposition \[DoreMaleva\] easily follows from Lemma \[lemmaquasifinale\] and Lemma \[almostlocmax\]. Indeed, Lemma \[lemmaquasifinale\] states that there is $f\colon \mathbb{G} \to \mathbb{R}$ Lipschitz such that $f-f_{0}$ is linear and $\mathrm{Lip}_{\mathbb{G}}(f-f_{0})\leq 2t_{0}\leq \mu$. It also states there is $(x_{\ast}, E_{\ast})\in D^{f}$ satisfying $d(x_{\ast},x_{0})<\delta_{0}$ and $E_{\ast}f(x_{\ast})>0$. Lemma \[almostlocmax\] then shows that $E_{\ast}f(x_{\ast})$ is almost locally maximal in the sense of Proposition \[DoreMaleva\]. Let $B\subset V_1$ be a ball of directions as in Assumption 6.1. Let $f_{0}\colon \mathbb{G}\to \mathbb{R}$ be a Lipschitz function. Multiplying $f_{0}$ by a non-zero constant does not change the set of points where it is Pansu differentiable. Hence we can assume without loss of generality that $\mathrm{Lip}_{\mathbb{G}}(f_0)\leq 1/4$. Fix an arbitrary pair $(x_{0},E_{0})\in D^{f_{0}}$ and apply Proposition \[DoreMaleva\] with $\delta_{0}=1$, $\mu=1/4$ and $K=4s^2$. This gives a Lipschitz function $f\colon \mathbb{G}\to \mathbb{R}$ such that $f-f_{0}$ is $\mathbb{G}$-linear with $\mathrm{Lip}_{\mathbb{G}}(f-f_{0})\leq 1/4$ and a pair $(x_{\ast},E_{\ast})\in D^{f}$ with $x_{\ast}\in N$ and $E_{\ast}f(x_{\ast})>0$ which is almost locally maximal in the following sense. For any $\varepsilon>0$ there is $\delta_{\varepsilon}>0$ such that whenever $(x,E)\in D^{f}$ satisfies both 1. $d(x,x_{\ast})\leq \delta_{\varepsilon}$, $Ef(x)\geq E_{\ast}f(x_{\ast})$, and 2. for any $t\in (-1,1)$: $$\begin{aligned} &|(f(x\exp(tE_{\ast}))-f(x))-(f(x_{\ast}\exp(tE_{\ast}))-f(x_{\ast}))|\\ & \qquad \leq 4s^2|t| ( Ef(x)-E_{\ast}f(x_{\ast}) )^{\frac{1}{2s^2}},\end{aligned}$$ then $$Ef(x)<E_{\ast}f(x_{\ast})+\varepsilon.$$ Since $\mathrm{Lip}_{\mathbb{G}}(f_{0})\leq 1/4$ and $\mathrm{Lip}_{\mathbb{G}}(f-f_{0})\leq 1/4$ we have $\mathrm{Lip}_{\mathbb{G}}(f)\leq 1/2$. 
Notice that $(x_{\ast},E_{\ast})$ is also almost locally maximal in the sense of Theorem \[almostmaximalityimpliesdifferentiability\], since the restriction on pairs above is weaker than that in Theorem \[almostmaximalityimpliesdifferentiability\]. Hence Theorem \[almostmaximalityimpliesdifferentiability\] implies that $f$ is Pansu differentiable at $x_{\ast}\in N$. Since a $\mathbb{G}$-linear function is Pansu differentiable everywhere, it follows that $f_{0}$ is Pansu differentiable at $x_{\ast}$. This proves Theorem \[maintheorem\]. [99]{} Agrachev, A., Barilari, D., Boscain, U.: *Introduction to Riemannian and Sub-Riemannian geometry*, available at https://webusers.imj-prg.fr/~davide.barilari/Notes.php Alberti, G., Marchese, A.: *On the differentiability of Lipschitz functions with respect to measures in Euclidean spaces*, Geometric and Functional Analysis 26(1) (2016), 1–66. Alberti, G., Csörnyei, M., Preiss, D.: *Differentiability of Lipschitz functions, structure of null sets, and other problems*, Proceedings of the International Congress of Mathematicians III (2010), 1379–1394. Bonfiglioli, A., Lanconelli, E., Uguzzoni, F.: *Stratified Lie Groups and Potential Theory for Their Sub-Laplacians*, Springer Monographs in Mathematics (2007). Cheeger, J.: *Differentiability of Lipschitz functions on metric measure spaces*, Geometric and Functional Analysis 9(3) (1999), 428–517. Chow, W. L.: *Über Systeme von linearen partiellen Differentialgleichungen erster Ordnung*, Mathematische Annalen 117, 98–105. Capogna, L., Danielli, D., Pauls, S., Tyson, J.: *An introduction to the Heisenberg group and the sub-Riemannian isoperimetric problem*, Birkhäuser Progress in Mathematics 259 (2007). Csörnyei, M., Jones, P.: *Product formulas for measures and applications to analysis and geometry*, announcement of result is in slides available at: http://www.math.sunysb.edu/Videos/dfest/PDFs/38-Jones.pdf.
Doré, M., Maleva, O.: *A compact null set containing a differentiability point of every Lipschitz function*, Mathematische Annalen 351(3) (2011), 633–663. Doré, M., Maleva, O.: *A compact universal differentiability set with Hausdorff dimension one*, Israel Journal of Mathematics 191(2) (2012), 889–900. Doré, M., Maleva, O.: *A universal differentiability set in Banach spaces with separable dual*, Journal of Functional Analysis 261 (2011), 1674–1710. Dymond, M., Maleva, O.: *Differentiability inside sets with upper Minkowski dimension one*, Michigan Mathematical Journal 65 (2016), 613–636. Fitzpatrick, S.: *Differentiation of real-valued functions and continuity of metric projections*, Proceedings of the American Mathematical Society 91(4) (1984), 544–548. Franchi, B., Serapioni, R.: *Intrinsic Lipschitz graphs within Carnot groups*, Journal of Geometric Analysis 26(3) (2016), 1946–1994. Gromov, M.: *Carnot-Carathéodory spaces seen from within*, Progress in Mathematics 144 (1996), 79–323. Le Donne, E., Montgomery, R., Ottazzi, A., Pansu, P., Vittone, D.: *Sard Property for the endpoint map on some Carnot groups*, Annales de l’Institut Henri Poincaré, Analyse Non Linéaire 33(6) (2016), 1639–1666. Le Donne, E., Pinamonti, A., Speight, G.: *Universal Differentiability Sets and Maximal Directional Derivatives in Carnot Groups*, Journal de Mathématiques Pures et Appliquées 121 (2019), 83–112. Lindenstrauss, J., Preiss, D., Tišer, J.: *Fréchet differentiability of Lipschitz functions and porous sets in Banach spaces*, Annals of Mathematics Studies 179, Princeton University Press (2012). Magnani, V.: *Differentiability and area formula on stratified Lie groups*, Houston Journal of Mathematics 27(2) (2001), 297–323. Magnani, V.: *Towards differential calculus in stratified groups*, Journal of the Australian Mathematical Society 95 (2013), 76–128.
Magnani, V., Rajala, T.: *Radon-Nikodym property and area formula for Banach homogeneous group targets*, International Mathematics Research Notices 23 (2014), 6399–6430. Magnani, V., Pinamonti, A., Speight, G.: *Differentiability for Lipschitz maps from stratified groups to Banach homogeneous groups*, preprint available at arXiv:1706.01782. Montgomery, R.: *A tour of subriemannian geometries, their geodesics and applications*, American Mathematical Society, Mathematical Surveys and Monographs 91 (2006). Nagel, A., Stein, E., Wainger, S.: *Balls and metrics defined by vector fields I: Basic properties*, Acta Mathematica 155 (1985), 103–147. Ottazzi, A.: *A sufficient condition for nonrigidity of Carnot groups*, Mathematische Zeitschrift 259(3) (2008), 617–629. Ottazzi, A., Vittone, D.: *On the codimension of the abnormal set in step two Carnot groups*, to appear in ESAIM: Control, Optimisation and Calculus of Variations, preprint available at arXiv:1709.02854. Pansu, P.: *Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un*, Annals of Mathematics 129(1) (1989), 1–60. Pinamonti, A., Speight, G.: *A Measure Zero Universal Differentiability Set in the Heisenberg Group*, Mathematische Annalen 368, no. 1-2, (2017) 233–278. Pinamonti, A., Speight, G.: *A Measure Zero UDS in the Heisenberg Group*, Bruno Pini Mathematical Analysis Seminar, Vol. 7 (2016) 85–96. Preiss, D.: *Differentiability of Lipschitz functions on Banach spaces*, Journal of Functional Analysis 91(2) (1990), 312–345. Preiss, D., Speight, G.: *Differentiability of Lipschitz functions in Lebesgue null sets*, Inventiones mathematicae 199(2) (2015), 517–559. Preiss, D., Maleva, O.: *Cone unrectifiable sets and non-differentiability of Lipschitz functions*, to appear in Israel Journal of Mathematics, preprint available at arXiv:1709.04233. Rudin, W.: *Principles of mathematical analysis, third edition*, International Series in Pure and Applied Mathematics, McGraw-Hill Book Co.
(1976). Varadarajan, V.S.: *Lie Groups, Lie Algebras, and Their Representations*, Graduate Texts in Mathematics 102, Springer-Verlag, 1984. Vittone, D.: *The regularity problem for sub-Riemannian geodesics*, pages 193–226 in *Geometric Measure Theory and Real Analysis*, publications of the Scuola Normale Superiore 17. Warhurst, B.: *Contact and quasiconformal mappings on real model filiform groups*, Bulletin of the Australian Mathematical Society 68(2) (2003), 329–343. Xie, X.: *Quasi-conformal maps on model filiform groups*, Michigan Mathematical Journal 64(1) (2015), 169–202. Zahorski, Z.: *Sur l’ensemble des points de non-dérivabilité d’une fonction continue*, Bulletin de la Société Mathématique de France 74 (1946), 147–178.
--- abstract: 'This work is motivated by discrete-to-continuum modeling of the mechanics of a graphene sheet, which is a single-atom thick macromolecule of carbon atoms covalently bonded to form a hexagonal lattice. The strong covalent bonding makes the sheet essentially inextensible and gives the sheet a resistance to bending. We study a one-dimensional atomistic model that describes the cross-section of a graphene sheet as a collection of rigid links connected by torsional springs. $\Gamma$-convergence is used to rigorously justify an upscaling procedure for the discrete bending energy of the atomistic model. Our result establishes that as the bond length in the atomistic model goes to 0, the bending energies $\Gamma$-converge to Euler’s elastica.' author: - 'Malena I. Español' - 'Dmitry Golovaty' - 'J. Patrick Wilber' bibliography: - 'gamma-converg-references.bib' title: 'Euler elastica as a $\Gamma$-Limit of discrete bending energies of one-dimensional chains of atoms' --- $\Gamma$-convergence, graphene, carbon nanotubes, elastica, bending energy Introduction ============ This work is motivated by discrete-to-continuum modeling of the mechanics of graphene. A graphene sheet is a single-atom thick macromolecule of carbon atoms arranged in a hexagonal lattice. Graphene has been intensively studied since 2004, when individual graphene sheets were first isolated by Geim and Novoselov using mechanical exfoliation, a Nobel-prize-winning achievement [@novoselov2004electric]. Graphene has exceptional physical properties and yields insights into the fundamental physics of two-dimensional materials. It is also of interest as a basic building block for other extensively studied carbon nanostructures (CNS), including single-walled and multi-walled carbon nanotubes, as well as bilayers and stacks of graphene sheets. Interactions between carbon atoms in a CNS are fundamental for explaining the arrangement and relative orientations of the nanostructure’s constituent graphene layers.
Each atom on a given graphene sheet is covalently bonded to its three nearest neighbors to form a hexagonal lattice. This strong covalent bonding makes the sheet essentially inextensible and gives the sheet a resistance to bending. Atoms on a sheet interact with atoms on nearby sheets by relatively weak van der Waals forces. This interaction defines an equilibrium distance between pairs of weakly interacting atoms, and deviations from this distance cost energy. The weak interaction energy is minimized when nearby lattices are in registry. The lattices of two adjacent sheets adjust or shift to allow a typical atom on one sheet to be close to the equilibrium distance from its several nearest neighbors on the adjacent sheet. Registry effects are significant for understanding the mechanical behavior and equilibrium configurations of various CNS. For example, registry effects lead to polygonization of multi-walled carbon nanotubes of large diameter [@golovaty2008continuum]. To understand the influence of registry effects on CNS at the macroscopic level, it is important to construct continuum models that retain atomistic information. In [@golovaty2008continuum], the authors derive a continuum theory of multi-walled carbon nanotubes by upscaling a simple one-dimensional atomistic model that takes into account both strong covalent bonds between the atoms in a graphene sheet and weak bonds between the atoms in adjacent sheets. Part of this model is based on upscaling a resistance to bending described atomistically. The resulting continuum bending energy takes the form of the classical Euler elastica model [@antman2006nonlinear]. Given a sufficiently smooth curve $\mathcal C\subset\mathbb{R}^2$, the Euler elastica model assigns to $\mathcal C$ the bending energy $\int_{\mathcal C}\kappa^2$, where $\kappa$ is the curvature of $\mathcal C$.
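As a quick numerical companion to this continuum energy, $\int_{\mathcal C}\kappa^2$ can be approximated from a uniformly sampled closed curve by summing squared turning angles per unit arclength. The sketch below is our own illustration (the function names are ours, not code from [@golovaty2008continuum]); it checks the approximation on a circle of radius $R$, where $\kappa=1/R$ and the energy is $2\pi R\cdot(1/R)^2=2\pi/R$:

```python
import math

def elastica_energy(points):
    # Approximate the Euler elastica energy  int_C kappa^2 ds  of a closed
    # curve given by a list of (x, y) samples, using kappa ~ dphi/ds, where
    # dphi is the turning angle at a vertex and ds the local arclength step.
    n = len(points)
    energy = 0.0
    for i in range(n):
        xm, ym = points[i - 1]
        x0, y0 = points[i]
        xp, yp = points[(i + 1) % n]
        a1 = math.atan2(y0 - ym, x0 - xm)
        a2 = math.atan2(yp - y0, xp - x0)
        dphi = math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))  # wrap to (-pi, pi]
        ds = math.hypot(xp - x0, yp - y0)
        energy += (dphi / ds) ** 2 * ds
    return energy

# Circle of radius R: kappa = 1/R, so the exact energy is 2*pi/R.
R, n = 2.0, 2000
circle = [(R * math.cos(2 * math.pi * k / n), R * math.sin(2 * math.pi * k / n))
          for k in range(n)]
print(elastica_energy(circle))  # close to 2*pi/R = pi
```

For this regular sampling of the circle the discrete sum equals $\pi^2/(n\sin(\pi/n))$ when $R=2$, which converges to $\pi$ at rate $O(n^{-2})$.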
Within a larger effort to provide a rigorous justification for the procedure used in [@golovaty2008continuum], a first step is to consider upscaling from a discrete to a continuum bending energy of a single graphene sheet. Hence, in this paper we use $\Gamma$-convergence [@braides2002gamma] to rigorously justify the upscaling procedure used in [@golovaty2008continuum] for the bending energy of a chain of atoms. Our work is related to that of Bruckstein et al. [@bruckstein107; @bruckstein_igl], who studied discrete approximations of the classical elastica model motivated by problems in image processing. In [@bruckstein107; @bruckstein_igl], the discrete energies were defined on piecewise-affine curves and assumed to depend on the exterior angles between the straight segments of the curve. The authors considered $\Gamma$-convergence for several related families of discrete energy functionals defined on the space of rectifiable planar curves of finite total absolute curvature. The convergence in the space of rectifiable curves was defined in the sense of Fréchet distance. An advantage of working in the space of rectifiable curves of finite total absolute curvature is that it contains both smooth and piecewise-affine curves. The limiting energy functionals in [@bruckstein107; @bruckstein_igl] are essentially the $L^{\alpha}$-norm of the curvature, where $\alpha\geq1$. The main distinguishing feature between the motivation in this paper and that in [@bruckstein107; @bruckstein_igl] is that we are approximating a discrete chain of atoms by a continuum curve, while in [@bruckstein107; @bruckstein_igl] the goal is to approximate a continuum curve by a polygon. As a result, our discrete model is determined by the physics of the problem, while in [@bruckstein107; @bruckstein_igl] the discrete framework is determined by the convenience of the approximation. 
In a nondimensional setting, our model represents a cross-section of a graphene sheet as a chain of atoms in which all links connecting the atoms have equal length $\varepsilon>0$ while the total length of the chain is $1$. One may think of this chain as a polygon in the plane. The parameter $\varepsilon$ representing the interatomic bond length is assumed to be small. To mimic the situation in [@golovaty2008continuum], we assume that the chain is closed, thus describing a graphene sheet rolled into a carbon nanotube. This assumption, however, is not essential to our analysis and can easily be removed. As $\varepsilon\to0$, the corresponding chains converge to a curve on the plane that is a continuum description of a cross-section of the nanotube. Instead of working with curves directly, as in [@bruckstein107; @bruckstein_igl], we represent each arc-length-parametrized curve by its corresponding angle function. A discrete atomic chain then is described by a piecewise-constant function whose values are the angles between the links of the chain and the $x$-axis. The angles remain constant on each successive subinterval of the length $\varepsilon$ of $[0,1]$. To pass from the discrete to a continuum description, we assume that as $\varepsilon\to0$, the sequence of angle functions converges in an appropriate sense to a limiting function defined on $[0,1]$. We define the bending energy of the chain as a function of the angles between the adjacent links of the chain and thus of the increments of the angle function. In the limit $\varepsilon\to 0$, the bending energy becomes a function of the derivative of the limiting angle function. Then, if we expect that the bending energy reduces to Euler’s elastica as $\varepsilon\to 0$, the limiting angle function must have a square integrable derivative. From this we would conclude that the limiting angle function is smoother than the discrete angle functions that converge to it. 
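To make the correspondence between chains and angle functions concrete, the short sketch below (our own illustration; the names and the regular-chain example are assumptions, not taken from the text) builds the piecewise-constant angle function of a regular closed chain with $N_\varepsilon=n$ links and verifies that it stays within one increment $2\pi\varepsilon$ of the limiting angle function $\theta(s)=2\pi s$ of the circle:

```python
import math

def angle_function(thetas, s):
    # Piecewise-constant angle function: takes the value thetas[i] on the
    # i-th subinterval of length eps = 1/len(thetas) of [0, 1).
    n = len(thetas)
    i = min(int(s * n), n - 1)
    return thetas[i]

n = 1000  # number of links, eps = 1/n
# Regular closed chain: theta_i = 2*pi*i/n for i = 1, ..., n; the limiting
# angle function of the corresponding circle is theta(s) = 2*pi*s.
thetas = [2 * math.pi * (i + 1) / n for i in range(n)]
sup_err = max(abs(angle_function(thetas, k / 5000) - 2 * math.pi * k / 5000)
              for k in range(5000))
print(sup_err)  # bounded by 2*pi/n, i.e. uniform convergence at rate O(eps)
```

The uniform error is exactly one angle increment here; the point of the $\Gamma$-convergence analysis is that the limit of such step functions is strictly smoother than its approximants.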
To prove $\Gamma$-convergence of the discrete bending energies to Euler’s elastica, we follow the strategy employed in [@braides2002gamma] and replace the piecewise-constant functions with auxiliary piecewise-affine functions that have the same discrete-level energy and that belong to the same space as the limiting angle function. The principal difficulty in proving $\Gamma$-convergence is the construction of the recovery sequence of the angle functions. In particular, here we need to design a recovery sequence that satisfies the constraints that the piecewise-affine curves must be closed and have unit length. Because of the length constraint our construction is more complicated than its analog in [@bruckstein107; @bruckstein_igl]. This paper is organized as follows. In Section \[s1\], we formulate our discrete model of the bending energy of an atomistic chain. The following section introduces the continuum analogue of this discrete model and sets up the appropriate function spaces for our $\Gamma$-convergence result. Section \[s3\] contains this result, Theorem 4.3, and its proof. Discrete formulation {#s1} ==================== Given a small $\varepsilon>0$ such that $N_\varepsilon:=1/\varepsilon\in\mathbb N$, let $\left\{\bar r_i^\varepsilon\right\}_{i=1}^{N_\varepsilon} \subset \mathbb{R}^2$ be an ordered set of position vectors for $N_\varepsilon$ points in the plane such that $\left|\bar r_{i+1}^\varepsilon-\bar r_i^\varepsilon\right|=\varepsilon$ for $i=1,\ldots,N_\varepsilon-1$ and $\left|\bar r_{N_\varepsilon}^\varepsilon-\bar r_1^\varepsilon\right|=\varepsilon$. We refer to $\left\{\bar r_i^\varepsilon\right\}_{i=1}^{N_\varepsilon}$ as a [*chain*]{} $\mathcal C^\varepsilon$. As Fig. \[fig:1\] shows, $\mathcal C^\varepsilon$ can be associated with a piecewise-affine curve $\mathcal C_\varepsilon$ in $\mathbb{R}^2$ by connecting the consecutive points in $\left\{\bar r_i^\varepsilon\right\}_{i=1}^{N_\varepsilon}$. This piecewise-affine curve has length 1. 
Notice that we use superscript $\varepsilon$ for the discrete chain and subscript $\varepsilon$ for the associated curve. ![Geometry of the problem: a discrete chain $\mathcal C^\varepsilon$ and an associated piecewise-affine curve $\mathcal C_\varepsilon$.[]{data-label="fig:1"}](figure1.pdf){height="1.5in"} For notational convenience in what follows, we define $\bar r_{0}^\varepsilon := \bar r_{N_{\varepsilon}}^\varepsilon$ and $\bar r_{N_\varepsilon+1}^\varepsilon := \bar r_{1}^\varepsilon$. Given a chain $\mathcal C^\varepsilon$, we can choose a collection of angles $\theta_i^\varepsilon$ that satisfy $$\bar r_{i+1}^\varepsilon - \bar r_{i}^\varepsilon = \varepsilon(\cos(\theta_i^\varepsilon), \sin(\theta_i^\varepsilon)),\ \ i=0,\ldots,N_\varepsilon, \label{ee46}$$ and that satisfy $\theta_{i}^\varepsilon-\theta_{i-1}^\varepsilon\in(-\pi,\pi)$ for every $i=1,\ldots,N_\varepsilon$. (We can make the choice of $\theta_i^\varepsilon$ for $i=0,\ldots,N_\varepsilon$ unique by also requiring that, say, $\theta_1^\varepsilon \in (-\pi, \pi]$.) As a consequence $\theta_0^\varepsilon=\theta^\varepsilon_{N_\varepsilon}-2k\pi$ for some integer $k$, even though $\theta_{0}^{\varepsilon}$ and $\theta_{N_{\varepsilon}}^{\varepsilon}$ are angles that both correspond to the vector $\bar r_{1}^\varepsilon - \bar r_{N_{\varepsilon}}^\varepsilon$. Based on the geometry of our problem, we shall consider only chains for which $k=1$ (if $k\geq2$ the piecewise-affine curve $\mathcal C_\varepsilon$ must self-intersect). In what follows, we denote a vector of angles associated with a chain by $\Theta^\varepsilon:=\left(\theta_1^\varepsilon,\ldots,\theta_{N_\varepsilon}^\varepsilon\right)\in\mathbb{R}^{N_\varepsilon}$. Because $\sum_{i=1}^{N_\varepsilon} (\bar r_{i}^\varepsilon - \bar r_{i-1}^\varepsilon) = 0,$ the vector $\Theta^\varepsilon$ satisfies the constraints $$\sum_{i=1}^{N_\varepsilon}\cos(\theta_{i}^\varepsilon) = \sum_{i=1}^{N_\varepsilon} \sin(\theta_{i}^\varepsilon) = 0. 
\label{ee44}$$ Note that, for a given $\varepsilon$, there is a one-to-one correspondence between $\Theta^\varepsilon$ and $\mathcal C^\varepsilon$, up to a rigid rotation and translation of $\mathcal C^\varepsilon$ in $\mathbb R^2$. We define the energy ${\mathcal E}^{\varepsilon}[\mathcal C^\varepsilon]$ of a chain $\mathcal C^\varepsilon$ by $$\label{eq:prop} {\mathcal E}^{\varepsilon}[\mathcal C^\varepsilon]:=\frac{1}{\varepsilon} \sum_{i=1}^{N_{\varepsilon}} f\left( \frac{\left(\bar r_{i+1}^\varepsilon - \bar r_{i}^\varepsilon\right)\cdot\left(\bar r_{i}^\varepsilon - \bar r_{i-1}^\varepsilon\right)}{\varepsilon^2}\right),$$ where the function $f\colon (-1,1] \to \mathbb{R}$ satisfies $$\ f\in C^{\infty}((-1,1]),\ \ f'(x) < 0 \mbox{ on }(-1,1],\ \ \lim_{x\to -1^+} f(x) = \infty,\mbox{ and }f(1)=0. \label{ee43}$$ We can rewrite the energy functional in terms of angles: $${\mathcal E}^{\varepsilon}[\mathcal C^\varepsilon]=E^{\varepsilon}[\Theta^\varepsilon]:=\frac{1}{\varepsilon} \sum_{i=1}^{N_{\varepsilon}} f\left( \cos(\theta_i^\varepsilon -\theta_{i-1}^\varepsilon) \right).$$ To simplify the notation, we introduce the function $\psi\colon (-\pi,\pi)\to \mathbb{R}$ as $$\psi(\theta) := f(\cos(\theta)).$$ By \eqref{ee43}, $\psi$ is an infinitely differentiable, even function on $(-\pi,\pi)$ satisfying $$\label{eq:prop_psi} \psi(\theta)>\psi(0)=0\mbox{ for every }\theta\in(-\pi,0)\cup(0,\pi),\ \ \psi''(0)>0,\ \ \lim_{\theta\to \pm \pi} \psi(\theta) = \infty.$$ We next set $$\label{eq:1.23} \psi_{\varepsilon}(\xi) :=\varepsilon^{-2}\psi(\varepsilon \xi)$$ for every $\xi\in\left(-\frac{\pi}{\varepsilon},\frac{\pi}{\varepsilon}\right)$, so that $$E^{\varepsilon}[\Theta^\varepsilon] = \varepsilon \sum_{i=1}^{N_{\varepsilon}} \psi_{\varepsilon}\left(\frac{\theta_i^\varepsilon -\theta_{i-1}^\varepsilon}{\varepsilon} \right).
\label{ee39}$$ The [*admissible set of angles*]{} $T_{N_\varepsilon}$ is defined by $$\begin{gathered} T_{N_\varepsilon} := \left\{\Theta^\varepsilon\in\mathbb R^{N_\varepsilon} : \text{\eqref{ee44} is satisfied, } |\theta_{i}^\varepsilon-\theta_{i-1}^\varepsilon|<\pi \ \mbox{for}\ i=2,\ldots,N_\varepsilon\vphantom{\sum_{i=1}^{N_\varepsilon}}, \right. \\ \left. \text{and } |\theta_{1}^\varepsilon-(\theta_{N_\varepsilon}^\varepsilon-2\pi)|<\pi \phantom{\int\hspace{-3mm}}\right\}. \label{eq:adm} \end{gathered}$$ Here for $\Theta^\varepsilon\in T_{N_\varepsilon}$, we define $\theta_0^\varepsilon=\theta^\varepsilon_{N_\varepsilon}-2\pi$, which we need to compute the right-hand side of \eqref{ee39}. We now consider the discrete minimization problem $$\label{eq:minprob} \Theta_{\mathrm{min}}^\varepsilon = \operatorname*{arg\,min}_{\Theta^\varepsilon\in T_{N_\varepsilon}}E^\varepsilon\left[\Theta^\varepsilon\right].$$ Although the geometry of our problem demands that the piecewise-affine curve $\mathcal C_\varepsilon$ associated with $\Theta^\varepsilon$ must not be self-intersecting, we do not impose a corresponding condition on the members of $T_{N_\varepsilon}$. Indeed, since we are interested in [ *minimizers*]{} of $E^\varepsilon[\Theta^\varepsilon]$ over the physically relevant admissible set $T^\prime_{N_\varepsilon}\subset T_{N_\varepsilon}$, if we can show that a minimizer of $E^\varepsilon[\Theta^\varepsilon]$ over the larger set $T_{N_\varepsilon}$ is not self-intersecting, it is also a minimizer of $E^\varepsilon[\Theta^\varepsilon]$ over $T^\prime_{N_\varepsilon}$. The problem \eqref{eq:minprob} has a (unique) solution that corresponds to a non-self-intersecting curve in $\mathbb R^2$. Indeed, let $\tilde \Theta^\varepsilon$ satisfy $\tilde \theta_i^\varepsilon-\tilde \theta_{i-1}^\varepsilon= 2\pi/N_\varepsilon$ for $i=1,\ldots,N_\varepsilon$.
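As a concrete numerical illustration of the admissible set (a sketch with hypothetical helper names, not part of the paper's analysis), the following builds the vertex chain of a regular polygon with side length $\varepsilon=1/N$, recovers its edge angles as in \eqref{ee46}, and verifies the closure constraints \eqref{ee44} that enter the definition of $T_{N_\varepsilon}$.

```python
import math

def regular_chain(N):
    """Vertices of a regular N-gon with side length eps = 1/N (a chain C^eps)."""
    eps = 1.0 / N
    R = eps / (2.0 * math.sin(math.pi / N))  # circumradius giving side length eps
    return [(R * math.cos(2.0 * math.pi * i / N),
             R * math.sin(2.0 * math.pi * i / N)) for i in range(N)]

def angles(chain):
    """Angle theta_i of the edge r_{i+1} - r_i (indices taken mod N)."""
    N = len(chain)
    th = []
    for i in range(N):
        dx = chain[(i + 1) % N][0] - chain[i][0]
        dy = chain[(i + 1) % N][1] - chain[i][1]
        th.append(math.atan2(dy, dx))
    return th

N = 200
chain = regular_chain(N)
th = angles(chain)
# all sides have length eps = 1/N, and sum cos(theta_i) = sum sin(theta_i) = 0
```

For this closed chain the two sums in \eqref{ee44} telescope to zero exactly; numerically they vanish to rounding error.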
The chain corresponding to $\tilde \Theta^\varepsilon$, which we denote by $\tilde{\mathcal C}^\varepsilon$, is the set of vertices for a regular $N_\varepsilon$-sided convex polygon $\tilde{\mathcal C}_\varepsilon$ in $\mathbb R^2$. It can be easily verified that $\tilde \Theta^\varepsilon\in T_{N_\varepsilon}$ and $$\label{eq:polygon} E^{\varepsilon}[\tilde \Theta^\varepsilon]= \varepsilon N_\varepsilon \psi_\varepsilon\left(\frac{2\pi}{\varepsilon N_\varepsilon}\right) = \psi_\varepsilon\left(2\pi\right).$$ Furthermore, we have the following proposition. There exists an $\varepsilon_0>0$ such that $\tilde\Theta^\varepsilon$ is a minimizer of $E^\varepsilon$ over $T_{N_\varepsilon}$ when $\varepsilon<\varepsilon_0$. By \eqref{eq:prop_psi} we can choose $\delta>0$ such that the function $\psi$ is convex on the interval $(-\delta,\delta).$ Suppose that $M\geq\psi_\varepsilon(2\pi)$ and consider an arbitrary $\Theta^\varepsilon\in T_{N_\varepsilon}$ satisfying $E^\varepsilon\left[\Theta^\varepsilon\right]\leq M$. Then $$\varepsilon M\geq\varepsilon^2 \psi_{\varepsilon}\left(\frac{\theta_i^\varepsilon -\theta_{i-1}^\varepsilon}{\varepsilon} \right)=\psi\left(\theta_i^\varepsilon -\theta_{i-1}^\varepsilon\right)\geq0,$$ and by \eqref{eq:prop_psi} it follows that $$\label{eq:1.24} \left|\theta_i^\varepsilon -\theta_{i-1}^\varepsilon\right|\leq \delta,$$ for every $i=1,\ldots,N_\varepsilon$, uniformly in $\varepsilon$, provided $\varepsilon_0$ is sufficiently small and $\varepsilon<\varepsilon_0$.
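The local convexity of $\psi$ invoked in the proof can be inspected for a concrete admissible choice. Assuming, purely for illustration, $f(x)=(1-x)/(1+x)$, which satisfies \eqref{ee43}, one gets $\psi(\theta)=\tan^2(\theta/2)$ with $\psi''(0)=1/2$; for this particular choice $\psi$ is in fact convex on all of $(-\pi,\pi)$, not just near the origin:

```python
import math

def psi(t):
    # concrete admissible choice: psi(theta) = f(cos(theta)) with f(x) = (1-x)/(1+x),
    # i.e. psi(theta) = tan(theta/2)**2 (an assumption for illustration only)
    return math.tan(t / 2.0) ** 2

def second_difference(t, h=1e-4):
    """Central second difference approximating psi''(t)."""
    return (psi(t + h) - 2.0 * psi(t) + psi(t - h)) / h**2

curv0 = second_difference(0.0)                      # approximates psi''(0) = 1/2
curvs = [second_difference(t) for t in (-2.5, -1.0, 0.0, 1.0, 2.5)]
```

Positivity of the second differences across $(-\pi,\pi)$ confirms convexity for this choice; the general argument only needs convexity on a neighborhood $(-\delta,\delta)$ of the origin.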
Using the discrete version of Jensen’s inequality, we have $$E^\varepsilon[\Theta^\varepsilon]=\varepsilon\sum_{i=1}^{N_\varepsilon} \psi_\varepsilon\left(\frac{\theta_i^\varepsilon - \theta_{i-1}^\varepsilon}{\varepsilon}\right)\geq \varepsilon N_\varepsilon \psi_\varepsilon \left( \sum_{i=1}^{N_\varepsilon} \frac{\theta_i^\varepsilon - \theta_{i-1}^\varepsilon}{\varepsilon N_\varepsilon}\right)=\psi_\varepsilon \left(2\pi\right) = E^\varepsilon[\tilde \Theta^\varepsilon],$$ for any $\Theta^\varepsilon\in T_{N_\varepsilon}$ as long as $\varepsilon<\varepsilon_0$. We conclude that the minimum of $E^\varepsilon$ is achieved at $\tilde \Theta^\varepsilon$. Continuum Formulation {#s2} ===================== To motivate the subsequent developments, observe that \eqref{eq:polygon}, \eqref{eq:1.23}, and the smallness of $\varepsilon$ give $$E^{\varepsilon}[\tilde \Theta^\varepsilon]= \psi_\varepsilon\left(2\pi\right) = \frac{1}{\varepsilon^2} \psi(2\pi\varepsilon)=2\pi^2\psi^{\prime\prime}(0) + o(1).$$ It follows that $$\label{eq:elim_poly} \lim_{\varepsilon\to0}\left\{\inf_{\Theta^\varepsilon\in T_{N_\varepsilon}} E^\varepsilon [\Theta^\varepsilon]\right\}=\lim_{\varepsilon\to0}E^\varepsilon [\tilde\Theta^\varepsilon]=2\pi^2\psi^{\prime\prime}(0)$$ and therefore $$\inf_{\Theta^\varepsilon\in T_{N_\varepsilon}} E^\varepsilon[\Theta^\varepsilon]<C,$$ for some $C>0$ uniformly in $\varepsilon$ when $\varepsilon$ is small enough. As noted in the previous section, the curve $\tilde{\mathcal C}_\varepsilon$ associated with the minimizer $\tilde\Theta^\varepsilon$ is a regular $N_\varepsilon$-sided convex polygon in $\mathbb{R}^2$. When $\varepsilon\to0$, the number of sides $N_\varepsilon=1/\varepsilon\to\infty$ while the total perimeter of the polygon remains equal to $\varepsilon N_\varepsilon=1.$ The sequence $\left\{\tilde{\mathcal C}_\varepsilon\right\}_{\varepsilon>0}$ thus converges (uniformly) to a circle $\mathcal C_0$ of radius $\frac{1}{2\pi}$ when $\varepsilon\to0$.
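The limit $2\pi^2\psi''(0)$ of the polygon energies can be observed numerically. Under the illustrative choice $\psi(\theta)=\tan^2(\theta/2)$ (an assumption; the paper fixes only the hypotheses on $f$), $\psi''(0)=1/2$, so the polygon energy $\psi_\varepsilon(2\pi)=\varepsilon^{-2}\psi(2\pi\varepsilon)$ should approach $2\pi^2\psi''(0)=\pi^2$:

```python
import math

def psi(t):
    # illustrative admissible choice psi(theta) = tan(theta/2)**2, so psi''(0) = 1/2
    return math.tan(t / 2.0) ** 2

def polygon_energy(eps):
    """E^eps of the regular polygon: eps * N * psi_eps(2*pi) = psi(2*pi*eps) / eps**2."""
    return psi(2.0 * math.pi * eps) / eps**2

limit = 2.0 * math.pi**2 * 0.5  # 2*pi^2 * psi''(0) = pi^2
errors = [abs(polygon_energy(1.0 / N) - limit) for N in (10, 100, 1000)]
```

The errors shrink as $N$ grows, consistent with the $o(1)$ term in the display above.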
Based on \eqref{eq:elim_poly}, it seems natural to associate to this limiting circle $\mathcal C_0$ the energy $E_0[\mathcal C_0]:=2\pi^2\psi^{\prime\prime}(0)$. Further, given an arbitrary smooth, simple curve $\mathcal C\subset\mathbb R^2$ such that there exists a sequence of piecewise-affine curves $\mathcal C_\varepsilon$ (which corresponds to a sequence of chains $\mathcal C^\varepsilon$) converging to $\mathcal C$, it might be tempting to extend the notion of energy to $\mathcal C$ by defining $E_0[\mathcal C]:=\lim_{\varepsilon\to0} {\mathcal E}^{\varepsilon}[\mathcal C^\varepsilon]$. However, a priori it is not clear that this limit exists or whether its value is the same for all sequences of chains converging to $\mathcal C$. In addition, if the notion of the limiting energy for curves can be made precise, it would be desirable that minimizers of the discrete problem $\Theta_{\mathrm{min}}^\varepsilon=\operatorname*{arg\,min}_{\Theta^\varepsilon\in T_{N_\varepsilon}}E^\varepsilon\left[\Theta^\varepsilon\right]$ for chains converge to minimizers of the limiting energy $E_0$ over an appropriate function space. The established framework for studying convergence of energies that preserves the variational structure of the discrete problem is that of $\Gamma$-convergence, which we consider next. Before proving $\Gamma$-convergence, we need to select a common function space that contains both the discrete chains $\mathcal C^\varepsilon$ and the limiting curves. Note first that, since the limiting energy should correspond to Euler’s elastica, the curvature of a limiting curve $\mathcal C$ must be square integrable, i.e., the angle function for $\mathcal C$ must be an element of the Sobolev space $H^1([0,1])$. Our goal is to show that Euler’s elastica energy of a closed curve with an angle function in $H^1([0,1])$ is the limit of the discrete energies of a sequence of chains. However, the angle function for a chain is a step function and hence is not in $H^1([0,1])$.
To put our construction into the framework of $\Gamma$-convergence, we use the idea of [@braides2002gamma] and replace a sequence of piecewise-constant angle functions for chains by a sequence of piecewise-affine functions in $H^1([0,1])$. We then introduce an energy functional defined over the piecewise-affine functions so that for each $\varepsilon$, the new energy of each piecewise-affine function is the same as the old discrete energy of a corresponding chain. Note that the piecewise-affine functions considered below need not correspond to closed curves in the plane, nor to curves of unit length. The physically relevant geometric constraints are imposed only on the piecewise-constant angle functions for the discrete chains and on the limiting angle function. To make these ideas more precise, consider a partition of the interval $[0,1]$ by the points $$\left\{0, \varepsilon/2, 3\varepsilon/2, \ldots, 1-\varepsilon/2,1\right\}$$ and denote by $\tilde A_{\varepsilon}(0,1)\subset C([0,1])$ the set of functions affine on each subinterval of this partition.
From now on, we will identify a vector $\Theta^\varepsilon\in T_{N_\varepsilon}$ with the piecewise-affine function $\theta_\varepsilon\in \tilde A_{\varepsilon}(0,1)$ given by $$\begin{gathered} \label{eq:theta_fun} \theta_\varepsilon(s):=\left(\frac{\theta^\varepsilon_1-2\pi+\theta_{N_\varepsilon}^\varepsilon}{2}+\frac{s}{\varepsilon}\left(\theta^\varepsilon_1+2\pi-\theta_{N_\varepsilon}^\varepsilon\right)\right)\chi_{\left[0,\frac{\varepsilon}{2}\right)}(s) \\ +\sum_{i=1}^{N_\varepsilon-1}\left(\theta_i^\varepsilon+\frac{\theta_{i+1}^\varepsilon-\theta_i^\varepsilon}{\varepsilon}\left(s-\frac{2i-1}{2}\varepsilon \right)\right)\chi_{\left[\frac{2i-1}{2}\varepsilon,\frac{2i+1}{2}\varepsilon\right)}(s) \\ +\left(\frac{\theta_{1}^\varepsilon+2\pi+\theta_{N_\varepsilon}^\varepsilon}{2}+\frac{s-1}{\varepsilon}\left(\theta_{1}^\varepsilon+2\pi-\theta_{N_\varepsilon}^\varepsilon\right)\right)\chi_{\left[1-\frac{\varepsilon}{2},1\right]}(s)\end{gathered}$$ for $s\in[0,1]$, where $\chi_S$ is the indicator function of the set $S\subset\mathbb R$. Then $$\label{eq:Thet_thet} \theta_\varepsilon\left(\frac{2i-1}{2}\varepsilon\right)=\theta^\varepsilon_i$$ for $i=1,\ldots,N_\varepsilon$ and $$\label{eq:degree} \theta_\varepsilon(0) = \frac{\theta^\varepsilon_1+\theta_{N_\varepsilon}^\varepsilon}{2}-\pi, \quad \theta_\varepsilon(1)=\frac{\theta_{1}^\varepsilon+\theta_{N_\varepsilon}^\varepsilon}{2}+\pi,$$ so that $\theta_\varepsilon(1)=\theta_\varepsilon(0)+2\pi$.
We use \eqref{eq:theta_fun}–\eqref{eq:degree} and the definition of $T_{N_\varepsilon}$ to define the [*admissible set of functions*]{} $$\begin{gathered} \label{eq:adm_fun} A_{\varepsilon}(0,1):=\left\{\theta\in \tilde A_{\varepsilon}(0,1)\colon \theta(0)=\frac{\theta(\varepsilon/2)+\theta(1-\varepsilon/2)}{2}-\pi,\right.\\\left.\theta(1)=\frac{\theta(\varepsilon/2)+\theta(1-\varepsilon/2)}{2}+\pi;\left(\theta\left(\varepsilon/2\right),\ldots,\theta\left(1-\varepsilon/2\right)\right)\in T_{N_\varepsilon}\right\}.\end{gathered}$$ Note that for $\theta_\varepsilon\in A_{\varepsilon}(0,1)$, we have $$\label{eq:theta_fun_der} \theta^\prime_\varepsilon(s)=\frac{\theta_{1}^\varepsilon+2\pi-\theta_{N_\varepsilon}^\varepsilon}{\varepsilon}\left(\chi_{\left[0,\frac{\varepsilon}{2}\right)}(s)+\chi_{\left(1-\frac{\varepsilon}{2},1\right]}(s)\right) +\sum_{i=1}^{N_\varepsilon-1}\frac{\theta_{i+1}^\varepsilon-\theta_i^\varepsilon}{\varepsilon}\chi_{\left(\frac{2i-1}{2}\varepsilon,\frac{2i+1}{2}\varepsilon\right)}(s)$$ for almost all $s\in[0,1]$, where $\theta_i^\varepsilon=\theta_{\varepsilon}\left((2i-1)\varepsilon/2\right)$ for $i=1,\ldots,N_{\varepsilon}$. It follows from \eqref{eq:theta_fun_der} that $$\int_0^1\psi_\varepsilon\left(\theta_\varepsilon^\prime\right)\,ds = \varepsilon\sum_{i=1}^{N_\varepsilon} \psi_\varepsilon\left(\frac{\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}{\varepsilon}\right) = E^\varepsilon\left[\Theta^\varepsilon\right], \label{eq:int}$$ for every $\theta_\varepsilon\in A_\varepsilon(0,1)$, where $\theta_0^\varepsilon=\theta_{N_{\varepsilon}}^\varepsilon-2\pi$.
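The identity $\int_0^1\psi_\varepsilon(\theta_\varepsilon')\,ds=E^\varepsilon[\Theta^\varepsilon]$ above can be confirmed numerically; the sketch below (hypothetical helper names, and the illustrative choice $\psi(\theta)=\tan^2(\theta/2)$ assumed) evaluates both sides for an arbitrary angle vector, using the fact that $\theta_\varepsilon'$ is piecewise constant, so the integral is an exact finite sum.

```python
import math

def psi(t):
    return math.tan(t / 2.0) ** 2          # illustrative admissible psi

def psi_eps(xi, eps):
    return psi(eps * xi) / eps**2          # rescaled potential

def discrete_energy(theta, eps):
    """E^eps[Theta] = eps * sum_i psi_eps((theta_i - theta_{i-1})/eps), theta_0 = theta_N - 2*pi."""
    prev = theta[-1] - 2.0 * math.pi
    E = 0.0
    for t in theta:
        E += eps * psi_eps((t - prev) / eps, eps)
        prev = t
    return E

def affine_energy(theta, eps):
    """Integral of psi_eps(theta_eps') for the piecewise-affine interpolant:
    slope (theta_1 + 2*pi - theta_N)/eps on the two boundary half-intervals
    (total length eps), slope (theta_{i+1}-theta_i)/eps on each interior interval."""
    wrap = (theta[0] + 2.0 * math.pi - theta[-1]) / eps
    E = eps * psi_eps(wrap, eps)
    for i in range(len(theta) - 1):
        E += eps * psi_eps((theta[i + 1] - theta[i]) / eps, eps)
    return E

N = 100
eps = 1.0 / N
# a slightly perturbed (non-uniform) angle vector
theta = [2.0 * math.pi * i / N + 0.1 * eps * math.sin(2.0 * math.pi * i / N)
         for i in range(1, N + 1)]
```

Both routines produce the same value term by term, which is exactly the content of the identity.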
If we define the functional $F_\varepsilon\colon H^1\left([0,1];\mathbb{R}\right)\to \bar{\mathbb{R}}$ by $$\label{eq:ffun} F_\varepsilon[\theta]:=\left\{ \begin{array}{ll} \int_0^1\psi_\varepsilon\left(\theta^\prime\right)\,ds, & \theta\in A_\varepsilon(0,1), \\ \infty, & \mathrm{otherwise}, \end{array} \right.$$ for every $\varepsilon>0$, then \eqref{eq:int} implies that $$\label{eq:equiv} F_\varepsilon\left[\theta_\varepsilon\right]=E^\varepsilon\left[\Theta^\varepsilon\right],$$ whenever $\theta_\varepsilon\in A_\varepsilon(0,1)$, where $\Theta^\varepsilon\in\mathbb{R}^{N_\varepsilon}$ is the vector corresponding to $\theta_\varepsilon$. The discrete minimization problem \eqref{eq:minprob} has an associated continuum minimization problem $$\label{eq:contminprob} \theta_{\varepsilon,\min} = \operatorname*{arg\,min}_{\theta\in H^1([0,1])}F_\varepsilon\left[\theta\right].$$ Because the functionals $\{ F_\varepsilon \}_{\varepsilon>0}$ in \eqref{eq:ffun} are all defined on the same space $H^1([0,1])$, an asymptotic limit of $\{ F_\varepsilon \}_{\varepsilon>0}$ can be studied using $\Gamma$-convergence. The $\Gamma$-Limit {#s3} ================== In this section we state and prove the asymptotic limit of the sequence of continuum energies $\{ F_\varepsilon \}_{\varepsilon>0}$. We state two lemmas, whose proofs are in Appendices 1 and 2. The first lemma shows that the constraints encoded in the admissible sets $A_\varepsilon(0,1)$ are preserved under weak-$H^1$ convergence. \[r1\] Suppose $\{\theta_\varepsilon\}_{\varepsilon>0}$ converges weakly to $\theta$ in $H^1([0,1])$ as $\varepsilon\rightarrow 0$. If there is a sequence $\{ \varepsilon_{n} \}$ of positive numbers such that $\underset{n\rightarrow \infty}{\lim }\varepsilon_{n}=0$ and $\theta_{\varepsilon_{n}}\in A_{\varepsilon_{n}}(0,1)$ for all $n$, then $$\int_0^1\cos(\theta)\,ds = \int_0^1\sin(\theta)\,ds =0 \quad \text{and} \quad \theta(1)-\theta(0)=2\pi.
\label{ee45}$$ The second lemma establishes that any function in $H^1([0,1])$ satisfying \eqref{ee45} can be approximated by a twice continuously differentiable function on $[0,1]$ that also satisfies \eqref{ee45}. \[r2\] Suppose $\theta\in H^1([0,1])$ satisfies \eqref{ee45}. Then for all $\delta>0$ there is a function $\theta^{*}\in C^{2}([0,1])$ such that $\|\theta-\theta^{*}\|_{H^{1}([0,1])}<\delta$ and $\theta^{*}$ also satisfies \eqref{ee45}. Now we state the main result of this paper. Let $F_\varepsilon\colon H^1([0,1])\to\bar{\mathbb{R}}$ be defined by \eqref{eq:ffun}. Let $E_0\colon H^1([0,1])\to\bar{\mathbb{R}}$ be defined by $$\label{eq:ffun0} E_0[\theta] := \left\{ \begin{array}{ll} \int_0^1 \alpha(\theta')^2\,ds, & \theta\in H^1_c([0,1]), \\ \infty, & \mathrm{otherwise}, \end{array} \right.$$ where $\alpha = \psi''(0)/2$ and $$H^1_c([0,1]) := \left\{\theta\in H^1([0,1])\ :\ \theta \ \text{satisfies}\ \eqref{ee45} \right\}. \label{eq:cont_adm_fun}$$ Then $\Gamma\text{-}\lim_{\varepsilon\to 0} F_{\varepsilon} = E_0$ in the weak topology of $H^1([0,1])$, that is 1. For every $\theta \in H^1([0,1])$, there exists a sequence $\{\theta_\varepsilon\}_{\varepsilon>0}$ converging weakly to $\theta$ in $H^1([0,1])$ such that $\lim_{\varepsilon\to 0} F_\varepsilon[\theta_\varepsilon] = E_0[\theta]$. 2.
For every sequence $\{\theta_\varepsilon\}_{\varepsilon>0}$ converging weakly to $\theta$ in $H^1([0,1])$, $$\liminf_{\varepsilon\to 0} F_\varepsilon[\theta_\varepsilon] \geq E_0[\theta].$$ Furthermore, if a sequence $\left\{\theta_\varepsilon\right\}_{\varepsilon>0}\subset H^1([0,1])$ satisfies a uniform energy bound $F_\varepsilon\left[\theta_\varepsilon\right]<C$, then there is a subsequence $\{\theta_{\varepsilon_j}\}$ such that $\theta_{\varepsilon_j}\stackrel{H^1}\rightharpoonup\theta$ as $\varepsilon_j\to 0$ for some $\theta\in H^1([0,1]).$ Note that the last assertion of the theorem also tells us that, if there is a sequence of chains $\left\{\Theta^\varepsilon\right\}_{\varepsilon>0}$ that satisfies a uniform energy bound $E^\varepsilon\left[\Theta^\varepsilon\right]<C$, then there is a subsequence of the corresponding piecewise-affine angle functions $\{\theta_{\varepsilon_j}\}$ such that $\theta_{\varepsilon_j}\stackrel{H^1}\rightharpoonup\theta$ as $\varepsilon_j\to 0$ for some $\theta\in H^1([0,1]).$ One can easily check that the sequence of piecewise-constant angle functions with values given by $\left\{\Theta^\varepsilon\right\}_{\varepsilon>0}$ converges to the same $\theta$ strongly in $L^2([0,1])$. We begin by proving the final statement of the theorem. In what follows, $C$ denotes a generic positive constant. Suppose that a sequence $\left\{\theta_\varepsilon\right\}_{\varepsilon>0}\subset H^1([0,1])$ satisfies a uniform energy bound $F_\varepsilon\left[\theta_\varepsilon\right]<C$. Then \eqref{eq:ffun} implies that $\theta_\varepsilon\in A_\varepsilon (0,1)$, so that $\theta_\varepsilon$ is piecewise-affine for every $\varepsilon$.
By \eqref{eq:int} and the definition of $\psi_\varepsilon$, we have $$\frac{1}{\varepsilon}\sum_{i=1}^{N_\varepsilon} \psi\left({\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}\right)=\varepsilon\sum_{i=1}^{N_\varepsilon} \psi_\varepsilon\left(\frac{\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}{\varepsilon}\right)\leq C.$$ Because $\psi$ is nonnegative, with $\psi(0)=0$ its unique global minimum, the previous estimate implies $$\psi\left({\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}\right)\leq C\varepsilon$$ for all $\varepsilon$. By an argument similar to the one that led to \eqref{eq:1.24}, we obtain $$\left|\theta_i^\varepsilon -\theta_{i-1}^\varepsilon\right|\leq \delta,$$ for every $i=1,\ldots,N_\varepsilon$, where $\delta=o(1)$ in $\varepsilon$. This along with \eqref{eq:prop_psi} enables us to conclude that $${\left({\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}\right)^2}\leq C\psi\left({\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}\right)$$ for all $i=1,\ldots,N_\varepsilon$ and some $C>0$ when $\varepsilon$ is sufficiently small. This yields the inequality $$\int_0^1\left(\theta_\varepsilon^\prime\right)^2\,ds=\varepsilon\sum_{i=1}^{N_\varepsilon} \left(\frac{\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}{\varepsilon}\right)^2\leq\varepsilon C\sum_{i=1}^{N_\varepsilon} \psi_\varepsilon\left(\frac{\theta^\varepsilon_{i}-\theta_{i-1}^\varepsilon}{\varepsilon}\right)\leq C.$$ Because we can always assume that $\theta_\varepsilon(0)\in[-\pi,\pi]$, the boundedness and hence the weak compactness of $\left\{\theta_\varepsilon\right\}_{\varepsilon>0}$ in $H^1([0,1])$ now follow from the Poincaré inequality in one dimension. We now proceed with proving $\Gamma$-convergence. Let $\theta \in H^1([0,1])$. Suppose $\theta \notin H^1_c([0,1])$, so that $E_0[\theta] = \infty$. We define a constant sequence by setting $\theta_\varepsilon = \theta$ for all $\varepsilon$. If $\theta_{\varepsilon}\in A_\varepsilon (0,1)$ for arbitrarily small $\varepsilon$, then Lemma \[r1\] would imply that $\theta \in H^1_c([0,1])$.
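The pointwise bound $(\theta_i^\varepsilon-\theta_{i-1}^\varepsilon)^2\leq C\,\psi(\theta_i^\varepsilon-\theta_{i-1}^\varepsilon)$ used above can be made explicit for the illustrative choice $\psi(\theta)=\tan^2(\theta/2)$ (an assumption, not prescribed by the paper): since $|\tan(t/2)|\geq|t|/2$ on $(-\pi,\pi)$, one may take $C=4$ on the whole interval, not only near the origin.

```python
import math

def psi(t):
    # illustrative admissible choice; psi''(0) = 1/2
    return math.tan(t / 2.0) ** 2

# the ratio t^2 / psi(t) stays below C = 4 on (-pi, pi) and approaches 4 as t -> 0
samples = [0.01, 0.5, 1.0, 2.0, 3.0]
ratios = [t * t / psi(t) for t in samples]
```

The smallest admissible constant degenerates only as $\psi''(0)\to0$, which is excluded by \eqref{eq:prop_psi}.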
So there is an $\bar{\varepsilon}>0$ such that $\theta=\theta_\varepsilon \notin A_{\varepsilon}(0,1)$ and hence $F_\varepsilon\left[\theta_\varepsilon\right]=\infty$ for all $0<\varepsilon\leq \bar{\varepsilon}$. We assume now that $\theta \in H^1_c([0,1])$. By Lemma \[r2\], we can assume as well that $\theta\in C^{2}([0,1])$. Working with a smooth function will allow us to bound the first and second derivatives of $\theta$ uniformly on $[0,1]$, which we need for several later estimates. Recall that in what follows $\varepsilon>0$ is such that $N_\varepsilon=1/\varepsilon$ is in $\mathbb{N}$. We divide the rest of the proof of the first statement into several steps. Step 1. Let $\bar{r}$ denote the curve whose angle function is $\theta$. We construct a chain with $N_\varepsilon$ sides that is uniformly close to $\bar{r}$. Later we shall demonstrate that the corresponding piecewise-affine function, which has the same energy as the discrete energy of the chain, approximates $\theta$ in $H^1([0,1])$. Since $\bar{r}$ and any admissible chains have length $1$, we cannot inscribe an admissible chain in $\bar{r}$. Instead, for $h>0$ we define the ‘inflated’ curve $\bar{r}_{h}(s)=\bar{r}(s)+h\bar{N}(s)$, where $\bar{N}$ denotes the (outward) normal to the curve $\bar{r}$ (see Fig. \[fig:2\]). ![A secant line and tangent vector for the inflated curve $\bar{r}_{h}$.[]{data-label="fig:2"}](figure2.pdf){height="1.5in"} The length of $\bar{r}_{h}$ is $1+2\pi h$. Given $\varepsilon$ sufficiently small, there exists an $h$ such that we can inscribe a chain with $N_\varepsilon$ sides, each of length $\varepsilon$, in $\bar{r}_{h}$. So there exist $\bar{s}_{1},\ldots,\bar{s}_{N_\varepsilon}\in [0,1]$ such that $\left\{\bar{r}_h(\bar{s}_{i})\right\}_{i=1}^{N_\varepsilon}$ is a chain. Without loss of generality we can assume that $\bar{s}_{1}=0$.
We let $\bar{\Theta}^\varepsilon:=\left(\bar{\theta}_1,\ldots,\bar{\theta}_{N_\varepsilon}\right)\in\mathbb{R}^{N_\varepsilon}$ denote the vector of angles associated with the chain (see \eqref{ee46}). Then $\bar{\Theta}^{\varepsilon}\in T_{N_\varepsilon}$ and has a discrete energy $E^{\varepsilon}[\bar{\Theta}^{\varepsilon}]$ defined by \eqref{ee39}. By \eqref{eq:theta_fun} we construct a piecewise-affine function $\hat{\theta}_{\varepsilon}$ such that $F_\varepsilon[\hat{\theta}_{\varepsilon}]=E^{\varepsilon}[\bar{\Theta}^{\varepsilon}]$. Our goal is to show that $\hat{\theta}_{\varepsilon}$ is close to $\theta$ in $H^1([0,1])$ and that $F_\varepsilon[\hat{\theta}_{\varepsilon}]$ is close to $E_0[\theta]$. Step 2. We derive two preliminary estimates $$\bar{s}_{i+1} = \bar{s}_{i} + \varepsilon + O(\varepsilon^{3}) \quad \text{and} \quad \bar{\theta}_{i} = \theta(\bar{s}_{i}+\varepsilon/2) + O(\varepsilon^{2}). \label{ee47}$$ We begin with an initial estimate on $\bar{s}_{i+1}-\bar{s}_{i}$. To obtain it, we define $F(\sigma):=|\bar{r}_{h}(\sigma)-\bar{r}_{h}(\bar{s}_{i})|$. One can check that $F_{\sigma}(\bar{s}_{i})=1+h\theta'(\bar{s}_{i})$ and that $F_{\sigma\sigma}(\bar{s}_{i})=h\theta''(\bar{s}_{i})$. Because $F_{\sigma}(\bar{s}_{i})\neq 0$ for $h$ sufficiently small, the equation $F(\sigma)=\varepsilon$ defines a function ${\sigma}(\varepsilon)$ for small $\varepsilon$, where $\sigma(\varepsilon)=\bar{s}_{i+1}$. Observe that $F({\sigma}(\varepsilon))=\varepsilon$ implies $$F_{\sigma}\sigma_{\varepsilon}=1, \qquad F_{\sigma\sigma}\sigma^{2}_{\varepsilon} + F_{\sigma}\sigma_{\varepsilon\varepsilon} = 0, \label{ee24}$$ so that $$\begin{aligned} \sigma_{\varepsilon} &= (1+h\theta')^{-1} =1-h\theta'+O(h^{2}), \label{ee25}\\ \sigma_{\varepsilon\varepsilon} &= -F_{\sigma\sigma}\sigma^{2}_{\varepsilon}/F_{\sigma} = -\theta''h(1+h\theta')^{-3} = -\theta''h + O(h^{2}).
\label{ee27}\end{aligned}$$ Hence $${\sigma}(\varepsilon) = {\sigma}(0) + (1-h\theta'+O(h^{2}))\varepsilon + \frac{1}{2}(-\theta''h + O(h^{2}))\varepsilon^{2} + O(\varepsilon^{3}). \label{ee26}$$ Because ${\sigma}(0)=\bar{s}_{i}$ and ${\sigma}(\varepsilon)=\bar{s}_{i+1}$, \eqref{ee26} implies that $$\bar{s}_{i+1} - \bar{s}_{i} = \varepsilon + O(h\varepsilon) + O(\varepsilon^3). \label{ee28}$$ Building on \eqref{ee28}, we have $$1 = \sum_{i=1}^{N_\varepsilon}(\bar{s}_{i+1}-\bar{s}_{i}) = \sum_{i=1}^{N_\varepsilon}(\varepsilon + O(h\varepsilon) + O(\varepsilon^3)) = N_\varepsilon\varepsilon + \sum_{i=1}^{N_\varepsilon}(O(h\varepsilon) + O(\varepsilon^3)). \label{ee29}$$ Because $N_\varepsilon\varepsilon=1$ and $\sum_{i=1}^{N_\varepsilon}(O(h\varepsilon)+ O(\varepsilon^3))=O(h) + O(\varepsilon^2)$, \eqref{ee29} implies that $0=O(h)+ O(\varepsilon^2)$, or $h=O(\varepsilon^{2})$. Now, returning to \eqref{ee28} and using $h=O(\varepsilon^{2})$ implies ${ \eqref{ee47}\raisebox{-1mm}{\scriptsize 1} }$. Next we show ${ \eqref{ee47}\raisebox{-1mm}{\scriptsize 2} }$. Note that $\bar{\theta}_{i}-\theta(\bar{s}_{i}+\varepsilon/2)$ is the angle between the vectors ${\varepsilon}^{-1}\left({\bar{r}_{h}(\bar{s}_{i+1})-\bar{r}_{h}(\bar{s}_{i})}\right)$ and $\bar{r}'_{h}(\bar{s}_{i}+\varepsilon/2)$ (see Fig. \[fig:2\]). We can write $$\begin{aligned} \frac{\bar{r}_{h}(\bar{s}_{i+1})-\bar{r}_{h}(\bar{s}_{i})}{\varepsilon} &= \frac{\bar{r}'_{h}(\bar{s}_{i})(\bar{s}_{i+1}-\bar{s}_{i}) +\bar{r}''_{h}(\bar{s}_{i})(\bar{s}_{i+1}-\bar{s}_{i})^{2}/2 +O((\bar{s}_{i+1}-\bar{s}_{i})^{3})} {\varepsilon} \nonumber \\ &=\bar{r}'_{h}(\bar{s}_{i})+\bar{r}''_{h}(\bar{s}_{i})\varepsilon/2 +O(\varepsilon^{2}), \label{ee19}\end{aligned}$$ where we have used ${ \eqref{ee47}\raisebox{-1mm}{\scriptsize 1} }$. Likewise, we can write $$\begin{aligned} \bar{r}'_{h}(\bar{s}_{i}+\varepsilon/2) &= \bar{r}'_{h}(\bar{s}_{i}) + \bar{r}''_{h}(\bar{s}_{i})\varepsilon/2 + O(\varepsilon^{2}).
\label{ee20}\end{aligned}$$ From \eqref{ee19} and \eqref{ee20}, we see that $$\bar{r}'_{h}(\bar{s}_{i}+\varepsilon/2) = \frac{\bar{r}_{h}(\bar{s}_{i+1})-\bar{r}_{h}(\bar{s}_{i})}{\varepsilon} + O(\varepsilon^{2}), \label{ee21}$$ where the leading-order term on the right-hand side is a unit vector. The largest angle between $\bar{r}'_{h}(\bar{s}_{i}+\varepsilon/2)$ and ${\varepsilon}^{-1}\left({\bar{r}_{h}(\bar{s}_{i+1})-\bar{r}_{h}(\bar{s}_{i})}\right)$ for a small fixed magnitude of their difference is achieved when this difference is perpendicular to ${\varepsilon}^{-1}\left({\bar{r}_{h}(\bar{s}_{i+1})-\bar{r}_{h}(\bar{s}_{i})}\right)$. It then immediately follows that $\bar{\theta}_{i}-\theta(\bar{s}_{i}+\varepsilon/2) = O(\varepsilon^{2})$. Step 3. We now use the estimates \eqref{ee47} to show that (i) the piecewise-affine function $\hat{\theta}_{\varepsilon}$ constructed at the end of Step 1 is close to $\theta$ in $H^1([0,1])$ and (ii) the energy $F_\varepsilon[\hat{\theta}_{\varepsilon}]$ is close to $E_0[\theta]$. \(i) First we demonstrate that $\hat{\theta}_{\varepsilon}$ is close to $\theta$ in $H^1([0,1])$. We have $$\begin{aligned} \int_{0}^{1}\!\left|\hat{\theta}_{\varepsilon}'(s)-{\theta}'(s)\right|^{2}ds &= \int_{0}^{s_{1}}\! \left|\frac{\bar{\theta}_{1}-(\bar{\theta}_{N_\varepsilon}-2\pi+\bar{\theta}_{1})/2}{\varepsilon/2} -{\theta}'(s)\right|^{2}ds \nonumber \\ &\qquad + \sum_{i=1}^{N_\varepsilon-1}\int_{s_{i}}^{s_{i+1}}\!\left| \frac{\bar{\theta}_{i+1}-\bar{\theta}_{i}}{\varepsilon} -{\theta}'(s)\right|^{2}ds \label{ee9}\\ &\qquad\qquad + \int_{s_{N_\varepsilon}}^{1}\! \left|\frac{(\bar{\theta}_{1}+2\pi+\bar{\theta}_{N_\varepsilon})/2 - \bar{\theta}_{N_\varepsilon}}{\varepsilon/2} -{\theta}'(s)\right|^{2}ds. \nonumber\end{aligned}$$ To estimate the right-hand side of \eqref{ee9}, recall that $\bar{r}_h(\bar{s}_i),\ i=1,\ldots,N_\varepsilon$, denote the vertices of the chain inscribed into the inflated curve $\bar{r}_h$.
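For the special case in which $\bar r$ is the circle $\mathcal C_0$ of length $1$ (radius $1/(2\pi)$), the inflation height $h$ of Step 1 can be computed in closed form: $N_\varepsilon$ inscribed chords of length $\varepsilon$ force the inflated radius $R$ to satisfy $2R\sin(\pi/N_\varepsilon)=\varepsilon$. The sketch below (a worked special case, not the general argument) confirms the estimate $h=O(\varepsilon^2)$, with $h/\varepsilon^2\to\pi/12$ in this case:

```python
import math

def inflation_height(N):
    """Offset h so that N chords of length eps = 1/N inscribe exactly in the
    inflated circle of radius 1/(2*pi) + h (the curve r_h for the circle C_0)."""
    eps = 1.0 / N
    R = eps / (2.0 * math.sin(math.pi / N))  # chord length 2*R*sin(pi/N) = eps
    return R - 1.0 / (2.0 * math.pi)

ratios = [inflation_height(N) * N * N for N in (10, 100, 1000)]  # h / eps^2
```

The ratios settle quickly near $\pi/12\approx0.2618$, consistent with the order of the correction terms in the general estimates.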
We observe that $$\frac{\bar{\theta}_{i+1}-\bar{\theta}_{i}}{\varepsilon} = \frac{\theta(\bar{s}_{i+1}+\varepsilon/2)-\theta(\bar{s}_{i}+\varepsilon/2)+O(\varepsilon^{2})}{\varepsilon} = \theta'(\xi_{i})+O(\varepsilon), \label{ee4}$$ where the first equality uses ${ \eqref{ee47}\raisebox{-1mm}{\scriptsize 2} }$ and where $\xi_{i}\in (\bar{s}_{i}+\varepsilon/2,\bar{s}_{i+1}+\varepsilon/2)$. The equations $\bar{s}_{1}=0$ and ${ \eqref{ee47}\raisebox{-1mm}{\scriptsize 1} }$ imply that $\bar{s}_{i+1}=i\varepsilon + O(\varepsilon^{2})$ for $i=1,\ldots,N_{\varepsilon}$. Then, because $s_{i}=(2i-1)\varepsilon/2$, one has $$|\xi-s| < 2\varepsilon + O(\varepsilon^{2}) \quad \text{for $s_{i}<s<s_{i+1}$ and $\bar{s}_{i}+\varepsilon/2 < \xi < \bar{s}_{i+1}+\varepsilon/2$}. \label{ee7}$$ Combining \eqref{ee4} and \eqref{ee7} yields $$\begin{aligned} \int_{s_{i}}^{s_{i+1}}\!\left| \frac{\bar{\theta}_{i+1}-\bar{\theta}_{i}}{\varepsilon} -\theta'(s)\right|^{2}ds &= \int_{s_{i}}^{s_{i+1}}\!\left|\theta'(\xi_{i})-\theta'(s)+O(\varepsilon)\right|^{2}ds \nonumber \\ &= \int_{s_{i}}^{s_{i+1}}\!\left|\theta''(\hat{\xi}_{i})(\xi_{i}-s)+O(\varepsilon)\right|^{2}ds \label{ee6}\\ &\leq \int_{s_{i}}^{s_{i+1}}\!O(\varepsilon^{2})\,ds=O(\varepsilon^{3}). \nonumber\end{aligned}$$ We need an estimate like \eqref{ee6} for the first and third terms on the right-hand side of \eqref{ee9}. However, both the curve $\bar{r}$ and the associated chain constructed in Step 1 are closed in the plane, hence their parametrizations can be extended periodically with period $1$ to $\mathbb{R}$. Selecting a different vertex in the chain to correspond to $s=0$ is equivalent to translating the parametrization by a number less than $1$. In this case, the first and the third integrals in \eqref{ee9} become one of the integrals in the sum in the middle term. Thus the first and the third integrals in \eqref{ee9} together admit the same $O(\varepsilon^3)$-estimate as in \eqref{ee6}.
Returning to \eqref{ee9}, we conclude that $$\int_{0}^{1}\!\left|\hat{\theta}_{\varepsilon}'(s)-\theta'(s)\right|^{2}ds = \sum_{i=1}^{N_\varepsilon}\!O(\varepsilon^{3})=O(\varepsilon^2). \label{ee13}$$ (ii) Now we estimate the difference between $F_{\varepsilon}[\hat{\theta}_{\varepsilon}]$ and $E_{0}[\theta]$. Straightforward but tedious calculations based upon expanding $\psi$ show that $$\begin{aligned} |F_{\varepsilon}[\hat{\theta}_{\varepsilon}]-E_{0}[\theta]| &= \left\lvert \int_{0}^{s_{1}}\! \left\{\frac{\psi''(0)}{2} \left[ \left(\frac{\theta'(\xi_{0}) + \theta'(\xi_{N_\varepsilon})}{2}\right)^{2} - \theta'(s)^{2} \right] + O(\varepsilon)\right\} \,ds \right. \nonumber \\ &\quad+ \sum_{i=1}^{N_\varepsilon-1} \int_{s_{i}}^{s_{i+1}}\! \left\{\frac{\psi''(0)}{2} \left[ \theta'(\xi_{i})^{2} - \theta'(s)^{2} \right] + O(\varepsilon) \right\}\,ds \nonumber \\ &\quad+ \left. \int_{s_{N_\varepsilon}}^{1}\! \left\{\frac{\psi''(0)}{2} \left[ \left(\frac{\theta'(\xi_{0}) + \theta'(\xi_{N_\varepsilon})}{2}\right)^{2} - \theta'(s)^{2} \right] + O(\varepsilon) \right\} \,ds \right\rvert. \label{ee15}\end{aligned}$$ Estimating the right-hand side of \eqref{ee15} can be done in a way similar to the estimates that led from \eqref{ee4} to \eqref{ee13}, and demonstrates that $$|F_{\varepsilon}[\hat{\theta}_{\varepsilon}]-E_{0}[\theta]|=O(\varepsilon^2).$$ We now suppose $\theta\in H^1([0,1])$, $\{\theta_\varepsilon\}\subset H^1([0,1])$, and $\theta_\varepsilon \rightharpoonup \theta$ in $H^1([0,1])$. We show that $\liminf_{\varepsilon\to 0} F_\varepsilon[\theta_\varepsilon] \geq E_0[\theta]$. If $\theta\notin H^1_{c}([0,1])$, then by Lemma \[r1\] there is an $\bar{\varepsilon}>0$ such that $\theta_\varepsilon \notin A_{\varepsilon}(0,1)$ and hence $F_\varepsilon[\theta_\varepsilon]=\infty$ for all $0<\varepsilon\leq \bar{\varepsilon}$. So we assume that $\theta\in H^1_{c}([0,1])$. We can further assume that $\theta_\varepsilon\in A_\varepsilon(0,1)$ for all $\varepsilon>0$.
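The lower-bound argument that follows rests on the pointwise expansion $\psi_\varepsilon(\xi)=\frac{1}{2}\psi''(0)\xi^2+o(1)$, i.e., the rescaled potentials converge to a quadratic. With the illustrative choice $\psi(\theta)=\tan^2(\theta/2)$ (an assumption, since the paper fixes only the hypotheses on $f$), this convergence is easy to observe numerically:

```python
import math

def psi(t):
    # illustrative admissible choice, psi''(0) = 1/2
    return math.tan(t / 2.0) ** 2

def psi_eps(xi, eps):
    return psi(eps * xi) / eps**2

xi = 3.0
quadratic = 0.5 * 0.5 * xi**2  # (1/2) * psi''(0) * xi^2
approx = [psi_eps(xi, eps) for eps in (0.1, 0.01, 0.001)]
```

As $\varepsilon\to0$ the values approach the quadratic limit monotonically from above for this choice of $\psi$.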
Because $\{\theta_\varepsilon\}$ converges weakly in $H^1([0,1])$, $\{\theta_\varepsilon'\}$ is bounded in $L^{2}([0,1])$. Using \eqref{eq:theta_fun_der}, we see that there is a constant $C$ such that for $j=1,\ldots,N_{\varepsilon}$ $$\varepsilon^{-1}\left( \theta_{j}^{\varepsilon}-\theta_{j-1}^{\varepsilon} \right)^{2} \leq \sum_{i=1}^{N_{\varepsilon}}\varepsilon \left(\frac{\theta_{i}^{\varepsilon}-\theta_{i-1}^{\varepsilon}}{\varepsilon}\right)^{2} = \int_0^1(\theta_\varepsilon^\prime)^{2}\,ds \leq C \label{e40}$$ (recall that $\theta_i^\varepsilon=\theta_{\varepsilon}\left((2i-1)\varepsilon/2\right)$ for $i=1,\ldots,N_{\varepsilon}$ and that $\theta_0^\varepsilon=\theta_{N_{\varepsilon}}^\varepsilon-2\pi$). It follows that $\varepsilon|\theta_\varepsilon'|\leq (C\varepsilon)^{1/2}$ uniformly in $s$. Therefore $$\varepsilon^{-2}\psi(\varepsilon\theta_{\varepsilon}') = \frac{1}{2}\psi''(0)(\theta_{\varepsilon}')^{2}+o(1) \label{e42}$$ uniformly in $\varepsilon$ and $s\in[0,1]$. Now we have $$\begin{aligned} \liminf_{\varepsilon \rightarrow 0} \int_0^1 \psi_{\varepsilon}(\theta_{\varepsilon}')\,ds &= \liminf_{\varepsilon \rightarrow 0} \int_0^1 \varepsilon^{-2}\psi(\varepsilon\theta_{\varepsilon}')\,ds \nonumber\\ &= \liminf_{\varepsilon \rightarrow 0} \int_0^1\left( \frac{1}{2}\psi''(0)(\theta_{\varepsilon}')^{2}+o(1)\right)\,ds \nonumber\\ &= \liminf_{\varepsilon \rightarrow 0} \int_0^1 \frac{1}{2}\psi''(0)(\theta_{\varepsilon}')^{2}\,ds \nonumber\\ &\geq \int_0^1 \frac{\psi''(0)}{2} (\theta')^2\,ds \label{e41},\end{aligned}$$ where the last inequality follows from the weak lower semicontinuity of the $L^{2}$ norm. Appendix 1. Proof of Lemma \[r1\] {#appendix-1.-proof-of-lemmar1 .unnumbered} ================================= To simplify notation, we write just $\theta_{n}$ and $N_{n}$ for $\theta_{\varepsilon_{n}}$ and $N_{\varepsilon_{n}}$.
Because $\theta_{n} \rightharpoonup \theta$ in $H^1([0,1])$, $\theta_{n}$ converges uniformly to $\theta$ on $[0,1]$ and hence $\cos \theta_{n}$ converges uniformly to $\cos \theta$ on $[0,1]$. Thus $$\begin{aligned} \int_{0}^{1}\!\cos \theta &= \underset{n\rightarrow \infty}{\lim } \int_{0}^{1}\!\cos\theta_{n} \nonumber \\ &= \underset{n\rightarrow \infty}{\lim } \left[ \int_{0}^{{\varepsilon_{n}}/2}\!\cos\theta_{n} + \sum_{i=1}^{N_{n}-1} \int_{s_{i}}^{s_{i+1}}\!\cos\theta_{n} + \int_{1-\varepsilon_{n}/2}^{1}\!\cos\theta_{n} \right]. \label{ee48}\end{aligned}$$ The sequence $\{ \theta_{n} \}$ is uniformly bounded, so the first and third terms on the right-hand side of go to zero with $\varepsilon_{n}$. For the sum between these terms, we have $$\begin{aligned} \sum_{i=1}^{N_{n}-1} \int_{s_{i}}^{s_{i+1}}\!\!\!\cos\theta_{n}(s)\,ds &= \sum_{i=1}^{N_{n}-1} \left[ \int_{s_{i}}^{s_{i+1}}\!\!\!\cos\theta_{n}(s_{i})\,ds + \int_{s_{i}}^{s_{i+1}}\!\!\! \left(\cos\theta_{n}(s)-\cos\theta_{n}(s_{i})\right)\,ds \right] \nonumber \\ &= \varepsilon\sum_{i=1}^{N_{n}-1} \cos\theta_{n}(s_{i}) + \sum_{i=1}^{N_{n}-1} \int_{s_{i}}^{s_{i+1}}\! \left(\cos\theta_{n}(s)-\cos\theta_{n}(s_{i})\right)\,ds. \label{ee49}\end{aligned}$$ On the right-hand side of , the first sum is 0 because $\theta_{n}\in A_{\varepsilon_{n}}(0,1)$ and the second sum is easily shown to go to $0$ as $\varepsilon_{n}\rightarrow 0$ using the uniform convergence of $\{ \theta_{n} \}$ and the uniform continuity of $\theta$ on $[0,1]$. $\square$ Appendix 2. Proof of Lemma \[r2\] {#appendix-2.-proof-of-lemmar2 .unnumbered} ================================= Note that $\tilde{\theta}(s):=\theta(s)-\theta(0)$ still satisfies and $\tilde{\theta}(0)=0$. If $\tilde{\theta}^{*}$ is a smooth function approximating $\tilde{\theta}$ in $H^1([0,1])$, then $\tilde{\theta}^{*}+\theta(0)$ is a smooth function approximating $\theta$ in $H^1([0,1])$. Hence without loss of generality we assume that $\theta(0)=0$. 
Because $\theta$ is in $H^1([0,1])$, it is continuous. So there exist $s_{1}, s_{2}, s_{3}, s_{4}\in (0,1)$ such that $\pi/2<\theta(s)<\pi$ for $s_{1}<s<s_{2}$ and $\pi<\theta(s)<3\pi/2$ for $s_{3}<s<s_{4}$. By adding appropriately defined bump functions to $\theta(s)$, we can produce functions $\theta_{1}$, $\theta_{2}$, $\theta_{3}$, and $\theta_{4}$ each close to $\theta$ in $H^1([0,1])$ such that $$\begin{aligned} &\theta_{1}(s)=\theta(s)\ \text{for} \ s\notin (s_{1},s_{2}) \ \text{and} \ \pi/2<\theta_{1}(s)<\theta(s) \ \text{for} \ s_{1}<s<s_{2}, \label{ee31} \\ &\theta_{2}(s)=\theta(s) \ \text{for}\ s\notin(s_{3},s_{4}) \ \text{and}\ \pi<\theta_{2}(s)<\theta(s)\ \text{for} \ s_{3}<s<s_{4}, \label{ee36}\\ &\theta_{3}(s)=\theta(s) \ \text{for}\ s\notin(s_{1},s_{2}) \ \text{and}\ \theta(s)<\theta_{3}(s)<\pi \ \text{for} \ s_{1}<s<s_{2}, \label{ee37}\\ &\theta_{4}(s)=\theta(s) \ \text{for}\ s\notin(s_{3},s_{4}) \ \text{and}\ \theta(s)<\theta_{4}(s)<3\pi/2 \ \text{for}\ s_{3}<s<s_{4}. \label{ee38}\end{aligned}$$ Each of these new functions still satisfies $\theta_{i}(0)=0$ and $\theta_{i}(1)=2\pi$ since the outputs of $\theta$ need not be modified near the endpoints of $[0,1]$. Now we define $G[\vartheta]:=(\int_{0}^{1}\!\cos\vartheta(s)\,ds, \int_{0}^{1}\!\sin\vartheta(s)\,ds)$ and $H\colon [0,1]^{2}\rightarrow \mathbb{R}^{2}$ by $$\begin{aligned} H(\delta_{1},\delta_{2}) &= (H_{1}(\delta_{1},\delta_{2}),H_{2}(\delta_{1},\delta_{2})) \nonumber \\ &:= G[ \delta_{1}(\delta_{2}\theta_{1} + (1-\delta_{2})\theta_{4}) + (1-\delta_{1})(\delta_{2}\theta_{2} + (1-\delta_{2})\theta_{3}) ]. \label{ee18}\end{aligned}$$ We show that $H_{1}(0,\delta_{2}) =\int_{0}^{1}\!\cos(\delta_{2}\theta_{2}+(1-\delta_{2})\theta_{3})\,ds<0$ for $0\leq \delta_{2} \leq 1$. To see this, we first observe that for $s\notin (s_{1},s_{2})\cup (s_{3},s_{4})$, $\delta_{2}\theta_{2}(s)+(1-\delta_{2})\theta_{3}(s)=\theta(s)$. 
Next, if $s\in (s_{1},s_{2})$, then $\theta_{2}(s)=\theta(s)$ and $\theta_{3}(s)>\theta(s)$, so that $$\pi/2 < \theta(s) < \delta_{2}\theta_{2}(s)+(1-\delta_{2})\theta_{3}(s) < \pi, \label{ee32}$$ which in turn implies that $$\int_{s_{1}}^{s_{2}}\!\cos(\theta(s)) \,ds > \int_{s_{1}}^{s_{2}}\!\cos(\delta_{2}\theta_{2}(s)+(1-\delta_{2})\theta_{3}(s)) \,ds. \label{ee33}$$ In a similar way we can show that $$\int_{s_{3}}^{s_{4}}\!\cos(\theta(s)) \,ds > \int_{s_{3}}^{s_{4}}\!\cos(\delta_{2}\theta_{2}(s)+(1-\delta_{2})\theta_{3}(s)) \,ds. \label{ee34}$$ Hence $$H_{1}(0,\delta_{2}) = \int_{0}^{1}\!\cos(\delta_{2}\theta_{2}(s)+(1-\delta_{2})\theta_{3}(s)) \,ds < \int_{0}^{1}\!\cos(\theta(s)) \,ds = 0. \label{ee35}$$ Because $H_{1}(0,\delta_{2})<0$ for $0\leq \delta_{2} \leq 1$ and $H_{1}(0,\cdot)$ is continuous on $[0,1]$, we can conclude that $H_{1}(0,\delta_{2})\leq -\eta<0$ for $0\leq \delta_{2} \leq 1$ for some $\eta>0$. Similarly, we can show that $H_{1}(1,\delta_{2})\geq \eta>0$ for $0\leq \delta_{2} \leq 1$ and that $H_{2}(\delta_{1},0)\leq -\eta<0$, $H_{2}(\delta_{1},1)\geq \eta>0$ for $0\leq \delta_{1} \leq 1$. Next, for each $i$ we define $\hat{\theta}_{i}(x):=\theta_{i}(x)-2\pi x$. Then $\hat{\theta}_{i}(0)=\hat{\theta}_{i}(1)$, and we can extend $\hat{\theta}_{i}$ to a function on $\mathbb{R}$ with period 1. We use convolution to approximate $\hat{\theta}_{i}$ in $H^{1}(\mathbb{R})$ by a smooth function $\hat{\theta}^{*}_{i}$ that has period 1, so that in particular $\hat{\theta}^{*}_{i}(1)=\hat{\theta}^{*}_{i}(0)$. We now define $\theta^{*}_{i}(x):=\hat{\theta}^{*}_{i}(x)+2\pi x$ and restrict $\theta^{*}_{i}$ to $[0,1]$. Then $\theta^{*}_{i}$ is a smooth function that approximates $\theta_{i}$ in $H^{1}([0,1])$ and $\theta^{*}_{i}(1)=\theta^{*}_{i}(0)+2\pi$. Note that each $\theta^{*}_{i}$ is uniformly close to $\theta_{i}$ on $[0,1]$. Next we define $H^{*}$ as we defined $H$ in but replacing $\theta_{i}$ with $\theta^{*}_{i}$. 
Because $\delta_{2}\theta^{*}_{2}(s)+(1-\delta_{2})\theta^{*}_{3}(s)$ is uniformly close to $\delta_{2}\theta_{2}(s)+(1-\delta_{2})\theta_{3}(s)$ for $s\in [0,1]$ and $0\leq \delta_{2}\leq 1$, $H_{1}(0,\delta_{2})\leq -\eta<0$ for $0\leq \delta_{2} \leq 1$ implies that $H^{*}_{1}(0,\delta_{2})<0$ for $0\leq \delta_{2} \leq 1$. Likewise we have that $H^{*}_{1}(1,\delta_{2})>0$ for $0\leq \delta_{2} \leq 1$ and that $H^{*}_{2}(\delta_{1},0)<0$, $H^{*}_{2}(\delta_{1},1)>0$ for $0\leq \delta_{1} \leq 1$. Using the Intermediate Value Theorem, we see that for each $0\leq \delta_{2} \leq 1$, there is a $\hat{\delta}(\delta_{2})$ such that $H^{*}_{1}(\hat{\delta}(\delta_{2}),\delta_{2})=0$. Also, $H^{*}_{2}(\hat{\delta}(0),0)<0$ and $H^{*}_{2}(\hat{\delta}(1),1)>0$, so there exists a $\bar{\delta}$ such that $H^{*}_{2}(\hat{\delta}(\bar{\delta}),\bar{\delta})=0$; by the definition of $\hat{\delta}$, we also have $H^{*}_{1}(\hat{\delta}(\bar{\delta}),\bar{\delta})=0$. We define $\theta^{*}=\hat{\delta}(\bar{\delta})(\bar{\delta}\theta^{*}_{1} + (1-\bar{\delta})\theta^{*}_{4}) + (1-\hat{\delta}(\bar{\delta}))(\bar{\delta}\theta^{*}_{2} + (1-\bar{\delta})\theta^{*}_{3})$. Then $\theta^{*} \in C^{2}([0,1])$, $\theta^{*}$ approximates $\theta$ in $H^{1}([0,1])$, and $\theta^{*}$ satisfies the constraints . $\square$
--- abstract: 'Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as *TextSnake*, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than *$40\%$* in F-measure.' author: - 'Shangbang Long^1,2^, Jiaqiang Ruan ^1,2^, Wenjie Zhang^1,2^, Xin He^2^, Wenhao Wu^2^, Cong Yao^2^' bibliography: - 'TextSnake.bib' title: 'TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes' --- Introduction ============ In recent years, the community has witnessed a surge of research interest and effort regarding the extraction of textual information from natural scenes, a.k.a. scene text detection and recognition. The driving factors stem from both application prospect and research value. 
On the one hand, scene text detection and recognition have been playing increasingly important roles in a wide range of practical systems, such as scene understanding, product search, and autonomous driving. On the other hand, the unique traits of scene text, for instance, significant variations in color, scale, orientation, aspect ratio and pattern, make it markedly different from general objects. Therefore, particular challenges are posed and special investigations are required. ![image](imgs/representations){width="0.9\columnwidth"} Text detection, as a prerequisite step in the pipeline of textual information extraction, has recently advanced substantially with the development of deep neural networks and large image datasets. Numerous innovative works [@yao2016scene; @Shi_2017_CVPR; @Zhou_2017_CVPR; @liao2017textboxes; @huang2015densebox; @wu2017self; @he2017multi; @Hu_2017_ICCV; @tian2017wetext; @Lyu2018; @ZhangAAAI2018] have been proposed, achieving excellent performance on standard benchmarks. However, most existing methods for text detection share a strong assumption that text instances are roughly linear in shape and therefore adopt relatively simple representations (axis-aligned rectangles, rotated rectangles or quadrangles) to describe them. Despite their progress on standard benchmarks, these methods may fall short when handling text instances of irregular shapes, for example, curved text. As depicted in Fig. \[fig:representations\], for curved text with perspective distortion, conventional representations struggle to give precise estimates of the geometric properties. In fact, instances of curved text are quite common in real life [@kheng2017total; @Yuliang2017Detecting]. In this paper, we propose a more flexible representation that can fit text of arbitrary shapes well, i.e., those in horizontal, multi-oriented and curved forms.
This representation describes text with a series of ordered, overlapping disks, each of which is located at the central axis of the text region and associated with a potentially variable radius and orientation. Due to its excellent capability of adapting to the complex multiplicity of text structures, just like a snake changing its shape to adapt to the external environment, the proposed representation is named TextSnake. The geometry attributes of text instances, i.e., central axis points, radii and orientations, are estimated with a single Fully Convolutional Network (FCN) model. Besides ICDAR 2015 and MSRA-TD500, the effectiveness of TextSnake is validated on Total-Text and SCUT-CTW1500, which are two newly-released benchmarks mainly focused on curved text. The proposed algorithm achieves state-of-the-art performance on the two curved text datasets, while at the same time outperforming previous methods on horizontal and multi-oriented text, *even in the single-scale testing mode*. Specifically, TextSnake achieves significant improvement over the baseline on Total-Text by *$40.0\%$* in F-measure. In summary, the major contributions of this paper are three-fold: (1) We propose a flexible and general representation for scene text of arbitrary shapes; (2) Based on this representation, an effective method for scene text detection is proposed; (3) The proposed text detection algorithm achieves state-of-the-art performance on several benchmarks, including text instances of different forms (horizontal, oriented and curved). Related Work ============ In the past few years, the most prominent trend in the area of scene text detection has been the shift from conventional methods [@epshtein2010detecting; @neumann2010method] to deep learning based methods [@jaderberg2014deep; @jaderberg2016reading; @liao2017textboxes; @Zhou_2017_CVPR; @Shi_2017_CVPR]. In this section, we review relevant previous work. For comprehensive surveys, please refer to [@ye2015text; @zhu2016scene].
Before the era of deep learning, SWT [@epshtein2010detecting] and MSER [@neumann2010method] were two representative algorithms that influenced a variety of subsequent methods [@yin2014robust; @huang2014robust]. Modern methods are mostly based on deep neural networks, which can be coarsely classified into two categories: regression based and segmentation based. Regression based text detection methods [@liao2017textboxes] mainly draw inspiration from general object detection frameworks. TextBoxes [@liao2017textboxes] adopted SSD [@liu2016ssd] and added “long” default boxes and filters to handle the significant variation of the aspect ratios of text instances. Based on Faster-RCNN [@ren2015faster], Ma *et al.* [@ma2017arbitrary] devised Rotation Region Proposal Networks (RRPN) to detect arbitrary-oriented text in natural images. EAST [@Zhou_2017_CVPR] and Deep Regression [@He_2017_ICCV] both directly produce rotated boxes or quadrangles of text, in a per-pixel manner. Segmentation based text detection methods cast text detection as a semantic segmentation problem, and FCN [@long2015fully] is often taken as the reference framework. Yao *et al.* [@yao2016scene] modified FCN to produce multiple heatmaps corresponding to various properties of text, such as text region and orientation. Zhang *et al.* [@zhang2016multi] first used FCN to extract text blocks and then hunted for character candidates within these blocks with MSER [@neumann2010method]. To better separate adjacent text instances, the method of [@wu2017self] classifies each pixel into three categories: non-text, text border and text. These methods mainly vary in the way they separate text pixels into different instances. The methods reviewed above have achieved excellent performance on various benchmarks in this field. However, most works, except for [@yao2016scene; @he2017multi; @kheng2017total], have not paid special attention to curved text.
In contrast, the representation proposed in this paper is suitable for text of arbitrary shapes (horizontal, multi-oriented and curved). It is primarily inspired by [@yao2016scene; @he2017multi] and the geometric attributes of text are also estimated via the multiple-channel outputs of an FCN-based model. Unlike [@yao2016scene], our algorithm does not need character level annotations. In addition, it also shares a similar idea with SegLink [@Shi_2017_CVPR], by successively decomposing text into local components and then composing them back into text instances. Analogous to [@zhang2015symmetry], we also detect linear symmetry axes of text instances for text localization. Another advantage of the proposed method lies in its ability to reconstruct the precise shape and regional strike of text instances, which can largely facilitate the subsequent text recognition process, because all detected text instances could be conveniently transformed into a canonical form with minimal distortion and background (see the example in Fig.\[img\_transform\]). Methodology =========== In this section, we first introduce the new representation for text of arbitrary shapes. Then we describe our method and training details. Representation {#sec:representation} -------------- ![image](imgs/textsnake){width="0.75\columnwidth"} As shown in Fig. \[fig:representations\], conventional representations for scene text (e.g., axis-aligned rectangles, rotated rectangles and quadrangles) fail to precisely describe the geometric properties of text instances of irregular shapes, since they generally assume that text instances are roughly in linear forms, which does not hold true for curved text. To address this problem, we propose a flexible and general representation: TextSnake. As demonstrated in Fig. \[fig:textsnake\], TextSnake expresses a text instance as a sequence of overlapping disks, each of which is located at the center line and associated with a radius and an orientation. 
Intuitively, TextSnake is able to change its shape to adapt to the variations of text instances, such as rotation, scaling and bending. Mathematically, a text instance $t$, consisting of several characters, can be viewed as an ordered list $S(t)=\{D_{0},D_{1},\cdots,D_{i},\cdots,D_{n}\}$, where $D_{i}$ stands for the $i$th disk and $n$ is the number of disks. Each disk $D$ is associated with a group of geometry attributes, i.e. $D=(c,r,\theta)$, in which $c$, $r$ and $\theta$ are the center, radius and orientation of disk $D$, respectively. The radius $r$ is defined as half of the local width of $t$, while the orientation $\theta$ is the tangential direction of the center line around the center $c$. In this sense, the text region $t$ can be easily reconstructed by computing the union of the disks in $S(t)$. Note that the disks do not correspond to the characters belonging to $t$. However, the geometric attributes in $S(t)$ can be used to rectify text instances of irregular shapes and transform them into rectangular, straight image regions, which are more friendly to text recognizers. Pipeline {#sec:pipeline} -------- ![image](imgs/Flow.pdf){width="0.9\columnwidth"} In order to detect text with arbitrary shapes, we employ an FCN model to predict the geometry attributes of text instances. The pipeline of the proposed method is illustrated in Fig.\[img\_framework\]. The FCN based network predicts score maps of text center line (TCL) and text regions (TR), together with geometry attributes, including $r$, $cos\theta$ and $sin\theta$. The TCL map is further masked by the TR map since the TCL is naturally part of the TR. To perform instance segmentation, a disjoint-set data structure is utilized, given that the TCLs of different text instances do not overlap. A striding algorithm is used to extract the central axis point lists and finally reconstruct the text instances.
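To make the reconstruction rule concrete, the following minimal sketch rasterizes a text region as the union of its disks; the orientation $\theta$ is only needed for ordering and striding, not for rendering. The helper name and the toy disk list are illustrative assumptions, not code from the actual implementation:

```python
import numpy as np

def render_text_region(disks, h, w):
    # Union of disks: a pixel belongs to the text region iff it lies
    # inside at least one disk (cx, cy, r). Illustrative helper only.
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for cx, cy, r in disks:
        mask |= (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    return mask

# A short horizontal instance: centers along y = 10, constant radius 4
disks = [(x, 10.0, 4.0) for x in range(6, 30, 3)]
mask = render_text_region(disks, 20, 36)
```

Because consecutive disks overlap, the union forms a connected band even though only a sparse list of centers is stored.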
Network Architecture -------------------- ![image](imgs/network.pdf){width="1.0\columnwidth"} The whole network is shown in Fig. \[img\_network\]. Inspired by FPN[@Lin_2017_CVPR] and U-net[@Ronneberger2015U], we adopt a scheme that gradually merges features from different levels of the stem network. The stem network can be convolutional networks proposed for image classification, e.g. VGG-16/19[@simonyan2014very] and ResNet[@He_2017_Res]. These networks can be divided into 5 stages of convolutions and a few additional fully-connected (FC) layers. We remove the FC layers, and feed the feature maps after each stage to the feature merging network. We choose VGG-16 as our stem network for the sake of direct and fair comparison with other methods. As for the feature merging network, several stages are stacked sequentially, each consisting of a merging unit that takes feature maps from the last stage and corresponding stem network layer. Merging unit is defined by the following equations: $$h_1 = f_5 \label{eq1}$$ $$h_i = conv_{3\times 3}(conv_{1\times 1}{[f_{i-1}; UpSampling_{\times 2}(h_{i-1})]}),\ \mathrm{for} \ i\ge2 \label{eq2}$$ where $f_i$ denotes the feature maps of the $i$-th stage in the stem network and $h_i$ is the feature maps of the corresponding merging units. In our experiments, upsampling is implemented as deconvolutional layer as proposed in [@Zeiler2010Deconvolutional]. After the merging, we obtain a feature map whose size is $\frac{1}{2}$ of the input images. We apply an additional upsampling layer and 2 convolutional layers to produce dense predictions: $$h_{final} = UpSampling_{\times 2}(h_5) \label{eq3}$$ $$P = conv_{1\times 1}(conv_{3\times 3}(h_{final})) \label{eq4}$$ where $P\in \mathcal{R}^{h\times w\times 7}$, with $4$ channels for logits of TR/TCL, and the last $3$ respectively for $r$, $cos\theta$ and $sin\theta$ of the text instance. 
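As a shape-level illustration of Eq.\[eq2\], the merging unit can be sketched in plain NumPy; here nearest-neighbour upsampling and random weights stand in for the learned deconvolution and trained parameters, so this is a sketch under stated assumptions rather than the actual implementation:

```python
import numpy as np

def upsample2(x):
    # Nearest-neighbour x2 upsampling, standing in for the deconv layer.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def conv1x1(x, w):
    # x: (H, W, Cin), w: (Cin, Cout) -> pointwise channel mixing.
    return x @ w

def conv3x3(x, w):
    # x: (H, W, Cin), w: (3, 3, Cin, Cout); 'same' padding, stride 1.
    h, wd, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, wd, w.shape[-1]))
    for dy in range(3):
        for dx in range(3):
            out += xp[dy:dy + h, dx:dx + wd] @ w[dy, dx]
    return out

def merge_unit(f_prev, h_prev, w1, w3):
    # h_i = conv3x3(conv1x1([f_{i-1}; UpSampling_x2(h_{i-1})]))
    cat = np.concatenate([f_prev, upsample2(h_prev)], axis=-1)
    return conv3x3(conv1x1(cat, w1), w3)

rng = np.random.default_rng(0)
f_prev = rng.normal(size=(8, 8, 4))   # stem feature map of stage i-1
h_prev = rng.normal(size=(4, 4, 6))   # previous merging-unit output
h_i = merge_unit(f_prev, h_prev, rng.normal(size=(10, 5)),
                 rng.normal(size=(3, 3, 5, 5)))
```

The point of the sketch is the shape arithmetic: each unit doubles the spatial resolution of $h_{i-1}$ so it can be concatenated with the higher-resolution stem features.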
As a result of the additional upsampling layer, $P$ has the same size as the input image. The final predictions are obtained by taking softmax for TR/TCL and regularizing $cos\theta$ and $sin\theta$ so that their squared sum equals $1$. Inference {#sec:inference} --------- After feed-forwarding, the network produces the TCL, TR and geometry maps. For TCL and TR, we apply thresholding with values $T_{tcl}$ and $T_{tr}$ respectively. Then, the intersection of TR and TCL gives the final prediction of TCL. Using disjoint-set, we can efficiently separate TCL pixels into different text instances. Finally, a striding algorithm is designed to extract an ordered point list that indicates the shape and course of the text instance, and also to reconstruct the text instance areas. Two simple heuristics are applied to filter out false positive text instances: 1) The number of TCL pixels should be at least $0.2$ times their average radius; 2) At least half of the pixels in the reconstructed text area should be classified as TR. ![image](imgs/PostProcessing.pdf){width="0.85\columnwidth"} The procedure for the striding algorithm is shown in Fig.\[img\_PP\]. It features $3$ main actions, denoted as Act(a), Act(b), and Act(c), as illustrated in Fig.\[img\_PP\_detail\]. Firstly, we randomly select a pixel as the starting point and centralize it. Then, the search process forks into two opposite directions, striding and centralizing until it reaches the ends. This process generates two ordered point lists, one for each direction, which can be combined to produce the final central axis list that follows the course of the text and describes its shape precisely. Details of the $3$ actions are shown below. ![image](imgs/PostProcessing_details.pdf){width="0.8\columnwidth"} **Act(a) Centralizing** As shown in Fig.\[img\_PP\_detail\], given a point on the TCL, we can draw the tangent line and the normal line, denoted as the dotted line and the solid line, respectively.
This step can be done with ease using the geometry maps. The midpoint of the intersection of the normal line and the TCL area gives the centralized point. **Act(b) Striding** The algorithm takes a stride to the next point to search. With the geometry maps, the displacement for each stride is computed and represented as $(\frac{1}{2} r \times cos\theta, \frac{1}{2}r \times sin\theta)$ and $(-\frac{1}{2}r \times cos\theta, -\frac{1}{2}r \times sin\theta)$, respectively for the two directions. If the next step is outside the TCL area, we decrement the stride gradually until it is inside, or it hits an end. **Act(c) Sliding** The algorithm iterates through the central axis and draws circles along it. Radii of the circles are obtained from the $r$ map. The area covered by the circles indicates the predicted text instance. In conclusion, taking advantage of the geometry maps and the TCL that precisely describes the course of the text instance, we can go beyond text detection and also predict the shape and course of text instances. Besides, the striding algorithm saves our method from traversing all related pixels. Label Generation ---------------- ### Extracting Text Center Line For triangles and quadrangles, it is easy to directly calculate the TCL with algebraic methods, since in this case the TCL is a straight line. For polygons of more than 4 sides, it is not easy to derive a general algebraic method. Instead, we propose a method based on the assumption that text instances are snake-shaped, i.e., they do not fork into multiple branches. A snake-shaped text instance has two edges that are respectively the *head* and the *tail*. The two edges near the head or tail run parallel but in opposite directions.
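Returning to the striding action (Act(b)) of the previous subsection, a single stride with the shrinking rule can be sketched as follows; the mask representation and the halving schedule for the stride are assumptions of this sketch, not the exact implementation:

```python
import numpy as np

def stride_step(pt, r, cos_t, sin_t, tcl_mask, direction=1):
    # Move half a radius along the tangent; if the candidate point
    # falls outside the TCL mask, shrink the stride until it fits or
    # vanishes (i.e., an end of the text instance is reached).
    step = 0.5 * r
    while step >= 1.0:
        nx = pt[0] + direction * step * cos_t
        ny = pt[1] + direction * step * sin_t
        ix, iy = int(round(nx)), int(round(ny))
        if (0 <= iy < tcl_mask.shape[0] and 0 <= ix < tcl_mask.shape[1]
                and tcl_mask[iy, ix]):
            return (nx, ny)
        step *= 0.5  # decrement the stride gradually
    return None      # hit an end of the text instance

tcl = np.zeros((10, 20), dtype=bool)
tcl[5, :] = True  # a horizontal center line at y = 5
nxt = stride_step((5.0, 5.0), 6.0, 1.0, 0.0, tcl)
```

Running the same step with `direction=-1` gives the point list in the opposite direction, and the two lists are concatenated into the final central axis.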
![image](imgs/data_labelling.pdf){width="1\columnwidth"} For a text instance $t$ represented by a group of vertices $\{v_0, v_1, v_2,...,v_n\}$ in clockwise or counterclockwise order, we define a measurement for each edge $e_{i,i+1}$ as $M(e_{i,i+1})=\cos\langle e_{i+1,i+2}, e_{i-1,i}\rangle$. Intuitively, the two edges with $M$ nearest to $-1$, e.g. $AH$ and $DE$ in Fig.\[img\_label\], are the head and tail. After that, an equal number of anchor points is sampled on the two sidelines, e.g. $ABCD$ and $HGFE$ in Fig.\[img\_label\]. TCL points are computed as midpoints of corresponding anchor points. We shrink the two ends of the TCL by $\frac{1}{2}r_{end}$ pixels, so that the TCL lies inside the TR, which makes it easy for the network to learn to separate adjacent text instances. $r_{end}$ denotes the radius of the TCL points at the two ends. Finally, we expand the TCL area by $\frac{1}{5} r$, since a single-point line is prone to noise. ### Calculating $r$ and $\theta$ For each point on the TCL: (1) $r$ is computed as the distance to the corresponding point on the sidelines; (2) $\theta$ is computed by fitting a straight line to the TCL points in the neighborhood. For non-TCL pixels, the corresponding geometry attributes are set to 0 for convenience. Training Objectives ------------------- The proposed model is trained end-to-end, with the following loss functions as the objectives: $$L=L_{cls}+L_{reg} \label{eq5} \vspace{-4mm}$$ $$L_{cls}=\lambda_{1}L_{tr} + \lambda_{2}L_{tcl} \label{eq6} \vspace{-4mm}$$ $$L_{reg}=\lambda_{3}L_{r} +\lambda_{4}L_{sin} +\lambda_{5}L_{cos} \label{eq7}$$ $L_{cls}$ in Eq.\[eq5\] represents the classification loss for TR and TCL, and $L_{reg}$ the regression loss of $r$, $cos\theta$ and $sin\theta$. In Eq.\[eq6\], $L_{tr}$ and $L_{tcl}$ are cross-entropy losses for TR and TCL. Online hard negative mining [@Shrivastava2016Training] is adopted for the TR loss, with the ratio between negatives and positives kept to 3:1 at most.
For TCL, we only take into account pixels inside TR and adopt no balancing methods. In Eq.\[eq7\], the regression losses, i.e. $L_{r}$, $L_{sin}$ and $L_{cos}$, are calculated as Smoothed-L1 loss [@Girshick_2015_ICCV]: $$\begin{pmatrix}L_{r} \\ L_{cos} \\ L_{sin}\end{pmatrix}= SmoothedL1\begin{pmatrix}\frac{\widehat{r}-r}{r} \\ \widehat{cos\theta}-cos\theta \\ \widehat{sin\theta}-sin\theta\end{pmatrix} \label{eq8}$$ where $\widehat r$, $\widehat{cos\theta}$ and $\widehat{sin\theta}$ are the predicted values, while $r$, $cos\theta$ and $sin\theta$ are the corresponding ground truth values. Geometry losses outside the TCL are set to 0, since these attributes make no sense for non-TCL points. The weight constants $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, $\lambda_{4}$ and $\lambda_{5}$ are all set to 1 in our experiments. Experiments =========== In this section, we evaluate the proposed algorithm on standard benchmarks for scene text detection and compare it with previous methods. Analyses and discussions regarding our algorithm are also given. Datasets -------- The datasets used for the experiments in this paper are briefly introduced below: **SynthText** [@gupta2016synthetic] is a large-scale dataset that contains about $800K$ synthetic images. These images are created by blending natural images with text rendered in random fonts, sizes, colors, and orientations, thus these images are quite realistic. We use this dataset to pre-train our model. **TotalText** [@kheng2017total] is a newly-released benchmark for text detection. Besides horizontal and multi-oriented text instances, the dataset specially features *curved text*, which rarely appears in other benchmark datasets, but is actually quite common in real environments. The dataset is split into training and testing sets with 1255 and 300 images, respectively. **CTW1500** [@Yuliang2017Detecting] is another dataset mainly consisting of curved text. It consists of 1000 training images and 500 test images.
Text instances are annotated with polygons with 14 vertices. **ICDAR 2015** was proposed as Challenge 4 of the 2015 Robust Reading Competition [@karatzas2015icdar] for incidental scene text detection. Scene text images in this dataset were taken with Google Glass without careful attention to positioning, image quality, and viewpoint. This dataset features small, blurred, and multi-oriented text instances. There are 1000 images for training and 500 images for testing. The text instances in this dataset are labeled as word-level quadrangles. **MSRA-TD500** [@yao2012detecting] is a dataset with multi-lingual, arbitrary-oriented and long text lines. It includes 300 training images and 200 test images with text-line-level annotations. Following previous works [@Zhou_2017_CVPR; @Lyu2018], we also include the images from HUST-TR400 [@yao2014unified] as training data when fine-tuning on this dataset, since its training set is rather small. For experiments on ICDAR 2015 and MSRA-TD500, we fit a minimum bounding rectangle based on the output text area of our method. Data Augmentation ----------------- Images are randomly rotated and cropped with areas ranging from $0.24$ to $1.69$ and aspect ratios ranging from $0.33$ to $3$. After that, noise, blur, and lightness are randomly adjusted. We ensure that the text on the augmented images remains legible if it was legible before augmentation. ![image](imgs/sample_results_3.pdf){width="0.95\columnwidth"} Implementation Details ---------------------- Our method is implemented in Tensorflow 1.3.0 [@abadi2016tensorflow]. The network is pre-trained on SynthText for one epoch and fine-tuned on the other datasets. We adopt the Adam optimizer [@kingma2014adam]. During the pre-training stage, the learning rate is fixed at $10^{-3}$. During the fine-tuning stage, the learning rate is set to $10^{-3}$ initially and decays by a factor of $0.8$ every 5000 iterations.
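The fine-tuning schedule just described amounts to a staircase decay; reading the description as a staircase (rather than continuous) schedule is an assumption of this sketch:

```python
def fine_tune_lr(iteration, base=1e-3, rate=0.8, every=5000):
    # Learning rate: 1e-3 initially, multiplied by 0.8 every 5000 iterations.
    return base * rate ** (iteration // every)
```

For instance, iterations 0 through 4999 use $10^{-3}$, the next 5000 iterations use $8\times 10^{-4}$, and so on.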
During fine-tuning, the number of iterations is decided by the sizes of the datasets. All the experiments are conducted on a regular workstation (CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz; GPU: Titan X; RAM: 384GB). We train our model with a batch size of 32 on $2$ GPUs in parallel and evaluate our model on 1 GPU with the batch size set to $1$. Hyper-parameters are tuned by grid search on the training set. Experiment Results ------------------ ### Experiments on Curved Text (Total-Text and CTW1500) {#experiments-on-curved-text-total-text-and-ctw1500 .unnumbered} Fine-tuning on these two datasets stops at about $5k$ iterations. Thresholds $T_{tr}$, $T_{tcl}$ are set to $(0.4, 0.6)$ and $(0.4, 0.5)$ respectively on Total-Text and CTW1500. In testing, all images are rescaled to $512\times 512$ for Total-Text, while for CTW1500 the images are not resized, since the images in CTW1500 are rather small (the largest image is merely $400\times 600$). For comparison, we also evaluated the models of EAST [@Zhou_2017_CVPR] and SegLink [@Shi_2017_CVPR] on Total-Text and CTW1500. The quantitative results of different methods on these two datasets are shown in Tab. \[tab\_total\] and Tab. \[tab\_CTW1500\], respectively. **Method** **Precision** **Recall** **F-measure** ---------------------------------------- --------------- ------------ --------------- SegLink [@Shi_2017_CVPR] $30.3$ $23.8$ $26.7$ EAST [@Zhou_2017_CVPR] $50.0$ $36.2$ $42.0$ Baseline (DeconvNet[@Noh2015Learning]) $33.0$ $40.0$ $36.0$ **TextSnake** **82.7** **74.5** **78.4** : Quantitative results of different methods evaluated on Total-Text. Note that EAST and SegLink were not fine-tuned on Total-Text. Therefore their results are included only for reference.[]{data-label="tab_total"} As shown in Tab. \[tab\_total\], the proposed method achieves $82.7\%$, $74.5\%$, and $78.4\%$ in precision, recall and F-measure on Total-Text, significantly outperforming previous methods.
Note that the F-measure of our method is more than double that of the baseline provided in the original Total-Text paper [@kheng2017total]. **Method** **Precision** **Recall** **F-measure** --------------------------------- --------------- ------------ --------------- SegLink [@Shi_2017_CVPR] $42.3$ $40.0$ $40.8$ EAST [@Zhou_2017_CVPR] $78.7$ $49.1$ $60.4$ DMPNet [@Liu2017Deep] $69.9$ $56.0$ $62.2$ CTD[@Yuliang2017Detecting] $74.3$ $65.2$ $69.5$ CTD+TLOC[@Yuliang2017Detecting] **77.4** $69.8$ $73.4$ **TextSnake** 67.9 **85.3** **75.6** : Quantitative results of different methods evaluated on CTW1500. Results other than ours are obtained from [@Yuliang2017Detecting].[]{data-label="tab_CTW1500"} On CTW1500, the proposed method achieves $67.9\%$, $85.3\%$, and $75.6\%$ in precision, recall and F-measure, respectively. Compared with CTD+TLOC, which was proposed together with the CTW1500 dataset in [@Yuliang2017Detecting], the F-measure of our algorithm is $2.2\%$ higher ($75.6\%$ vs. $73.4\%$). The superior performances of our method on Total-Text and CTW1500 verify that the proposed representation can handle curved text in natural images well. ### Experiments on Incidental Scene Text (ICDAR 2015) {#experiments-on-incidental-scene-text-icdar-2015 .unnumbered} Fine-tuning on ICDAR 2015 stops at about $30k$ iterations. In testing, all images are resized to $1280\times 768$. $T_{tr}$, $T_{tcl}$ are set to $(0.4, 0.9)$. Since images in ICDAR 2015 contain many unlabeled small texts, predicted rectangles whose shorter side is less than 10 pixels or whose area is less than 300 are filtered out. The quantitative results of different methods on ICDAR 2015 are shown in Tab.\[tab\_icdar2015\]. With only *single-scale* testing, our method outperforms most competitors (including those evaluated with multi-scale testing). This demonstrates that the proposed representation TextSnake is general and can be readily applied to multi-oriented text in complex scenarios.
**Method** **Precision** **Recall** **F-measure** **FPS** ------------------------------------------------- --------------- ------------ --------------- --------- Zhang *et al.* [@zhang2016multi] 70.8 43.0 53.6 0.48 CTPN [@tian2016detecting] 74.2 51.6 60.9 7.1 Yao *et al.* [@yao2016scene] 72.3 58.7 64.8 1.61 SegLink [@Shi_2017_CVPR] 73.1 76.8 75.0 - EAST [@Zhou_2017_CVPR] 80.5 72.8 76.4 6.52 SSTD [@SSTD] 80.0 73.0 77.0 **7.7** WordSup $^*$ [@Hu_2017_ICCV] 79.3 77.0 78.2 2 EAST $^*$ $^\text{\dag}$ [@Zhou_2017_CVPR] 83.3 78.3 80.7 - He *et al.* $^*$ $^\text{\dag}$ [@He_2017_ICCV] 82.0 80.0 81.0 1.1 PixelLink [@deng2018pixellink] **85.5** **82.0** **83.7** 3.0 **TextSnake** 84.9 80.4 82.6 1.1 : Quantitative results of different methods on ICDAR 2015. $^*$ stands for multi-scale, $^\text{\dag}$ indicates that the base net of the model is not VGG16.[]{data-label="tab_icdar2015"} ### Experiments on Long Straight Text Lines (MSRA-TD500) {#experiments-on-long-straight-text-lines-msra-td500 .unnumbered} Fine-tuning on MSRA-TD500 stops at about $10k$ iterations. Thresholds for $T_{tr}$, $T_{tcl}$ are $(0.4, 0.6)$ . In testing, all images are resized to $1280\times 768$. Results are shown in Tab.\[tab\_td500\]. The F-measure ($78.3\%$) of the proposed method is higher than that of the other methods. **Method** **Precision** **Recall** **F-measure** **FPS** -------------------------------------------- --------------- ------------ --------------- ---------- Kang *et al.* [@kang2014orientation] 71.0 62.0 66.0 - Zhang *et al.* [@zhang2016multi] 83.0 67.0 74.0 0.48 Yao *et al.* [@yao2016scene] 76.5 **75.3** 75.9 1.61 EAST [@Zhou_2017_CVPR] 81.7 61.6 70.2 6.52 EAST $^\text{\dag}$ [@Zhou_2017_CVPR] **87.3** 67.4 76.1 **13.2** SegLink [@Shi_2017_CVPR] 86.0 70.0 77.0 8.9 He *et al.* $^\text{\dag}$ [@He_2017_ICCV] 77.0 70.0 74.0 1.1 PixelLink [@deng2018pixellink] 83.0 73.2 77.8 3.0 **TextSnake** 83.2 73.9 **78.3** 1.1 : Quantitative results of different methods on MSRA-TD500. 
$^\text{\dag}$ indicates models whose base nets are not VGG16.[]{data-label="tab_td500"} Analyses and Discussions ------------------------ **Precise Description of Text Instances** What distinguishes our method from others is its ability to predict a precise description of the shape and course of text instances (see Fig.\[img\_samples\]). We attribute this ability to the TCL mechanism. The text center line can be seen as a kind of skeleton that props up the text instance, with geometry attributes providing more details. Text, as a form of written language, can be seen as a stream of signals mapped onto 2D surfaces. Naturally, it follows a course along which it extends. Therefore we propose to predict the TCL, which is much narrower than the whole text instance. This has two advantages: (1) a slim TCL can better describe the course and shape; (2) TCLs, intuitively, do not overlap with each other, so that instance segmentation can be done in a very simple and straightforward way, thus simplifying our pipeline. Moreover, as depicted in Fig.\[img\_transform\], we can exploit local geometries to sketch the structure of the text instance and transform the predicted curved text instances into canonical form, which may largely facilitate the recognition stage. ![image](imgs/costa_new.png){width="1.0\columnwidth"} **Generalization Ability** To further verify the generalization ability of our method, we train and fine-tune our model on datasets *without* curved text and evaluate it on the two benchmarks *featuring* curved text. Specifically, we fine-tune our models on ICDAR 2015, and evaluate them on the target datasets. The models of EAST [@Zhou_2017_CVPR], SegLink [@Shi_2017_CVPR], and PixelLink [@deng2018pixellink] are taken as baselines, since these methods were also trained on ICDAR 2015.
[**Datasets**]{} -------------------------------- ----------------- ------------ --------------- --------------- ------------ --------------- **Methods** **Precision** **Recall** **F-measure** **Precision** **Recall** **F-measure** SegLink[@Shi_2017_CVPR] 35.6 33.2 34.4 33.0 28.4 30.5 EAST[@Zhou_2017_CVPR] 49.0 43.1 45.9 46.7 37.2 41.4 PixelLink [@deng2018pixellink] 53.5 52.7 53.1 50.6 42.8 46.4 **TextSnake** $\textbf{61.5}$ **67.9** **64.6** **65.4** **63.4** **64.4** : Comparison of cross-dataset results of different methods. The following models are fine-tuned on ICDAR 2015 and evaluated on Total-Text and CTW1500. Experiments for SegLink, EAST and PixelLink are done with the open source code. The evaluation protocol is DetEval [@wolf2006object], the same as Total-Text. While ICDAR 2015 and Total-Text have word-level labels, CTW1500 uses line-level ones. We deem DetEval [@wolf2006object] preferable to PASCAL [@everingham2015pascal]. Otherwise, the line-level labels of CTW1500 would significantly penalize models fine-tuned on word-level labeled ICDAR2015.[]{data-label="tab_cross"} As shown in Tab.\[tab\_cross\], our method still performs well on curved text and significantly outperforms the three strong competitors SegLink, EAST and PixelLink, without fine-tuning on curved text. We attribute this excellent generalization ability to the proposed flexible representation. Instead of taking text as a whole, the representation treats text as a collection of local elements and integrates them together to make decisions. Local attributes are kept when the elements are formed into a whole. Besides, they are independent of each other. Therefore, the final predictions of our method can retain most information of the shape and course of the text. We believe that this is the main reason for the capacity of the proposed text detection algorithm in hunting text instances with various shapes.
Conclusion and Future Work ========================== In this paper, we present a novel, flexible representation for describing the properties of scene text with arbitrary shapes, including horizontal, multi-oriented and curved text instances. The proposed text detection method based upon this representation obtains state-of-the-art or comparable performance on two newly-released benchmarks for curved text (Total-Text and SCUT-CTW1500) as well as two widely-used datasets (ICDAR 2015 and MSRA-TD500) in this field, proving the effectiveness of the proposed method. As for future work, we would explore the direction of developing an end-to-end recognition system for text of arbitrary shapes.
--- abstract: 'We provide a tight result for a fundamental problem arising from packing disks into a circular container: The critical density of packing disks in a disk is 0.5. This implies that any set of (not necessarily equal) disks of total area $\delta\leq 1/2$ can always be packed into a disk of area 1; on the other hand, for any $\varepsilon>0$ there are sets of disks of area $1/2+\varepsilon$ that cannot be packed. The proof uses a careful manual analysis, complemented by a minor automatic part that is based on interval arithmetic. Beyond the basic mathematical importance, our result is also useful as a blackbox lemma for the analysis of recursive packing algorithms.' author: - 'Sándor P. Fekete' - Phillip Keldenich - Christian Scheffer bibliography: - 'references.bib' title: 'Packing Disks into Disks with Optimal Worst-Case Density' --- Introduction ============ Preliminaries ============= A Worst-Case Optimal Algorithm {#sec:algorithm} ============================== Analysis of the Algorithm {#sec:analysis} ========================= Hardness {#sec:hardness} ======== Conclusions {#sec:conc} ===========
--- abstract: 'We consider the numerical analysis of the inchworm Monte Carlo method, which is proposed recently to tackle the numerical sign problem for open quantum systems. We focus on the growth of the numerical error with respect to the simulation time, for which the inchworm Monte Carlo method shows a flatter curve than the direct application of Monte Carlo method to the classical Dyson series. To better understand the underlying mechanism of the inchworm Monte Carlo method, we distinguish two types of exponential error growth, which are known as the numerical sign problem and the error amplification. The former is due to the fast growth of variance in the stochastic method, which can be observed from the Dyson series, and the latter comes from the evolution of the numerical solution. Our analysis demonstrates that the technique of partial resummation can be considered as a tool to balance these two types of error, and the inchworm Monte Carlo method is a successful case where the numerical sign problem is effectively suppressed by such means. We first demonstrate our idea in the context of ordinary differential equations, and then provide complete analysis for the inchworm Monte Carlo method. Several numerical experiments are carried out to verify our theoretical results.' author: - Zhenning Cai - Jianfeng Lu - Siyao Yang bibliography: - 'inchworm\_reference.bib' title: 'Numerical analysis for inchworm Monte Carlo method: Sign problem and error growth' --- Open quantum system ,inchworm Monte Carlo method ,numerical sign problem ,error growth Introduction ============ In quantum mechanics, an open quantum system refers to a quantum system interacting with the environment. In reality, no quantum system is absolutely isolated, and therefore the theory of open quantum systems has wide applications including quantum thermodynamics [@Esposito2009], quantum information science [@Shor1995], and quantum biology [@Asano2016]. 
Due to interaction with the environment, the quantum system is irreversible [@Manicino2018], and its master equation can be obtained by the Nakajima-Zwanzig projection technique [@Nakajima1958; @Zwanzig1960], which is an integro-differential equation showing that the dynamics is non-Markovian. When the coupling between the quantum system and the environment is weak, a Markovian approximation can be used to simplify the simulation [@Lindblad1976], while for non-Markovian simulations, one needs to apply more expensive methods such as QuAPI (quasi-adiabatic propagator path integral) [@Makri1992; @Makri1993] and HEOM (hierarchical equations of motion) [@Ishizaki2005]. In this paper, we are interested in the numerical analysis for the inchworm algorithm [@Chen2017; @Cai2020], which is a recently proposed diagrammatic Monte Carlo method for open quantum systems. The inchworm algorithm was originally proposed in [@Cohen2015] for impurity models. In [@Cai2020], the method is recast in a continuous form as an integro-differential equation, so that classical numerical techniques can be applied. In the integro-differential equation formulation of the inchworm method, the time derivative of the propagator is written as an expression involving an infinite series and high-dimensional integrals. Therefore, the numerical method involves both a Runge-Kutta part for time marching and a Monte Carlo part to deal with the series and integrals. In abstract form, we write the equation as $$\label{eq:general equation} \frac{\dd u}{\dd t} = \mathit{RHS} = \E_X R(X),$$ where $\E_X$ denotes the expectation with respect to the random variable $X$, and both $\mathit{RHS}$ and $R(X)$ are functions of the solution $u$. While our motivation comes from the inchworm method, equations of this type also arise in many other contexts, and thus our analysis applies more broadly.
Using the forward Euler method as the time integrator, combined with a Monte Carlo estimate of the $\mathit{RHS}$, the numerical scheme reads $$\label{eq:forward Euler} u_{n+1} = u_n + \frac{h}{N_s} \sum_{i=1}^{N_s} R(X_i^{(n)}),$$ where $h$ is the time step, and $X_i^{(n)}$ are random variables drawn from the probability distribution of $X$. Such a scheme is highly related to a number of existing methods such as the direct simulation Monte Carlo method [@Bird1963; @Bird1994], the stochastic gradient descent method [@Zhang2004], and the random batch method [@Jin2020]. The qDRIFT method proposed in [@Campbell2019] is also a variant of this scheme, obtained by replacing the forward Euler method with an exact solver in the context of Hamiltonian simulation. The scheme can be easily extended to general Runge-Kutta methods, which is found in [@Cai2020] to be useful in the simulation of open quantum systems. The numerical analysis of such a method has been carried out for differential-type equations in several cases [@Li2017; @Hu2019; @Jin2020]. When such methods are applied to systems with dissipation [@Li2017; @Li2020], the numerical error can be well controlled by the intrinsic property of the system for long-time simulations. However, in quantum mechanics, where the propagators remain unitary for any $t$, the error is often seen to grow rapidly with respect to time in real-time simulations [@Cai2018; @MacKernan2002], which is known as the “numerical sign problem”, or more specifically the “dynamical sign problem” in the context of open quantum systems [@Muhlbacher2008; @Werner2009; @Schiro2010]. The purpose of the inchworm Monte Carlo method is to mitigate the numerical sign problem when simulating the open quantum system by Dyson series expansion [@Chen2017; @Cai2020].
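As a minimal illustration of the forward Euler scheme with a Monte Carlo right-hand side, the sketch below (a toy example of our own, not from the paper) solves $u' = \E_X[Xu]$ with $X \sim \mathcal{N}(-1,1)$, so that the exact solution is $u(0)e^{-t}$:

```python
import numpy as np

def stochastic_euler(u0, T, N, Ns, rng):
    """Forward Euler with a fresh batch of Ns samples of X at every step."""
    h = T / N
    u = u0
    for _ in range(N):
        X = rng.normal(loc=-1.0, scale=1.0, size=Ns)  # i.i.d. samples of X
        rhs = np.mean(X * u)                          # Monte Carlo estimate of E_X[R(X)]
        u = u + h * rhs                               # forward Euler step
    return u

rng = np.random.default_rng(0)
u_num = stochastic_euler(u0=1.0, T=1.0, N=1000, Ns=10000, rng=rng)
print(abs(u_num - np.exp(-1.0)))  # small for fine h and large Ns
```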
The numerical sign problem, which will be further elaborated in § \[nsp\], refers to the stochastic error when applying the Monte Carlo method to estimate the sum or the integral of highly oscillatory, high-dimensional functions. This is an intrinsic and notorious difficulty for simulating many-body quantum systems, such as in condensed matter physics [@Loh1990] and lattice field theory [@Cristoforetti2012]. For open quantum systems, the numerical sign problem becomes more severe as the simulation time gets longer [@Muhlbacher2008; @Werner2009; @Schiro2010]. Specifically, the average numerical error is proportional to the exponential of $t^2$, which introduces great difficulty for long-time simulations. The inchworm Monte Carlo method adopts the idea of “partial resummation” and has successfully reduced the numerical error in a number of applications [@Dong2017; @Ridley2018; @Eidelstein2020]. However, as mentioned in [@Chen2017b], it is not totally clear how the inchworm Monte Carlo method mitigates the numerical sign problem, despite some intuition coming from the idea of partial resummation. In this work, our aim is to demystify this mechanism by taking a close look at the evolution of the numerical error. We find that the error of the inchworm Monte Carlo method grows as the exponential of a polynomial of $t$. However, the source of such error growth is not the numerical sign problem. The fast growth mainly comes from the amplification of the error at previous time steps, which is more similar to the error amplification in Runge-Kutta methods for ordinary differential equations. By separating these two types of error growth, namely the numerical sign problem and error amplification, we find that partial resummation can be regarded as a tool to trade off the two types of error, which may help flatten the error growth curve in certain cases.
We hope this also helps understand a more general class of iterative numerical methods for computing summations [@Makri1995a; @Prokof'ev2007; @Makri2017; @Li2019]. In fact, such an understanding of the error balance can already be revealed in the context of ODEs, which we will focus on first to avoid the involved notation of the inchworm Monte Carlo method. Thus, in Section \[sec: diff eq\], we carry out the error analysis of differential equations for general Runge-Kutta methods with Monte Carlo evaluation of the right-hand side. The results are of independent interest, and the analysis also serves as a simple context for understanding how partial resummation transforms the mechanism of error growth from the numerical sign problem to error amplification. Afterwards, a detailed analysis of the inchworm Monte Carlo method will be given, which reveals the behavior of its error growth and explains whether and how it relaxes the numerical sign problem. In Section \[sec: int diff eq\], we will introduce the integro-differential equation derived from the inchworm Monte Carlo method and present the corresponding main results and their implications. Our analytical results are verified by several numerical tests in Section \[sec: numer exp\], showing the agreement between the theory and the experiments. The rigorous proofs for the error analysis of the differential equation and the inchworm Monte Carlo equation are given later in Section \[sec: proof diff eq\] and Section \[sec: proof\], respectively. Finally, some concluding remarks are given in Section \[sec: conclusion\].
A stochastic numerical method for differential equations {#sec: diff eq} ======================================================== To demonstrate the methodology of numerical analysis for equations of this form, we first consider the simple case of an ordinary differential equation: $$\label{eq: diff eq} \frac{\dd u}{\dd t} = f(t,u(t)), \quad t \in [0,T],$$ where $u:[0,T] \rightarrow \C^{d}$ and the right-hand side $f$ is $(p+1)$-times continuously differentiable. A general $s$-stage explicit Runge-Kutta method of order $p$ reads $$\label{def: scheme diff rk} \begin{aligned} & u_{n+1} = u_n + h \sum^s_{i=1} b_i k_i, \\ & k_i = f\Big(t_n + c_i h,u_n + h\sum^{i-1}_{j=1}a_{ij}k_j \Big), \quad i = 1,\cdots,s. \end{aligned}$$ For simplicity, we assume that the time step $h = T/N$ is a constant, and $u_0$ is given by the initial condition $u_0 = u(0)$. The error estimation of Runge-Kutta methods is standard and can be found in textbooks such as [@Hairer:1993:SOD:153158]. As mentioned in the introduction, we now consider a special scenario where the right-hand side of this equation can be represented as the expectation of a stochastic variable: $$\label{def: g} f(t,u) = \E_X [g(t,u,X)], \qquad \forall t, u$$ with $X$ being a random variable subject to a given distribution. Inspired by the Runge-Kutta method, we consider the following numerical scheme: $$\label{def: scheme diff mc} \tu_{n+1} = \tu_n + h \sum^s_{i=1} b_i \tk_i,~0 \le n \le N$$ where $$\tk_i = \frac{1}{N_s} \sum^{N_s}_{l = 1} g\Big(t_n + c_i h, \tu_n + h \sum^{i-1}_{j = 1} a_{ij} \tk_j,X^{(i)}_l\Big)$$ with the initial condition $\tu_0 = u(0)$. Here $X^{(i)}_l$ are independent samples generated from the probability distribution of $X$. In this section, we will study the gap between these two numerical methods. Specifically, we aim to bound the *bias* $\|\E(u_N-\tu_N)\|_2$ and the *numerical error* $[\E(\|u_N-\tu_N\|_2^2)]^{1/2}$.
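The multi-stage stochastic scheme above can be sketched as follows; the Butcher tableau is Heun's second-order method, and the toy integrand $g(t,u,X) = Xu$ with $X \sim \mathcal{N}(-1,1)$ (so that $f(t,u) = -u$, with exact solution $u(0)e^{-t}$) is our own choice, not from the paper. Note that each stage $\tk_i$ draws a fresh batch of samples $X^{(i)}_l$:

```python
import numpy as np

# Heun's method tableau (A, b, c)
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = np.array([0.0, 1.0])

def g(t, u, X):
    return X * u  # E_X[g] = -u since E[X] = -1

def stochastic_rk(u0, T, N, Ns, rng):
    h, u = T / N, u0
    for n in range(N):
        k = np.zeros(len(b))
        for i in range(len(b)):
            ui = u + h * sum(A[i, j] * k[j] for j in range(i))
            X = rng.normal(-1.0, 1.0, size=Ns)   # fresh samples per stage
            k[i] = np.mean(g(n * h + c[i] * h, ui, X))
        u = u + h * np.dot(b, k)
    return u

rng = np.random.default_rng(0)
u_num = stochastic_rk(1.0, 1.0, 200, 5000, rng)
print(abs(u_num - np.exp(-1.0)))  # small for large Ns
```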
By combining these errors with the error estimation of Runge-Kutta methods, the final error bounds can be obtained simply by the triangle inequality: $$\begin{aligned} \|\E(u(T)-\tu_N)\|_2 &\le \|u(T) - u_N\|_2 + \|\E(u_N - \tu_N)\|_2, \\ [\E(\|u(T)-\tu_N\|^2_2)]^{1/2} &\le \|u(T) - u_N\|_2 + [\E(\|(u_N - \tu_N)\|^2_2)]^{1/2},\end{aligned}$$ where $\|u(T) - u_N\|_2$ is the numerical error of the standard Runge-Kutta method, whose analysis can be found in a number of textbooks. Main results for differential equations {#sec: diff results} --------------------------------------- In this section, we will list the main results of our error analysis. The results are based on the following working hypothesis: $$\label{assump: bd} \|\nabla_u g^{(m)}(t,u,X)\|_2 \le M', \quad \|\nabla_u f^{(m)}(t,u) \|_2 \le M', \quad \|\nabla_{u}^2 f^{(m)}(t,u)\|_{{\mathrm{F}}} \le M'',$$ where $f^{(m)}$ denotes the $m$th component of $f$, and $M'$ and $M''$ are constants independent of $t$, $u$ and $X$. For the Runge-Kutta solution, we define $$\rkn = \max_n \max_{i=1,\cdots,s} \sqrt{\operatorname{Var}g\Big(t_n + c_i h, u_n + h \sum_{j=1}^{i-1} a_{ij} k_j, X \Big)}.$$ For simplicity, below we use $R$ to denote the upper bound of all the coefficients appearing in the Runge-Kutta method. Precisely, we assume that $$\label{assump: rk bd} |a_{ij}|,|b_i|,|c_i|\le R \text{~for all~}i,j.$$ Thus, the recurrence relations of both the bias and the numerical error can be established as: \[thm: diff recurrence relations\] Assume that the time step length $h$ is sufficiently small and the number of samples $N_s$ at each step is sufficiently large.
If the boundedness assumptions hold, we have the recurrence relations $$\label{eq: diff recurrence 1} \|\E(u_{n+1}-\tu_{n+1})\|_2 \le \ (1+\alpha h)\|\E(u_{n}-\tu_{n})\|_2 + \alpha h \Big( \E\big(\|u_n -\tu_n\|_2^2\big) + \frac{h^2}{N_s} \rkn^2 \Big)$$ and $$\label{eq: diff recurrence 2} \E(\|u_{n+1}-\tu_{n+1}\|_2^2) \le (1+\beta h ) \E(\|u_{n}-\tu_{n}\|_2^2) + \beta \Big( \frac{h^2}{N_s} \rkn^2 + \frac{\alpha^2 h^5}{s^2 R^2 N_s^2} \rkn^4 \Big),$$ where $\alpha = 2^s sR \max(M', s M'', 2^s s^3 M'^2 M'' R^2, 2^s s^2M'' R^2)$ and $\beta = \max(4+2^{s+2}M'^2 d R^2 s^3, 2^{s+1} R^2 s^2)$. Next, we apply the two recurrence relations above and accumulate the two errors step by step. We will reach the following estimates: \[thm: diff bounds\] Under the settings in Proposition \[thm: diff recurrence relations\], we have Bias estimation $$\label{diff 1st error upper bound} \|\E(u_{N}-\tu_{N})\|_2 \le \left[ \frac{h^2}{N_s}\big( e^{\alpha T}-1\big) + \alpha T \Big( \frac{h}{N_s} + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^2 \Big) \big(e^{\max(\alpha,\beta) T} -1\big) \right] \rkn^2.$$ Numerical error estimation $$\label{diff 2ed error upper bound} \E(\|u_N-\tu_N\|_2^2) \le \big( e^{\beta T}-1\big) \Big( \frac{h}{N_s} + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^2 \Big) \rkn^2.$$ This theorem shows that although the stochastic scheme is biased, the bias plays a minor role in the numerical simulation since the stochastic noise, estimated by the square root of , is significantly larger. It is also worth mentioning that the constant $\beta$ depends only on the Runge-Kutta scheme and the bound of the first-order derivatives, while the constant $\alpha$ depends also on the bounds of the second-order derivatives. However, in the estimate , the constant $\alpha$ only appears in a term significantly smaller than $h/N_s$. Thus, it is expected that the second-order derivatives have less effect on the numerical error. 
This is also the case for the inchworm Monte Carlo method to be analyzed in Section \[sec: proof\], and accordingly we will give fewer details for the estimate involving second-order derivatives. We will defer the proofs of these theorems. For now, let us discuss the implications of these theorems and how they are related to the numerical sign problem in the quantum Monte Carlo method. Discussion on the relation between the numerical error and the numerical sign problem {#nsp} ------------------------------------------------------------------------------------- The above error estimation shows exponential growth of the numerical error with respect to time. Such exponential growth is due to the amplification of the error at previous time steps, as is well known in the numerical analysis of ordinary differential equations, which is often estimated by the discrete Gronwall inequality. There exists another kind of exponential growth of error, which is typically encountered in the stochastic simulation of quantum mechanical systems, called the “numerical sign problem” [@Loh1990]. Such a problem occurs when using the Monte Carlo method to evaluate the integral or sum of a strongly oscillatory high-dimensional function. To understand how the numerical sign problem causes the exponential growth of the numerical error, we consider the case where $f(t,u) = -\ii H(t) u$ and $g(t,u,X) = -\ii A(t,X)u$. As an analog of quantum mechanics, we assume that $H(t)$ is a Hermitian matrix, so that $\|u(t)\|_2^2 = \|u(0)\|_2^2$ for any $t$. The solution of this system of ordinary differential equations can be expressed by the Dyson series: $$\label{dyson} \begin{split} u(T) = u(0) + \sum_{M=1}^{+\infty} \int_0^T \int_0^{t_M} \cdots \int_0^{t_2} & (-\ii)^M \big( \E_{X_M} A(t_M,X_M) \big) \big( \E_{X_{M-1}} A(t_{M-1},X_{M-1}) \big) \\ & \cdots \big( \E_{X_1} A(t_1,X_1) \big) u(0) \,\dd t_1 \cdots \,\dd t_{M-1} \,\dd t_M.
\end{split}$$ Thus $u(T)$ can be evaluated directly using the Monte Carlo method to approximate the integral, where $M, t_1, \cdots, t_M$ and $X_1, \cdots, X_M$ are all treated as random variables. While different methods to draw samples exist [@Cai2018], here we only consider the simplest approach, in which $M$ follows the Poisson distribution and the time points $(t_1, t_2, \cdots, t_M)$ are uniformly distributed in the $M$-dimensional simplex. Let $u^{(\mathrm{num})}(T)$ be the numerical solution obtained using this method. Then the standard error estimation of the Monte Carlo method yields that $$\label{eq:variance} \begin{split} & \E \|u^{(\mathrm{num})}(T) - u(T)\|^2 \\ \leqslant{} & \sum_{M=0}^{+\infty} \int_0^T \int_0^{t_M} \cdots \int_0^{t_2} \E \| A(t_M,X_M) A(t_{M-1},X_{M-1}) \cdots A(t_1,X_1) u(0) \|_2^2 \,\dd t_1 \cdots \,\dd t_{M-1} \,\dd t_M - \|u(T)\|_2^2 \\ \leqslant{} & [\exp (d M'^2 T) - 1] \|u(0)\|_2^2, \end{split}$$ where $d$ is the number of dimensions of $u$, and $M'$ is again the bound of the first-order derivatives defined in . It can be observed that the numerical error again grows exponentially in time. However, such exponential growth of the error is not due to the error amplification in the Gronwall inequality. It comes from the growing integral domain on the right-hand side of . Note that although the integral domain expands as $T$ increases, the magnitude of the infinite sum does not grow over time ($\|u(T)\|_2 = \|u(0)\|_2$). This indicates that when $T$ is large, strong oscillation exists in the integrand of , resulting in significant cancellation when taking the sum, which leads to huge variance for Monte Carlo estimation. This intrinsic difficulty of stochastic methods is known as the numerical sign problem. As we will see later, if not carefully dealt with, the numerical sign problem can cause even faster growth of the numerical error.
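The variance growth can be observed on a scalar toy model (our own illustration, not the paper's experiment): take $A(t,X) = X$ with $\E X = 1$, so that $u(T) = e^{-\ii T}u(0)$. Sampling $M$ from the Poisson distribution with mean $T$ turns the Dyson series into a single expectation, where the factor $e^T$ compensates the Poisson weights; since $A$ does not depend on $t$ here, the time points need not be sampled:

```python
import numpy as np

def dyson_mc(T, u0, n_samples, rng):
    """Direct Monte Carlo estimator of the scalar Dyson series."""
    est = np.empty(n_samples, dtype=complex)
    for k in range(n_samples):
        M = rng.poisson(T)                  # series order, P(M) = e^{-T} T^M / M!
        X = rng.normal(1.0, 0.5, size=M)    # one A-sample per time point
        est[k] = np.exp(T) * (-1j) ** M * np.prod(X) * u0
    return est

rng = np.random.default_rng(1)
samples = dyson_mc(T=1.0, u0=1.0, n_samples=100000, rng=rng)
print(np.mean(samples))  # close to exp(-1j)
print(np.std(samples))   # large, and it grows rapidly with T (sign problem)
```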
One possible approach to mitigating the numerical sign problem is to use the method of *partial resummation*, which only takes part of the summation (instead of the whole integral), and uses the result to find other parts of the sum. For example, suppose we want to compute the infinite sum $$s = 1+a+a^2+a^3+\cdots.$$ We can choose to take the sum directly using the Monte Carlo method. Alternatively, we can also first take the partial sum $s_1 = 1+a$, and then use the result of $s_1$ to compute another partial sum $s_3 = (1 + a^2) s_1$. Afterwards, $s_3$ can be used to compute $s_7 = (1 + a^4) s_3$, and so forth. It can be seen that the error in the computation of $s_1$ will be amplified when computing $s_3$, and the error of $s_3$ will be amplified in the computation of $s_7$. This illustrates the idea of partial resummation, which partly transfers the sign problem to error amplification. Below we would like to demonstrate that sometimes the Runge-Kutta method can also be considered as a partial resummation of the Dyson series, which changes the underlying mechanism of the error growth. Consider applying the forward Euler method to the equation $$\label{eq: toy model} \frac{\dd u}{\dd t} = -\ii H(t)u, \qquad H(t) = \E_X A(t,X),$$ so that the numerical scheme is $$\tu_{n+1} = \tu_n -\ii \frac{h}{N_s} \sum_{l=1}^{N_s} A(t_n, X_l^{(n)}) \tu_n.$$ When $n = 0$, the scheme gives $$\label{eq: tu1} \tu_1 = u(0) -\ii \underline{\frac{h}{N_s} \sum_{l=1}^{N_s} A(0, X_l^{(0)}) u(0)}.$$ If we view the underlined term as a special Monte Carlo method to evaluate the integral $$\int_0^h \E A(t_1, X) u(0) \,\dd t_1,$$ for which we only take one sample of $t_1$ located at $t_1=0$, then $\tu_1$ turns out to be part of the right-hand side of the Dyson series, obtained by reducing the integration domain from $T$ to $h$ and considering only $M=1$.
Next, this partial sum $\tu_1$ is used to compute $\tu_2$: $$\begin{split} \tu_2 &= \tu_1 -\ii \frac{h}{N_s} \sum_{l=1}^{N_s} A(t_1, X_l^{(1)}) \tu_1 \\ &= u(0) -\ii \frac{2h}{N_s} \sum_{l=1}^{N_s} \frac{A(0, X_l^{(0)}) + A(h,X_l^{(1)})}{2} u(0) + (-\ii)^2 \frac{h^2}{N_s^2} \left( \sum_{l=1}^{N_s} A(h, X_l^{(1)}) \right) \left( \sum_{l=1}^{N_s} A(0, X_l^{(0)}) \right) u(0) \\ & \approx u(0) + \int_0^{2h} (-\ii) \E A(t_1, X) u(0) \,\dd t_1 + \frac{1}{2} \int_0^{2h} \int_0^{t_2} (-\ii)^2 (\E A(t_2, X))(\E A(t_1, X)) u(0) \,\dd t_1 \,\dd t_2, \end{split}$$ which can again be considered as a partial sum of the Dyson series. For further time steps, this can also be verified. As is well known, the error of the forward Euler method may accumulate as the solution evolves, and therefore this example again shows the change of the mechanism for the growth of the error. However, our error estimate in Theorem \[thm: diff bounds\] seems to suggest that shifting the numerical sign problem to error amplification does not flatten the error curve, which still grows exponentially. The reason is that our error estimation does not make any assumption on the stability of the Runge-Kutta method, which leads to an exponential bound on the error growth regardless of the scheme and the problem. Again, let us take the toy model as an example and apply the second-order Heun’s method.
Then the deterministic scheme and the stochastic scheme are, respectively, $$\label{eq:two schemes} u_{n+1} = (I - h \mathcal{L}) u_n \quad \text{and} \quad \tilde{u}_{n+1} = (I - h \mathcal{A}) \tilde{u}_n,$$ where $$\begin{aligned} \mathcal{L} &= \frac{1}{2} \left[ \ii \Big( H(t_n) + H(t_{n+1}) \Big) + h H(t_{n+1}) H(t_n) \right], \\ \mathcal{A} &= \frac{1}{2} \left[ \frac{\ii}{N_s} \sum_{l=1}^{N_s} \Big( A(t_n, X_l^{(1)}) + A(t_{n+1}, X_l^{(2)}) \Big) + h \left( \frac{1}{N_s} \sum_{l=1}^{N_s} A(t_{n+1}, X_l^{(2)})\right) \left( \frac{1}{N_s} \sum_{l=1}^{N_s} A(t_n, X_l^{(1)})\right) \right].\end{aligned}$$ By straightforward calculation, we can find that $$\E \|u_{n+1} - \tilde{u}_{n+1}\|_2^2 \leqslant \|I-h\mathcal{L}\|_2^2 \, \E\|u_n - \tilde{u}_n\|_2^2 + h^2 \E\|(\mathcal{L} - \mathcal{A}) \tilde{u}_n\|_2^2.$$ On the right-hand side, the second term can be bounded by the standard Monte Carlo error estimate. For the first term, if we assume that $H(t_{n+1})$ and $H(t_n)$ are both Hermitian matrices, and $H(t_{n+1}) - H(t_n) = O(h)$, then $\|I-h\mathcal{L}\|_2^2 = 1 + O(h^4)$. Therefore when $h$ is small, the exponential growth of the error can be well suppressed, since $$\lim_{h\rightarrow 0^+} (1 + Ch^4)^{T/h} = 1$$ for any positive constants $C$ and $T$. However, for large time steps such that $\|I - h\mathcal{L}\|_2$ is significantly larger than $1$, the error still grows exponentially. In general, for a stable Runge-Kutta scheme, the constant $\beta$ in the coefficient $(1+\beta h)$ appearing in can be negative or a positive $o(1)$ quantity. In this case, partial resummation indeed helps reduce the error growth. One example of applications is the method of qDRIFT proposed in [@Campbell2019], where the total Hamiltonian is also computed using a stochastic method. A symplectic time integrator is utilized therein so that the error growth is also well suppressed. 
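The bound $\|I-h\mathcal{L}\|_2^2 = 1 + O(h^4)$ can be checked numerically; in the sketch below, $H(t)$ is a hypothetical slowly varying Hermitian matrix chosen purely for illustration (so $H(t_{n+1}) - H(t_n) = O(h)$ holds):

```python
import numpy as np

def H(t):
    # hypothetical 2x2 Hermitian matrix, slowly varying in t (illustration only)
    return np.array([[1.0, 0.3 + 0.1j * t],
                     [0.3 - 0.1j * t, -0.5]])

def amplification_norm(h):
    """Spectral norm of the one-step Heun amplification matrix I - h*L."""
    H0, H1 = H(0.0), H(h)
    L = 0.5 * (1j * (H0 + H1) + h * (H1 @ H0))
    return np.linalg.norm(np.eye(2) - h * L, ord=2)

n1 = amplification_norm(0.1)
n2 = amplification_norm(0.05)
print(n1 - 1.0, n2 - 1.0)  # both tiny; the ratio is roughly 16, i.e. O(h^4)
```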
Note that the paper [@Campbell2019] provides only an estimate of the bias for qDRIFT, but does not consider the full numerical error. As we have discussed, the bias is in fact not the major part of the error. The analysis of such a simple ODE sketches the idea of how the numerical sign problem can be mitigated in algorithms with partial resummation. For open quantum systems, the inchworm Monte Carlo method, which has been claimed to have the capability of taming the numerical sign problem [@Cohen2015], is one option for applying partial resummation to the corresponding Dyson series. However, due to the existence of the heat bath, the evolution of the quantum state is non-Markovian, and the equation that the inchworm Monte Carlo method solves can only be formulated as an integro-differential equation, so that one cannot simply apply a symplectic scheme to suppress the error growth. As a result, the situation for the inchworm Monte Carlo method is much more complicated owing to the nontrivial behavior of the error amplification. A detailed introduction will be given in the next section. Inchworm Monte Carlo method and the main results {#sec: int diff eq} ================================================ We now study the integro-differential equation induced by the inchworm method for open quantum systems. The integro-differential equation is introduced in [@Cai2020] for the full propagator $\Ge: \mathcal{T} \rightarrow \C^{n\times n}$, where the subscript $e$ stands for “exact” and $\mathcal{T} = \{(s_f, s_i) \mid s_f \geqslant s_i \geqslant 0\}$. Here $n$ is the dimension of the quantum states of the system. In this paper, we only consider the case $n=2$ for simplicity; the analysis can be extended to higher dimensions without difficulty. 
The equation reads $$\label{eq: inchworm equation} \frac{\partial \Ge(\sa,s_i)}{\partial \sa} = \sgn(\sa - t) \ii H_s \Ge(\sa,s_i) + \Hs(\sa,\Ge,s_i),$$ where $$\label{eq: calH} \Hs(\sa,\Ge,s_i) := \sgn(\sa - t)\sum^{\bar{M}}_{\substack{M=1\\ M \text{~is odd~}}} \ii^{M+1} \int_{\sa > s_M^M > \cdots > s_1^M > s_i } (-1)^{\#\{\vec{\sb}^M \le t\}}W_s \Us(\sa,\vec{\sb}^M,s_i) \Ls(\sa,\vec{\sb}^M) \,\dd\vec{\sb}^M .$$ Here we use $\vec{\sb}^M$ as a shorthand for the sequence $s_M^M, s_{M-1}^M, \cdots, s_2^M, s_1^M$, and the integral with respect to $\vec{\sb}^M$ is interpreted as $$\int_{\sa > s_M^M > \cdots > s_1^M > s_i} \varphi \,\dd\vec{\sb}^M = \int_{s_i}^{\sa} \int_{s_1^M}^{\sa} \cdots \int_{s_{M-1}^M}^{\sa} \varphi \,\dd s_M^M \cdots \,\dd s_2^M \,\dd s_1^M.$$ Other symbols appearing in are introduced as follows: - $\Us(\sa,\vec{\sb}^M,s_i) = \Ge(\sa,s_M^M) W_s \Ge(s_M^M,s_{M-1}^M) \cdots W_s \Ge(s_1^M,s_i)$; - $H_s \in \C^{2\times2}$: the Hamiltonian of the quantum system we are interested in; - $W_s \in \C^{2\times2}$: the perturbation of the Hamiltonian due to coupling with the environment; - $O_s \in \C^{2\times2}$: the observable of the quantum system, acting at time $t$; - $\Ls$: the bath influence functional with the form of the sum of products (see [@Cai2020] for details). The equation holds for the initial time point $s_i\in[0,2t]\backslash \{t\}$ and the final time point $\sa \in [s_i,2t]\backslash \{t\}$, and the full propagator satisfies the “jump conditions” at time $t$: $$\label{eq: jump condition} \begin{split} &\lim_{\sa \rightarrow t^+} \Ge(\sa,s_i) = O_s \lim_{\sa \rightarrow t^-}\Ge(\sa,s_i), \\ &\lim_{s_i \rightarrow t^-} \Ge(\sa,s_i) = \lim_{s_i \rightarrow t^+}\Ge(\sa,s_i)O_s \end{split}$$ as well as the boundary condition $\Ge(\sa,\sa) = \text{Id}$. Here we remark that the original formula of this integro-differential equation is given with $\bar{M}=\infty$ in [@Cai2020]. 
However, in practice, we truncate the series by a finite $\bar{M}$ as an approximation. In fact, the major benefit of the inchworm method compared with the classical Dyson series expansion is just the fast convergence of this infinite series [@Chen2017; @Chen2017b]. Therefore, in this paper we restrict ourselves to the case of a finite $\bar{M}$. Similar to the case of the differential equation, we may use a general explicit time integrator to solve this integro-differential equation numerically. In this work, we focus on the numerical method proposed in [@Cai2020], which is inspired by the second-order Heun’s method: $$\label{def: scheme 1} \begin{split} &G^*_{n+1,m} = (I+\sgn(t_n - t) \ii H_s h)G_{n,m} + K_1 h, \\ &G_{n+1,m} = (I + \frac{1}{2}\sgn(t_n - t)\ii H_s h )G_{n,m} +\frac{1}{2}\sgn(t_{n+1} - t)\ii H_s h G^*_{n+1,m} + \frac{1}{2}(K_1+K_2) h, \quad 0 \le m\le n\le 2N, \end{split}$$ where $h=t/N$ (we again require $h \le 1$) is the time step length, and $G_{n,m}$ denotes the numerical approximation of the solution $\Ge(nh, mh)$. Different from the standard Heun’s method for ODEs, the slope $K_1$ has to be computed based on a number of previous numerical solutions $$\label{eq: gbnm} \gb_{n,m}:=(G_{m+1,m};G_{m+2,m+1},G_{m+2,m};\cdots;G_{n,n-1},\cdots,G_{n,m}).$$ The explicit expression for $K_1$ is given by $$\begin{gathered} \label{def: scheme 1 k1} K_1 = F_1( \gb_{n,m}) := \sgn(t_n - t) \sum^{\bar{M}}_{\substack{M=1 \\M \text{~is odd~}}} \ii^{M+1} \int_{t_n > s_M^M > \cdots > s_1^M > t_m } (-1)^{\#\{\vec{\sb}^M \le t\}}W_s I_h G(t_n,s_M^M) W_s \cdots W_s \times \\ \times I_h G(s_1^M,t_m) \Ls(t_n,\vec{\sb}^M) \dd \vec{\sb}^M, \end{gathered}$$ where $I_h G(\cdot,\cdot)$ is obtained by piecewise linear interpolation on the triangular mesh shown in Figure \[fig:mesh and order\] such that $I_h G(t_j,t_k) = G_{j,k}$ for all integers $m\le k\le j\le n$. 
Similarly, $K_2$ is given by $$\begin{gathered} K_2 = F_2(\gb^*_{n,m}) := \sgn(t_{n+1} - t) \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \ii^{M+1} \int_{t_{n+1} > s_M^M > \cdots > s_1^M > t_m } (-1)^{\#\{\vec{\sb}^M\le t\}}W_s I^*_h G(t_{n+1},s_M^M) W_s \cdots W_s \times \\ \times I^*_h G(s_1^M,t_m) \Ls(t_{n+1},\vec{\sb}^M) \dd\vec{\sb}^M \end{gathered}$$ where $\gb^*_{n,m}:=(\gb_{n,m};G_{n+1,n},\cdots,G_{n+1,m+1},G^*_{n+1,m})$ and $I^*_h G(\cdot,\cdot)$ is the linear interpolation such that $$I^*_h G(t_j,t_k) = \begin{cases} G_{j,k}, & \text{if } (j,k) \neq (n+1,m), \\ G^*_{n+1,m}, & \text{if } (j,k) = (n+1,m). \end{cases}$$ To implement this scheme, we compute each $G_{j,k}$ in the order illustrated in Figure \[fig:mesh and order\]. Specifically, we calculate the propagators column by column from left to right, and for each column we start from the boundary value $G_{i,i} = \text{Id}$ (marked in red) located on the diagonal and compute from top to bottom. Due to the jump conditions , we need a special treatment for the two discontinuities at $G_{N,k}$ (marked in green) and $G_{j,N}$ (marked in blue) to achieve second-order convergence. In the numerical scheme, we keep two copies of $G_{N,k}$ or $G_{j,N}$ representing the left- and right-limits when $s \to t^{\pm}$: $$(G_{N^+,k},G_{N^-,k}) \text{~and~} (G_{j,N^+},G_{j,N^-}) \text{~for~} 0\le k \le N-1,N+1 \le j \le 2N.$$ Here $G_{N^{\pm},k}$ and $G_{j,N^{\pm}}$ are, respectively, the approximation of $\lim\limits_{s \rightarrow t^{\pm}} G(s, kh)$ and $\lim\limits_{s \rightarrow t^{\pm}} G(j h, s)$. The relations $G_{N^+,k} = O_s G_{N^-,k}$ and $G_{j,N^-} = G_{j,N^+} O_s$ follow immediately from the jump conditions. Moreover, we note that the boundary values at the discontinuities are given by $G_{N^+,N^+}=G_{N^-,N^-} =\text{Id}$ and $G_{N^+,N^-} = O_s$. 
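The traversal just described can be sketched as follows; a minimal sketch, assuming that the columns correspond to the final-time index $n$ and omitting the two extra copies kept at the discontinuity index $N$:

```python
# Sketch of the evaluation order for the propagators G_{n,m}, 0 <= m <= n <= 2N:
# columns (fixed n) from left to right, each starting at the diagonal boundary
# value and moving away from it, so every G_{n,m} is computed after all entries
# it depends on (columns j < n, and G_{n,k} with k > m in the same column).
N = 3
order = []
for n in range(2 * N + 1):              # columns: left to right
    order.append((n, n))                # boundary value G_{n,n} = Id
    for m in range(n - 1, -1, -1):      # then move away from the diagonal
        order.append((n, m))
print(order[:5])                        # [(0, 0), (1, 1), (1, 0), (2, 2), (2, 1)]
```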
In the implementation, we need to follow the rules below while evolving the scheme near the discontinuities: - When $n = N-1$, the quantities $G_{n+1,m}^*$ and $G_{n+1,m}$ are regarded as $G_{N^-,m}^*$ and $G_{N^-,m}$, respectively, and $\sgn(t_{n+1}-t)$ takes the value $-1$. When $n = N$, the propagator $G_{n,m}$ is regarded as $G_{N^+,m}$, and $\sgn(t_n-t)$ takes the value $1$. - The value of $G_{N^+,m}$ is set to be $O_s G_{N^-,m}$; the value of $G_{n+1,N^-}$ is set to be $G_{n+1,N^+} O_s$. - $\sgn(t_{N^-} - t) = -1$, $\sgn(t_{N^+} - t) = 1$. - The interpolation of $I_h G$ and $I_h^* G$ should respect such discontinuities. For example, the interpolating operator $I_h$ should satisfy $$\begin{gathered} \lim_{s\rightarrow t^{\pm}} I_h G(t_j, s) = G_{j,N^{\pm}}, \qquad \lim_{s\rightarrow t^{\pm}} I_h G(s, t_k) = G_{N^{\pm},k}, \\ \lim_{s\rightarrow t^+} \lim_{\tilde{s} \rightarrow t^+} I_h G(\tilde{s}, s) = \lim_{s\rightarrow t^-} \lim_{\tilde{s} \rightarrow t^-} I_h G(s, \tilde{s}) = \text{Id}, \qquad \lim_{s\rightarrow t^+} \lim_{\tilde{s} \rightarrow t^-} I_h G(s, \tilde{s}) = O_s. \end{gathered}$$ The conditions for $I_h^* G$ are similar. As we will prove later, the above numerical method guarantees a second-order approximation of the solution. However, the computational cost is not affordable when $M$ is large, since the degrees of freedom for calculating the integral with respect to $\vec{\sb}^M$ grow exponentially w.r.t. $M$. 
Therefore, we take advantage of Monte Carlo integration and replace the integrals by the averages of Monte Carlo samples, resulting in the following *inchworm Monte Carlo method*: $$\label{def: scheme 2} \begin{split} &\tG^*_{n+1,m} = (I+\sgn(t_n - t) \ii H_s h)\tG_{n,m} + \tK_1 h, \\ &\tG_{n+1,m} = (I + \frac{1}{2}\sgn(t_n - t)\ii H_s h )\tG_{n,m} +\frac{1}{2}\sgn(t_{n+1} - t)\ii H_s h \tG^*_{n+1,m} + \frac{1}{2}(\tK_1+\tK_2) h, \quad 0 \le m\le n\le 2N, \end{split}$$ with $$\begin{aligned} \tK_1 & = \frac{1}{N_s}\sum_{i=1}^{N_s} \widetilde{F}_1( \tg_{n,m};\vec{\sb}^i)\\ & := \frac{\sgn(t_n - t)}{N_s} \sum_{i=1}^{N_s}\sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \ii^{M+1} \frac{(t_n-t_m)^M}{M!} (-1)^{\#\{\vec{\sb}^{i,M} \le t\}}W_s I_h\tG(t_n,s^{i,M}_M) W_s \cdots W_s \times \\ & \hspace{150pt} \times I_h\tG(s^{i,M}_1,t_m) \Ls(t_n,\vec{\sb}^{i,M}),\end{aligned}$$ where $N_s$ denotes the number of samples and $\vec{\sb}^{i,M}=(s_1^{i,M},s_2^{i,M},\cdots,s_M^{i,M})$ is the time sequence obtained via uniform sampling $\vec{\sb}^{i,M} \sim U(t_m,t_n)$. Similarly, we define $$\begin{aligned} \tK_2 & = \frac{1}{N_s}\sum_{i=1}^{N_s} \widetilde{F}_2(\tg^*_{n,m};\vec{\sb}^{*i}) \\ & := \frac{\sgn(t_{n+1} - t)}{N_s} \sum_{i=1}^{N_s} \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \ii^{M+1} \frac{(t_{n+1}-t_m)^M}{M!} (-1)^{\#\{\vec{\sb}^{*i,M} \le t\}}W_s I^*_h \tG(t_{n+1},s_M^{*i,M}) W_s \cdots W_s \times \\ & \hspace{150pt} \times I^*_h \tG(s_1^{*i,M},t_m) \Ls(t_{n+1},\vec{\sb}^{*i,M})\end{aligned}$$ with the samples $\vec{\sb}^{*i,M} = (s_1^{*i,M},s_2^{*i,M},\cdots,s_M^{*i,M}) \sim U(t_m, t_{n+1})$. Our goal in this section is to understand how the error evolves over time, and to compare this method with the classical approach based on the Dyson series expansion [@Dyson1949]. 
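The sampling rule behind $\tK_1$ and $\tK_2$, i.e. replacing an integral over the ordered simplex by sorted uniform samples weighted by $(t_n - t_m)^M/M!$, can be illustrated on a toy integrand (a sketch, not the paper's code; the integrand $\varphi(s_1,s_2) = s_1 s_2$ is an illustrative assumption):

```python
import math
import numpy as np

# Sketch: estimate int_{tn > s2 > s1 > tm} s1*s2 ds (exact value 1/8 on [0,1])
# by sorting i.i.d. uniform samples and weighting by (tn - tm)^M / M!.
rng = np.random.default_rng(1)
tm, tn, M, Ns = 0.0, 1.0, 2, 200_000
S = np.sort(rng.uniform(tm, tn, size=(Ns, M)), axis=1)   # each row: s1 < s2
estimate = (tn - tm) ** M / math.factorial(M) * np.mean(S[:, 0] * S[:, 1])
print(estimate)   # close to 0.125
```

Sorting maps the uniform density on the box to the ordered simplex, whose volume is $(t_n - t_m)^M/M!$; this is exactly the prefactor appearing in the estimators above.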
According to the derivation of the inchworm Monte Carlo method [@Chen2017; @Cai2020], the underlying idea is the partial resummation of the Dyson series with the following form: $$\label{eq:Ge_Dyson} \Ge(\sa,s_i) = \sum_{\substack{m=0\\m\text{ is even}}}^{+\infty} \ii^m \int_{\sa>s_m^m>\cdots>s_1^m>s_i} (-1)^{\#\{\vec{\sb}^m \le t\}}\Us^{(0)}(\sa,\vec{\sb}^m,s_i) \Ls^{(0)}(\vec{\sb}^m) \,\dd\vec{\sb}^m,$$ where $\Us^{(0)}(\sa,\vec{\sb}^m,s_i) = G_s^{(0)}(\sa,s_m^m) W_s G_s^{(0)}(s_m^m,s_{m-1}^m) W_s \cdots W_s G_s^{(0)}(s_2^m,s_1^m) W_s G_s^{(0)}(s_1^m,s_i)$ with $G_s^{(0)}(\cdot,\cdot)$ defined by $$G_s^{(0)}(s_{k+1}^m, s_k^m) = \begin{cases} {\mathrm{e}}^{-\ii (s_{k+1}^m - s_k^m) H_s}, & \text{if } s_k^m \leqslant s_{k+1}^m < t, \\ {\mathrm{e}}^{-\ii (s_k^m - s_{k+1}^m) H_s}, & \text{if } t \leqslant s_k^m \leqslant s_{k+1}^m, \\ {\mathrm{e}}^{-\ii (t - s_{k+1}^m) H_s} O_s {\mathrm{e}}^{-\ii (t - s_k^m) H_s}, & \text{if } s_k^m < t \leqslant s_{k+1}^m. \end{cases}$$ Similar to $\Ls$, the function $\Ls^{(0)}$ in is also the sum of products, but with many more terms in the sum. The propagator $\Ge(\sa,s_i)$ given by is equivalent to that given by with $\bar{M} = +\infty$. The relation between these two formulations is similar to the relation between and . However, due to the non-Markovian nature of the propagators, partial resummation cannot reduce the series to a differential equation, and therefore the equation takes the integro-differential form . Nevertheless, by partial resummation, the series with $\bar{M} = +\infty$ converges much faster than , allowing us to truncate it to get a reasonable approximation. In this paper, we assume that a finite $\bar{M}$ can already provide a sufficiently good approximation and ignore the error introduced by the truncation of the series. Specifically, we will study the error growth of the inchworm Monte Carlo method for a finite $\bar{M}$ by comparing it with the numerical sign problem for the Monte Carlo method applied to . 
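The case-by-case definition of the bare propagator $G_s^{(0)}$ can be sketched as follows; a toy implementation in which the Hermitian $H_s$ and the choice of observable are illustrative assumptions:

```python
import numpy as np

# Sketch of the bare propagator G_s^(0): forward propagation before t,
# backward propagation after t, and an insertion of O_s when straddling t.
def U(tau, H):
    w, V = np.linalg.eigh(H)                          # H = V diag(w) V^H
    return (V * np.exp(-1j * tau * w)) @ V.conj().T   # e^{-i tau H}

def G0(s_hi, s_lo, t, H, O):
    if s_hi < t:                  # both contour times before t
        return U(s_hi - s_lo, H)
    if s_lo >= t:                 # both contour times at or after t
        return U(s_lo - s_hi, H)
    return U(t - s_hi, H) @ O @ U(t - s_lo, H)        # straddling t

H_s = np.array([[1.0, 0.5], [0.5, -1.0]])             # toy Hermitian H_s
O_s = np.array([[0.0, 1.0], [1.0, 0.0]])              # toy observable
print(np.round(G0(1.5, 0.5, 1.0, H_s, np.eye(2)), 6))  # reduces to Id when O = Id
```

With $O$ replaced by the identity, the straddling case collapses to $U(t-s_{k+1})U(t-s_k) = U(2t - s_{k+1} - s_k)$, reflecting the backward branch of the contour.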
Let us remark that if we consider the direct summation without using the inchworm method, we will face a severe dynamical sign problem: For any given even integer $m$, the function $\Ls^{(0)}$ is the sum of $(m-1)!!$ terms, each bounded by $\bdL^{m/2}$. Similar to our analysis for , the numerical sign problem of can be quantified by the following bound of variance: $$\sum_{\substack{m=0\\m\text{ is even}}}^{+\infty} \frac{(\sa-s_i)^m}{m!} (m-1)!! \left(\|W_s\|^m \bdL^{m/2}\right)^2 = \exp \left( \frac{\|W_s\|^4 \bdL^2 (\sa-s_i)^2}{2} \right).$$ For details, we refer the readers to [@Cai2020 Section 5], where a proof for the spin-boson model can be found. In [@Chen2017; @Chen2017b], it is claimed that the inchworm Monte Carlo method can effectively mitigate this sign problem, but no rigorous argument is provided. In this paper, we aim at a rigorous numerical analysis for the scheme. Notation and Assumptions ------------------------ We first list some notations and assumptions here for the convenience of readers. ### Vectorization and norms For a sequence of matrices defined as $\yb:=(Y_{1},Y_{2},\cdots,Y_{\ell}) \in \C^{2\times 2\ell}$, we define its vectorization $\vec{\yb}$ by $$\vec{\yb}=(Y^{(11)}_{1},Y^{(21)}_1,Y^{(12)}_1,Y^{(22)}_1,\cdots,Y^{(11)}_{\ell},Y^{(21)}_\ell,Y^{(12)}_\ell,Y^{(22)}_\ell)^{{\mathrm{T}}} \in \C^{4\ell},$$ which reshapes the matrix into a column vector. The same notation applies to a single matrix. For example, for $Y = (Y^{(ij)})_{2\times 2}$, we have $\vec{Y} = (Y^{(11)}, Y^{(21)}, Y^{(12)}, Y^{(22)})^{{\mathrm{T}}}$. 
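As a quick illustration of this vectorization (a sketch, not part of the analysis), the column-major reshape in NumPy reproduces exactly the $(11),(21),(12),(22)$ entry ordering above:

```python
import numpy as np

# Sketch: vectorization of a sequence of 2x2 matrices; order="F" flattens
# column by column, matching the (11),(21),(12),(22) entry ordering.
def vec(*matrices):
    return np.concatenate([Y.reshape(-1, order="F") for Y in matrices])

Y1 = np.array([[1, 2], [3, 4]])
Y2 = np.array([[5, 6], [7, 8]])
print(vec(Y1, Y2))   # [1 3 2 4 5 7 6 8]
```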
For any function $F(\boldsymbol{y})$, its gradient $\nabla F(\boldsymbol{y})$ is defined as a $4\ell$-dimensional vector: $$\nabla F(\yb) = \left( \frac{\partial F}{\partial Y_1^{(11)}}, \frac{\partial F}{\partial Y_1^{(21)}}, \frac{\partial F}{\partial Y_1^{(12)}}, \frac{\partial F}{\partial Y_1^{(22)}}, \cdots, \frac{\partial F}{\partial Y_{\ell}^{(11)}}, \frac{\partial F}{\partial Y_{\ell}^{(21)}}, \frac{\partial F}{\partial Y_{\ell}^{(12)}}, \frac{\partial F}{\partial Y_{\ell}^{(22)}} \right)^{{\mathrm{T}}},$$ so that the mean value theorem takes the form $$F(\yb_2) - F(\yb_1) = \nabla F\Big( (1-\xi) \yb_1 + \xi \yb_2 \Big)^{{\mathrm{T}}} (\vec{\yb}_2 - \vec{\yb}_1), \qquad \text{for some } \xi \in [0,1].$$ The Hessian matrix $\nabla^2 F(\yb)$ is similarly defined as a $4\ell \times 4\ell$ matrix. Let $\Is$ be an index set and $\gb = (G_{\alpha})_{\alpha \in \Is}$ be a collection of random matrices with each $G_{\alpha} \in \C^{2\times 2}$. For any $\Ds \subset \Is$, we define $$\|\gb\|_{\Ds} := \max_{\alpha \in\Ds}\{\| G_{\alpha} \|_{\mathrm{F}}\},$$ where $\|\cdot\|_{\mathrm{F}}$ denotes the Frobenius norm. When $\Ds = \Is$, the subscript $\Ds$ will be omitted: $\|\gb\| = \|\gb\|_{\Is}$. It is clear that $\|\cdot\|$ defines a norm, and $\|\cdot\|_{\Ds}$ is a seminorm if $\Ds \subset \Is$. In particular, for any $2\times 2$ matrix $G$, we define $\|G\| = \|G\|_{{\mathrm{F}}}$. In our analysis, the index $\alpha$ is always a 2D multi-index. For example, if $n > m$ and $$\Is = \{ (m+1,m) \} \cup \{ (m+2,m+1), (m+2,m) \} \cup \cdots \cup \{ (n,n-1), \cdots, (n,m)\},$$ then $\gb$ equals $\gb_{n,m}$ defined in . Similarly, we define $$\Ns^{(\std)}_{\Ds}(\gb) = \max_{\alpha \in \Ds} \left\{ \left[ \E( \|G_\alpha \|_{{\mathrm{F}}}^2) \right]^{1/2} \right\}$$ which will be used frequently throughout our analysis. ### Boundedness assumptions We will need the following assumptions for our analysis: 1. 
\[assump: H1\] The exact solution $\Ge$, the numerical solution $G$ of the deterministic method and the numerical solution $\tG$ of the inchworm Monte Carlo method are all bounded by an $O(1)$ constant: $$\begin{gathered} \|\Ge(\sa,s_i)\| \le \ \bdG \text{~for any~} 0 \le s_i \le \sa \le 2t; \\ \|\tG_{j,k}\|,\|G_{j,k} \| \le \bdG\text{~for any~} j,k = 0,1,\cdots,N-1,N^-,N^+,N+1,\cdots 2N-1,2N. \end{gathered}$$ 2. \[assump: H2\] Each $rs$-entry ($r,s=1,2$) of the full propagator $\Ge^{(rs)}(\sa,s_i)$ is of class $C^3$ on the domain $s_i\in[0,2t]\backslash \{t\},\ \sa \in [s_i,2t]\backslash \{t\}$, and we define the following upper bounds: $$\begin{aligned} \left| \frac{\partial^\alpha \Ge^{(rs)}}{\partial \sa^{\alpha_1} \partial s_i^{\alpha_2}}(x_1,x_2) \right| \le \begin{cases} \bdG'', &\text{for~} \alpha = \alpha_1 + \alpha_2 = 2, \\ \bdG''', &\text{for~}\alpha = \alpha_1 + \alpha_2 = 3, \end{cases}\end{aligned}$$ for any $x_2 \in[0,2t]\backslash \{t\},\ x_1 \in [x_2,2t]\backslash \{t\}$. 3. \[assump: H3\] We further assume that the system Hamiltonian $H_s$ and the system perturbation $W_s$ can also be bounded by $O(1)$ constants: $$\|H_s\| \le \bdH, \ \|W_s\| \le \bdW.$$ In addition, we assume that the bath influence functional $\Ls$ can be bounded as $$\label{eq: L bound} |\Ls(\sa,\vec{\sb}^M)| \le (M!!) \bdL^{\frac{M+1}{2}}$$ for some $O(1)$ constant $\bdL$. Here in the assumption (H1), we have assumed an upper bound for the exact solution. This is reasonable since $G_{{\mathrm{e}}}$, the propagator of a spin in an open quantum system, should be unitary. Although the inchworm method applies some approximation by truncating the infinite series, which may result in some deviation from $U(2)$, we would still like to restrict ourselves to the case when the equation gives an approximation with sufficient quality. 
Similarly, the boundedness assumptions on the numerical solutions mean that we want to study the evolution of the error when the numerical solution does not completely lose its validity. In (H3), the assumption comes from the actual expression of $\Ls$, for which we refer the readers to [@Cai2020] for more details. The bound can actually be improved to $|\Ls(\sa,\vec{\sb}^M)| \le \alpha(M) (M!!) \bdL^{\frac{M+1}{2}}$ with a coefficient $\alpha(M) \in (0,1]$, and the factor $\alpha(M)$ is the reason why the series has faster convergence than the Dyson series . Here we are mainly interested in the case of a finite $\bar{M}$ (the upper bound of $M$), and therefore the looser bound does not change the order of the error and its general growth rate. Main results and discussions on the error growth {#sec: inchworm results} ------------------------------------------------ In this section, we will provide our main results for the error estimation of the inchworm Monte Carlo method, and compare the results with the error growth of the Dyson series expansion. The following theorem gives the difference between the inchworm Monte Carlo method and the deterministic scheme : \[thm: bounds\] Let $\dG_{n,m}= \tG_{n,m} - G_{n,m}$. 
Given a sufficiently small time step length $h$ and a sufficiently large $N_s$, if the assumptions (H1) and (H3) hold, the difference between the deterministic solution and the Monte Carlo solution can be estimated by - Bias estimation $$\label{1st error upper bound} \|\E(\Delta G_{n+1,m})\| \le 4\theta_2^2 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\left({\mathrm{e}}^{3 \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right)\cdot \frac{h}{N_s},$$ - Numerical error estimation $$\label{2ed error upper bound} [\E(\|\dG_{n+1,m}\|^2)]^{1/2} \le \theta_2 \sqrt{\bar{\gamma}(t_{n-m+1})}\left({\mathrm{e}}^{ \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right) \cdot \sqrt{\frac{h}{N_s}}.$$ Here $$\begin{aligned} \label{eq: alpha gamma} &\bar{\alpha}(t)= 16P_2(t)\cdot (10t+16t^2 + 5t^3 + \frac{1}{4}t^4), \qquad \bar{\gamma}(t) = 2\bdW \bdG \bdL^{1/2} \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \frac{(\bdW \bdG \bdL^{1/2} t)^M}{(M-1)!!}, \\ \label{eq: P1} & P_1(t) = 2 \bdW^2 \bdG \bdL + 3\bdW^3 \bdG^2 \bdL^{\frac{3}{2}} (1+t) \sum^{\bar{M}}_{\substack{M=3 \\ M ~\text{is odd}}} \frac{(M-1)M}{(M-3)!!}(2\bdW \bdG \bdL^{1/2}t)^{M-2}\end{aligned}$$ and the function $P_2(\cdot)$ is a polynomial of degree $\bar{M}-1$; all these functions are independent of $h$. The constants $\theta_1$ and $\theta_2$ are given by $\theta_1 = 353$ and $\theta_2 = \sqrt{34}$. 
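To get a feel for the coefficient $\bar{\gamma}$ in these bounds, one can evaluate the finite sum for a few truncation orders; a sketch with toy stand-ins for $\bdW$, $\bdG$, $\bdL$ (all values are illustrative assumptions):

```python
import math

# Sketch: evaluate gamma_bar(t) = 2c * sum_{M odd <= Mbar} (c t)^M / (M-1)!!
# with c = bdW * bdG * sqrt(bdL), for increasing truncation orders Mbar.
W, G, L, t = 0.8, 1.1, 0.6, 1.5     # toy constants (assumptions)
c = W * G * math.sqrt(L)

def double_factorial(n):
    return math.prod(range(n, 0, -2)) if n > 0 else 1

def gamma_bar(Mbar):
    return 2 * c * sum((c * t) ** M / double_factorial(M - 1)
                       for M in range(1, Mbar + 1, 2))

print([round(gamma_bar(Mbar), 6) for Mbar in (1, 3, 5, 11)])
```

The sums converge rapidly in $\bar{M}$, reflecting the fast convergence of the truncated series.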
The difference between the results of the inchworm Monte Carlo method and the exact solution is given by \[thm: final estimates\] Under the same settings as Theorem \[thm: bounds\], if we further assume that (H2) holds, then the difference between the inchworm Monte Carlo solution and the exact solution can be estimated by - Bias estimation $$\label{final 1st error upper bound} \begin{split} & \left\|\E\left( G_{\emph{e}}(t_{n+1},t_m) - \tG_{n+1,m} \right)\right\| \le P^{\emph{e}}(t_{n-m+1})\left({\mathrm{e}}^{\theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right)\cdot h^2 \\ & \qquad + 4\theta_2^2 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\left({\mathrm{e}}^{3 \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right)\cdot \frac{h}{N_s}, \end{split}$$ - Numerical error estimation $$\label{final 2ed error upper bound} \begin{split} &\left[\E\left( \| G_{\emph{e}}(t_{n+1},t_m) - \tG_{n+1,m}\|^2 \right) \right]^{1/2} \le P^{\emph{e}}(t_{n-m+1})\left({\mathrm{e}}^{\theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right)\cdot h^2 \\ &\hspace{120pt}+ \theta_2 \sqrt{\bar{\gamma}(t_{n-m+1})}\left({\mathrm{e}}^{ \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right) \cdot \sqrt{\frac{h}{N_s}} . \end{split}$$ Here the function $P^{\emph{e}}(t)$ is defined by $$\label{eq: Ce} P^{\emph{e}}(t) = \left( \frac{1}{4} \bdH + 8P_1(t) \right)\bdG'' + \frac{5}{12} \bdG''' + \bdW \bdG'' \bdL^{1/2} \left( \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \frac{M+1}{(M-1)!!}(2\bdW \bdG \bdL^{1/2} t)^M \right).$$ The above result indicates the following properties of the inchworm scheme, which are similar to the case of the differential equations: - The bias again has only a small contribution to the numerical error, which is often hardly observable in the numerical experiments. - The error consists of two parts. The first part is second-order in $h$, and the second part is half-order in the total number of samples. The growth of the numerical error over time is more complicated compared to the ODE case. 
Since the function $P_1(t)$ is a polynomial of degree $\bar{M}-1$, the growth of the error is on the order of $\exp(C t^{(\bar{M}+1)/2})$. Clearly the growth rate depends on the choice of $\bar{M}$. In the numerical examples shown in [@Chen2017b; @Cai2020], only $\bar{M} = 1$ and $\bar{M} = 3$ are used in the applications. Regarding the behavior of the error growth for different $\bar{M}$, we remark that - When $\bar{M}=1$, the final error estimation shows that there exist constants $C_1$ and $C_2$ such that $$\left[\E\left( \| \Ge(t_{n+1},t_m) - \tG_{n+1,m}\|^2 \right) \right]^{1/2} \le C_1 {\mathrm{e}}^{C_2 t_{n-m+1}} \left( h^2 + \sqrt{\frac{h}{N_s}} \right),$$ showing that the error grows exponentially with respect to the time difference in the propagator, which is slower than the method using the Dyson series, where the error grows exponentially with respect to the square of the time difference. In this case, the numerical error is successfully mitigated. - When $\bar{M}=3$, there exist constants $C_1$ and $C_2$ such that $$\left[\E\left( \| \Ge(t_{n+1},t_m) - \tG_{n+1,m}\|^2 \right) \right]^{1/2} \le C_1 {\mathrm{e}}^{C_2 (t_{n-m+1}+t_{n-m+1}^2)} \left( h^2 + \sqrt{\frac{h}{N_s}} \right).$$ In this case, the growth rate is exponential in $t^2$, which is the same as for the Dyson series. Which method has the larger error thus depends on the coefficient in front of $t^2$. Instead of a detailed analysis, we would just comment that the inchworm Monte Carlo method is likely to have a smaller coefficient due to the effect of partial resummation, which leads to fewer terms in than in the original Dyson series. - For larger $\bar{M}$, if $t$ is large, the error growth of the inchworm Monte Carlo method will be even worse than the summation using the Dyson series. However, since the coefficient of $t^k$ is smaller for larger $k$, when $t$ is small, we may still expect that the inchworm Monte Carlo method has slower error growth due to the effect of partial resummation. 
- When $\bar{M} \rightarrow +\infty$, we have $$\begin{aligned} \lim_{\bar{M}\rightarrow +\infty} \bar{\gamma}(t) &= 2\bdW^2 \bdG^2 \bdL t \exp \left( \bdW^2 \bdG^2 \bdL t^2/2 \right), \\ \lim_{\bar{M}\rightarrow +\infty} P_1(t) &= 2\bdW^2 \bdG \bdL + 3\bdW^3 \bdG^2 \bdL^{\frac{3}{2}} (1+t) P(2\bdW \bdG \bdL^{1/2}t) \cdot {\mathrm{e}}^{2\bdW^2 \bdG^2 \bdL t^2}, \end{aligned}$$ where $P(x) = x^5 + 7x^3 + 6x$. Although these quantities are still finite, the error bound grows double exponentially with respect to $t^2$, which is undesired in applications. The numerical experiments in [@Chen2017b; @Cai2020] show that in certain regimes where the constant $\bdL$ is relatively small, the contribution from $M = 1$ is dominant in the series . In this case, the inchworm Monte Carlo method can well suppress the numerical sign problem and achieve only exponential error growth in these applications. Outline of the proof -------------------- We postpone the details of the proof and provide an outline here. The results are obtained in the following steps: - Estimate the derivatives of the right-hand sides (Propositions \[thm: first order derivative\] and \[thm: second order derivative\]). - Derive recurrence relations for the numerical error (Proposition \[thm: recurrence relations\]). - Apply the recurrence relations to derive the error bounds (Theorem \[thm: bounds\]). - Estimate the error of the deterministic method (Proposition \[thm: Runge Kutta error\]). - Use the triangle inequality to derive the final error bounds (Theorem \[thm: final estimates\]). Some more details of these steps are given by a number of propositions below. We first define some sets of 2D multi-indices that will be used. 
$$\begin{aligned} \begin{array}{l l} \Omega_{n,m}=\{(j,k) \in \Z^2 \ | \ m \le k < j \le n \}, & \Omega^*_{n,m}= \Omega_{n+1,m}; \\ \partial \Omega_{n,m}=\{(j,k)\in \Omega_{n,m}\ | \ j=n\text{~or~}k=m\}, &\partial \Omega^*_{n,m}=\partial \Omega_{n+1,m}; \\ \mathring{\Omega}_{n,m}= \Omega_{n,m}\backslash \partial \Omega_{n,m}, &\mathring{\Omega}^{\ast}_{n,m}=\mathring{\Omega}_{n+1,m};\\ \Gamma_{n,m}(i) = \{(j,k)\in \Omega_{n,m}\ | \ j-k= i\}, & \Gamma^*_{n,m}(i)=\Gamma_{n+1,m}(i). \end{array}\end{aligned}$$ One may refer to Fig. \[fig:sets\] to visualize these definitions. Note that $\Omega_{n,m}$ and $\Omega^*_{n,m}$ respectively contain the indices of the numerical solutions in $\gb_{n,m}$ and $\gb^*_{n,m}$ and thus give the locations of all nodes that $K_1$ and $K_2$ depend on. In addition, since $G^*_{n+1,m}$ is calculated completely based on the rest of $\gb^*_{n,m}$, we define $\bar{\Omega}_{n,m}=\Omega^*_{n,m}\backslash \{(n+1,m)\}$ to represent the indices of all full propagators that we actually use in order to obtain $G_{n+1,m}$. For the analysis of ODEs, we have assumed in the boundedness of the first- and second-order derivatives of the right-hand side. Correspondingly, our first step is to estimate the derivatives of $F_1$ and $F_2$. For $F_1$, the results are given by the following two propositions for first- and second-order derivatives, respectively. Assume (H1)(H3)(R4) hold. Given the time step length $h$ and any $\xib_{n,m}$ being a convex combination of $\ge_{n,m}$, $\gb_{n,m}$ and $\tg_{n,m}$, the first-order derivative of $F_1(\xib_{n,m})$ w.r.t. the $pq$-entry ($p,q=1,2$) of $G_{k,\ell}$ is bounded by $$\label{eq: 1st order bound} \left\| \frac{\partial F_1 (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le \begin{cases} P_1(t_{n-m}) h, &\text{for~} (k,\ell) \in \partial \Omega_{n,m} , \\ P_1(t_{n-m}) h^2, &\text{for~}(k,\ell) \in \mathring{\Omega}_{n,m}, \end{cases}$$ where $P_1(t)$ is defined in . 
\[thm: first order derivative\] Assume (H1)(H3)(R4) hold. Given the time step length $h$, the second-order derivative of $F_1(\xib_{n,m})$ w.r.t. the $p_1 q_1-$entry of $G_{k_1,\ell_1}$ and the $p_2 q_2-$entry of $G_{k_2,\ell_2}$ ($p_i,q_i = 1,2$) is bounded by: - If $(k_1,\ell_1) \times (k_2,\ell_2) \in \partial \Omega_{n,m} \times \partial \Omega_{n,m}$, $$\label{eq:2ed deriv bdxbd} \begin{split} \left\| \frac{\partial^2 F_1 (\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \right\| \le \begin{cases} P_2(t_{n-m}) h, &\text{if one of the conditions \textbf{(a)}-\textbf{(d)} holds}, \\ P_2(t_{n-m}) h^2, &\text{otherwise}, \end{cases} \end{split}$$ where the conditions **(a)**-**(d)** are given by $$\begin{aligned} &\textbf{(a)} \ k_1 = k_2 = n, \ (\ell_1,\ell_2) \in \{(n-1,m),(m,n-1)\}, \\ &\textbf{(b)} \ \ell_1 = \ell_2 = m, \ (k_1,k_2) \in \{(m+1,n),(n,m+1)\}, \\ &\textbf{(c)} \ k_1 = n \text{~and~} \ell_2 = m, \ (k_2,\ell_1) \in \big\{ m \le \ell_1 \le n-1,m+1 \le k_2 \le n \, \big\vert \, |\ell_1 - k_2|\le 1 \big\}, \\ &\textbf{(d)} \ k_2 = n \text{~and~} \ell_1 = m, \ (k_1,\ell_2) \in \big\{ m \le \ell_2 \le n-1,m+1 \le k_1 \le n \, \big\vert \, |\ell_2 - k_1|\le 1 \big\} .\end{aligned}$$ - If $(k_1,\ell_1)\times (k_2,\ell_2) \in \partial \Omega_{n,m} \times \mathring{\Omega}_{n,m}$, $$\label{eq:2ed deriv bdxint} \begin{split} \left\| \frac{\partial^2 F_1 (\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \right\| \le \begin{cases} P_2(t_{n-m}) h^2, &\text{for~}k_1 =n,|k_2-\ell_1|\le 1\text{~or~}\ell_1=m,|k_1 -\ell_2|\le 1 , \\ P_2(t_{n-m}) h^3, &\text{otherwise}. 
\end{cases} \end{split}$$ - If $(k_1,\ell_1)\times (k_2,\ell_2) \in \mathring{\Omega}_{n,m} \times \partial \Omega_{n,m}$, $$\label{eq:2ed deriv bdxint 2} \begin{split} \left\| \frac{\partial^2 F_1 (\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \right\| \le \begin{cases} P_2(t_{n-m}) h^2, &\text{for~}k_2 =n,|k_1-\ell_2|\le 1\text{~or~}\ell_2=m,|k_2 -\ell_1|\le 1 , \\ P_2(t_{n-m}) h^3, &\text{otherwise}. \end{cases} \end{split}$$ - If $(k_1,\ell_1)\times(k_2,\ell_2) \in \mathring{\Omega}_{n,m} \times \mathring{\Omega}_{n,m} $, $$\label{eq:2ed deriv intxint} \begin{split} \left\| \frac{\partial^2 F_1 (\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \right\| \le \begin{cases} P_2(t_{n-m}) h^3, &\text{for~}|k_1-\ell_2|\le1\text{~or~}|k_2-\ell_1|\le 1 , \\ P_2(t_{n-m}) h^4, &\text{otherwise}, \end{cases} \end{split}$$ \[thm: second order derivative\] where $P_2(t)$ is a polynomial of degree $\bar{M}-1$ independent of $h$. In these two propositions, the functions $P_1(\cdot)$ and $P_2(\cdot)$ are the same as the corresponding functions in Theorems \[thm: bounds\] and \[thm: final estimates\], respectively. The proofs of these two propositions are deferred to Section \[sec: derivatives\]. Unlike the case of differential equations, the partial derivative of $F_1(\cdot)$ involves a number of previous numerical solutions (all red nodes in Figure \[fig:sets\]), and the magnitudes depend on the locations of the nodes, which gives rise to the different cases in the above propositions. Similar results for the derivatives of $F_2(\cdot)$ with the same functions $P_1(t),P_2(t)$ can be proven, where all the indices $n$ should be changed to $n+1$. With the above estimates for the derivatives, we can establish recurrence relations for the bias and the numerical error: \[thm: recurrence relations\] Let $\Delta \gb_{n,m}= \tg_{n,m} - \gb_{n,m}$ and $\Delta \gb^*_{n,m}= \tg^*_{n,m} - \gb^*_{n,m}$. 
Given a sufficiently small time step length $h$ and a sufficiently large $N_s$, if the assumptions (H1) and (H3) hold, the difference between the deterministic solution and the Monte Carlo solution can be estimated by - Bias estimation: $$\label{eq: recurrence 1} \begin{split} &\|\E(\dG_{n+1,m})\|\\ \le& \ (1+\frac{1}{8}\bdH^4 h^4)\|\E(\dG_{n,m})\| + 22P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \left\| \E ( \Delta \gb^*_{n,m}) \right\|_{\Gamma^*_{n,m}(i)}\\ & + \left(\frac{7}{2}\bar{\alpha}(t_{n-m+1})h\left[ \Ns^{(\emph{std})}_{\bar{\Omega}_{n,m}}(\Delta \gb^*_{n,m}) \right]^2 + 8 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^3}{N_s} \right). \end{split}$$ - Numerical error estimation: $$\label{eq: recurrence 2 1/2} \begin{split} &[\E(\|\dG_{n+1,m}\|^2)]^{1/2} \le (1+\frac{1}{8}\bdH^4 h^4)[\E(\|\dG_{n,m}\|^2)]^{1/2} \\ &\hspace{10pt}+ 22 P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \big( 2 + (n-m+1-i)h \big) \Ns^{(\emph{std})}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) + \frac{7}{2} \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \frac{h}{\sqrt{N_s}}, \end{split}$$ and $$\label{eq: recurrence 2} \begin{split} & \E(\|\dG_{n+1,m}\|^2) \le ( 1 + \frac{1}{4}\bdH^4 h^4 )\cdot\E(\|\dG_{n,m}\|^2) + (1+ \frac{1}{8}\bdH^4 h^4)h \left[\E(\|\dG_{n,m}\|^2)\right]^{1/2} \times\\ & \hspace{-20pt} \left\{ 44 P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \big( 2 + (n-m+1-i)h \big) \Ns^{(\emph{std})}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) + 4 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s} \right\}\\ & \hspace{-10pt}+912 P_1^2(t_{n-m+1}) h^4 \left\{ \sum_{i = 1}^{n-m} \big( 2 + (n-m+1-i)h \big) \Ns^{(\emph{std})}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) \right\}^2+ 17\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s}, \end{split}$$ where the functions $\bar{\alpha}$ and $\bar{\gamma}$ are defined in . In the above proposition, two different recurrence relations are given for the numerical error. Note that is not a simple square of the estimation . 
The main reason lies in the term $4 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot h^2/N_s$ located at the end of the second line in . Below we are going to use a simple analogue to help the reader understand the difference. Consider the two recurrence relations $$\begin{aligned} \label{eq: analog1} e_{n+1} &\le e_n + \frac{h}{\sqrt{N_s}}, \\ \label{eq: analog2} e_{n+1}^2 &\le e_n^2 + \frac{2h^2}{N_s} e_n + \frac{h^2}{N_s}.\end{aligned}$$ The relation between these two recurrence relations is analogous to the relation between and . The square of the first recurrence relation is $$e_{n+1}^2 \le e_n^2 + \frac{2h}{\sqrt{N_s}} e_n + \frac{h^2}{N_s},$$ where the cross-term is different from . However, the relation provides a higher numerical order than , since by the Cauchy-Schwarz inequality, we can derive from that $$\label{eq: cs} e_{n+1}^2 \le \left(1 + \frac{h^2}{N_s}\right) e_n^2 + \frac{2h^2}{N_s},$$ indicating that $e_n \sim O(\sqrt{h/N_s})$, while the recurrence relation can only give us $e_n \sim O(\sqrt{1/N_s})$. Besides, in order to study the error growth rate with respect to time, we cannot use the Cauchy-Schwarz inequality to simplify equations like . Later in our proof, the simpler equation will be used to determine the growth rate of the numerical error, while the more complicated version is responsible for the final error estimation. Theorem \[thm: bounds\] is obtained from the recurrence relations stated in Proposition \[thm: recurrence relations\]. To obtain the final estimates (Theorem \[thm: final estimates\]), we need to estimate the error of the deterministic scheme , which is given by \[thm: Runge Kutta error\] We define the deterministic error $E_{n+1,m} = \Ge(t_{n+1}, t_m) - G_{n+1,m}$.
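The different rates implied by the two model recurrences can be checked numerically. The sketch below (a Python illustration, treating the worst case where both inequalities hold with equality; the horizon $T = 1$ and $N_s = 100$ are arbitrary choices) iterates both relations and shows that the first bound stalls at $T/\sqrt{N_s}$ no matter how small $h$ is, while the second decays like $\sqrt{h/N_s}$:

```python
import math

def iterate_simple(h, Ns, T):
    # worst case of the first recurrence: e_{n+1} = e_n + h / sqrt(Ns)
    e = 0.0
    for _ in range(round(T / h)):
        e += h / math.sqrt(Ns)
    return e

def iterate_squared(h, Ns, T):
    # worst case of the second recurrence:
    # e_{n+1}^2 = e_n^2 + (2 h^2 / Ns) e_n + h^2 / Ns
    e2 = 0.0
    for _ in range(round(T / h)):
        e2 += (2.0 * h * h / Ns) * math.sqrt(e2) + h * h / Ns
    return math.sqrt(e2)

for h in (1e-2, 1e-3, 1e-4):
    print(h, iterate_simple(h, 100, 1.0), iterate_squared(h, 100, 1.0))
```

Refining $h$ leaves the first value fixed at $T/\sqrt{N_s} = 0.1$, while the second shrinks proportionally to $\sqrt{h}$, matching the orders $O(\sqrt{1/N_s})$ and $O(\sqrt{h/N_s})$ discussed above.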
If the assumptions (H1), (H2) and (H3) hold, then for a sufficiently small time step length $h$ and a sufficiently large number of samplings at each step $N_s$, we have $$\label{eq: Runge Kutta error estimate global} \|E_{n+1,m}\| \le P^{\emph{e}}(t_{n-m+1})\left({\mathrm{e}}^{\theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right)\cdot h^2$$ where $P^{\emph{e}}(t)$ is defined in , and the constant $\theta_1$ is the same as the one in Theorem \[thm: final estimates\]. It is easy to see that our final conclusions in Theorem \[thm: final estimates\] are a straightforward combination of Theorem \[thm: bounds\] and Proposition \[thm: Runge Kutta error\] by the triangle inequality.

Numerical experiments {#sec: numer exp}
=====================

In this section, we will verify the above statements using numerical experiments. The following two subsections will be devoted, respectively, to the case of differential equations and the inchworm Monte Carlo method.

Numerical experiments for ordinary differential equations {#sec: numer exp diff}
---------------------------------------------------------

As an example, we consider the following ordinary differential equation: $$\label{numexp:diff eq} \begin{split} &\frac{\dd u}{\dd t} = -\frac{\ii}{2} K u(t) = \E_X \big(R(u,X)\big),\ t\in[0,T], \\ & R(u,X) = - \ii Xu \end{split}$$ with the initial condition $u(0) = 1$ and the random variable $X \sim U(0,K)$. We apply the two schemes proposed in to get the numerical solutions $u_n$ and $\tu_n$ with uniform time step length $h = T/N$.
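To make the setup concrete, here is a minimal Monte Carlo Runge-Kutta sketch for this equation. The tableau of the schemes proposed in the referenced work is not restated here, so the sketch assumes a two-stage explicit (Heun) method in which each stage replaces the expectation $\E_X(-\ii X u) = -\frac{\ii}{2}Ku$ by an average over $N_s$ fresh samples of $X$; the function name `heun_mc` and the parameter values are illustrative only.

```python
import cmath
import random

def heun_mc(K, T, N, Ns, seed=0):
    """Monte Carlo Runge-Kutta sketch for du/dt = E_X(-i X u), X ~ U(0, K).

    Assumption: a two-stage explicit (Heun) tableau; each stage replaces
    the exact expectation -i (K/2) u by an average over Ns fresh samples."""
    rng = random.Random(seed)
    h = T / N
    u = 1.0 + 0.0j
    for _ in range(N):
        x1 = sum(rng.uniform(0.0, K) for _ in range(Ns)) / Ns
        k1 = -1j * x1 * u
        x2 = sum(rng.uniform(0.0, K) for _ in range(Ns)) / Ns
        k2 = -1j * x2 * (u + h * k1)
        u = u + 0.5 * h * (k1 + k2)
    return u

K, T = 1.0, 1.0
exact = cmath.exp(-0.5j * K * T)  # u(T) = exp(-i K T / 2)
print(abs(heun_mc(K, T, N=64, Ns=400) - exact))  # small for fine h and large Ns
```

Since the exact solution is the pure phase $u(t) = \mathrm{e}^{-\ii K t/2}$, both the deterministic $O(h^2)$ error and the sampling error of size roughly $\sqrt{h/N_s}$ are visible by varying $N$ and $N_s$.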
For the stochastic $\tu_n$, we carry out $N_{\exp} = 100N N_s$ independent experiments to obtain $\tu^{(1)}_n,\tu^{(2)}_n,\cdots,\tu^{(N_{\exp})}_n$ and we approximate the numerical error by $$\label{def: second moment diff} \E(|u_n-\tu_n|^2) \approx \mu_n :=\frac{1}{N_{\exp}}\sum^{N_{\exp}}_{i=1} |u_n -\tu_n^{(i)}|^2, \qquad \text{for } n = 0,1,\cdots,N.$$ Based on these settings, we now focus on the numerical order of the scheme and the growth of the numerical error with respect to $t$. For a given time step $h$, we define the error function $e(\cdot)$ by $e(nh) = \mu_n$. We first set $K=3$ and $K=10$ in , with $T = 3$. Figure \[fig:evolution diff\] shows the evolution of the numerical error $e(t)$ for $h = \frac{1}{4}$ and various numbers of samples $N_s$. For $K = 10$, the left panel of Figure \[fig:evolution diff\] shows that the error grows exponentially over time as predicted in Theorem \[thm: diff bounds\], while for smaller $K$, the stability of the method takes effect, and the error grows only linearly up to $T=3$ as exhibited in the right panel of Figure \[fig:evolution diff\]. This verifies that the exponential growth can be well controlled if appropriate Runge-Kutta schemes and sufficiently small time steps are adopted, which avoids the numerical sign problem in the Monte Carlo method that directly calculates . ![Evolution of numerical error $e(t)$ (left: $K=10$, right: $K=3$).[]{data-label="fig:evolution diff"}](fig_error_evolution_diff.eps){width="\textwidth"} To verify the convergence rate with respect to $h$ and $N_s$ in the estimate , we set $K=1$ and $T=1$ in and consider the numerical error at $t=0.5$ and $t=1$. We first fix $N_s=100$ and reduce $h$ from $1/2$ to $1/64$, and then fix $h = 1/4$ and increase $N_s$ from $100$ to $3200$. The numerical errors are listed in Table \[tab:convergence rate\], from which we can easily observe the first-order convergence for both $h$ and $N_s$, which agrees with our estimate .
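The entries of the "order" columns in Table \[tab:convergence rate\] follow from consecutive errors via $\log(e_{\text{coarse}}/e_{\text{fine}})/\log(\text{ratio})$; a small sketch reproducing two of them (up to the rounding of the displayed error values):

```python
import math

def order(e_coarse, e_fine, ratio=2.0):
    """Empirical convergence order between two runs whose
    resolution (h or N_s) differs by the factor `ratio`."""
    return math.log(e_coarse / e_fine) / math.log(ratio)

# halving h from 1/2 to 1/4 at N_s = 100 (errors at t = 0.5):
print(order(1.0917e-04, 5.2721e-05))  # close to the tabulated 1.0502
# doubling N_s from 100 to 200 at h = 1/4 (errors at t = 1):
print(order(1.0593e-04, 5.3332e-05))  # close to the tabulated 0.9901
```

The last digit can differ from the table because the tabulated orders were presumably computed from the unrounded errors.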
  $h, N_s$    $e(0.5)$     order    $e(1)$       order    $h,N_s$     $e(0.5)$     order    $e(1)$       order
  ----------- ------------ -------- ------------ -------- ----------- ------------ -------- ------------ --------
  1/2, 100    1.0917e-04   –        2.1940e-04   –        1/4, 100    5.2721e-05   –        1.0593e-04   –
  1/4, 100    5.2721e-05   1.0502   1.0593e-04   1.0505   1/4, 200    2.6533e-05   0.9906   5.3332e-05   0.9901
  1/8, 100    2.6257e-05   1.0057   5.2776e-05   1.0052   1/4, 400    1.3210e-05   1.0062   2.6520e-05   1.0079
  1/16, 100   1.3027e-05   1.0111   2.6039e-05   1.0192   1/4, 800    6.6185e-06   0.9970   1.3254e-05   1.0007
  1/32, 100   6.5086e-06   1.0011   1.3013e-05   1.0007   1/4, 1600   3.3043e-06   1.0022   6.5942e-06   1.0072
  1/64, 100   3.2579e-06   0.9984   6.5124e-06   0.9987   1/4, 3200   1.6528e-06   0.9995   3.3060e-06   0.9961

  : Numerical errors $e(0.5)$ and $e(1)$ with different time steps $h$ and sampling numbers $N_s$, and the order of accuracy.[]{data-label="tab:convergence rate"}

Numerical experiments for the inchworm Monte Carlo method {#sec: numer exp integ diff}
---------------------------------------------------------

To verify the error growth of the inchworm Monte Carlo method, we consider the spin-boson model with a bath of Ohmic spectral density, where the Hamiltonian and perturbation operators are respectively given by $$H_s = \epsilon \hat{\sigma}_z + \Delta \hat{\sigma}_x, \qquad W_s = \hat{\sigma}_z$$ where we set the energy difference between the two spin states $\epsilon=1$ and the frequency of the spin flipping $\Delta=1$. $\hat{\sigma}_x, \hat{\sigma}_z$ are Pauli matrices defined by $$\hat{\sigma}_x = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\qquad \hat{\sigma}_z = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}.$$ We aim to verify the error growth when $\bar{M}=1$ and $\bar{M}=3$, as discussed in Section \[sec: inchworm results\].
The bath influence functional $\Ls$ is given by $$\begin{aligned} \Ls(\vec{\sb}) = \begin{cases} B(s_1,s_2), & \text{when}~ \vec{\sb} = (s_2,s_1) \\ B(s_1,s_3)B(s_2,s_4),& \text{when}~ \vec{\sb} = (s_4,s_3,s_2,s_1) \end{cases} \label{def: bath influ func}\end{aligned}$$ where the correlation function $B(\cdot,\cdot)$ is formulated as $$\label{def: B} B(\tau_1, \tau_2) = \sum_{l=1}^L \frac{c_l^2}{2\omega_l} \left[ \coth \left( \frac{\beta \omega_l}{2} \right) \cos \big( \omega_l (\tau_2 - \tau_1) \big) - \ii \sin\big( \omega_l(\tau_2 - \tau_1) \big) \right].$$ The general formula of $\Ls(\vec{\sb})$ for higher-dimensional $\vec{\sb}$ can be found in [@Cai2020]. According to [@Makri1999], the coupling intensity $c_l$ and the frequency $\omega_l \in [0,\omega_{\max}]$ of each harmonic oscillator are respectively defined as $$c_l = \omega_l \sqrt{\frac{\xi \omega_c}{L} [1 - \exp(-\omega_{\max}/\omega_c)]}, \quad \omega_l = -\omega_c \ln \left( 1 - \frac{l}{L} [1 - \exp(-\omega_{\max} / \omega_c)] \right), \quad l = 1,\cdots,L.$$ As for the parameters above, we set $L = 200$, $\omega_{\max} = 4\omega_c$ with the primary frequency $\omega_c =3$, $\xi = 0.6$ and $\beta = 5$ throughout our experiments. We are interested in the evolution of the expectation of the observable $\langle \hat{\sigma}_z(t)\rangle :=\mathrm{tr} \big(\rho_s \Ge(2t,0) \big)$, where $\rho_s$ is the initial density matrix for the system, which is set to be $$\rho_s = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$ in our simulations. Thus $\langle \hat{\sigma}_z(t)\rangle$ can be approximated via the inchworm Monte Carlo method by $$\langle \hat{\sigma}_z(j h)\rangle \approx \tG^{(11)}_{N+j,N-j}, \text{~for~} j = 0,1,\cdots,N,$$ where $\tG_{n,m}$ is obtained by the scheme . The evolution of $\langle \hat{\sigma}_z(t)\rangle$ is plotted in Figure \[fig:observable\].
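The bath discretization and the correlation function above are directly computable; below is a self-contained sketch with the stated parameters $L = 200$, $\omega_c = 3$, $\omega_{\max} = 4\omega_c$, $\xi = 0.6$ and $\beta = 5$ (the function names are our own). By construction the largest discretized frequency recovers $\omega_{\max}$, and $B(\tau,\tau)$ is real.

```python
import math

def bath_modes(L=200, omega_c=3.0, xi=0.6):
    """Discretization of the Ohmic bath: frequencies omega_l and couplings c_l."""
    omega_max = 4.0 * omega_c
    s = 1.0 - math.exp(-omega_max / omega_c)
    omegas = [-omega_c * math.log(1.0 - (l / L) * s) for l in range(1, L + 1)]
    cs = [w * math.sqrt(xi * omega_c * s / L) for w in omegas]
    return omegas, cs

def corr(tau1, tau2, omegas, cs, beta=5.0):
    """Two-point bath correlation function B(tau1, tau2)."""
    dt = tau2 - tau1
    return sum(c * c / (2.0 * w)
               * (math.cos(w * dt) / math.tanh(beta * w / 2.0)
                  - 1j * math.sin(w * dt))
               for w, c in zip(omegas, cs))

omegas, cs = bath_modes()
print(omegas[-1])                       # ~12, i.e. omega_max
print(corr(0.0, 0.0, omegas, cs).imag)  # B(tau, tau) is real
```

Since $B$ depends only on the difference $\tau_2 - \tau_1$, this function can be tabulated once on a grid of time differences and reused in the sampling of $\Ls$.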
Note that due to the numerical error, the computed $\langle \hat{\sigma}_z(t)\rangle$ may contain a nonzero imaginary part, and here only the real part of the numerical result is plotted. ![Evolution of $\text{Re}\langle \hat{\sigma}_z(t)\rangle$.[]{data-label="fig:observable"}](fig_observable.eps){width="50.00000%"} The numerical results in Figure \[fig:observable\] are obtained using the scheme with time step $h = 1/8$. We choose $N_s=10^4$ for $\bar{M}=1$ and $N_s=10^5$ for $\bar{M}=3$ in order for the curves to be sufficiently smooth. One can observe that when $t < 1$, the two curves are hardly distinguishable, meaning that the contribution from $M=3$ is much smaller than that from $M=1$. It can therefore be expected that the error curves for both methods are close to each other before $t = 1$, and the quadratic error growth of $\bar{M}=3$ will become obvious only for large $t$. Note that the curve given by $\bar{M}=3$ is expected to be closer to the actual physics. Here we only aim at the verification of Theorem \[thm: bounds\], and do not discuss the modeling error introduced by choosing a finite $\bar{M}$. Unlike the differential equation case, now it is much harder to find the solution of the deterministic scheme due to the high-dimensional integral on the right-hand side of . Therefore, instead of verifying directly, we use $$\E(\| \tG_{n,m} - G_{n,m} \|^2) = \operatorname{Var}(\tG_{n,m}) + \|\E(\tG_{n,m}-G_{n,m})\|^2,$$ and only take the first term on the right-hand side (the variance of $\tG_{n,m}$) to approximate our numerical error. Such an approximation is reasonable since the second term $\|\E(\tG_{n,m}-G_{n,m})\|^2$ has a higher order $O(h^2/N^2_s)$ by the bias estimation .
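The decomposition above is the standard bias-variance identity, which in fact holds exactly for empirical moments as well; the following sketch checks it on synthetic scalar data (the Gaussian samples and the reference value `a` are arbitrary choices for illustration):

```python
import random

rng = random.Random(1)
a = 0.3                                   # hypothetical deterministic reference value
xs = [rng.gauss(1.0, 0.5) for _ in range(100000)]

n = len(xs)
mean = sum(xs) / n
mse = sum((x - a) ** 2 for x in xs) / n   # empirical E|X - a|^2
var = sum((x - mean) ** 2 for x in xs) / n  # empirical Var(X)
# the identity E|X - a|^2 = Var(X) + |E(X) - a|^2, exact up to rounding:
print(abs(mse - (var + (mean - a) ** 2)))
```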
To compute the variance numerically, we run the same simulation $N_{\exp}$ times, and compute the unbiased estimation of the variance: $$\operatorname{Var}(\tG_{n,m}) \approx \bar{\mu}_{n,m} := \frac{N_{\exp}}{N_{\exp} - 1} \Bigg( \frac{1}{N_{\exp}}\sum_{k=1}^{N_{\exp}} \|\tG_{n,m}^{[k]} \|^2 - \Bigg\|\frac{1}{N_{\exp}} \sum_{k=1}^{N_{\exp}} \tG_{n,m}^{[k]} \Bigg\|^2 \Bigg),$$ where $\tG_{n,m}^{[k]}$ is the result of the $k$th simulation. For a given time step $h$, we let $e(jh) = \bar{\mu}_{N+j,N-j}$. Below we first check the numerical order. By choosing $N_{\exp} = 1000N N_s$, we get the results shown in Table \[tab:order\], from which one can clearly observe the order of convergence given in .

  $h,N_s$     $e(0.5)$   order    $e(1)$   order    $h,N_s$    $e(0.5)$   order    $e(1)$   order
  ----------- ---------- -------- -------- -------- ---------- ---------- -------- -------- --------
  $1/10, 2$   0.0417     –        0.1488   –        $1/4,1$    0.1939     –        0.8579   –
  $1/12, 2$   0.0350     0.9658   0.1228   1.0505   $1/4,2$    0.0972     0.9959   0.3908   1.1344
  $1/14, 2$   0.0303     0.9293   0.1051   1.0083   $1/4,4$    0.0473     1.0386   0.1824   1.0990
  $1/16, 2$   0.0263     1.0574   0.0915   1.0409   $1/4,8$    0.0237     0.9998   0.0886   1.0417
  $1/18, 2$   0.0237     0.8936   0.0811   1.0203   $1/4,16$   0.0119     0.9962   0.0436   1.0235
  $1/20, 2$   0.0214     0.9614   0.0728   1.0341   $1/4,32$   0.0059     1.0053   0.0217   1.0091

  : Numerical errors $e(0.5)$ and $e(1)$ with different time steps $h$ and sampling numbers $N_s$ and the order of accuracy[]{data-label="tab:order"}

Now we present the growth of the error for $N_s = 4$ and $8$ in Figure \[fig:error\_growth\], where the time step is set to be $h = 1/8$, and we choose $N_{\exp} = 700N N_s$ for both $\bar{M}=1$ and $\bar{M}=3$. As predicted, the two curves almost coincide for $t < 1$. For $\bar{M} = 1$, the numerical error starts to show exponential growth from $t = 4.5$, and for $\bar{M} = 3$, the quadratic exponential growth becomes obvious from $t = 2.5$. Both results are in accordance with the theoretical results in Theorem \[thm: bounds\].
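For scalar samples, the unbiased estimator $\bar{\mu}_{n,m}$ defined above reduces to the textbook sample variance with denominator $N_{\exp}-1$; a short sketch (checked against Python's `statistics.variance`):

```python
import random
import statistics

def unbiased_var(samples):
    """n/(n-1) * (mean of |x|^2 - |mean of x|^2), the same form as the
    estimator for Var(G) in the text (here for scalar samples)."""
    n = len(samples)
    mean_sq = sum(abs(x) ** 2 for x in samples) / n
    mean = sum(samples) / n
    return n / (n - 1) * (mean_sq - abs(mean) ** 2)

rng = random.Random(42)
data = [rng.gauss(0.0, 2.0) for _ in range(1000)]
print(abs(unbiased_var(data) - statistics.variance(data)))  # agrees up to rounding
```

In practice the "mean of squares minus squared mean" form can suffer from cancellation for large means; the two-pass form $\sum (x - \bar{x})^2/(n-1)$ is numerically safer.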
![Evolution of numerical error $e(t)$ (left: $N_s=4$, right: $N_s = 8$).[]{data-label="fig:error_growth"}](fig_error_growth.eps){width="\textwidth"}

We have now stated all the results in this paper. Beginning with the next section, we prove the theorems and propositions.

Proofs for the case of differential equations {#sec: proof diff eq}
=============================================

In this section, we prove the results for differential equations as stated in Section \[sec: diff results\].

Proof of Proposition \[thm: diff recurrence relations\] — Part I: Recurrence relation for the bias {#sec: diff recurrence 1}
--------------------------------------------------------------------------------------------------

In this section we focus on the proof of . By taking the difference of the schemes and and applying the triangle inequality and the bounds of the coefficients, we get $$\label{eq: evalue_1st_err} \|\E(u_{n+1} -\tu_{n+1})\|_2 \le \|\E (u_{n} -\tu_{n})\|_2 + h R \sum^s_{i=1} \|\E(k_i - \tk_i)\|_2$$ for all non-negative integers $n$. We then focus on the estimate for $\|\E(k_i - \tk_i)\|_2$. In fact, we have the following results: \[lemma: deltak and deltak\^2\] Given a sufficiently small time step length $h$, if the boundedness assumptions are satisfied, we have $$\label{eq: est_krk_1st_global} \begin{split} \|\E(k_{i} - \tk_{i})\|_2 \le \alpha' \Big( \|\E(u_n -\tu_n)\|_2 + \E\big(\|u_n -\tu_n\|_2^2\big) + \frac{h^2}{N_s} \rkn^2 \Big) \end{split}$$ and $$\label{eq: est_krk_2ed_global} \E(\|k_{i} - \tk_{i}\|_2^2) \le \beta' \Big( \E\big(\|u_n -\tu_n\|_2^2\big) + \frac{1}{N_s} \rkn^2 \Big),$$ for $\beta' = 2^{s+1} \max(d s M'^2, 1)$ and $\alpha'= 2^s \max(M', s M'', s^2M'' R^2 \beta'/2)$. Here we recall that $s$ is the number of Runge-Kutta stages, $d$ is the dimension of the solution $u$ and $R,M',M''$ are some upper bounds defined in –.
With the above lemma, one may insert the estimate into to get the recurrence relation stated in Proposition \[thm: diff recurrence relations\] for the bias $\|\E(u_{n+1}-\tu_{n+1})\|_2$. The proof of Lemma \[lemma: deltak and deltak\^2\] is given below: Applying the relation and using Taylor expansion at the deterministic point $\big(t_n + c_{i} h, u_{n} + h \sum^{i-1}_{j = 1} a_{ij} k_j\big)$, we have for the $m$th component of $\E(k_{i} - \tk_{i})$, $$\label{eq: k^1 expansion} \begin{split} |\E (k^{(m)}_{i} - \tk^{(m)}_{i})| &= \left|\E \Big( f^{(m)}\big(t_n + c_{i} h, u_{n} + h \sum^{i-1}_{j = 1} a_{ij} k_j\big) - f^{(m)}\big(t_n + c_{i} h, \tu_n + h \sum^{i-1}_{j = 1} a_{ij} \tk_j\big) \Big)\right| \\ &\le M'\|\E w_i\|_2 + \frac{M''}{2} \E \|w_i\|_2^2, \end{split}$$ where $$w_i = (u_n -\tu_n)+h\sum^{i-1}_{j = 1} a_{ij} (k_j - \tk_j),$$ and we have applied the boundedness assumption . The above inequality immediately yields $$\begin{split} \|\E (k_{i} - \tk_{i})\|_2 & \le M' \Big( \|\E(u_n -\tu_n)\|_2 + h R \sum^{i-1}_{j = 1}\|\E(k_j - \tk_j)\|_2 \Big) \\ & + \frac{sM''}{2} \Big( \E(\|u_n -\tu_n\|_2^2) + h^2 R^2 \sum^{i-1}_{j = 1}\E(\|k_j - \tk_j\|_2^2) \Big). \end{split}$$ By recursion, we obtain $$\label{eq: est_krk_1st} \|\E (k_{i} - \tk_{i})\|_2 \le (1+hRM')^s \bigg[ M' \|\E(u_n - \tu_n)\|_2 + \frac{sM''}{2} \Big( \E(\|u_n -\tu_n\|_2^2) + h^2 R^2 \sum^{i-1}_{j = 1}\E(\|k_j - \tk_j\|_2^2) \Big) \bigg].$$ We observe from the inequality above that the upper bound of $\|\E(k_{i} - \tk_{i})\|_2$ is partially determined by the second moment $\E(\|k_j - \tk_j\|_2^2)$ up to the $(i-1)$-th Runge-Kutta stage. Therefore, we subsequently consider the estimate for $\E(\|k_j - \tk_j\|_2^2)$.
By direct calculation, $$\begin{split} \E(\|k_{i} - \tk_{i}\|_2^2) & = \E\Big[ \ \Big\|f\big(t_n + c_{i} h, u_{n} + h \sum^{i-1}_{j = 1} a_{ij} k_j\big) - \frac{1}{N_s} \sum^{N_s}_{l = 1} g\big(t_n + c_{i} h, \tu_n + h \sum^{i-1}_{j = 1} a_{ij} \tk_j,X^{(i)}_l\big) \Big\|_2^2 \ \Big]\\ & \le 2\E\Big[ \ \Big\|f\big(t_n + c_{i} h, u_{n} + h \sum^{i-1}_{j = 1} a_{ij} k_j\big) - \frac{1}{N_s} \sum_{l=1}^{N_s} g\big(t_n + c_{i} h,u_n + h \sum^{i-1}_{j = 1} a_{ij} k_j, X_l^{(i)} \big)\Big\|_2^2 \ \Big] \\ &\quad +\frac{2}{N_s^2} \E\Big[ \ \Big\|\sum^{N_s}_{l = 1} \Big[ g\big(t_n + c_{i} h, u_n + h \sum^{i-1}_{j = 1} a_{ij} k_j,X^{(i)}_l\big) - g\big(t_n + c_{i} h, \tu_n + h \sum^{i-1}_{j = 1} a_{ij} \tk_j,X^{(i)}_l\big) \Big] \Big\|_2^2 \ \Big]\\ & \le \frac{2}{N_s} \rkn^2 + 2M'^2 d \E\big(\|w_i\|_2^2\big) \\ & \le 2ds{M'}^2 \Big( \E\big(\|u_n -\tu_n\|_2^2\big) + R^2 h^2 \sum_{j=1}^{i-1} \E\big(\|k_j - \tk_j\|_2^2 \big) \Big) + \frac{2}{N_s} \rkn^2. \end{split}$$ Here we have used the mean value theorem and the standard error estimates for the Monte Carlo method to obtain the upper bound. By applying the above inequality recursively backwards to the first Runge-Kutta stage, we obtain a uniform bound for $\E(\|k_{i} - \tk_{i}\|_2^2)$: $$\label{eq: est_krk_2ed} \E(\|k_{i} - \tk_{i}\|_2^2) \le (1+2M'^2 R^2 d s h^2)^{s} \Big( 2dsM'^2 \E\big(\|u_n -\tu_n\|_2^2\big) + \frac{2}{N_s} \rkn^2 \Big).$$ The estimation can be obtained by setting $\beta' = 2^{s+1} \max(d s M'^2, 1)$ for $h \le 1/(M'R\sqrt{2ds})$. Substituting into , we can obtain with $h \le 1/ \max(M'R, R\sqrt{s\beta'})$.
Proof of Proposition \[thm: diff recurrence relations\] — Part II: Recurrence relation for the numerical error {#sec: diff recurrence 2}
--------------------------------------------------------------------------------------------------------------

We again insert the two schemes and expand the numerical error $\E\big(\|u_{n+1}-\tu_{n+1}\|_2^2\big)$ into $$\label{eq: est_2ed_order_twoparts} \begin{split} \E\big(\|u_{n+1} -\tu_{n+1}\|_2^2\big) &= \E\Big[ \ \Big\|( u_n - \tu_n ) + h \sum_{i=1}^s b_i (k_i - \tk_i ) \Big\|_2^2 \ \Big] \\ &= \E\big(\|u_n - \tu_n\|_2^2\big) +h^2 \E\Big[ \ \Big\| \sum_{i=1}^s b_i (k_i - \tk_i ) \Big\|_2^2 \ \Big] \\ &\hspace{20pt}+ h\E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \tk_i ) \Big] + h\E\Big[ \Big(\sum_{i=1}^s b_i (k_i - \tk_i ) \Big)^\dagger (u_n - \tu_n)\Big]. \end{split}$$ The second term on the right-hand side can be immediately estimated using the previous result given in Lemma \[lemma: deltak and deltak\^2\]: $$\label{eq: est_2ed_order_term_1} h^2 \E\Big[ \ \Big\| \sum_{i=1}^s b_i (k_i - \tk_i ) \Big\|_2^2 \ \Big] \le \ R^2 s h^2 \sum_{i=1}^s\E\big(\| k_i - \tk_i \|_2^2\big) \le \ R^2 s^2 \beta' \Big( h^2 \E\big(\|u_n -\tu_n\|_2^2\big) + \frac{h^2}{N_s} \rkn^2 \Big).$$ Using this estimate, we can naively bound the last two cross terms in by the Cauchy-Schwarz inequality. However, such a strategy will lead to an error estimate of the form $$h\E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \tk_i ) \Big] + h\E\Big[ \Big(\sum_{i=1}^s b_i (k_i - \tk_i ) \Big)^\dagger (u_n - \tu_n)\Big] \le C \left(h \E(\|u_n - \tu_n\|_2^2) + \frac{h}{N_s} \rkn^2\right),$$ where the last term is sub-optimal and will lead to a deterioration in the final error estimate. Therefore we need a more careful estimate as in the following lemma: \[lemma: 2ed error cross term\] Assume the time step length $h$ is sufficiently small.
If the boundedness assumptions are satisfied, we have $$\label{eq: est_2ed_order_term_2} h\E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \tk_i ) \Big] + h\E\Big[ \Big(\sum_{i=1}^s b_i (k_i - \tk_i ) \Big)^\dagger (u_n - \tu_n)\Big] \le C_{\text{cr}} \Bigl( \frac{\alpha'^2 h^5}{N_s^2} \rkn^4 + h \E(\|u_n - \tu_n\|_2^2) \Bigr),$$ where $C_{\text{cr}} = \max(R^2 s^2, 2+2^{s+1} {M'}^2 d R^2 s^3)$. Here we recall that $s$ is the number of Runge-Kutta stages, $d$ is the dimension of the solution $u$, $R,M'$ are some upper bounds defined in the assumptions – and $\alpha'$ is given in Lemma \[lemma: deltak and deltak\^2\]. With Lemma \[lemma: 2ed error cross term\], we now plug the estimates and into to obtain the recurrence relation for the numerical error by $$\E(\|u_{n+1}-\tu_{n+1}\|_2^2) \le (1+C_{\text{cr}} h + R^2 s^2 \beta' h^2) \E(\|u_{n}-\tu_{n}\|_2^2) + \Big(C_{\text{cr}}\frac{\alpha'^2 h^5}{N_s^2} \rkn^4 + R^2 s^2 \beta'\frac{h^2}{N_s} \rkn^2 \Big),$$ from which one can see that if $h$ and $N_s$ satisfy $h\le \frac{C_{\text{cr}}}{R^2 s^2 \beta'}$, then holds for $\beta = \max(2C_{\text{cr}}, R^2 s^2 \beta')$ with $\beta'$ given in Lemma \[lemma: deltak and deltak\^2\]. The rest of this section is devoted to the proof of Lemma \[lemma: 2ed error cross term\]. We introduce a “semi-stochastic” approximation $\bu_{n+1}$ defined by $$\label{eq: semi-stochastic} \begin{aligned} & \bk_i = f(t_n + c_i h, \tu_n + h \sum^{i-1}_{j = 1} a_{ij} \bk_j), \quad i = 1,\cdots,s; \\ & \bu_{n+1} = \tu_n + h \sum^s_{i=1} b_i \bk_i. \end{aligned}$$ This approximation applies the deterministic Runge-Kutta scheme to the stochastic solution $\tu_n$ for one time step. The following lemma controls the difference between this local approximation and the stochastic scheme .
\[thm: diff\_tkvsbk\] Let $X_i :=\big(X^{(1)},X^{(2)},\cdots,X^{(i)}\big)$ be the collection of samples up to the $i$th Runge-Kutta stage, where each $X^{(j)} = (X^{(j)}_1,X^{(j)}_2,\cdots,X^{(j)}_{N_s})$. Then we have $$\begin{split} &\|\E_{X_i}(\bk_{i} - \tk_{i})\|_2 \le \ \alpha' \frac{h^2}{N_s} \rkn^2, \end{split}$$ where $\alpha'$ is given in . The proof of this lemma is omitted since it is almost identical to the proof of Lemma \[lemma: deltak and deltak\^2\]. The first and second terms on the right-hand side of do not appear in the above result, since $\bk_{i}$ and $\tk_{i}$ are computed based on the same solution at the $n$th step. Below we provide the proof of Lemma \[lemma: 2ed error cross term\]: It suffices to focus on one factor $h\E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \tk_i ) \Big]$ since the other one is simply its complex conjugate, which can be controlled by exactly the same upper bound. We use $\bk_i$ as a bridge and split $$\label{eq: est_2ed_order_crossterm} \begin{split} \Big| h\E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \tk_i ) \Big] \Big| & \le h\Big| \E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \bk_i ) \Big] \Big| + h \Big| \E\Big[(u_n - \tu_n)^\dagger \E_{X_s}\Big( \sum_{i=1}^s b_i (\bk_i - \tk_i ) \Big) \Big] \Big| \\ & \le h \E(\|u_n - \tu_n\|_2^2) + \frac{h}{2} \E\Big[ \ \Big\|\sum_{i=1}^s b_i (k_i - \bk_i )\Big\|_2^2 + \Big\|\E_{X_s}\Big( \sum_{i=1}^s b_i (\bk_i - \tk_i ) \Big) \Big\|_2^2 \ \Big] \\ & \le h \E(\|u_n - \tu_n\|_2^2) + \frac{R^2 s h}{2} \sum_{i=1}^s \Big( \E\|k_i - \bk_i\|_2^2 + \E \|\E_{X_s} (\bk_i - \tk_i)\|_2^2 \Big). \end{split}$$ From the first line to the second line above, we have taken advantage of the fact that $\tu_n$ is independent of $X_s$, which is sampled at the $(n+1)^{\text{th}}$ time step when calculating $\tu_{n+1}$. The difference between $k_i$ and $\bk_i$ can be estimated in the same way as the derivation of .
The result is $$\E(\|k_i - \bk_i\|_2^2) \le 2{M'}^2 s d ( 1 + 2{M'}^2 d R^2 s h^2)^s \E(\|u_n - \tu_n\|_2^2) .$$ Inserting the above and the result of Lemma \[thm: diff\_tkvsbk\] into , we get $$\Big| h\E\Big[(u_n - \tu_n)^\dagger \sum_{i=1}^s b_i (k_i - \tk_i ) \Big] \Big| \le h (1 + R^2 s^3 M'^2 d) ( 1 + 2{M'}^2 d R^2 s h^2)^s \E(\|u_n - \tu_n\|_2^2) + \frac{R^2 s^2 \alpha'^2 h^5}{2 N_s^2} \rkn^4,$$ from which one can easily observe that the lemma holds if $2M'^2 d R^2 s h^2 < 1$.

Proof of Theorem \[thm: diff bounds\] — error bounds
----------------------------------------------------

In this section, we apply the two recurrence relations stated in Proposition \[thm: diff recurrence relations\] to get the estimates for the bias $\|\E(u_{n+1} - \tu_{n+1})\|_2$ as well as the numerical error $\E(\|u_{n+1} - \tu_{n+1}\|_2^2)$. Applying recursively backwards with respect to $n$ and noting that $u_0 = \tu_0$, we have $$\label{eq: est_2ed_order_tk} \begin{split} \E(\|u_{n+1}-\tu_{n+1}\|_2^2) & \le (1+\beta h )^{n+1} \E(\|u_{0}-\tu_{0}\|_2^2) + \beta \left( \frac{h^2}{N_s} \rkn^2 + \frac{\alpha^2 h^5}{s^2 R^2 N_s^2} \rkn^4 \right) \sum_{i=0}^{n} (1+\beta h)^i \\ & \le \Big( \frac{h}{N_s} \rkn^2 + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^4 \Big) \big( e^{\beta t_{n+1}}-1\big) \end{split}$$ which leads to the global estimate for the bias.
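To fill in the last step above: the geometric sum is evaluated in closed form and then controlled using $1 + x \le \mathrm{e}^x$,

```latex
\sum_{i=0}^{n} (1+\beta h)^i
  = \frac{(1+\beta h)^{n+1} - 1}{\beta h}
  \le \frac{\mathrm{e}^{\beta h (n+1)} - 1}{\beta h}
  = \frac{\mathrm{e}^{\beta t_{n+1}} - 1}{\beta h},
```

so the prefactor $\beta$ cancels against the denominator $\beta h$, which removes one power of $h$ from each term and produces the factor $\big(e^{\beta t_{n+1}}-1\big)$.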
Inserting into the recurrence relation and expanding the recursion in a similar way, we get $$\label{upper bound diff} \begin{split} &\|\E(u_{n+1}-\tu_{n+1})\|_2 \\ \le & \ \alpha \frac{h^3}{N_s} \rkn^2 \sum^n_{i=0}(1+\alpha h)^i + \alpha h \Big( \frac{h}{N_s} \rkn^2 + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^4 \Big) \sum^{n-1}_{i=0} (1+\alpha h)^i \big(e^{\beta t_{n-i}} -1\big) \\ \le & \ \frac{h^2}{N_s}\big( e^{\alpha t_{n+1}}-1\big) \rkn^2 + \alpha h \Big( \frac{h}{N_s} \rkn^2 + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^4 \Big)\sum^{n-1}_{i=0} e^{\alpha t_i}\big(e^{\beta t_{n-i}} -1\big) \\ \le & \ \frac{h^2}{N_s}\big( e^{\alpha t_{n+1}}-1\big) \rkn^2 + \alpha h \Big( \frac{h}{N_s} \rkn^2 + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^4 \Big) \sum^{n-1}_{i=0} \big(e^{\max(\alpha,\beta) t_{n}} -1\big) \\ = & \ \frac{h^2}{N_s}\big( e^{\alpha t_{n+1}}-1\big) \rkn^2 + \alpha t_n \Big( \frac{h}{N_s} \rkn^2 + \frac{\alpha^2 h^4}{s^2 R^2 N_s^2} \rkn^4 \Big) \big(e^{\max(\alpha,\beta) t_{n}} -1\big), \end{split}$$ which completes the proof of .

Proofs of estimates for inchworm Monte Carlo method {#sec: proof}
===================================================

In this section, the proofs of the theorems in Section \[sec: inchworm results\] are detailed. We will again first focus on the difference between the deterministic method and the stochastic method, and the error of the deterministic method will be discussed at the end of this section. Thanks to the previous discussion on the differential equation case, we follow the same framework, which guides the general flow of our derivation.
Below we point out the major differences as well as difficulties for the case of this integro-differential equation before the detailed proof:

- Since $K_2$ depends on more previously-computed time steps than $K_1$ due to the nonlocal integral term in (this can be easily observed by comparing $\gb_{n,m}$ with $\gb^*_{n,m}$), a uniform expression for $K_i$ like is no longer available for the integro-differential equation. Therefore, we need an individual analysis for each $K_i$.

- Recall that the Taylor expansion is applied in the proof of Lemma \[lemma: deltak and deltak\^2\] (e.g. in ), which requires estimating the first- and second-order derivatives of the source term $f(t,u)$. These derivatives can no longer be simply assumed bounded as in and have to be carefully studied; they play crucial roles in understanding the behavior of the inchworm Monte Carlo method.

- The derivation of the error amplification can no longer be handled by the simple discrete Gronwall inequality, due to the involvement of a large number of previous steps on the right-hand side of the numerical scheme. The error estimation must be handled with more care; e.g., the estimation we used to handle the cross term in (Lemma \[lemma: 2ed error cross term\]) would lead to a pessimistically fast (sub-optimal) growth rate in the error estimate of integro-differential equations.

- Most importantly, the magnitude of the derivatives depends on $\bar{M}$, as it is determined by the dimensionality of the integral in the equation. This will result in different error amplification for different choices of $\bar{M}$. This is the key point explaining whether and how the inchworm Monte Carlo method mitigates the numerical sign problem.

Estimation of the derivatives of $F_1(\cdot)$ {#sec: derivatives}
---------------------------------------------

We first estimate the first- and second-order derivatives of $F_1(\cdot)$.
As discussed in Section \[sec: diff results\], we will provide a detailed proof for the bounds of first-order derivatives. The proof for the second-order derivatives will only be sketched. ### Proof of Proposition \[thm: first order derivative\]—Estimate the first-order derivatives We are looking for an upper bound for the derivative $\frac{ \partial F_1(\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}}$ with $\xib_{n,m}$ being a convex combination of $\ge_{n,m}$, $\gb_{n,m}$ and $\tg_{n,m}$ defined by $$\xib_{n,m}: = (\Xi_{m+1,m};\Xi_{m+2,m+1},\Xi_{m+2,m};\cdots;\Xi_{n,n-1},\cdots,\Xi_{n,m})$$ satisfying $$\Xi_{j,k} = c_1 \Ge(t_j,t_k) + c_2 G_{j,k} + (1-c_1 - c_2) \tG_{j,k}$$ for some constants $0 \le c_1,c_2 \le 1$ and $c_1+c_2 \le 1$ given $m\le k<j \le n$. According to the assumption (H1) on the boundedness of $\Ge$, $G$ and $\tG$, we immediately have $$\|\Xi_{j,k}\| \le \bdG \text{~for any~} j,k = 0,1,\cdots,N-1,N^-,N^+,N+1,\cdots 2N-1,2N.$$ In , we write $F_1(\xib_{n,m})$ as $$F_1(\xib_{n,m}) = \sgn(t_n - t) \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \ii^{M+1} \Fs_M(\xib_{n,m})$$ where $$\label{eq:deriv_FM} \Fs_M(\xib_{n,m}):= \int_{t_n > s_M^M > \cdots > s_1^M > t_m } (-1)^{\#\{\vec{\sb}^M \le t\}}W_s I_h \Xi(t_n,s_M^M) W_s \cdots W_s I_h \Xi(s_1^M,t_m) \Ls(t_n,\vec{\sb}^M) \ \dd \vec{\sb}^M.$$ Therefore, we focus on the estimation of $\frac{ \partial \Fs_M(\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}}$ for each odd integer $M$. The two cases given in equation will be discussed separately below. #### (I) $(k,\ell) \in \partial \Omega_{n,m}$ This case includes $\frac{ \partial \Fs_M(\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}}$ and $\frac{ \partial \Fs_M(\xib_{n,m})}{\partial G_{k,m}^{(pq)}}$. Here we only consider the derivative $\frac{ \partial \Fs_M(\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}}$ since the analysis for the other is similar. 
For each $\Fs_M(\xib_{n,m})$, we split the derivative by $$\label{eq: FM splitting} \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}} = \sum_{j = 0}^M \frac{\partial \Is_j (\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}},$$ where $$\label{eq:deriv_Ij} \begin{split} \Is_j(\xib_{n,m}) = &\int_{t_n > s^M_M > \cdots > s^M_{j+1} > t_{n-1} > s^M_j > \cdots > s^M_1 > t_m} \\ &\hspace{30pt} (-1)^{\#\{\vec{\sb}^M \le t\}}W_s I_h \Xi(t_n,s^M_M) W_s I_h \Xi(s^M_M,s^M_{M-1}) \cdots W_s I_h \Xi(s^M_1,t_m) \Ls(t_n,\vec{\sb}^M) \ \dd \vec{\sb}^M, \end{split}$$ in which we require that $t_{n-1}$ lies between $s_j^M$ and $s_{j+1}^M$. We write the integrand of each $\Is_j(\xib_{n,m})$ as $\Gs^j_1 \times I_h \Xi(s^M_{j+1},s^M_j) \times \Gs^j_2$ where $$\label{def:G1 and G2} \begin{split} \Gs^j_1 =\ & (-1)^{\#\{\vec{\sb}^M \le t\}} W_s I_h \Xi(t_n,s^M_M) W_s \cdots W_s I_h \Xi(s^M_{j+2},s^M_{j+1})W_s, \\ \Gs^j_2 =\ & W_s I_h \Xi(s^M_j,s^M_{j-1}) W_s \cdots W_s I_h \Xi(s^M_1,t_m)\Ls(t_n,\vec{\sb}^M) \end{split}$$ for $1 \le j \le M-1$ (the formula for $j=0$ and $j=M$ is slightly different but easy to obtain). One can easily verify that $\Gs^j_2$ is completely independent of the factor $\Xi_{n,\ell}$ since the time sequence $\{s_1^M, \cdots, s_j^M\}$ is at least one time step away from $t_n$ and thus the interpolation of any $I_h \Xi(s^M_{i+1},s^M_i)$ in $\Gs^j_2$ never uses the value of $\Xi_{n,\ell}$. On the other hand, the value of $\Gs^j_1$ as well as the “interface” $I_h \Xi(s^M_{j+1},s^M_j)$ may or may not rely on $\Xi_{n,\ell}$, depending on how $\ell$ is given, which leads to the two cases we discuss below.

##### Case 1: $\ell=n-1$.

This is the most complicated case in this proof. Note that $\Gs^j_1$ depends on $\Xi_{n,n-1}$ due to the fact that we get each $I_h \Xi(s^M_{i+1},s^M_i)$ in $\Gs^j_1$ by the interpolation $I_h \Xi(s^M_{i+1},s^M_i) = c^M_{i,1} \Xi_{n,n} + c^M_{i,2}\Xi_{n-1,n-1} + c^M_{i,3} \Xi_{n,n-1}$ with some coefficients $|c^M_i| < 1$.
The “interface" $I_h \Xi(s^M_{j+1},s^M_j)$ depends on $\Xi_{n,n-1}$ only when $s^M_j$ is restricted between $(t_{n-2}, t_{n-1})$ where $I_h \Xi(s^M_{j+1},s^M_j) = c^M_{j,1} \Xi_{n-1,n-1} + c^M_{j,2}\Xi_{n-1,n-2} + c^M_{j,3} \Xi_{n,n-1}+ c^M_{j,4} \Xi_{n,n-2}$. One may refer to Figure \[fig:deriv\_case1\_1\] for a better understanding. For these reasons, we further divide the derivative $\frac{\partial\Is_j (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}}$ into two parts: $$\label{eq: deriv_Ij} \begin{split} &\frac{\partial\Is_j (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} =\\ ={} & \int_{t_n > s^M_{M} > \cdots > s^M_{j+1} > t_{n-1}} \int_{t_{n-1} > s^M_j > t_{n-2}} \int_{s^M_j > s^M_{j-1} >\cdots >s^M_1 > t_m} \left[ \frac{\partial}{\partial \Xi_{n,n-1}^{(pq)}}\big( \Gs^j_1 I_h \Xi(s^M_{j+1},s^M_j) \big) \right] \Gs^j_2 \ \dd\vec{\sb}^M \\ & \hspace{45pt}+ \int_{t_n > s^M_M > \cdots > s^M_{j+1} > t_{n-1}} \int_{t_{n-2} > s^M_j > t_{m}} \int_{s^M_j > \cdots > t_m} \left( \frac{\partial}{\partial \Xi_{n,n-1}^{(pq)}} \Gs^j_1 \right) I_h \Xi(s^M_{j+1},s^M_j) \Gs^j_2 \ \dd\vec{\sb}^M. \end{split}$$ For the first integral above, we compute the derivative in the square bracket by $$\begin{aligned} &\frac{\partial}{\partial \Xi_{n,n-1}^{(pq)}} \left( \Gs^j_1 I_h \Xi(s^M_{j+1},s^M_j) \right) = \notag \\ =& \ \sum_{i=j}^{M} (-1)^{\#\{\vec{\sb}^M \le t\}} W_s I_h \Xi(t_n,s^M_M) W_s \cdots W_s \left[ \frac{\partial}{\partial \Xi_{n,n-1}^{(pq)}} I_h \Xi(s^M_{i+1},s^M_i) \right] W_s \cdots W_s I_h \Xi(s^M_{j+1},s^M_j) \notag \\ =& \ \sum_{i=j}^{M} (-1)^{\#\{\vec{\sb}^M \le t\}} W_s I_h \Xi(t_n,s^M_M) W_s \cdots W_s c^M_{i,3}E_{pq} W_s \cdots W_s I_h \Xi(s^M_{j+1},s^M_j) . \end{aligned}$$ Here $E_{pq}$ is defined as a 2-by-2 matrix with its $pq-$entry being the only non-zero entry equal to 1. 
By hypotheses (H1) and (H3), this integral is therefore bounded by $$\label{eq: est_case_1_1_part_1} \begin{split} &\left\| \int_{t_n > s^M_M > \cdots > s^M_{j+1} > t_{n-1}} \int_{t_{n-1} > s^M_j > t_{n-2}} \int_{s^M_j > s^M_{j-1} > \cdots > s^M_1 > t_m} \left[ \frac{\partial}{\partial \Xi_{n,n-1}^{(pq)}} \Gs^j_1 I_h \Xi(s^M_{j+1},s^M_j) \right] \Gs^j_2 \ \dd\vec{\sb}^M \right\| \\ \le & \ (M-j+1) \bdW^{M+1}\bdG^M (M!! \bdL^{\frac{M+1}{2}}) \times \int_{t_n > s^M_M > \cdots > s^M_{j+1} > t_{n-1}} \int_{t_{n-1} > s^M_j > t_{n-2}} \int_{s^M_j > s^M_{j-1} > \cdots > s^M_1 > t_m} 1 \ \dd\vec{\sb}^M \\ \le & \ \bdW^{M+1}\bdG^M (M!! \bdL^{\frac{M+1}{2}}) \times (M-j+1) \times \frac{1}{(M-j)!(j-1)!}(t_{n-1} - t_m)^{j-1} h^{M-j+1}. \end{split}$$ We notice that the upper bound above consists of three components: (1) $ \bdW^{M+1}\bdG^M (M!! \bdL^{\frac{M+1}{2}})$: the upper bound of the integrand; (2) $M-j+1$: the number of terms with the form $I_h \Xi$ whose values depend on $\Xi_{n,n-1}$; (3) $\frac{1}{(M-j)!(j-1)!}(t_{n-1} - t_m)^{j-1} h^{M-j+1}$: the volume of the domain of integration. Similarly, we may directly write down the upper bound for the second integral in : $$\begin{gathered} \label{eq: est_case_1_1_part_2} \left\| \int_{t_n > s^M_M > \cdots > s^M_{j+1} > t_{n-1}} \int_{t_{n-2} > s^M_j > t_{m}} \int_{s^M_j > s^M_{j-1} > \cdots > s^M_1 > t_m} \left[ \frac{\partial}{\partial \Xi_{n,n-1}^{(pq)}} \Gs^j_1 \right] I_h \Xi(s^M_{j+1},s^M_j) \Gs^j_2 \ \dd\vec{\sb}^M \right\| \\ \le \ \bdW^{M+1}\bdG^M (M!! \bdL^{\frac{M+1}{2}}) \times (M-j) \times \frac{1}{(M-j)!j!} (t_{n-2} -t_m)^j h^{M-j}. \end{gathered}$$ Combining the estimates and yields $$\label{eq: deriv Ij bd} \left\| \frac{\partial\Is_j (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} \right\|\le \ 2\bdW^{M+1} \bdG^M (M!! \bdL^{\frac{M+1}{2}}) \frac{M-j+1}{(M-j)!(j-1)!} (t_{n-1}-t_m)^j h^{M-j}.$$ As we have mentioned previously, the upper bound we obtained above is for $1 \le j \le M-1$.
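The domain-volume factors in the bounds above are products of ordered-simplex volumes: $k$ ordered points confined to an interval of length $T$ fill a region of volume $T^k/k!$. As an illustrative aside (not part of the proof), this fact can be sanity-checked numerically by rejection sampling:

```python
import math
import random

def ordered_simplex_volume_mc(k, a, b, n_samples=200_000, seed=0):
    """Monte Carlo estimate of Vol{ (s_1,...,s_k) : b > s_k > ... > s_1 > a }.

    Draw k i.i.d. uniform points in (a, b) and count how often they land in
    increasing order; the hit fraction times (b - a)^k estimates the volume
    of the ordered region.  By symmetry, exactly 1/k! of all orderings are
    increasing, so the estimate concentrates at (b - a)^k / k!.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(a, b) for _ in range(k)]
        if all(s[i] < s[i + 1] for i in range(k - 1)):
            hits += 1
    return (b - a) ** k * hits / n_samples

for k in (2, 3, 4):
    exact = 1.0 / math.factorial(k)   # (b - a)^k / k! with b - a = 1
    estimate = ordered_simplex_volume_mc(k, 0.0, 1.0)
    assert abs(estimate - exact) < 5e-3
```

In the first bound, for instance, the $M-j$ points above $t_{n-1}$ contribute $h^{M-j}/(M-j)!$, the single point $s^M_j$ contributes $h$, and the $j-1$ points below contribute at most $(t_{n-1}-t_m)^{j-1}/(j-1)!$.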
For $j=0$ and $j=M$, we may return to and consider these two individual cases; a similar argument easily gives the following results: $$\begin{aligned} &\left\| \frac{\partial\Is_0 (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} \right\| \le \bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}} \frac{M+1}{(M-1)!!} h^{M},\\ &\left\| \frac{\partial\Is_M (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} \right\| \le \bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}} \frac{M}{(M-1)!!} (t_{n-1}-t_m)^{M-1} h.\end{aligned}$$ Now we sum up all the upper bounds for $\frac{\partial \Is_j (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}}$ and use the binomial theorem $(1+X)^n = \sum_{k \geq 0} \binom{n}{k} X^k$ to get: $$\label{eq: 1st-order_case_1_1} \begin{split} \left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} \right\| \le \sum_{j = 0}^{M} \left\|\frac{\partial \Is_j (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} \right\| \le 5\bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}} \frac{M}{(M-3)!!} (t_n - t_m)^{M-1} h. \end{split}$$ Here we remark that the argument above is only valid when the odd number $M$ is chosen to be greater than $1$ due to the number $(M-3)!!$ in the estimate , which is not defined when $M=1$. For $M=1$, we simply return to the definition and follow similar procedures to reach the result $$\label{eq: 1st-order_case_1_2} \left\| \frac{\partial \Fs_1 (\xib_{n,m})}{\partial G_{n,n-1}^{(pq)}} \right\|\le \ 2\bdW^2 \bdG \bdL h.$$

##### Case 2: $\ell < n-1$.

For this case, one may check that $\Gs^j_1$ and $\Gs^j_2$ defined in are both independent of $\Xi_{n,\ell}$, while the “interface” $I_h \Xi(s^M_{j+1},s^M_j)$ depends on $\Xi_{n,\ell}$ only when $t_{\ell-1}<s^M_j<t_{\ell+1}$.
Therefore, we simply calculate the derivative for each $\Is_j$ by $$\frac{\partial \Is_j (\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}} = \int_{t_n > s^M_M > \cdots > s^M_{j+1} > t_{n-1}} \int_{t_{\ell+1} > s^M_j > t_{\ell-1}} \int_{s^M_j > s^M_{j-1} > \cdots > s^M_1 > t_m} \Gs^j_1 \left(\frac{\partial }{\partial \Xi_{n,\ell}^{(pq)}} I_h \Xi(s^M_{j+1},s^M_j) \right)\Gs^j_2 \ \dd\vec{\sb}^M.$$ Note that we need to choose $1\le j \le M-1$; when $j=0$ or $j=M$, the corresponding derivative above vanishes. We follow the analysis for *Case 1* and can also obtain upper bounds for all $\left\|\frac{\partial \Is_j (\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}}\right\|$; summing them up leads to the estimate of $\left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}} \right\|$. By a calculation similar to Case 1, $\left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{n,\ell}^{(pq)}} \right\|$ can also be bounded by the same upper bounds given in for $M \geq 3$ and for $M=1$. Overall, we arrive at the conclusion that for any $(k,\ell) \in \partial \Omega_{n,m}$, $$\begin{split} \left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le \begin{cases} 2\bdW^2 \bdG \bdL h, &\text{if~} M=1 , \\ 5\bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}} \frac{M}{(M-3)!!} (t_n - t_m)^{M-1} h, &\text{if~} M \geq 3. \end{cases} \end{split}$$ To complete our estimation for $\frac{ \partial F_1(\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}}$, we now sum up the bounds for $\left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\|$ up to $\bar{M}$ and get a uniform bound $$\label{eq: est_case_1} \left\| \frac{\partial F_1 (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le \left( 2\bdW^2 \bdG \bdL + 5 \bdW^2 \bdG \bdL \sum^{\bar{M}}_{\substack{M=3 \\ M \text{~is odd}}} \frac{M}{(M-3)!!} \left(\bdW \bdG \bdL^{1/2} t_{n-m}\right)^{M-1} \right) \cdot h,$$ which proves the first case in Proposition \[thm: first order derivative\].
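The summation steps in both cases above repeatedly reduce to the binomial theorem $(1+X)^n = \sum_{k \geq 0} \binom{n}{k} X^k$ after reindexing the factorials $\frac{1}{(M-j)!(j-1)!}$. As a minimal numeric sanity check of that identity in the rescaled form used here (illustrative only):

```python
import math

def binomial_sum(T, h, n):
    # sum_j C(n, j) * T^j * h^(n - j), which the binomial theorem
    # says equals (T + h)^n
    return sum(math.comb(n, j) * T**j * h**(n - j) for j in range(n + 1))

for n in range(1, 9):
    for T, h in ((0.9, 0.1), (2.0, 0.05)):
        assert math.isclose(binomial_sum(T, h, n), (T + h) ** n)
```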
#### (II) $(k,\ell) \in \mathring{\Omega}_{n,m}$

To compute $\frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}}$, we need to first find all $I_h \Xi(s^M_{j+1},s^M_j)$ in $\Fs_M (\xib_{n,m})$ that depend on $\Xi_{k,\ell}^{(pq)}$. Since $(k,\ell) \in \mathring{\Omega}_{n,m}$, only those $I_h \Xi(s^M_{j+1},s^M_j)$ such that $t_{\ell-1}<s^M_{j}<t_{\ell+1}$ and $t_{k-1} < s^M_{j+1} < t_{k+1}$ may depend on $\Xi_{k,\ell}$. We first consider a special case $M=1$, where the derivative is simply given by $$\frac{\partial \Fs_1 (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} = \frac{\partial}{\partial \Xi_{k,\ell}^{(pq)}} \int^{t_n}_{t_m}(-1)^{\#\{s_1^1 \le t\}} W_s I_h \Xi(t_n,s_1^1)W_s I_h \Xi(s_1^1,t_m)\Ls(t_n,s_1^1) \ \dd s_1^1.$$ It is easy to see that neither $I_h \Xi(t_n,s_1^1)$ nor $I_h \Xi(s_1^1,t_m)$ depends on $\Xi_{k,\ell}$, since both are interpolated by $\Xi$ values only on $\partial \Omega_{n,m}$. As a result, $\frac{\partial \Fs_1 (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} = 0$. When $M\geq 3$, we need to consider the following two possibilities:

##### Case 1: $k - \ell \geq 2$.
Similar to , we apply the following splitting of the integral in the definition of $\Fs_M$: $$\frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} = \sum_{j = 1}^{M-1} \frac{\partial \Js_j (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}},$$ where $$\begin{aligned} \Js_j(\xib_{n,m}) = & \int_{t_n>\cdots>s^M_{j+2}>s^M_{j+1}} \int_{t_{k+1}>s^M_{j+1}>t_{k-1}}\int_{t_{\ell+1}>s^M_j>t_{\ell-1}}\int_{s^M_j>s^M_{j-1}>\cdots>t_m} \\ & \hspace{20pt} (-1)^{\#\{\vec{\sb}^M \le t\}}W_s I_h \Xi(t_n,s^M_M) W_s I_h \Xi(s^M_M,s^M_{M-1}) \cdots W_s I_h \Xi(s^M_1,t_m) \Ls(t_n,\vec{\sb}^M) \ \dd \vec{\sb}^M.\end{aligned}$$ Here a critical observation is that once we assume $t_{\ell-1}<s^M_{j}<t_{\ell+1},\ t_{k-1} < s^M_{j+1} < t_{k+1}$ for any fixed $j$, $I_h \Xi(s^M_{j+1},s^M_j)$ is then the unique term in the integrand of $\Fs_M (\xib_{n,m})$ that depends on $\Xi_{k,\ell}$, since $(t_{\ell-1},t_{\ell+1}) \cap (t_{k-1},t_{k+1}) = \emptyset$ when $k - \ell \geq 2$. This observation is illustrated in Figure \[fig:deriv\_case2\_1\]. We again write the integrand above as $\Gs^j_1 \times I_h \Xi(s^M_{j+1},s^M_j) \times \Gs^j_2$, defined exactly as in ; then $\Gs^j_1$ and $\Gs^j_2$ are both independent of $\Xi_{k,\ell}$.
Therefore, $$\begin{split} \left\| \frac{\partial\Js_j (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le & \ \int_{t_n>\cdots>s^M_{j+1}} \int_{t_{k+1}>s^M_{j+1}>t_{k-1}}\int_{t_{\ell+1}>s^M_j>t_{\ell-1}}\int_{s^M_j>\cdots>t_m} \left\| \Gs^j_1 \left( \frac{ \partial}{\partial \Xi_{k,\ell}^{(pq)}} I_h \Xi(s^M_{j+1},s^M_j) \right) \Gs^j_2 \right\| \ \dd \vec{\sb}^M \\ \le & \ 4\bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}} \frac{M!!}{(M-j-1)!(j-1)!} (t_{n}-t_m)^{M-2}h^2, \end{split}$$ which leads to $$\label{eq: est_case_2_1} \left\|\frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}}\right\| = \sum_{j = 1}^{M-1} \left\| \frac{\partial \Js_j (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le 4\bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}} \frac{M}{(M-3)!!}(2t_{n-m})^{M-2} h^2.$$

##### Case 2: $k - \ell = 1$.

There is an overlapping region $(t_{\ell-1},t_{\ell+1}) \cap (t_{k-1},t_{k+1}) = (t_{\ell},t_{\ell+1})$ in this case. Consequently, there can be multiple terms in the integrand depending on $\Xi_{k,\ell}$.
To estimate the derivative $\frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}}$, we further divide it into three parts based on the distribution of the time sequence $\vec{\sb}^M$ in the integrand: $$\frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} = \sum_{v=2}^M \sum_{u=0}^{M-v} \frac{\partial \Ks^{u,v}_1}{\partial \Xi_{k,\ell}^{(pq)}} + \left(\sum_{v=1}^{M-1} \sum_{u=1}^{M-v} \frac{\partial \Ks^{u,v}_{2,L}}{\partial \Xi_{k,\ell}^{(pq)}} + \sum_{v=1}^{M-1} \sum_{u=0}^{M-v-1} \frac{\partial \Ks^{u,v}_{2,R}}{\partial \Xi_{k,\ell}^{(pq)}} \right) + \sum_{v=0}^{M-2} \sum_{u=1}^{ M-v-1 } \frac{\partial \Ks^{u,v}_3}{\partial \Xi_{k,\ell}^{(pq)}}$$ where $$\begin{split} \Ks^{u,v}_1 = & \ \int_{t_n > s^M_M > \cdots >s^M_{u+v+1}>t_{k+1}}\int_{t_{\ell+1}>s^M_{u+v}>\cdots>s^M_{u+1}>t_\ell} \int_{t_{\ell-1}>s^M_u>\cdots>s^M_1>t_m} \\ & \hspace{80pt} \Gs_1^{u,v} \times I_h \Xi(s^M_{u+v},s^M_{u+v-1})W_s\cdots W_s I_h \Xi(s^M_{u+2},s^M_{u+1}) \times \Gs_2^{u,v} \ \dd \vec{\sb}^M; \\ \Ks^{u,v}_{2,L} = & \ \int_{t_n > s^M_M > \cdots >s^M_{u+v+1}>t_{k+1}}\int_{t_{\ell+1}>s^M_{u+v}>\cdots>s^M_{u+1}>t_\ell}\int_{t_\ell > s^M_u > t_{\ell-1}} \int_{s^M_u>s^M_{u-1}>\cdots>s^M_1>t_m} \\ & \hspace{80pt} \Gs_1^{u,v} \times I_h \Xi(s^M_{u+v},s^M_{u+v-1})W_s\cdots W_s I_h \Xi(s^M_{u+1},s^M_{u}) \times \Gs_2^{u,v} \ \dd \vec{\sb}^M, \\ \Ks^{u,v}_{2,R} = & \ \int_{t_n > s^M_M > \cdots >s^M_{u+v+2}>s^M_{u+v+1}}\int_{t_{k+1}>s^M_{u+v+1}>t_k}\int_{t_{\ell+1}>s^M_{u+v}>\cdots>s^M_{u+1}>t_\ell}\int_{t_{\ell-1}>s^M_u>\cdots>s^M_1>t_m} \\ & \hspace{80pt} \Gs_1^{u,v} \times I_h \Xi(s^M_{u+v+1},s^M_{u+v})W_s\cdots W_s I_h \Xi(s^M_{u+2},s^M_{u+1}) \times \Gs_2^{u,v} \ \dd \vec{\sb}^M; \\ \Ks^{u,v}_{3} = & \ \int_{t_n > s^M_M > \cdots >s^M_{u+v+1}}\int_{t_{k+1}>s^M_{u+v+1}>t_k}\int_{t_{\ell+1}>s^M_{u+v}>\cdots>s^M_{u+1}>t_\ell}\int_{t_\ell > s^M_u > t_{\ell-1}}\int_{s^M_u>\cdots>s^M_1>t_m} \\ & \hspace{80pt} \Gs_1^{u,v} \times I_h \Xi(s^M_{u+v+1},s^M_{u+v})W_s\cdots W_s I_h
\Xi(s^M_{u+1},s^M_{u}) \times \Gs_2^{u,v} \ \dd \vec{\sb}^M. \end{split}$$ With a slight abuse of notation, here $\Gs_1^{u,v}$ and $ \Gs_2^{u,v}$ denote products of the form “$W_s I_h \Xi \cdots W_s I_h \Xi$” that complete the integrand. Each $\Ks^{u,v}$ here represents a part of the integral $\Fs_M(\xib_{n,m})$ where there are $v$ time points in $\vec{\sb}^M$ lying in $(t_\ell,t_{\ell+1})$. These cases are illustrated in Figure \[fig\_deriv\_case2\_2\]:

- In $\Ks^{u,v}_1$, no time point other than these $v$ points $s_{u+1}, \cdots, s_{u+v}$ is in the interval $(t_{\ell-1},t_{k+1})$.

- In $\Ks^{u,v}_{2,L}$ (or $\Ks^{u,v}_{2,R}$), there exists at least one point other than $s_{u+1}, \cdots, s_{u+v}$ lying in $(t_{\ell-1},t_{\ell})$ (or $(t_{k},t_{k+1})$ ) while no time point appears in $(t_{k},t_{k+1})$ (or $(t_{\ell-1},t_{\ell})$).

- In $\Ks^{u,v}_3$, there exists at least one point other than $s_{u+1}, \cdots, s_{u+v}$ in both $(t_{k},t_{k+1})$ and $(t_{\ell-1},t_{\ell})$.

By splitting $\Fs_M(\xib_{n,m})$ in this way, one may easily check that $\Gs_1^{u,v}$ and $\Gs_2^{u,v}$ in each $\Ks^{u,v}$ are all independent of $\Xi_{k,\ell}$, while all $I_h \Xi$ in between depend on $\Xi_{k,\ell}$. Therefore, we can now compute $\frac{\partial \Ks^{u,v}}{\partial \Xi_{k,\ell}^{(pq)}}$ as the product of the derivatives of these $I_h \Xi$. Mimicking the previous analysis in and , we may bound the first summation $$\begin{split} &\sum_{v=2}^M \sum_{u=0}^{M-v} \left\| \frac{\partial \Ks^{u,v}_1}{\partial \Xi_{k,\ell}^{(pq)}} \right\| \\ \le & \ \sum_{v=2}^M \sum_{u=0}^{M-v} \bdW^{M+1} \bdG (M!! \bdL^{\frac{M+1}{2}}) \times (v-1) \times \frac{(t_n - t_{k+1})^{M-u-v}}{(M-u-v)!}\cdot \frac{h^v}{v!} \cdot \frac{(t_{\ell -1} - t_m)^u}{u!} \\ \le & \ \sum_{v=2}^M \bdW^{M+1} \bdG (M!! \bdL^{\frac{M+1}{2}}) \frac{v-1}{v!} (t_{n-m-1})^{M-v} h^v \sum_{u=0}^{M-v} \frac{1}{(M-v-u)!u!} \\ = & \ \sum_{v=2}^M \bdW^{M+1} \bdG (M!!
\bdL^{\frac{M+1}{2}}) \frac{v-1}{(M-v)!v!} (2t_{n-m-1})^{M-v} h^v \\ = & \ \bdW^{M+1} \bdG (M!! \bdL^{\frac{M+1}{2}}) (2t_{n-m-1})^{M-2} h^2 \sum_{v=0}^{M-2} \frac{(M-2)!}{(M-2-v)!v!}\left( \frac{h}{2t_{n-m-1}}\right)^v \cdot \frac{1}{(M-2)!(v+2)} \\ \le & \ \frac{1}{2} \bdW^{M+1} \bdG (M!! \bdL^{\frac{M+1}{2}})\frac{1}{(M-2)!}(2t_{n-m-1}+h)^{M-2}h^2 \\ \le & \ \frac{1}{2}\bdW^{M+1} \bdG \bdL^{\frac{M+1}{2}} \frac{M}{(M-3)!!}(2t_{n-m})^{M-2}h^2. \end{split}$$ Similarly, we may estimate the other summations as $$\left(\sum_{v=1}^{M-1} \sum_{u=1}^{M-v} \left\| \frac{\partial \Ks^{u,v}_{2,L}}{\partial \Xi_{k,\ell}^{(pq)}} \right\|+ \sum_{v=1}^{M-1} \sum_{u=0}^{M-v-1} \left\| \frac{\partial \Ks^{u,v}_{2,R}}{\partial \Xi_{k,\ell}^{(pq)}} \right\| \right) \le 2 \bdW^{M+1} \bdG \bdL^{\frac{M+1}{2}} \frac{M}{(M-3)!!}(2t_{n-m})^{M-2}h^2$$ and $$\sum_{v=0}^{M-2} \sum_{u=1}^{ M-v-1 } \left\| \frac{\partial \Ks^{u,v}_3}{\partial \Xi_{k,\ell}^{(pq)}} \right\| \le \bdW^{M+1} \bdG \bdL^{\frac{M+1}{2}} \frac{(M-1)M}{(M-3)!!}(2t_{n-m})^{M-2}h^2.$$ Therefore, we get the estimate $$\label{eq: est_case_2_2} \left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le 3\bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}}\frac{(M-1)M}{(M-3)!!}(2t_{n-m})^{M-2}h^2.$$ Note that the upper bound above is strictly greater than the *Case 1* bound given in . Therefore, we may use the upper bound in as the uniform bound for both cases. As a summary, we have obtained the following result: for $(k,\ell) \in \mathring{\Omega}_{n,m}$, $$\begin{aligned} \left\| \frac{\partial \Fs_M (\xib_{m,n})}{\partial G_{k,\ell}^{(pq)}} \right\|\le \begin{cases} 0, & \text{if}~ M=1, \\ 3\bdW^{M+1} \bdG^M \bdL^{\frac{M+1}{2}}\frac{(M-1)M}{(M-3)!!}(2t_{n-m})^{M-2}h^2,& \text{if}~ M \geq 3. 
\end{cases}\end{aligned}$$ Summing up these estimates, we see that for odd $\bar{M}>1$, $$\label{eq: est_case_2} \left\| \frac{\partial F_1 (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le \sum^{\bar{M}}_{\substack{M=1 \\ M ~\text{is odd}}} \left\| \frac{\partial \Fs_M (\xib_{n,m})}{\partial G_{k,\ell}^{(pq)}} \right\| \le \left( 3\bdW^3 \bdG^2 \bdL^{\frac{3}{2}} \sum^{\bar{M}}_{\substack{M=3 \\ M ~\text{is odd}}} \frac{(M-1)M}{(M-3)!!}(2\bdW \bdG \bdL^{1/2}t_{n-m})^{M-2} \right) \cdot h^2.$$ By now, all the cases have been discussed, and the final conclusion is a simple combination of and , which completes the proof.

### Proof of Proposition \[thm: second order derivative\]—Estimate the second-order derivatives

The proof of Proposition \[thm: second order derivative\] is quite tedious and does not shed much light. Moreover, the error contributed by the second-order derivatives plays a less important role in our final result. Thus we will only provide an outline of the proof, stating the idea without technical details. By the definition of $F_1(\cdot)$, we decompose the second-order derivative as $$\label{eq: 2nd order derivative} \frac{\partial^2 F_1 (\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} =\sgn(t_n - t) \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \ii^{M+1} \frac{\partial^2 \Fs_M(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} ,$$ where the definition of the function $\Fs_M(\xib_{n,m})$ is given in .
Since each $I_h \Xi$ is obtained via linear interpolation, one can easily check the following result: If $\frac{\partial^2 \Fs_M(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} $ is non-zero, there exist at least two factors $I_h \Xi(s^M_{j_1},s^M_{j_1 -1})$ and $I_h \Xi(s^M_{j_2},s^M_{j_2 -1})$ in the integrand of with $1\le j_1 \neq j_2 \le M+1$ such that $$\label{claim:non zero derivative} t_{k_i-1} < s^M_{j_i} < t_{k_i+1} \text{~and~} t_{\ell_i-1} < s^M_{j_i-1} < t_{\ell_i+1}, \quad \text{for } i=1,2,$$ where we define $s^M_{M+1} = t_n$ and $s^M_{0} = t_m$. \[lemma: 2nd order\] The entire argument in the rest of this section will be based on this result. Similar to the proof of the bounds for the first-order derivatives, we need to take into account four possibilities for the locations of $(k_1,\ell_1)$ and $(k_2, \ell_2)$.

#### (I) $(k_1,\ell_1) \times (k_2,\ell_2) \in \partial \Omega_{n,m} \times \partial \Omega_{n,m}$

Since $\partial \Omega_{n,m}$ includes two sides (see Figure \[fig:sets\]), we are going to study the two cases where the two nodes $(k_1, \ell_1)$ and $(k_2, \ell_2)$ are on the same/different sides.

##### Case 1: $k_1 = k_2 =n$ or $\ell_1 = \ell_2 =m$.

This is the case where $(k_1,\ell_1)$ and $(k_2,\ell_2)$ are on the same side of $\partial \Omega_{n,m}$. Here we only focus on the case $k_1 = k_2 =n$ and the analysis for the other case $\ell_1 = \ell_2 =m$ is similar.
We may check that

- If $\ell_1,\ell_2 \le n-2$, by Lemma \[lemma: 2nd order\], in order that the second-order derivative $\frac{\partial^2 \Fs_M(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}}$ is nonzero, there exist distinct $j_1$ and $j_2$ such that $$\begin{gathered} \label{eq: cond1} t_{n-1} < s^M_{j_1} \le t_{n}, \qquad t_{\ell_1 -1} < s^M_{j_1 -1} < t_{\ell_1 + 1} \le t_{n-1}, \\ \label{eq: cond2} t_{n-1} < s^M_{j_2} \le t_{n}, \qquad t_{\ell_2 -1} < s^M_{j_2 -1} < t_{\ell_2 + 1} \le t_{n-1}.\end{gathered}$$ However, the conditions and contradict each other because the point $t_{n-1}$ can only lie between one pair of adjacent points in the sequence $\vec{\sb}^M$. Thus $ \frac{\partial^2 F_1(\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{n,\ell_2}^{(p_2 q_2)}}$ is always zero.

- If $\ell_1 = n-1$ and $m+1 \le \ell_2 \le n-2$ (same for $\ell_2 = n-1$ and $m+1 \le \ell_1 \le n-2$), the corresponding conditions are $$\begin{gathered} t_{n-1} < s^M_{j_1} \le t_{n}, \qquad t_{n-2} < s^M_{j_1 -1} < t_n, \\ t_{n-1} < s^M_{j_2} \le t_{n}, \qquad t_{\ell_2 -1} < s^M_{j_2 -1} < t_{\ell_2 + 1} \le t_{n-1}.\end{gathered}$$ Such $j_1$ and $j_2$ can be found only if there is at least one point in $\vec{\sb}^M$ between $t_{n-1}$ and $t_n$. Therefore when $M > 1$ $$\label{eq: m gt 1} \begin{split} \frac{\partial^2 \Fs_M(\xib_{n,m})}{\partial G_{n,n-1}^{(p_1 q_1)}\partial G_{n,\ell_2}^{(p_2 q_2)}} &= \sum_{r=1}^{M-1} \int_{t_n > s^M_{M} > \cdots > s^M_{M-r+1} > t_{n-1}} \int_{t_{\ell_2 + 1} > s^M_{M-r} > t_{\ell_2 -1} }\int_{s^M_{M-r} > \cdots > s^M_1 > t_m} \\ & \qquad (-1)^{\#\{\vec{\sb}^M \le t\}} \frac{\partial}{\partial \Xi_{n,n-1}^{(p_1 q_1)}} \left( W_s I_h \Xi(t_n,s_M^M) W_s \cdots I_h \Xi(s^M_{M-r+2},s^M_{M-r+1}) \right) \times \\ & \qquad \frac{\partial}{\partial \Xi_{n,\ell_2}^{(p_2 q_2)}} \Big( W_s I_h \Xi(s^M_{M-r+1},s^M_{M-r}) W_s \cdots I_h \Xi(s^M_1,t_m)\Big) \Ls(t_n,\vec{\sb}^M) \ \dd \vec{\sb}^M.
\end{split}$$ When $M = 1$, the derivative is zero. The magnitude of the above sum can be observed from the sizes of the integral domains. The leading-order term is provided by $r = 2$, which gives $$\label{eq:2ed deriv same leg} \frac{\partial^2 F_1 (\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{n,\ell_2}^{(p_2 q_2)}} \sim O(h^2).$$

- If $\ell_1 = n-1$ and $\ell_2 = m$ (same for $\ell_2 = n-1$ and $\ell_1 = m$), the analysis for $M > 1$ is the same as . When $M = 1$, we have $$\label{eq:analysis O(h) M=1} \begin{split} \frac{\partial^2 \Fs_M(\xib_{n,m})}{\partial G_{n,n-1}^{(p_1 q_1)}\partial G_{n,\ell_2}^{(p_2 q_2)}} &= \int_{t_{n-1}}^{t_n} (-1)^{\#\{s_1^1 \le t\}} \frac{\partial}{\partial \Xi_{n,n-1}^{(p_1 q_1)}} \left( W_s I_h \Xi(t_n,s_1^1) \right) \frac{\partial}{\partial \Xi_{n,\ell_2}^{(p_2 q_2)}} \Big( W_s I_h \Xi(s^1_1,t_m)\Big) \Ls(t_n,s_1^1) \ \dd s_1^1 \\ & \sim O(h). \end{split}$$ Therefore the derivative also has magnitude $O(h)$.

- If $\ell_1 = \ell_2 = n-1$, we need to find distinct $j_1$ and $j_2$ such that $$\begin{gathered} t_{n-1} < s^M_{j_1} \le t_{n}, \qquad t_{n-2} < s^M_{j_1 -1} < t_n, \\ t_{n-1} < s^M_{j_2} \le t_{n}, \qquad t_{n-2} < s^M_{j_2 -1} < t_{n}.\end{gathered}$$ These conditions can be satisfied only if at least two points in $\vec{\sb}^M$ are in $(t_{n-2}, t_n)$, which also results in .

##### Case 2: $k_1 =n,\ell_2 =m$ or $\ell_1 = m,k_2 =n$.

This is the case where $(k_1,\ell_1)$ and $(k_2,\ell_2)$ are on different sides of $\partial \Omega_{n,m}$ in Figure \[fig:sets\]. Again we focus on only one case, $k_1 =n,\ell_2 =m$; the other case is similar.

- If $|\ell_1 - k_2 | > 1$ (same for $|\ell_2 - k_1 | > 1$ when $\ell_1 = m,k_2 = n$), we similarly propose the conditions $$\begin{gathered} t_{n-1} < s^M_{j_1} \le t_{n}, \qquad t_{\ell_1-1} < s^M_{j_1 -1} < t_{\ell_1 + 1}, \\ t_{k_2-1} < s^M_{j_2} < t_{k_2+1}, \qquad t_{m} \le s^M_{j_2 -1} < t_{m+1}.\end{gathered}$$ When $M = 1$, the derivative is zero.
When $M>1$, the leading-order term in $\frac{\partial^2\Fs_M(\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{k_2,m}^{(p_2 q_2)}}$ is the part of the integral where we let $I_h \Xi(s^M_{j_1},s^M_{j_1 -1}) = I_h \Xi(t_n,s^M_{M})$ and $I_h \Xi(s^M_{j_2},s^M_{j_2 -1}) = I_h \Xi(s^M_1,t_m)$ respectively in . As a result, we have the restriction $t_{\ell_1 -1} < s^M_M < t_{\ell_1 + 1},t_{k_2 -1} < s^M_1 < t_{k_2 + 1}$, which leads to $$\label{eq:2ed deriv diff leg} \frac{\partial^2 F_1(\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{k_2,m}^{(p_2 q_2)}} \sim O(h^2).$$

- If $|\ell_1 - k_2 | \le 1$ (same for $|\ell_2 - k_1 | \le 1$ when $\ell_1 = m,k_2 = n$), we propose exactly the same conditions as for $|\ell_1 - k_2|>1$. When $M>1$, the derivative again has magnitude $O(h^2)$ by the same analysis. When $M=1$, we may obtain the result that the derivative again has magnitude $O(h)$ following a similar reasoning as upon setting $\max(t_{\ell_1 -1},t_{k_2 -1}) < s^1_1 < \min(t_{\ell_1 +1},t_{k_2 +1})$.

#### (II) If $(k_1,\ell_1) \times (k_2,\ell_2) \in \partial \Omega_{n,m} \times \mathring{\Omega}_{n,m}$

In this case, we have $k_1 =n$ or $\ell_1 =m$. One can easily check that the derivative vanishes when $M=1$. For $M>1$, we may first assume $k_1 = n$. To find the leading-order term in each $\frac{\partial^2 \Fs_M(\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} $, we consider the following two cases:

- If $|k_2 - \ell_1| \le 1$, we require $\max(t_{\ell_1 -1},t_{k_2 -1}) < s^M_M < \min(t_{\ell_1 +1},t_{k_2 +1})$ and $t_{\ell_2 -1} < s^M_{M-1} < t_{\ell_2 +1}$ so that we can set $I_h \Xi(s^M_{j_1},s^M_{j_1 -1}) = I_h \Xi(t_n,s^M_M)$ and $I_h \Xi(s^M_{j_2},s^M_{j_2 -1}) = I_h \Xi(s^M_M,s^M_{M-1})$ in . Since in this case we need to restrict at least $s^M_M$ and $s^M_{M-1}$, we have $\frac{\partial^2 F_1(\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \sim O(h^2)$.
- If $|\ell_1 - k_2| > 1$, we require $t_{\ell_1 -1 } < s^M_M < t_{\ell_1 + 1}$ and $t_{k_2 -1} < s^M_{j+1} < t_{k_2 +1},t_{\ell_2 -1} < s^M_j < t_{\ell_2 +1}$ for some $1 \le j\le M-1$ so that we set $I_h \Xi(s^M_{j_1},s^M_{j_1 -1}) = I_h \Xi(t_n,s^M_M)$ and $I_h \Xi(s^M_{j_2},s^M_{j_2 -1}) = I_h \Xi(s^M_{j+1},s^M_{j})$ in . Since in this case we need to restrict at least $s^M_M,s^M_{j+1}$ and $s^M_j$, we have $\frac{\partial^2 F_1(\xib_{n,m})}{\partial G_{n,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \sim O(h^3)$.

Similar results can be obtained for the case when $\ell_1 =m$. So we now have the conclusion .

#### (III) If $(k_1,\ell_1) \times (k_2,\ell_2) \in \mathring{\Omega}_{n,m} \times \partial \Omega_{n,m}$

The reasoning is similar to (II).

#### (IV) If $(k_1,\ell_1)\times(k_2,\ell_2) \in \mathring{\Omega}_{n,m} \times \mathring{\Omega}_{n,m} $

Again, we can check that the derivative is non-zero only when $M>1$ and we may assume $k_1 \geq k_2$. We also have the following two cases:

- If $|k_2-\ell_1| \le 1$, we require $t_{k_1 -1} < s^M_{j+2} < t_{k_1 +1}$, $\max(t_{\ell_1 -1},t_{k_2-1}) < s^M_{j+1} < \min(t_{\ell_1 +1},t_{k_2 + 1})$ and $t_{\ell_2 -1} < s^M_{j} < t_{\ell_2 +1}$ for some $1\le j\le M-2$ so that we can set $I_h \Xi(s^M_{j_1},s^M_{j_1 -1}) = I_h \Xi(s^M_{j+2},s^M_{j+1})$ and $I_h \Xi(s^M_{j_2},s^M_{j_2 -1}) = I_h \Xi(s^M_{j+1},s^M_{j})$ in . Since in this case we need to restrict at least $s^M_{j+2},s^M_{j+1}$ and $s^M_j$, we have $\frac{\partial^2 F_1(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \sim O(h^3)$.

- If $|k_2-\ell_1| > 1$, we require $t_{k_i -1} < s^M_{j_i + 1} < t_{k_i +1}$ and $t_{\ell_i -1} < s^M_{j_i} < t_{\ell_i +1}$ with $i=1,2$ for $1\le j_i \le M-1$ and $j_1 - j_2 > 1$ and set $I_h \Xi(s^M_{j_1},s^M_{j_1 -1}) = I_h \Xi(s^M_{j_1+1},s^M_{j_1})$ and $I_h \Xi(s^M_{j_2},s^M_{j_2 -1}) = I_h \Xi(s^M_{j_2 +1},s^M_{j_2})$ in .
Since in this case we need to restrict at least $s^M_{j_1+1},s^M_{j_1},s^M_{j_2 + 1}$ and $s^M_{j_2}$, we have $\frac{\partial^2 F_1(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \sim O(h^4)$. A similar analysis and result can be given for $k_1 < k_2$. Therefore, we arrive at .

Proof of Proposition \[thm: recurrence relations\] — Recurrence relation for the numerical error {#sec: recurrence 2}
------------------------------------------------------------------------------------------------

By the definitions of the deterministic method and the inchworm Monte Carlo method , it is straightforward to check that $$\label{eq: formula first order error} \dG_{n+1,m} = A_{n,m}(h)\dG_{n,m} + \frac{1}{2}h\left( B_{n,m}(h)\Delta K_1 + \Delta K_2\right),$$ where for simplicity we have used the short-hands $$\begin{split} & \Delta K_i = \tK_i - K_i, \\ & A_{n,m}(h) = I + \frac{1}{2}\big(\sgn(t_n - t) + \sgn(t_{n+1} - t)\big)\ii H_s h - \frac{1}{2}\sgn(t_n - t)\sgn(t_{n+1} - t) H^2_s h^2,\\ & B_{n,m}(h) = I + \sgn(t_{n+1} - t)\ii H_s h. \end{split}$$ By the triangle inequality, the error can be bounded by $$\label{eq: est second order error 1/2} \left[\E(\|\dG_{n+1,m}\|^2)\right]^{1/2} \le \left[\E(\|A_{n,m}(h)\dG_{n,m}\|^2)\right]^{1/2} + \frac{1}{2}h \left[\E\left( \big\| B_{n,m}(h)\Delta K_1 + \Delta K_2 \big\|^2 \right) \right]^{1/2}.$$ For the first term on the right-hand side, we have $$\begin{split} &\E(\|A_{n,m}(h)\dG_{n,m}\|^2) \\ \le& \ \left[\rho\left(I+\ \frac{1}{2}\big(\sgn(t_n - t) + \sgn(t_{n+1} - t)\big)\ii H_s h - \frac{1}{2}\sgn(t_n - t)\sgn(t_{n+1} - t) H^2_s h^2\right)\right]^2 \cdot\E(\|\dG_{n,m}\|^2), \end{split}$$ where $\rho(\cdot)$ denotes the spectral radius of a matrix. Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of $H_s$.
Then $$\label{eq: est spectrum} \begin{split} &\ \left[\rho\left(I+\ \frac{1}{2}\big(\sgn(t_n - t) + \sgn(t_{n+1} - t)\big)\ii H_s h - \frac{1}{2}\sgn(t_n - t)\sgn(t_{n+1} - t) H^2_s h^2\right) \right]^2\\ = & \ \max_{i=1,2} \left| 1+\frac{1}{2}\left(\sgn(t_n - t) + \sgn(t_{n+1} - t)\right)\ii \lambda_i h - \frac{1}{2}\sgn(t_n - t)\sgn(t_{n+1} - t) \lambda_i^2 h^2 \right|^2\\ = & \ \max_{i=1,2} \left( 1+ \frac{1}{4}\left(\sgn(t_{n+1} - t) - \sgn(t_{n} - t)\right)^2 \lambda^2_i h^2 + \frac{1}{4}\left(\sgn(t_{n+1} - t)\sgn(t_{n} - t)\right)^2 \lambda^4_i h^4 \right)\\ = & \ 1 + \frac{1}{4}\left(\rho(H_s)\right)^4 h^4 \le 1 + \frac{1}{4}\bdH^4 h^4. \end{split}$$ Note that in the third line of the above equation, the second term vanishes due to the fact that the scheme evolves according to (R1) and (R3). Consequently, the first term on the right-hand side of can be estimated by $$\label{eq: AG} \left[\E(\|A_{n,m}(h)\dG_{n,m}\|^2)\right]^{1/2} \le \sqrt{ 1 + \frac{1}{4}\bdH^4 h^4 }\cdot\left[\E(\|\dG_{n,m}\|^2)\right]^{1/2} \le (1 + \frac{1}{8}\bdH^4 h^4)\cdot\left[\E(\|\dG_{n,m}\|^2)\right]^{1/2}.$$ To estimate the second term on the right-hand side of , we again need to bound $\left[\E(\|\tK_i - K_i\|^2)\right]^{1/2}$ to obtain a recurrence relation for the numerical error. Such results are given in the following lemma: \[lemma: deltak and deltak\^2 integ diff\] Assume that the hypotheses (H1) and (H3) hold.
For a sufficiently small time step length $h$, we have $$\begin{aligned} \label{eq: bound k_1} \left\|\E(\tK_1 - K_1)\right\| &\le 8P_1(t_{n-m}) h \sum_{i=1}^{n-m} \left( 2 + (n-m-i)h \right)\left\| \E \left(\Delta \gb_{n,m} \right) \right\|_{\Gamma_{n,m}(i)} + \bar{\alpha}(t_{n-m}) \left[ \Ns^{(\emph{std})}_{\Omega_{n,m}}(\Delta \gb_{n,m})\right]^2, \\ \label{eq: bound k_2} \begin{split} \left\|\E(\tK_2 - K_2)\right\| &\le 28P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right)\left\| \E \left( \Delta \gb^*_{n,m} \right) \right\|_{\Gamma^*_{n,m}(i)} \\ &\hspace{100pt}+ 5\bar{\alpha}(t_{n-m+1}) \left[\Ns^{(\emph{std})}_{\bar{\Omega}_{n,m}}(\Delta \gb^*_{n,m})\right]^2 + 16 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s}, \end{split}\end{aligned}$$ and $$\begin{aligned} \label{eq: bound k_1^2} \left[\E( \|\tK_1 - K_1\|^2 )\right]^{1/2} &\le 8P_1(t_{n-m}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \Ns^{(\emph{std})}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m})+ 2\sqrt{\bar{\gamma}(t_{n-m})}\cdot \frac{1}{\sqrt{N_s}}, \\ \label{eq: bound k_2^2} \begin{split} \left[\E(\|\tK_2 - K_2\|^2)\right]^{1/2} & \le 28P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\emph{std})}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) \\ &\hspace{220pt}+ 3\sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \frac{1}{\sqrt{N_s}}, \end{split}\end{aligned}$$ where $\bar{\alpha}$ and $\bar{\gamma}$ are defined in . Here we recall that $P_1(t),P_2(t)$ are given in Propositions \[thm: first order derivative\] and \[thm: second order derivative\], and $\bdW,\bdG,\bdL$ are some upper bounds given in the assumptions (H1) and (H3). We only write down the proof for and which are related to the numerical error in this section. The other two will only be used when estimating the bias so we put the corresponding proof in the Appendix. 
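Before proving these bounds, we note as an aside that the algebraic identity behind the spectral-radius estimate above, namely $|1 + \tfrac{1}{2}(s_1+s_2)\ii\lambda h - \tfrac{1}{2}s_1 s_2 \lambda^2 h^2|^2 = 1 + \tfrac{1}{4}(s_2-s_1)^2\lambda^2 h^2 + \tfrac{1}{4}(s_1 s_2)^2\lambda^4 h^4$ for signs $s_1,s_2 \in \{\pm 1\}$, can be checked numerically (an illustrative sketch, not part of the argument):

```python
import itertools
import math

def modulus_squared(s1, s2, lam, h):
    # |1 + (1/2)(s1+s2) i*lam*h - (1/2) s1*s2 * lam^2 h^2|^2
    z = 1 + 0.5 * (s1 + s2) * 1j * lam * h - 0.5 * s1 * s2 * lam**2 * h**2
    return abs(z) ** 2

def closed_form(s1, s2, lam, h):
    # 1 + (1/4)(s2 - s1)^2 lam^2 h^2 + (1/4)(s1 s2)^2 lam^4 h^4
    return 1 + 0.25 * (s2 - s1)**2 * (lam * h)**2 + 0.25 * (s1 * s2)**2 * (lam * h)**4

for s1, s2 in itertools.product((-1, 1), repeat=2):
    for lam in (0.3, 1.7):
        for h in (0.05, 0.2):
            assert math.isclose(modulus_squared(s1, s2, lam, h),
                                closed_form(s1, s2, lam, h))
```

In particular, when $s_1 = s_2$ (the sign factor does not flip within the step), the $\lambda^2 h^2$ term drops out, which is exactly why the amplification factor per step is only $1 + \frac{1}{4}\lambda_i^4 h^4$.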
**(i)** Estimate of $\E(\|\tK_1 - K_1\|^2)$: The definition of $\tK_i$ indicates that $$\label{eq: relation F and G} \begin{split} &\E_{\vec{\sb}}[ \tilde{F}_1( \tg_{n,m};\vec{\sb}) ] = F_1( \tg_{n,m}), \\ &\E_{\vec{\sb}'}[ \tilde{F}_2( \tg^*_{n,m};\vec{\sb}') ] = F_2( \tg^*_{n,m}) \end{split}$$ by the fact that $\vec{\sb}$ and $\vec{\sb}'$ are sampled independently of $\tg_{n,m}$ and $\tg^*_{n,m}$. Therefore, for each $rs$-entry we have $$\label{eq: est k_1^2} \begin{split} \E_{\vec{\sb}}(|\tK^{(rs)}_1 - K^{(rs)}_1|^2) =& \ \E_{\vec{\sb}}\left[ \left| F^{(rs)}_1(\gb_{n,m})- \frac{1}{N_s}\sum_{i=1}^{N_s} \tilde{F}^{(rs)}_1( \tg_{n,m};\vec{\sb}^i) \right|^2 \right] \\ =& \ \frac{1}{N_s} \E_{\vec{\sb}}\left[ \left|\tilde{F}^{(rs)}_1( \tg_{n,m};\vec{\sb})\right|^2 - \left|F^{(rs)}_1(\tg_{n,m})\right|^2 \right] + \left|F^{(rs)}_1(\gb_{n,m})-F^{(rs)}_1(\tg_{n,m})\right|^2 \end{split}$$ which gives $$\label{eq: est k_1^2 sqrt} \begin{split} & \left[\E(|\tK^{(rs)}_1 - K^{(rs)}_1|^2)\right]^{1/2} \le \frac{1}{\sqrt{N_s}}\cdot \left[\E\left( \left|\tilde{F}^{(rs)}_1( \tg_{n,m};\vec{\sb})\right|^2 - \left|F^{(rs)}_1(\tg_{n,m})\right|^2 \right) \right]^{1/2} \\ & \hspace{160pt} \qquad \quad+ \left[\E \left( \left|F^{(rs)}_1(\gb_{n,m})-F^{(rs)}_1(\tg_{n,m})\right|^2 \right) \right]^{1/2}. \end{split}$$ According to the boundedness assumptions (H1) and (H3), the first term on the right-hand side of the inequality above is immediately bounded by $$\label{eq: est variance} \frac{1}{\sqrt{N_s}}\cdot \left[\E\left( \left|\tilde{F}^{(rs)}_1( \tg_{n,m};\vec{\sb})\right|^2 - \left|F^{(rs)}_1(\tg_{n,m})\right|^2 \right) \right]^{1/2} \le \sqrt{\bar{\gamma}(t_{n-m})}\cdot\frac{1}{\sqrt{N_s}},$$ with $\bar{\gamma}(t)$ defined in .
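The equality above is the standard bias-variance decomposition of a Monte Carlo sample mean: the mean squared error splits into a variance part scaling like $1/N_s$ and a squared bias part. A quick numeric illustration with made-up numbers (uniform samples with mean $0.3$ against a target value of $0.25$; all constants here are hypothetical, chosen only for the demonstration):

```python
import random
import statistics

def mse_of_sample_mean(target, sampler, n, reps=20_000, seed=1):
    """Empirical E|target - (sample mean of n draws)|^2 over many repetitions."""
    rng = random.Random(seed)
    sq_errors = []
    for _ in range(reps):
        mean = sum(sampler(rng) for _ in range(n)) / n
        sq_errors.append((mean - target) ** 2)
    return statistics.fmean(sq_errors)

# Samples are uniform on (0, 0.6): mean 0.3, variance 0.6^2 / 12 = 0.03.
sampler = lambda rng: rng.uniform(0.0, 0.6)
n = 10
predicted = 0.03 / n + (0.3 - 0.25) ** 2   # variance / N_s  +  squared bias
empirical = mse_of_sample_mean(0.25, sampler, n)
assert abs(empirical - predicted) < 5e-4
```

Increasing `n` shrinks only the variance term; the squared bias $|F_1(\gb_{n,m}) - F_1(\tg_{n,m})|^2$ persists, which is why the second term is estimated separately below via the derivative bounds.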
For the second term, we use the mean value theorem to get $$\label{eq: est mean value} \begin{split} &\left[\E \left( \left|F^{(rs)}_1(\gb_{n,m})-F^{(rs)}_1(\tg_{n,m})\right|^2 \right) \right]^{1/2}= \left[ \E \left( \left|\left( \nabla F_1^{(rs)}(\etab_{n,m})\right)^{{\mathrm{T}}} \cdot(\htg_{n,m} - \hg_{n,m})\right|^2 \right) \right]^{1/2} \\ ={} & \left[ \E \left( \left| \sum_{i=1}^{n-m}\quad \sum_{(k,\ell) \in \Gamma_{n,m}(i)}\quad \sum_{p,q=1,2} \frac{\partial F^{(rs)}_1(\etab_{n,m}) }{\partial G_{k,\ell}^{(pq)}} \cdot \Delta G_{k,\ell}^{(pq)} \right|^2 \right) \right]^{1/2} \\ \le{} & \sum_{i=1}^{n-m} \sum_{(k,\ell) \in \Gamma_{n,m}(i)}\quad \sum_{p,q=1,2}\left\{ \left[ \E\left( \left|\frac{\partial F^{(rs)}_1(\etab_{n,m}) }{\partial G_{k,\ell}^{(pq)}}\right|^2 \right) \right]^{1/2} \cdot \left[ \E\left( \left| \Delta G_{k,\ell}^{(pq)}\right|^2 \right) \right]^{1/2} \right\} \\ \le{} & \sum_{i=1}^{n-m} \left\{ \left( \sum_{(k,\ell) \in \Gamma_{n,m}(i)}\quad \sum_{p,q=1,2} \left[\E\left(\left|\frac{\partial F^{(rs)}_1(\etab_{n,m}) }{\partial G_{k,\ell}^{(pq)}}\right|^2 \right)\right]^{1/2} \right) \cdot \max_{(k,\ell)\in \Gamma_{n,m}(i); \atop p,q=1,2} \left[\E\left(\left| \Delta G_{k,\ell}^{(pq)} \right|^2 \right)\right]^{1/2} \right\} \\ \le{} & 4 P_1(t_{n-m}) \left[ h\Ns^{(\std)}_{\Gamma_{n,m}(n-m)}(\Delta \gb_{n,m}) + \sum_{i=1}^{n-m-1}(2h + (n-m-1-i)h^2)\Ns^{(\std)}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m}) \right] \\ \le{} & 4P_1(t_{n-m}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \Ns^{(\std)}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m}) . \end{split}$$ Here we have considered the derivatives of $F_1(\cdot)$ for different locations in $\Omega_{n,m}$. Also, we have applied the Minkowski inequality in the first $``\le"$ and Hölder’s inequality in the second $``\le"$. The estimate can then be obtained by inserting and into . 
**(ii)** Estimate of $\E(\|\tK_2 - K_2\|^2)$: Similar to , we use the triangle inequality to bound $ \left[\E(|\tK^{(rs)}_2 - K^{(rs)}_2|^2)\right]^{1/2} $ by $$\label{eq: K2 diff} \begin{split} & \left[\E(|\tK^{(rs)}_2 - K^{(rs)}_2|^2)\right]^{1/2} \le \frac{1}{\sqrt{N_s}} \cdot \left[ \E\left( \left|\tilde{F}^{(rs)}_2( \tg^*_{n,m};\vec{\sb}')\right|^2 - \left|F^{(rs)}_2(\tg^*_{n,m})\right|^2 \right) \right]^{1/2}\\ & \hspace{200pt}+\left[ \E\left( \left|F^{(rs)}_2(\gb^*_{n,m})-F^{(rs)}_2(\tg^*_{n,m})\right|^2 \right) \right]^{1/2} \end{split}$$ where the first term on the right-hand side can be estimated similarly to , and the result is $$\label{eq: est variance 2} \frac{1}{\sqrt{N_s}}\cdot \left[\E\left( \left|\tilde{F}^{(rs)}_2( \tg_{n,m}^*;\vec{\sb})\right|^2 - \left|F^{(rs)}_2(\tg_{n,m}^*)\right|^2 \right) \right]^{1/2} \le \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot\frac{1}{\sqrt{N_s}}.$$ For the second term on the right-hand side of , we mimic the analysis in to get $$\begin{split} &\left[\E\left( \left|F^{(rs)}_2(\tg^*_{n,m})-F^{(rs)}_2(\gb^*_{n,m})\right|^2 \right) \right]^{1/2}\\ \le & \ 4P_1(t_{n-m+1}) h \left\{ \left[ \E \left(\|\tG^*_{n+1,m} - G^*_{n+1,m}\|^2\right) \right]^{1/2} + \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) \right\}. 
\end{split}$$ Here the difference between $\tG^*_{n+1,m}$ and $G^*_{n+1,m}$ can be estimated by $$\begin{split} & \left[ \E \left(\|\tG^*_{n+1,m} - G^*_{n+1,m}\|^2\right) \right]^{1/2} \\ \le & \ \left[\E\left( \left\| \left( I + \sgn(t_n - t)\ii H_s h \right) \Delta G_{n,m} \right\|^2\right)\right]^{1/2} + h\left[\E(\|\tK_1 - K_1\|^2)\right]^{1/2}\\ \le & \ (1+\frac{1}{2}\bdH^2 h^2 )\left[\E(\| \Delta G_{n,m}\|^2)\right]^{1/2} \\ & \hspace{70pt}+ 8P_1(t_{n-m}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \Ns^{(\std)}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m})+ 2\sqrt{\bar{\gamma}(t_{n-m})}\cdot \frac{h}{\sqrt{N_s}}, \end{split}$$ where we have applied our previous estimate to bound $\E(\|\tK_1 - K_1\|^2)$, and we have omitted the details of the estimation of $\E\left( \left\| \left( I + \sgn(t_n - t)\ii H_s h \right) \Delta G_{n,m} \right\|^2\right)$, which is similar to . Combining the two estimates above, we obtain $$\label{eq: est mean value 2} \begin{split} &\left[ \E\left( \left|F^{(rs)}_2(\gb^*_{n,m})-F^{(rs)}_2(\tg^*_{n,m})\right|^2 \right) \right]^{1/2} \le 4P_1(t_{n-m+1}) h \Big\{ (1+\frac{1}{2}\bdH^2 h^2 )\Ns^{(\std)}_{\Gamma^*_{n,m}(n-m)}(\Delta \gb^*_{n,m}) \\ &\hspace{20pt}+ \left(1+8P_1(t_{n-m}) h^2\right)\sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) + 2\sqrt{\bar{\gamma}(t_{n-m})}\cdot \frac{h}{\sqrt{N_s}} \Big\}. \end{split}$$ Again, we insert the estimates and into to get $$\begin{split} &\left[\E( \|\tK_2 - K_2\|^2 )\right]^{1/2} \le 8\left(3+h + (\frac{1}{2}\bdH^2 + 16P_1(t_{n-m}))h^2 + 8P_1(t_{n-m})h^3\right) P_1(t_{n-m+1}) h \times \\ &\hspace{10pt} \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) + \left(2+16P_1(t_{n-m+1}) h^2\right)\sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \frac{1}{\sqrt{N_s}}. 
\end{split}$$ By choosing a sufficiently small time step such that $h + (\frac{1}{2}\bdH^2 + 16P_1(t_{n-m}))h^2 + 8P_1(t_{n-m})h^3 \le \frac{1}{2}$ and $h \le \sqrt{\frac{1}{16P_1(t_{n-m+1})}}$, the estimate can be obtained. With the results above, we now return to the formula and give the recurrence relation for the numerical error $\left[\E(\|\dG_{n+1,m}\|^2)\right]^{1/2}$ as $$\label{eq: recurrence 2 1/2 derivation} \begin{split} &\left[\E(\|\dG_{n+1,m}\|^2)\right]^{1/2} \\ \le & \ (1+\frac{1}{8}\bdH^4 h^4)\left[\E(\|\dG_{n,m}\|^2)\right]^{1/2} + \frac{1}{2}(1+\frac{1}{2}\bdH^2 h^2)h\left[\E(\|\Delta K_1\|^2)\right]^{1/2} + \frac{1}{2}h\left[\E( \| \Delta K_2 \|^2 ) \right]^{1/2} \\ \le & \ (1+\frac{1}{8}\bdH^4 h^4)\left[\E(\|\dG_{n,m}\|^2)\right]^{1/2} \\ &+ 22 P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) + \frac{7}{2} \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \frac{h}{\sqrt{N_s}} \end{split}$$ upon assuming $h \le \frac{\sqrt{2}}{\bdH}$. Next, we consider the recurrence relation of $\E(\|\dG_{n+1,m}\|^2)$. By straightforward expansion, $$\label{eq: est second order error} \begin{split} \E(\|\dG_{n+1,m}\|^2) =& \ \E\left[\left\| A_{n,m}(h)\dG_{n,m} + \frac{1}{2}h\left( B_{n,m}(h)\Delta K_1 + \Delta K_2 \right)\right\|^2\right]\\ =& \ \E(\|A_{n,m}(h)\dG_{n,m}\|^2) + \underbrace{ \frac{1}{4}h^2\E\left[ \left\| B_{n,m}(h)\Delta K_1 + \Delta K_2\right\|^2 \right] }_{\text{quadratic term}} \\ & + \underbrace{ \mathrm{Re} \, h\E\left[ \text{tr} \left( \left(B_{n,m}(h)\Delta K_1 + \Delta K_2\right)^\dagger \left( A_{n,m}(h)\dG_{n,m} \right) \right) \right]}_{\text{cross term}}. 
\end{split}$$ To bound the quadratic term, we first derive the following results from the estimates and : $$\begin{split} &\E( \|\tK_1 - K_1\|^2 ) \le 128P^2_1(t_{n-m}) h^2 \left[ \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right)\Ns^{(\std)}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m}) \right]^2+ 8\bar{\gamma}(t_{n-m})\cdot \frac{1}{N_s},\\ &\E( \|\tK_2 - K_2\|^2 ) \le 1568P^2_1(t_{n-m+1}) h^2 \left[ \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) \right]^2\\ & \hspace{340pt}+ 18\bar{\gamma}(t_{n-m+1})\cdot \frac{1}{N_s}. \end{split}$$ Then the quadratic term is bounded by $$\label{eq: est quadratic term} \begin{split} &\underbrace{\frac{1}{4}h^2\E\left[ \left\|B_{n,m}(h)\Delta K_1 + \Delta K_2\right\|^2 \right] }_{\text{quadratic term}} \le \frac{1}{2}(1+ \bdH^2 h^2)h^2\E(\|\Delta K_1\|^2) +\frac{1}{2}h^2 \E(\|\Delta K_2\|^2) \\ \le & \ h^2\E(\|\Delta K_1\|^2) +\frac{1}{2}h^2 \E(\|\Delta K_2\|^2) \\ \le& \ 912 P^2_1(t_{n-m+1}) h^4 \left[ \sum_{i = 1}^{n-m} \big( 2 + (n-m+1-i)h \big) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) \right]^2+ 17\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s} \end{split}$$ thanks to the previous requirement on $h$ in and the results and in Lemma \[lemma: deltak and deltak\^2 integ diff\] obtained in the previous section. Similar to the proof of Lemma \[lemma: 2ed error cross term\], the estimation of the cross term in is more subtle. 
We will again need some key estimates from the following local scheme for the inchworm equation: $$\label{def: scheme 3} \begin{split} &\bG^*_{n+1,m} = (I+\sgn(t_n - t) \ii H_s h)\tG_{n,m} + \bK_1 h, \\ &\bG_{n+1,m} = (I + \frac{1}{2}\sgn(t_n - t)\ii H_s h )\tG_{n,m} +\frac{1}{2}\sgn(t_{n+1} - t)\ii H_s h \bG^*_{n+1,m} + \frac{1}{2}(\bK_1+\bK_2) h, \quad 0 \le m\le n\le 2N, \end{split}$$ where $$\label{def: scheme 3 notation} \begin{gathered} \bK_1 = F_1(\bg_{n,m}),\qquad \bg_{n,m} = \tg_{n,m};\\ \bK_2 = F_2(\bg^*_{n,m}), \qquad \bg^*_{n,m} = (\tg_{n,m};\tG_{n+1,n},\cdots,\tG_{n+1,m+1},\bG^*_{n+1,m}). \end{gathered}$$ These quantities are introduced as the counterpart of , which is a deterministic time step applied to the stochastic solutions. The following results are similar to Lemma \[thm: diff\_tkvsbk\] for the case of differential equations: \[thm: ktilde vs kbar\] Given the time step length $h$ and the number of Monte Carlo samples at each step $N_s$, we have $$\begin{aligned} &\| \E_{\vec{\sb}}(\bK_1 - \tK_1) \|= 0, \label{thm: 1st moment runge kutta 1} \\ &\| \E_{\vec{\sb},\vec{\sb}'}(\bK_2 - \tK_2) \|\le 4 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s},\label{thm: 1st moment runge kutta 2} \\ &\left[\E( \|\bK_1 - K_1\|^2 )\right]^{1/2} \le 8P_1(t_{n-m}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \Ns^{(\emph{std})}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m}) , \label{thm: 2ed moment runge kutta 1} \\ &\left[ \E( \|\bK_2 - K_2\|^2 )\right]^{1/2} \le 28P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\emph{std})}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) \label{thm: 2ed moment runge kutta 2}\end{aligned}$$ where the formula of $\bar{\alpha}(t)$ is given in . The rigorous proof of this lemma is omitted since it is almost identical to that of Lemma \[lemma: deltak and deltak\^2 integ diff\]. Now we are ready to bound the cross-term in . 
By the same treatment as the case of differential equations, we have $$\label{eq: 2ed error cross split} \begin{split} & \left|\mathrm{Re} \, h\E\left[ \text{tr} \left( \left(B_{n,m}(h)\Delta K_1 + \Delta K_2\right)^\dagger \left( A_{n,m}(h)\dG_{n,m} \right) \right) \right] \right| \\ \le {} & \left| h\E\left[ \text{tr} \left( \left( B_{n,m}(h) ( K_1 - \bK_1 ) + ( K_2 - \bK_2 ) \right)^\dagger \left(A_{n,m}(h)\dG_{n,m}\right) \right) \right] \right| \\ &\hspace{80pt}+ \left| h\E\left[ \text{tr}\left( \left( B_{n,m}(h) ( \bK_1 - \tK_1 ) + ( \bK_2 - \tK_2 ) \right)^\dagger \left(A_{n,m}(h)\dG_{n,m}\right) \right) \right] \right|\\ ={}& \left| h\E\left[ \text{tr} \left( \left( B_{n,m}(h) ( K_1 - \bK_1 ) + ( K_2 - \bK_2 ) \right)^\dagger \left(A_{n,m}(h)\dG_{n,m}\right) \right) \right] \right| \\ &\hspace{50pt} + \left| h\E\left\{ \text{tr} \left( \left[ \E_{\vec{\sb},\vec{\sb}'} \left( B_{n,m}(h) ( \bK_1 - \tK_1 ) + ( \bK_2 - \tK_2 ) \right) \right]^\dagger \left(A_{n,m}(h)\dG_{n,m}\right) \right) \right\}\right| \\ \le {}& h \left[\E\left(\|A_{n,m}(h) \dG_{n,m} \|^2\right)\right]^{1/2}\left\{ \left[ \E\left( \left\| B_{n,m}(h) ( K_1 - \bK_1 ) + ( K_2 - \bK_2 ) \right\|^2 \right) \right]^{1/2} \right.\\ &\left. \hspace{100pt} +\left[\E\left( \left\|\E_{\vec{\sb},\vec{\sb}'} \left( B_{n,m}(h) ( \bK_1 - \tK_1 ) + ( \bK_2 - \tK_2 ) \right) \right\|^2 \right) \right]^{1/2} \right\} \end{split}$$ where we have applied the Cauchy-Schwarz inequality in the last step. On the right-hand side of , the term $\left[ \E\left(\|A_{n,m}(h) \dG_{n,m} \|^2\right)\right]^{1/2}$ has already been bounded in . 
For the other term, we can find the bounds by Lemma \[thm: ktilde vs kbar\] immediately: $$\begin{split} & \left[ \E\left( \big\| B_{n,m}(h) ( K_1 - \bK_1 ) + ( K_2 - \bK_2 ) \big\|^2 \right) \right]^{1/2} \le 2 \left[\E\left(\|K_1 - \bK_1\|^2\right) \right]^{1/2} + \left[ \E\left(\|K_2 - \bK_2\|^2\right) \right]^{1/2}\\ & \hspace{140pt}\le 44 P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) ,\\ & \left[\E\left( \Big\|\E_{\vec{\sb},\vec{\sb}'} \left( B_{n,m}(h) ( \bK_1 - \tK_1 ) + ( \bK_2 - \tK_2 ) \right) \Big\|^2 \right) \right]^{1/2} \le \Big\|\E_{\vec{\sb},\vec{\sb}'} \left( B_{n,m}(h) ( \bK_1 - \tK_1 ) + ( \bK_2 - \tK_2 ) \right) \Big\| \\ & \hspace{100pt}\le 2\big \|\E_{\vec{\sb},\vec{\sb}'} ( \bK_1 - \tK_1) \big\| + \big \|\E_{\vec{\sb},\vec{\sb}'} ( \bK_2 - \tK_2) \big\| \le 4 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s}. \end{split}$$ Thus the final estimation of the cross term is $$\label{eq: est cross term} \begin{split} & \left|\mathrm{Re} \, h\E\left[ \text{tr} \left( \left(B_{n,m}(h)\Delta K_1 + \Delta K_2\right)^\dagger \left( A_{n,m}(h)\dG_{n,m} \right) \right) \right] \right| \le \left(1+ \frac{1}{8}\bdH^4 h^4\right)h \left[\E(\|\dG_{n,m}\|^2)\right]^{1/2} \times\\ & \left\{ 44 P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \big( 2 + (n-m+1-i)h \big) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) + 4 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^2}{N_s} \right\}. \end{split}$$ Finally, we combine the estimate for the quadratic term with for the cross term to obtain the recurrence relation for the numerical error. Proof of in Theorem \[thm: bounds\]—Estimation of the numerical error {#sec: estimates} --------------------------------------------------------------------- In this section we discuss how to apply the recurrence relations in Proposition \[thm: recurrence relations\] to obtain the estimates in Theorem \[thm: bounds\]. 
Here we only focus on the estimate of the numerical error, which we are more interested in. For the bias, we refer the readers to \[sec: recurrence 1\] for the detailed proof. In Proposition \[thm: recurrence relations\], two recurrence relations are given, in which the first relation is easier to analyze due to its linearity. For simplicity, we rewrite this estimate as $$\label{eq: recurrence 2 1/2 simple} \begin{split} &\Ns^{(\std)}_{\Gamma^*_{n,m}(j-m+1)}(\Delta \gb^*_{n,m}) \le (1+c_1 h^4)\Ns^{(\std)}_{\Gamma^*_{n,m}(j-m)}(\Delta \gb^*_{n,m})\\ &\hspace{30pt}+ c_2 h^2 \sum_{i = 1}^{j-m} \big( 2 + (j-m+1-i)h \big) \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m}) +c_3 \frac{h}{\sqrt{N_s}}, \qquad j = m, \cdots,n-1, \end{split}$$ where we have introduced the notations $c_1 =\frac{1}{8}\bdH^4$, $c_2 =22 P_1(t_{n-m})$ and $c_3 = \frac{7}{2} \sqrt{\bar{\gamma}(t_{n-m})}$ for simplicity. The inequality is obtained by taking the maximum on the diagonals, and using the fact that $\bar{\gamma} (t_{j+1-m}) \le \bar{\gamma}(t_{n-m})$. This inequality shows that the recurrence relation of the error with two indices $n$ and $m$ can be simplified as a recurrence relation with only one index $j$. For each $j$, the quantity $\Ns^{(\std)}_{\Gamma^*_{n,m}(j-m)}(\Delta \gb^*_{n,m})$ denotes the maximum numerical error on the $(j-m)$th diagonal (see Figure \[fig:sets\]). This can also be understood by an alternative order of computation: once all the propagators on the diagonals $\Gamma^*_{n,m}(i)$ for $i \le j-m$ are computed, the propagators on $\Gamma^*_{n,m}(j-m+1)$ can actually be computed in an arbitrary order, i.e., the computations of all the propagators on $\Gamma^*_{n,m}(j-m+1)$ are independent of each other. The derivation of is inspired by this observation, and this idea will also be used in the proof of Theorem \[thm: bounds\] to be presented later in this section. 
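To build intuition for how this one-index recurrence propagates the error across diagonals, the following small numerical sketch (illustrative only: the constants $c_1$, $c_2$, $c_3$ below are placeholders rather than the actual quantities $\frac{1}{8}\bdH^4$, $22P_1$ and $\frac{7}{2}\sqrt{\bar{\gamma}}$ defined above, and the inequality is iterated as an equality with $m = 0$):

```python
import math

# Iterate the simplified one-index recurrence as an equality (worst case),
# with m = 0. The constants c1, c2, c3 are illustrative placeholders.
def iterate_bound(n_steps, h, Ns, c1=0.125, c2=1.0, c3=1.0):
    A = [0.0]  # zero error on the initial diagonal
    for j in range(n_steps):
        # weighted contribution of all earlier diagonals
        weighted = sum((2 + (j + 1 - i) * h) * A[i] for i in range(1, j + 1))
        A.append((1 + c1 * h ** 4) * A[j]
                 + c2 * h ** 2 * weighted
                 + c3 * h / math.sqrt(Ns))  # Monte Carlo sampling term
    return A
```

Since the iteration is linear in the sampling term $c_3 h/\sqrt{N_s}$ with zero initial data, its output scales exactly like $1/\sqrt{N_s}$, consistent with the discussion of the prefactor $C(h,N_s)$ below.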
To study the growth of the error from , one can define a sequence $\{A_j\}$ with the following recurrence relation: $$\label{eq: rec num err 1} A_{j+1} = (1+3c_2 h^2)A_j + c_2h^2 \sum_{i = m+1}^{j-1} \big( 2 + (j+1-i)h \big) A_i + c_3 \frac{h}{\sqrt{N_s}} \text{~for~} j = m,\cdots, n-1,$$ and initial condition $A_m=0$. Then we have $ [\E(\|\dG_{j+1,m}\|^2)]^{1/2} \le A_{j+1}$ if we require $\frac{c_1}{c_2} h^2 + h \le 1$. Increasing the index $j$ in by one, we get $$\label{eq: rec num err 2} A_{j+2} = (1+3c_2 h^2)A_{j+1} + c_2h^2 \sum_{i = m+1}^{j} \big( 2 + (j+2-i)h \big) A_i + c_3 \frac{h}{\sqrt{N_s}}.$$ Subtracting from yields $$\label{eq: rec num err 3} A_{j+2} = (2+ 3c_2h^2 ) A_{j+1} - (1+c_2 h^2 -2c_2 h^3)A_j + c_2 h^3 \sum^{j-1}_{i=m+1}A_i.$$ Similarly, we can reduce the index $j$ in by $1$ and again subtract the two equations, so that a recurrence relation without summation can be derived: $$A_{j+2} - (3+ 3c_2 h^2) A_{j+1} + (3+4c_2 h^2 -2 c_2 h^3)A_j - (1+c_2 h^2 -c_2 h^3)A_{j-1} = 0.$$ The general formula of $A_j$ can then be found by solving the corresponding characteristic equation. We denote $A_j$ as $$\label{eq:A_j} A_{j} = \sigma_1 r^j_1 + \sigma_2 r^j_2 + \sigma_3 r^j_3.$$ The formula of each $r_i$ is given in \[sec: char poly\], based on which we can estimate $A_{n}$ by $$\label{eq: induction n step} A_n \le C(h,N_s) \cdot (1+ \theta_1 \sqrt{P_1(t_{n-m})} h)^{n-m},$$ where $\theta_1$ is a constant and $C(h,N_s)$ is a function to be determined. The recurrence relation helps determine the growth rate of the numerical error. However, if we use to determine the function $C(h,N_s)$, we can only find $C(h,N_s) \propto \sqrt{1/N_s}$, whereas the desired result is $C(h,N_s) \propto \sqrt{h/N_s}$. To this end, the other recurrence relation has to be utilized, as in the proof given below. As mentioned previously, we only present the proof of in this section. 
We claim that the error satisfies $$\label{eq: error bound} \begin{split} &\Ns^{(\std)}_{\Gamma^*_{n,m}(l+1)}(\Delta \gb^*_{n,m}) \le \theta_2 \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \sqrt{\frac{h}{N_s}} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^l \\ & \text{for~}l = 0,1,\cdots, n-m. \end{split}$$ Here $\theta_2$ is some $O(1)$ constant. We will prove this claim by mathematical induction using . We first check the initial case $l = 0$. By the recurrence relation , the left-hand side of is bounded by $$\label{induction:initial step} \begin{split} \Ns^{(\std)}_{\Gamma^*_{n,m}(1)}(\Delta \gb^*_{n,m}) = & \ \max\left( [ \E\big(\|\Delta G_{m+1,m}\|^2\big)]^{1/2}, [ \E\big(\|\Delta G_{m+2,m+1}\|^2\big)]^{1/2},\cdots, [ \E\big(\|\Delta G_{n+1,n}\|^2\big)]^{1/2} \right) \\ \le & \ \sqrt{17} \sqrt{\bar{\gamma}(h)} \frac{h}{\sqrt{N_s}} \le \theta_2 \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \sqrt{\frac{h}{N_s}}, \end{split}$$ as holds for any constant $\theta_2 \geq \sqrt{17}$. Assume that holds for all $l = 0, 1, \cdots, k-1$. When $l=k$, $$\label{induction:parallel} \Ns^{(\std)}_{\Gamma^*_{n,m}(k+1)}(\Delta \gb^*_{n,m})= \max\left( \left[ \E\big(\|\Delta G_{m+k+1,m}\|^2\big)\right]^{1/2}, \cdots, \left[ \E\big(\|\Delta G_{n+1,n-k}\|^2\big)\right]^{1/2} \right).$$ Therefore we just need to find the bounds for each $\left[ \E\big(\|\Delta G_{m+k+j,m+j-1}\|^2\big)\right]^{1/2}$, $j=1,\cdots,n+1-m-k$. This will be done by the recurrence relation . 
For clearer presentation, we rewrite the equation below only with subscripts replaced: $$\label{eq: recurrence induction} \begin{split} & \E\big(\|\Delta G_{m+k+j,m+j-1}\|^2\big) \le \left(1+\frac{1}{4} \bdH^4 h^4\right) \cdot \E\big(\|\Delta G_{m+k+j-1,m+j-1}\|^2\big) \\ & + \left(1+\frac{1}{8} \bdH^4 h^4\right)44 P_1(t_{n-m+1})h^2 \cdot \left[\E\big(\|\Delta G_{m+k+j-1,m+j-1}\|^2\big)\right]^{1/2}\\ & \hspace{130pt}\times \sum_{i=1}^{k}\left(2+(k+1-i)h\right) \Ns^{(\std)}_{\Gamma^*_{m+k+j-1,m+j-1}(i)}(\Delta \gb^*_{n,m})\\ & + \left(1+\frac{1}{8} \bdH^4 h^4\right)4 \bar{\alpha}(t_{n-m+1}) \bar{\gamma}(t_{n-m+1})\frac{h^3}{N_s} \cdot \left[\E\big(\|\Delta G_{m+k+j-1,m+j-1}\|^2\big)\right]^{1/2} \\ & +\left\{912 P_1^2(t_{n-m+1}) h^4 \left[ \sum_{i=1}^{k}\left(2+(k+1-i)h\right) \Ns^{(\std)}_{\Gamma^*_{m+k+j-1,m+j-1}(i)}(\Delta \gb^*_{n,m}) \right]^2 + 17 \bar{\gamma}(t_{n-m+1}) \frac{h^2}{N_s}\right\}. \end{split}$$ For the first and third terms on the right-hand side, we use the inductive hypothesis with $l = k-1$, which indicates that $$\E\big(\|\Delta G_{m+k+j-1,m+j-1}\|^2\big) \le \left[ \Ns^{(\std)}_{\Gamma^*_{n,m}(k)}(\Delta \gb^*_{n,m})\right]^2 \le \theta_2^2 \bar{\gamma}(t_{n-m+1})\cdot \frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2},$$ to get $$\begin{aligned} \label{eq: recurrence induction 1} \begin{split} & \left(1+\frac{1}{4} \bdH^4 h^4\right) \cdot \E\big( \|\Delta G_{m+k+j-1,m+j-1}\|^2\big)\\ & \hspace{30pt}\le \left(1+\frac{1}{4} \bdH^4 h^4\right) \theta_2^2 \bar{\gamma}(t_{n-m+1})\cdot \frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2}, \end{split}\\\label{eq: recurrence induction 3} \begin{split} &\left(1+\frac{1}{8} \bdH^4 h^4\right)4 \bar{\alpha}(t_{n-m+1}) \bar{\gamma}(t_{n-m+1})\frac{h^3}{N_s} \cdot \left[\E\big( \|\Delta G_{m+k+j-1,m+j-1}\|^2\big)\right]^{1/2} \\ & \hspace{30pt} \le \left(1+\frac{1}{8} \bdH^4 h^4\right)4 \bar{\alpha}(t_{n-m+1}) [\bar{\gamma}(t_{n-m+1})]^{3/2} \theta_2 \cdot \sqrt{\frac{h^7}{N_s^3}} (1+ 
\theta_1 \sqrt{P_1(t_{n-m+1})} h)^{k-1} \\ & \hspace{30pt} \le h\theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2}, \end{split}\end{aligned}$$ where we have assumed $\left(1+\frac{1}{8} \bdH^4 h^4\right) \sqrt{\frac{h^3}{N_s}} \le \frac{\theta_2 }{4\bar{\alpha}(t_{n-m+1})\sqrt{\bar{\gamma}(t_{n-m+1})}}$ to get the last “$\le$” in . For the second and the fourth terms on the right-hand side of , we first use the inductive hypothesis to get $$\begin{split} \Ns^{(\std)}_{\Gamma^*_{m+k+j-1,m+j-1}(i)}(\Delta \gb^*_{n,m}) \le & \ \Ns^{(\std)}_{\Gamma^*_{n,m}(i)}(\Delta \gb^*_{n,m})\\ \le & \ \theta_2 \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \sqrt{\frac{h}{N_s}} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{i-1},\text{~for~}i=1,\cdots,k. \end{split}$$ The next step is to insert the above estimation into . To assist our further calculation, we let $c_4 =\theta_1 \sqrt{P_1(t_{n-m+1})}$ and perform the following calculation: $$\begin{split} &\sum_{i = 1}^{k} \big( 2 + (k+1-i)h \big)\cdot (1+ c_4 h)^{i}\\ = & \ 2\sum_{i = 1}^{k} (1+ c_4 h)^{i} + h \sum_{i = 1}^{k} (k+1-i)\cdot (1+ c_4 h)^{i} \\ = & \ 2 \frac{(1+c_4 h)^{k+1} - (1+c_4 h)}{c_4 h} + h \frac{(1+c_4 h)^{k+2} - (1+c_4 h)^{2} - (1+c_4 h)c_4 k h }{c_4^2 h^2} \\ \le & \ 2 \frac{(1+c_4 h)^{k+1} }{c_4 h} + 2\frac{(1+c_4 h)^{k+1} }{c_4^2 h} = \frac{(1+c_4 h)^{k+1}}{h}\cdot 2\left( \frac{1}{c_4}+\frac{1}{c_4^2} \right), \end{split}$$ where we have assumed $h \le \frac{1}{c_4}$. 
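The elementary summation bound just derived can be checked numerically. The following sketch (illustrative only) compares the left-hand side with the final upper bound for sample values of $k$, $h$ and $c_4$ satisfying $h \le 1/c_4$:

```python
def weighted_geom_sum(k, h, c4):
    # left-hand side: sum_{i=1}^{k} (2 + (k+1-i)h) (1 + c4*h)^i
    return sum((2 + (k + 1 - i) * h) * (1 + c4 * h) ** i
               for i in range(1, k + 1))

def upper_bound(k, h, c4):
    # right-hand side: (1 + c4*h)^{k+1}/h * 2*(1/c4 + 1/c4^2), valid for h <= 1/c4
    return (1 + c4 * h) ** (k + 1) / h * 2 * (1 / c4 + 1 / c4 ** 2)
```

For instance, with $k = 10$, $h = 1$ and $c_4 = 1$ the left-hand side equals $8164$ while the bound gives $8192$, so the estimate is nearly tight in this regime.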
By this inequality, we see that when $h$ is sufficiently small, for any $\theta_1 > 1$, the second term on the right-hand side of satisfies $$\label{eq: recurrence induction 2} \begin{split} & \left(1+\frac{1}{8} \bdH^4 h^4\right)44 P_1(t_{n-m+1})h^2 \cdot \left[\E\big(\|\Delta G_{m+k+j-1,m+j-1}\|^2\big)\right]^{1/2}\\ & \hspace{130pt}\times \sum_{i=1}^{k}\left(2+(k+1-i)h\right) \Ns^{(\std)}_{\Gamma^*_{m+k+j-1,m+j-1}(i)}(\Delta \gb^*_{n,m})\\ \le & \ \left(1+\frac{1}{8} \bdH^4 h^4\right)44 P_1(t_{n-m+1})\theta_2^2 \bar{\gamma}(t_{n-m+1}) \frac{h^3}{N_s} \sum_{i = 1}^{k} \big( 2 + (k+1-i)h \big)\cdot (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{k+i-2} \\ \le & \ 88\underbrace{\left(1+\frac{1}{8} \bdH^4 h^4\right)}_{\le 2} \underbrace{(1+\theta_1 \sqrt{P_1(t_{n-m+1})} h)}_{\le 2} \underbrace{\left( \frac{\sqrt{P_1(t_{n-m+1})}}{\theta_1 } + \frac{1}{\theta^2_1}\right)}_{\le 2\sqrt{P_1(t_{n-m+1})}} \\ & \hspace{200pt}\times h\theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2} \\ \le& \ 704 \sqrt{P_1(t_{n-m+1})} h\theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2}, \end{split}$$ and when $\theta_2 = \sqrt{34}$, the fourth term on the right-hand side of satisfies $$\label{eq: recurrence induction 4} \begin{split} &912 P_1^2(t_{n-m+1}) h^4 \left[ \sum_{i=1}^{k}\left(2+(k+1-i)h\right) \Ns^{(\std)}_{\Gamma^*_{m+k+j-1,m+j-1}(i)}(\Delta \gb^*_{n,m}) \right]^2 + 17 \bar{\gamma}(t_{n-m+1}) \frac{h^2}{N_s}\\ \le & \ 912 P_1^2(t_{n-m+1}) \theta_2^2 \bar{\gamma}(t_{n-m+1}) \frac{h^5}{N_s} \left[ \sum_{i = 1}^{k} \left( 2 + (k+1-i)h \right) (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{i-1} \right]^2 + 17 \bar{\gamma}(t_{n-m+1}) \frac{h^2}{N_s} \\ = & \ 3648\underbrace{(1+\theta_1 \sqrt{P_1(t_{n-m+1})} h)^2}_{\le 2}\left( \frac{\sqrt{P_1(t_{n-m+1})}}{\theta_1 } + \frac{1}{\theta^2_1 }\right)^2 h \times h\theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2}\\ 
&\hspace{130pt}+\underbrace{\frac{17}{\theta_2^2}}_{\le \frac{1}{2}}\times h\theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2} \\ \le & \ h\theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k-2} \end{split}$$ by assuming $7296 h \left( \frac{\sqrt{P_1(t_{n-m+1})}}{\theta_1 } + \frac{1}{\theta^2_1 }\right)^2 \le 1/2$. Inserting into , we get $$\begin{split} & \E\big(\|\Delta G_{m+k+j,m+j-1}\|^2\big) \\ \le{} & \frac{1 + (704\sqrt{P_1(t_{n-m+1})}+2)h + \frac{1}{4}\bdH^4 h^4}{(1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^2} \times \theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k}\\ \le{} & \theta_2^2 \bar{\gamma}(t_{n-m+1})\frac{h}{N_s} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{2k} \end{split}$$ by setting $\theta_1 = 353$. Since this inequality holds for all $j = 1,\cdots,n+1-m-k$, by , we know that holds for $l = k$. By the principle of mathematical induction, we have completed the proof of . Finally, we set $l=n-m$ in to get $$[ \E\big(\|\Delta G_{n+1,m}\|^2\big)]^{1/2} = \Ns^{(\std)}_{\Gamma^*_{n,m}(n-m+1)}(\Delta \gb^*_{n,m}) \le \theta_2 \sqrt{\bar{\gamma}(t_{n-m+1})}\cdot \sqrt{\frac{h}{N_s}} (1+ \theta_1 \sqrt{P_1(t_{n-m+1})} h)^{n-m},$$ resulting in the final estimate for the numerical error. Due to the jump conditions , we actually need to multiply the right-hand side of by the norm of the observable $\|O_s\|$ when crossing the discontinuities. However, we can always first consider the observable $O_s/\|O_s\|$, and then multiply the result by $\|O_s\|$. Therefore we can always assume $\|O_s\| = 1$ in this paper and thus our analysis is not affected. 
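The factor $\sqrt{h/N_s}$ in the final estimate contains the familiar Monte Carlo rate $1/\sqrt{N_s}$. As a generic illustration of this rate (using a plain sample-mean estimator of a uniform random variable, not the inchworm integrands themselves), quadrupling the number of samples halves the root-mean-square error:

```python
import random

def rms_error(Ns, trials=2000, seed=0):
    # RMS error of the sample mean of Ns Uniform(0,1) draws (true mean 1/2),
    # estimated over many independent trials
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        mean = sum(rng.uniform(0.0, 1.0) for _ in range(Ns)) / Ns
        sq += (mean - 0.5) ** 2
    return (sq / trials) ** 0.5
```

Here the variance of a single draw is $1/12$, so the theoretical RMS error is $1/\sqrt{12 N_s}$, and the empirical ratio between $N_s = 100$ and $N_s = 400$ is close to $2$.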
Proof of Proposition \[thm: Runge Kutta error\]—Estimation of the error for the deterministic method {#sec: Runge Kutta error} ---------------------------------------------------------------------------------------------------- In this section, we consider the error $E_{n+1,m} = \Ge(t_{n+1},t_m) - G_{n+1,m}$ for the deterministic scheme . By the triangle inequality, $$\label{eq: part 1&2} \begin{split} &\|E_{n+1,m}\| \le \underbrace{\left\|\Ge(t_{n+1},t_m) - \left[A_{n,m}(h)\Ge(t_{n},t_m) + \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \right]\right\|}_{\text{Part 1}}\\ & \hspace{100pt}+ \underbrace{\left\|\left[A_{n,m}(h)\Ge(t_{n},t_m) + \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \right] - G_{n+1,m} \right\|}_{\text{Part 2}} \end{split}$$ where $\ge_{n,m}$ and $\ges_{n,m}$ are defined as $$\begin{aligned} \ge_{n,m} &= \left(\Ge(t_{m+1},t_m);\Ge(t_{m+2},t_{m+1}),\Ge(t_{m+2},t_m);\cdots;\Ge(t_n,t_{n-1}),\cdots,\Ge(t_n,t_m)\right), \\ \ges_{n,m} &= \left(\ge_{n,m};\Ge(t_{n+1},t_n),\cdots,\Ge(t_{n+1},t_{m+1}),\Ge(t_{n+1},t_m)\right),\end{aligned}$$ which are similar to the definitions of $\gb_{n,m}$ and $\gb^*_{n,m}$. We note that $\Ge$ is again defined to be multiple-valued on the discontinuities as $$\Ge(t_N,t_k) = \left(\Ge(t^+,t_k),\Ge(t^-,t_k) \right)\text{~and~}\Ge(t_j,t_N ) = \left(\Ge(t_j,t^+),\Ge(t_j,t^-) \right)$$ for $0\le k \le N-1$ and $N+1\le j \le 2N$. We further define $\eb_{n,m} = \ge_{n,m} - \gb_{n,m}$ and $\eb^*_{n,m} = \ges_{n,m} - \gb^*_{n,m}$, which will be used later. The estimation of the two parts in will be discussed in the following two subsections. 
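The estimation of Part 1 below relies on the $O(h^2)$ accuracy of piecewise-linear interpolation of the smooth propagator. As a minimal one-dimensional sanity check of this classical fact (the proof itself uses the two-dimensional version on triangles), one may verify the bound $|f - I_h f| \le \frac{h^2}{8} \max|f''|$ numerically, sampling the error at interval midpoints where it is largest for smooth functions:

```python
import math

def max_interp_error(f, a, b, n):
    # Maximum midpoint error of piecewise-linear interpolation of f
    # on n equal subintervals of [a, b]
    h = (b - a) / n
    worst = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        chord_mid = 0.5 * (f(x0) + f(x1))  # interpolant value at the midpoint
        worst = max(worst, abs(f(0.5 * (x0 + x1)) - chord_mid))
    return worst
```

Halving $h$ reduces the maximum error by roughly a factor of $4$, which is exactly the quadratic rate exploited in the consistency estimate.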
### Estimation of Part 1 in We further split this part of the error by $$\label{eq: conv_heuns_1} \begin{split} &\Ge(t_{n+1},t_m) - \left[A_{n,m}(h)\Ge(t_{n},t_m) + \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \right] \\ ={} & \left(\Ge(t_{n+1},t_m) -A_{n,m}(h)\Ge(t_{n},t_m) \right) - \frac{1}{2}h \left( B_{n,m}(h)\Hs(t_n,\Ge,t_m) + \Hs(t_{n+1},\Ge,t_m) \right) \\ &\hspace{50pt} +\frac{1}{2}h \left( B_{n,m}(h)\Hs(t_n,\Ge,t_m) + \Hs(t_{n+1},\Ge,t_m) \right) - \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \end{split}$$ where the definition of $\Hs$ is given in . Using Taylor expansion, we may easily obtain the bound for the first term of : $$\label{eq: trunc_err_1} \left\|\left(\Ge(t_{n+1},t_m) -A_{n,m}(h)\Ge(t_{n},t_m) \right) - \frac{1}{2}h \left( B_{n,m}(h)\Hs(t_n,\Ge,t_m) + \Hs(t_{n+1},\Ge,t_m) \right) \right\| \le (\frac{1}{4} \bdH \bdG''+ \frac{5}{12} \bdG''' )\cdot h^3.$$ Meanwhile, since $$\label{eq: interp_error} \begin{split} &\Hs(t_n,\Ge,t_m) - F_1(\ge_{n,m}) \\ = & \ \sgn(t_n-t)\sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \ii^{M+1} \int_{t_n > s_M^M > \cdots > s_1^M > t_m } (-1)^{\#\{\vec{\sb}^M \le t\}}\\ &\hspace{20pt}W_s\left[\Ge(t_n,s^M_M)W_s\cdots W_s\Ge(s^M_1,t_m)-I_h\Ge(t_n,s^M_M)W_s\cdots W_sI_h \Ge(s^M_1,t_m) \right]\Ls(t_n,\vec{\sb}^M)\dd \vec{\sb}^M, \end{split}$$ we need the following estimation to bound the above term: $$\label{eq: integrand estimation} \begin{split} & \left\| \Ge(t_n,s^M_M)W_s\cdots W_s\Ge(s^M_1,t_m)-I_h\Ge(t_n,s^M_M)W_s\cdots W_sI_h \Ge(s^M_1,t_m) \right\| \\ \le & \ \sum_{j = 0}^M \left\| \Ge(t_n,s^M_M)W_s \cdots W_s \Ge(s^M_{j+2},s^M_{j+1})\right\| \cdot \bdW \cdot \left\|\Ge(s^M_{j+1},s^M_{j}) - I_h \Ge(s^M_{j+1},s^M_{j})\right\| \cdot \bdW \\ & \hspace{160pt}\cdot \left\| I_h\Ge(s^M_{j},s^M_{j-1})W_s \cdots W_s I_h\Ge(s^M_{1},t_m)\right\|. 
\end{split}$$ The term $\left\|\Ge(s^M_{j+1},s^M_{j}) - I_h \Ge(s^M_{j+1},s^M_{j})\right\|$ in the above equation is the linear interpolation error. By standard interpolation error estimates (see e.g. [@Lin_Interp_Err_Simpl]), if the point $(s^M_{j+1},s^M_j)$ lies inside the triangle $T'$, the interpolation error is $$\left|\Ge^{(rs)}(s^M_{j+1},s^M_j) - I_h \Ge^{(rs)}(s^M_{j+1},s^M_j)\right| \le \frac{1}{2}R_{T'}^2 \left\| \rho\left(D^2 \Ge^{(rs)}\right) \right\|_{L^\infty(T')},$$ where $R_{T'}$ is the radius of the circumscribed circle of $T'$ and $\rho\big(D^2 \Ge^{(rs)}\big)$ denotes the spectral radius of the Hessian of $\Ge^{(rs)}$, which can be bounded by $$\rho \left(D^2 \Ge^{(rs)}\right) \le \|\nabla^2 \Ge^{(rs)}\| \le 2 \bdG''.$$ Plugging the above result into and using the bounds of the exact and numerical solutions, we get $$\left\| \Ge(t_n,s^M_M)W_s\cdots W_s\Ge(s^M_1,t_m)-I_h\Ge(t_n,s^M_M)W_s\cdots W_sI_h \Ge(s^M_1,t_m) \right\| \le (M+1)\bdG^M \bdW^M \bdG'' h^2.$$ Finally we apply this bound to to obtain $$\label{eq: trunc_err_2_1} \left\|\Hs(t_n,\Ge,t_m) - F_1(\ge_{n,m})\right\| \le \bar{\beta}(t_{n-m}) h^2,$$ where $$\bar{\beta}(t) = \bdW \bdG'' \bdL^{1/2} \left( \sum^{\bar{M}}_{\substack{M=1 \\ M \text{~is odd~}}} \frac{M+1}{(M-1)!!}(\bdW \bdG \bdL^{1/2} t)^M \right).$$ Note that $\Hs(t_{n+1},\Ge,t_m) - F_2(\ges_{n,m})$ can be obtained by changing $t_n$ to $t_{n+1}$ in the expression of $\Hs(t_n,\Ge,t_m) - F_1(\ge_{n,m})$. Therefore its bound can be given by $\bar{\beta}(t_{n-m+1}) h^2$. 
Thus, $$\label{eq: trunc_err_2} \left\|\frac{1}{2}h \left( \Hs(t_n,\Ge,t_m) + \Hs(t_{n+1},\Ge,t_m) \right) - \frac{1}{2}h \left( F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right)\right\| \le\bar{\beta}(t_{n-m+1}) h^3.$$ Inserting the estimates into , we have the following estimation for Part 1 of : $$\label{eq: part 1 final} \begin{split} &\underbrace{\left\| \Ge(t_{n+1},t_m) - \left[A_{n,m}(h)\Ge(t_{n},t_m) + \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \right] \right\|}_{\text{Part 1}} \\ & \hspace{200pt} \le \left(\frac{1}{4} \bdH \bdG''+ \frac{5}{12} \bdG''' + \bar{\beta}(t_{n-m+1}) \right) h^3. \end{split}$$ ### Estimation of Part 2 in The estimation for this part of the error is similar to that of the stochastic error. By the numerical scheme , we have $$\label{accumulate_err_split} \begin{split} &\left[A_{n,m}(h)\Ge(t_{n},t_m) + \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \right] - G_{n+1,m} \\ = & \ A_{n,m}(h) E_{n,m} + \frac{1}{2}h B_{n,m}(h)\left( F_1(\ge_{n,m}) - F_1(\gb_{n,m})\right) + \frac{1}{2}h \left( F_2(\ges_{n,m}) - F_2(\gb^*_{n,m})\right). \end{split}$$ For the second term on the right-hand side, we can mimic the analysis of to get $$\label{accumulate_err_1} \left\| F_1(\ge_{n,m}) - F_1(\gb_{n,m})\right\| \le 8P_1(t_{n-m}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \|\eb_{n,m}\|_{\Gamma_{n,m}(i)}.$$ For the third term, due to the same analysis, we have $$\label{eq: third term} \begin{split} &\left\| F_2(\ges_{n,m}) - F_2(\gb^*_{n,m})\right\|\\ \le{} & 8P_1(t_{n-m+1}) h \left\{ \left\|\Ge(t_{n+1},t_m) - G^*_{n+1,m} \right\| + \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \|\eb^*_{n,m}\|_{\Gamma^*_{n,m}(i)} \right\} \end{split}$$ where $$\begin{split} &\left\| \Ge(t_{n+1},t_m) - G^*_{n+1,m} \right\|\\ ={} & \left\|\left[\Ge(t_n,t_m) + h \frac{\partial}{\partial \sa}\Ge(t_n,t_m) + \frac{1}{2}h^2 \frac{\partial^2}{\partial \sa^2}\Ge(\nu_n,t_m) \right]- \left[ \left( I + \sgn(t_n-t)\ii H_s 
h\right)G_{n,m} + h F_1(\gb_{n,m}) \right] \right\|\\ ={} & \left\|\left( I + \sgn(t_n-t)\ii H_s h\right) E_{n,m} + h \left[ \Hs(t_n,\Ge,t_m) - F_1(\gb_{n,m}) \right] + \frac{1}{2}h^2 \frac{\partial^2}{\partial \sa^2}\Ge(\nu_n,t_m) \right\| \\ \le{} & \left\| \left( I + \sgn(t_n-t)\ii H_s h\right) E_{n,m}\right\| + h \left\| \Hs(t_n,\Ge,t_m) - F_1(\ge_{n,m}) \right\| \\ & \hspace{150pt}+ h \left\| F_1(\ge_{n,m}) - F_1(\gb_{n,m}) \right\| + \left\|\frac{1}{2}h^2 \frac{\partial^2}{\partial \sa^2}\Ge(\nu_n,t_m)\right\| . \end{split}$$ Note that the second-order derivative above is the remainder of the Taylor expansion and should be interpreted as $$\frac{\partial^2}{\partial \sa^2}\Ge(\nu_{n},t_m) = \begin{pmatrix} \frac{\partial^2}{\partial \sa^2}\Ge^{(11)}(\nu^{(11)}_{n},t_m) & \frac{\partial^2}{\partial \sa^2}\Ge^{(12)}(\nu^{(12)}_{n},t_m) \\ \frac{\partial^2}{\partial \sa^2}\Ge^{(21)}(\nu^{(21)}_{n},t_m) & \frac{\partial^2}{\partial \sa^2}\Ge^{(22)}(\nu^{(22)}_{n},t_m) \end{pmatrix}.$$ Using the previous results and , we have the bound $$\begin{split} & \left\|\Ge(t_{n+1},t_m) - G^*_{n+1,m}\right\|\\ \le{} & (1+\frac{1}{2}\bdH^2 h^2)\|E_{n,m}\| + 8P_1(t_{n-m}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \|\eb_{n,m}\|_{\Gamma_{n,m}(i)}+\left(\bdG''h^2+\bar{\beta}(t_{n-m}) h^3\right)\\ \le{} & (1+\frac{1}{2}\bdH^2 h^2)\|E_{n,m}\| + 8P_1(t_{n-m}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \|\eb_{n,m}\|_{\Gamma_{n,m}(i)}+2\bdG''h^2 \end{split}$$ upon assuming $h \le \bdG''/\bar{\beta}(t_{n-m})$. Plugging the above estimate into , we obtain $$\label{accumulate_err_2} \left\| F_2(\ges_{n,m}) - F_2(\gb^*_{n,m})\right\| \le 28P_1(t_{n-m+1}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \|\eb^*_{n,m}\|_{\Gamma^*_{n,m}(i)} + 16P_1(t_{n-m+1})\bdG'' h^3$$ which is a result similar to . 
We now plug the estimates and into , obtaining the following estimation for Part 2 of $E_{n+1,m}$: $$\label{eq: part 2 final} \begin{split} &\underbrace{\left\|\left[A_{n,m}(h)\Ge(t_{n},t_m) + \frac{1}{2}h \left( B_{n,m}(h) F_1(\ge_{n,m}) + F_2(\ges_{n,m}) \right) \right] - G_{n+1,m} \right\|}_{\text{Part 2}} \\ \le {}& \left(1+\frac{1}{8}\bdH^4 h^4\right) \|E_{n,m}\| + 22P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \|\eb^*_{n,m}\|_{\Gamma^*_{n,m}(i)} + 8P_1(t_{n-m+1})\bdG'' h^3. \end{split}$$ We can now combine the estimates for Part 1 and Part 2, so that the estimation yields the recurrence relation: $$\label{eq: Runge Kutta error} \begin{split} &\|E_{n+1,m}\| \\ \le{} & \left(1+\frac{1}{8}\bdH^4 h^4 \right)\|E_{n,m}\| + 22P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \|\eb^*_{n,m}\|_{\Gamma^*_{n,m}(i)} + P^{\text{e}}(t_{n-m+1})\cdot h^3, \end{split}$$ where $$P^{\text{e}}(t) = \left( \frac{1}{4} \bdH + 8P_1(t) \right)\bdG''+ \frac{5}{12} \bdG''' + \bar{\beta}(t).$$ One may compare the recurrence relation above with to find that the only difference between these two inequalities is the truncation error (last term). Therefore, we may simply replicate the procedures – and conclude that the deterministic error has exactly the same growth rate in the exponential part as the numerical error: $$\label{recurrence 3} \|E_{n+1,m}\| \le P^{\text{e}}(t_{n-m+1}) (1+\theta_1 \sqrt{P_1(t_{n-m+1})}h)^{n-m+1}\cdot h^2$$ which can again be verified by mathematical induction. Therefore, we arrive at the estimate for the deterministic error at stated in Proposition \[thm: Runge Kutta error\]. Conclusion {#sec: conclusion} ========== We have presented a detailed analysis of the error growth in the inchworm Monte Carlo method, emphasizing the trade-off between the numerical sign problem and the error growth caused by accumulation and amplification during time marching.
The result explains why the inchworm Monte Carlo method has a slower error growth than the classical quantum Monte Carlo method, and our analysis reveals how partial resummation trades off the numerical sign problem against the error amplification. Our work points to the research direction of improving the time integrator to further suppress the error growth, which will be considered in future work. Acknowledgement {#acknowledgement .unnumbered} =============== Zhenning Cai was supported by the Academic Research Fund of the Ministry of Education of Singapore under grant No. R-146-000-291-114. The work of Jianfeng Lu was supported in part by the National Science Foundation via grants DMS-1454939 and DMS-2012286. Formulas for the roots of the characteristic polynomial {#sec: char poly} ================================================== Here we provide the formulas for $r_i$ appearing in . Let $\epsilon = c_2 h^2$. Then we have $$\label{eq: r_i} \begin{split} &r_1 = 1+\epsilon+\frac{\left(\frac{2}{3}\right)^{1 / 3} \epsilon(2+2 h+3 \epsilon)}{R}+\frac{1}{2^{1 / 3} \times 3^{2 / 3}}R,\\ &r_2 = 1+\epsilon-\frac{(1+ \sqrt{3}\ii) \epsilon (2+2 h+3 \epsilon)} {2^{2 / 3} \times 3^{1 / 3}R}+\frac{\mathrm{i}(\mathrm{i}+\sqrt{3})}{2 \times 2^{1 / 3} \times 3^{2 / 3}}R, \\ &r_3 = 1+\epsilon+\frac{\ii(\ii+ \sqrt{3}) \epsilon (2+2 h+3 \epsilon)} {2^{2 / 3} \times 3^{1 / 3}R}+\frac{\mathrm{i}(\mathrm{i}+\sqrt{3})}{2 \times 2^{1 / 3} \times 3^{2 / 3}} R \end{split}$$ where $$R = \left(\epsilon\left(9 h+18 \epsilon+18 h \epsilon+18 \epsilon^{2}+\sqrt{3} \sqrt{-4 \epsilon(2+2 h+3 \epsilon)^{3}+27(h+2 h \epsilon+2 \epsilon(1+\epsilon))^{2}}\right)\right)^{1 / 3}.$$ When $h$ is small, it can be verified that $R = O(h)$. One can then see from that $r_i = 1 + O(h)$ for $i = 1,2,3$.
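Assuming an $O(1)$ value for $c_2$ (set to $1$ below, an assumption of this sketch), the scalings $R = O(h)$ and $r_1 \to 1$ at rate $O(h)$ can be checked numerically from the closed form above:

```python
import cmath

# Numerical check that the closed-form root r_1 of Eq. (eq: r_i) satisfies
# r_1 = 1 + O(h) as h -> 0. c_2 is a hypothetical O(1) constant set to 1.
c2 = 1.0

def root_r1(h):
    eps = c2 * h * h
    disc = cmath.sqrt(-4 * eps * (2 + 2 * h + 3 * eps) ** 3
                      + 27 * (h + 2 * h * eps + 2 * eps * (1 + eps)) ** 2)
    R = (eps * (9 * h + 18 * eps + 18 * h * eps + 18 * eps ** 2
                + cmath.sqrt(3) * disc)) ** (1 / 3)
    return (1 + eps
            + (2 / 3) ** (1 / 3) * eps * (2 + 2 * h + 3 * eps) / R
            + R / (2 ** (1 / 3) * 3 ** (2 / 3)))

# |r_1 - 1| / h stays bounded as h decreases, i.e. r_1 = 1 + O(h).
ratios = [abs(root_r1(h) - 1) / h for h in (1e-2, 1e-3, 1e-4)]
assert max(ratios) < 10 * min(ratios)
```

The principal complex branch is used for the square and cube roots, matching the convention under which $R \sim O(h)$.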
Proofs related to the bias estimation of the inchworm Monte Carlo method {#sec: recurrence 1} ======================================================================== In this appendix, we would like to complete the proof of the bias estimation for the inchworm Monte Carlo method. Specifically, the proofs of in Lemma \[lemma: deltak and deltak\^2 integ diff\] and the proof of in Theorem \[thm: bounds\] will be given below. The final result can be obtained directly by the triangle inequality. **(i)** Estimate of $\left\|\E(\tK_1 - K_1)\right\|$: We again use the relation to get $$\E_{\vec{\sb}}(\tK_1 - K_1) = F_1 (\tg_{n,m}) - F_1(\gb_{n,m}).$$ Then by Taylor expansion, we get $$\label{eq: est k1 expansion} \begin{split} &\E\left(\tK_1^{(rs)} - K_1^{(rs)}\right) = \left( \nabla F_1^{(rs)}(\gb_{n,m})\right)^{{\mathrm{T}}}\cdot \E(\htg_{n,m} - \hg_{n,m}) + \\ &\hspace{140pt}\frac{1}{2}\E\left[(\htg_{n,m} - \hg_{n,m})^{{\mathrm{T}}}\left(\nabla^2 F_1^{(rs)}(\xib_{n,m})\right) (\htg_{n,m} - \hg_{n,m}) \right] \end{split}$$ where $\xib_{n,m}$ is a convex combination of $\tg_{n,m}$ and $\gb_{n,m}$. 
The estimate for the first term on the right-hand side of the equation above is similar to which is $$\label{eq: est k1 Taylor 1} \left| \left( \nabla F_1^{(rs)}(\gb_{n,m})\right)^{{\mathrm{T}}}\cdot \E(\htg_{n,m} - \hg_{n,m}) \right| \le 4P_1(t_{n-m}) h \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right)\left\| \E \left( \Delta \gb_{n,m} \right) \right\|_{\Gamma_{n,m}(i)}.$$ For the second term on the right-hand side of , we use Proposition \[thm: second order derivative\] to bound it by $$\label{eq: est k1 Taylor 2} \begin{split} &\left|\E\left[\left(\htg_{n,m} - \hg_{n,m}\right)^{{\mathrm{T}}}\left(\nabla^2 F_1^{(rs)}(\xib_{n,m})\right) \left(\htg_{n,m} - \hg_{n,m}\right)\right] \right| \\ ={} & \left|\E \left(\sum_{(k_1,\ell_1) \in \Omega_{n,m}; \atop (k_2,\ell_2) \in \Omega_{n,m}} \sum_{p_1,q_1 =1,2; \atop p_2,q_2 =1 ,2} \frac{\partial^2 F^{(rs)}_1(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \Delta G^{(p_1 q_1)}_{k_1,\ell_1} \Delta G^{(p_2 q_2)}_{k_2,\ell_2} \right)\right|\\ \le{} & \left[ \left(\sum_{(k_1,\ell_1) \in \partial \Omega_{n,m}; \atop (k_2,\ell_2) \in \partial \Omega_{n,m}} + \sum_{(k_1,\ell_1) \in \partial \Omega_{n,m}; \atop (k_2,\ell_2) \in \mathring{\Omega}_{n,m}} + \sum_{(k_1,\ell_1) \in \mathring{\Omega}_{n,m}; \atop (k_2,\ell_2) \in \partial \Omega_{n,m}} + \sum_{(k_1,\ell_1) \in \mathring{\Omega}_{n,m} ;\atop (k_2,\ell_2) \in \mathring{\Omega}_{n,m}} \right) \right. \\ &\hspace{150pt} \left. 
\sum_{p_1,q_1 =1,2; \atop p_2,q_2 =1,2} \E \left( \left|\frac{\partial^2 F^{(rs)}_1(\xib_{n,m})}{\partial G_{k_1,\ell_1}^{(p_1 q_1)}\partial G_{k_2,\ell_2}^{(p_2 q_2)}} \right| \right) \right]\cdot \max_{(k,\ell)\in \Omega_{n,m}; \atop p,q=1,2}\E\left( \left| \Delta G_{k,\ell}^{(pq)} \right|^2 \right) \\ \le{} & \bar{\alpha}(t_{n-m})\left[ \Ns^{(\std)}_{\Omega_{n,m}}(\Delta \gb_{n,m}) \right]^2, \end{split}$$ where $$\bar{\alpha}(t) = 16 P_2(t)(10t+16t^2+5t^3+\frac{1}{4}t^4).$$ Here we first count the numbers of nodes in the two sets: $\left|\partial \Omega_{n,m}\right| = 2(n-m)-1$ and $\left|\mathring{\Omega}_{n,m}\right| = \frac{1}{2}(n-m-1)(n-m-2)$. The last “$\le$” above is then obtained by combining the second-order derivatives of different magnitudes given in Proposition \[thm: second order derivative\] with the corresponding number of such derivatives. For example, the first summation in the third line above together with the second-order derivatives such that conditions **(a)**-**(d)** are satisfied will contribute $P_2(t_{n-m})h \cdot (4+2\times 3(n-m)) \le P_2(t_{n-m}) \cdot 10\,t_{n-m}$ to the final estimate in the last line. A similar analysis applies to the other three summations. Inserting the two estimates above into gives us the evaluation for $\left\|\E(\tK_1 - K_1)\right\|$ as . **(ii)** Estimate of $\left\|\E(\tK_2 - K_2)\right\|$: Using the same method as the estimation of $\left\|\E(\tK_1 - K_1)\right\|$, we have $$\label{eq: K2} \begin{split} \left\|\E(\tK_2 - K_2)\right\| & \le 4P_1(t_{n-m+1}) h \left\{ \left\|\E(\tG^*_{n+1,m} - G^*_{n+1,m}) \right\| + \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \left\| \E \left( \Delta \gb^*_{n,m} \right) \right\|_{\Gamma^*_{n,m}(i)} \right\} \\ & \quad + \bar{\alpha}(t_{n-m+1})\left[ \Ns^{(\std)}_{\Omega^*_{n,m}}(\Delta \gb^*_{n,m}) \right]^2, \end{split}$$ which is similar to .
By the definition of $G^*_{n+1,m}$ and $\tG^*_{n+1,m}$ given in and respectively, we can estimate $\left\|\E(\tG^*_{n+1,m} - G^*_{n+1,m}) \right\|$ as $$\label{eq: EG^*} \begin{split} & \|\E(\tG^*_{n+1,m} - G^*_{n+1,m})\| \\ \le{} & \left\|\E\left(( I + \sgn(t_n - t)\ii H_s h ) \Delta G_{n,m}\right) \right\| + h\|\E(\tK_1 - K_1)\| \\ \le{} & \left(1+\frac{1}{2}\bdH^2 h^2\right)\left\|\E\left( \Delta G_{n,m}\right) \right\| + 8P_1(t_{n-m}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right)\left\| \E \left( \Delta \gb_{n,m} \right) \right\|_{\Gamma_{n,m}(i)}\\ &\hspace{200pt}+ \bar{\alpha}(t_{n-m})h \left[ \Ns^{(\std)}_{\Omega_{n,m}}(\Delta \gb_{n,m}) \right]^2, \end{split}$$ where we have applied in the last inequality. Thus it remains only to bound $$\label{eq:Nstd} \left[\Ns^{(\std)}_{\Omega^*_{n,m}}(\Delta \gb^*_{n,m}) \right]^2 = \max \left(\left[\Ns^{(\std)}_{\bar{\Omega}_{n,m}}(\Delta \gb^*_{n,m}) \right]^2,\E(\|\tG^*_{n+1,m} - G^*_{n+1,m}\|^2)\right),$$ for which we just need to focus on the estimation of $\E(\|\tG^*_{n+1,m} - G^*_{n+1,m}\|^2)$. Again by the definitions and , we have $$\begin{split} \E(\|\tG^*_{n+1,m} - G^*_{n+1,m}\|^2) \le 2 (1 + \bdH^2 h^2 ) \E(\|\Delta G_{n,m}\|^2) + 2h^2\E(\|\tK_1 - K_1\|^2). 
\end{split}$$ By , we can estimate $\E(\|\tK_1 - K_1\|^2)$ as $$\begin{split} & \E(\|\tK_1 - K_1\|^2) \\ \le & \ 128P^2_1(t_{n-m}) h^2 \left\{ \sum_{i = 1}^{n-m} \left( 2 + (n-m-i)h \right) \Ns^{(\std)}_{\Gamma_{n,m}(i)}(\Delta \gb_{n,m}) \right\}^2+ 8\bar{\gamma}(t_{n-m})\cdot \frac{1}{N_s} \\ \le & \ 128 P^2_1(t_{n-m})(2+t_{n-m})^2 t^2_{n-m} \left[ \Ns^{(\std)}_{\Omega_{n,m}}(\Delta \gb_{n,m}) \right]^2 + 8\bar{\gamma}(t_{n-m})\cdot \frac{1}{N_s} \end{split}$$ Therefore $$\begin{split} &\E(\|\tG^*_{n+1,m} - G^*_{n+1,m}\|^2)\\ \le{} & 2 (1 + \bdH^2 h^2 ) \E(\|\Delta G_{n,m}\|^2) + 256 P^2_1(t_{n-m})(2+t_{n-m})^2 t^2_{n-m}h^2 \left[ \Ns^{(\std)}_{\Omega_{n,m}}(\Delta \gb_{n,m}) \right]^2+ 16\bar{\gamma}(t_{n-m})\cdot \frac{h^2}{N_s} \\ \le{} & 2\left[1 + \left( \bdH^2 + 128 P^2_1(t_{n-m})(2+t_{n-m})^2 t^2_{n-m}\right)h^2 \right]\left[ \Ns^{(\std)}_{\Omega_{n,m}}(\Delta \gb_{n,m}) \right]^2 + 16\bar{\gamma}(t_{n-m})\cdot\frac{h^2}{N_s} \\ \le{} & 4\left[ \Ns^{(\std)}_{\Omega_{n,m}}(\Delta \gb_{n,m}) \right]^2 + 16\bar{\gamma}(t_{n-m})\cdot\frac{h^2}{N_s} \end{split}$$ if $h\le 1/\sqrt{\bdH^2 + 128 P^2_1(t_{n-m})(2+t_{n-m})^2 t^2_{n-m}}$. Inserting this inequality into , one obtains $$\label{eq: est delta G^2} \left[ \Ns^{(\std)}_{\Omega^*_{n,m}}(\Delta \gb^*_{n,m}) \right]^2 \le 4\left[ \Ns^{(\std)}_{\bar{\Omega}_{n,m}}(\Delta \gb^*_{n,m}) \right]^2 + 16\bar{\gamma}(t_{n-m})\cdot\frac{h^2}{N_s}.$$ Finally, the estimate can be obtained by inserting the estimates and into and require $h \le \frac{1}{\sqrt{2 P_1(t_{n-m+1})}}$. By , we estimate the bias by $$\label{eq: est first order error} \begin{split} \|\E(\dG_{n+1,m})\| \le & \ \|A_{n,m}(h)\E(\dG_{n,m})\| + \frac{1}{2} h \|B_{n,m}(h)\E(\tK_1 - K_1)\| + \frac{1}{2} h \|\E(\tK_2 - K_2)\| \\ \le & \ \left(1+\frac{1}{8}\bdH^4 h^4\right)\|\E(\dG_{n,m})\| + h \|\E(\tK_1 - K_1)\| + \frac{1}{2} h \|\E(\tK_2 - K_2)\|. 
\end{split}$$ Now we can insert and into the above equation to get the recurrence relation stated in , where the error $\Ns^{(\mathrm{std})}_{\bar{\Omega}_{n,m}}(\Delta \gb^*_{n,m})$ can be bounded by , resulting in $$\begin{split} &\|\E(\dG_{n+1,m})\|\\ \le& \ \left(1+\frac{1}{8}\bdH^4 h^4\right)\|\E(\dG_{n,m})\| + 22P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \left\| \E ( \Delta \gb^*_{n,m}) \right\|_{\Gamma^*_{n,m}(i)}\\ & + \left(\frac{7}{2}\bar{\alpha}(t_{n-m+1})\theta^2_2 \bar{\gamma}(t_{n-m+1})\left({\mathrm{e}}^{2 \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right) \cdot \frac{h^2}{N_s} + 8 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\cdot \frac{h^3}{N_s} \right) \\ \le& \ \left(1+\frac{1}{8}\bdH^4 h^4\right)\|\E(\dG_{n,m})\| + 22P_1(t_{n-m+1}) h^2 \sum_{i = 1}^{n-m} \left( 2 + (n-m+1-i)h \right) \left\| \E ( \Delta \gb^*_{n,m}) \right\|_{\Gamma^*_{n,m}(i)}\\ &+ 4\bar{\alpha}(t_{n-m+1})\theta^2_2 \bar{\gamma}(t_{n-m+1})\left({\mathrm{e}}^{2 \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right) \cdot \frac{h^2}{N_s} \end{split}$$ upon assuming $h \le \frac{1}{16}$. We notice that the above inequality is simply the recurrence relation with the last term changed. Therefore, we can repeat the application of and find the following estimate: $$\|\E(\dG_{n+1,m})\| \le 4\theta^2_2 \bar{\alpha}(t_{n-m+1})\bar{\gamma}(t_{n-m+1})\left({\mathrm{e}}^{2 \theta_1 \sqrt{P_1(t_{n-m+1})} t_{n-m+1}}\right)(1+ \theta_1\sqrt{P_1(t_{n-m+1})}h)^{n-m+1}\cdot \frac{h}{N_s}$$ which leads to the final estimate for the bias stated in .
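The induction arguments used repeatedly above follow the standard discrete Grönwall pattern, which a scalar toy iteration makes explicit (the constants $c$ and $K$ below are hypothetical stand-ins for the growth and truncation constants of the recurrences):

```python
import math

# Scalar discrete-Gronwall iteration: with e_0 = 0 and
#   e_{n+1} = (1 + c*h) * e_n + K * h^2,
# one gets e_n = (K/c) * ((1+c*h)^n - 1) * h <= (K/c) * (exp(c*T) - 1) * h,
# i.e. a first-order bound uniform in the number of steps up to time T.

def iterate(c, K, h, T):
    e = 0.0
    for _ in range(round(T / h)):
        e = (1.0 + c * h) * e + K * h * h
    return e

c, K, T = 2.0, 5.0, 1.0
for h in (1e-2, 1e-3):
    e_final = iterate(c, K, h, T)
    bound = (K / c) * math.expm1(c * T) * h
    assert e_final <= bound  # error stays O(h), uniformly in n
```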
--- address: | Institut für Physik, Universität Dortmund,\ D-44221 Dortmund, Germany author: - 'A. MUKHERJEE and C. PISANO' title: 'DELINEATING THE (UN)POLARIZED PHOTON CONTENT OF THE NUCLEON' --- Introduction ============ The QED Compton process (QEDCS) in the scattering $ep \rightarrow e\gamma X$ has a distinctive experimental signature: both the outgoing electron and photon are detected at large polar angles and their transverse momenta almost balance each other, with little or no hadronic activity at the detectors [@blu; @ruju]. QEDCS in unpolarized $ep$ scattering has long been suggested as an excellent channel to measure the structure function $F_2(x_B, Q^2)$ and also to extract the unpolarized photon content of the proton in the equivalent photon approximation (EPA) [@blu; @ruju; @kessler]. In fact, this has been recently analyzed by members of the H1 collaboration at HERA [@lend]. Improved kinematical constraints have been suggested in [@pap1; @pp2] for a more accurate extraction of the unpolarized photon distribution. The polarized photon content of the nucleon consists of two components, elastic and inelastic, like its unpolarized counterpart [@gpr1; @gpr2]. Recently we showed that when the virtuality of the exchanged photon is small, the ‘exact’ polarized QEDCS cross section is expressed in terms of the polarized equivalent photon distribution of the proton [@pap3]. We gave the necessary kinematical cuts to extract the polarized photon distribution at HERMES and eRHIC by using the QED Compton peak; QEDCS can also provide valuable information on $g_1(x_B, Q^2)$ in the small $Q^2$, medium $x_B$ region at HERMES and over a broad range of $x_B$, $Q^2$ at eRHIC. Here we report on our main results. QED COMPTON SCATTERING CROSS SECTION AND THE EPA ================================================ We consider the process shown in Fig. 1. $X$ is a generic hadronic system with momentum $P_{X}=\sum_{X_i} P_{X_i}$. For elastic scattering $P_X=P'$ and $X$ is a proton.
\ We introduce the invariants $$S=(P+l)^2, \qquad \hat{s}=(l+k)^2, \qquad \hat{t}=(l-l')^2, \qquad t=k^2, \label{invar}$$ where $k$ is the 4-momentum of the virtual photon. The photon in the final state is real, $k'^2=0$. We take the proton mass to be $m$. The cross section can be calculated in a covariant way [@pap1], both in the elastic and inelastic channels, and it can be shown that in the limit $S \gg m^2$ and $\hat s \gg |t|$ one can approximate the cross section as $$\sigma^{\mathrm{EPA}}(S) = \int_{x_{\min}}^{(1-m/\sqrt{S})^2} \mathrm{d}x \int_{m_e^2 -\hat s}^{0} \mathrm{d}\hat t\; \gamma(x, x S)\, \frac{\mathrm{d}\hat\sigma (x S, \hat t)}{\mathrm{d}\hat t}, \label{epael}$$ where $x={\hat s/ S}$ and $\gamma (x, x S)$ is the equivalent photon distribution of the proton [@blu; @ruju; @kniehl; @gpr1; @gpr2], which has an elastic and an inelastic component; ${d \hat \sigma (\hat s, \hat t)\over d\hat t}$ is the real photoproduction cross section. When the incident electron and proton are both longitudinally polarized, the cross section in the elastic channel becomes [@pap3] $$\Delta\sigma_{\mathrm{el}} = \frac{\alpha^3}{8\pi (S-m^2)^2} \int_{m_e^2}^{(\sqrt{S}-m)^2} \mathrm{d}\hat s \int_{t_{\min}}^{t_{\max}} \frac{\mathrm{d}t}{t} \int_{\hat t_{\min}}^{\hat t_{\max}} \mathrm{d}\hat t \int_{0}^{2\pi} \mathrm{d}\phi\; X^A_2(\hat s, t, \hat t)\, \big[\,\cdots\,\big], \label{elsigg}$$ $G_E$ and $G_M$ are the proton’s electric and magnetic form factors and $\phi$ is the azimuthal angle of the outgoing $e-\gamma$ system in the center-of-mass frame. The limits of integrations follow from kinematics and are the same as in the unpolarized case [@pap1]. $X_2^A(\hat s, t, \hat t)$ can be obtained from the leptonic tensor (see [@pap3] for the definition). The cross section in the inelastic channel is [@pap3] $$\begin{aligned} \Delta\sigma_{\mathrm{inel}}(S) &=& \frac{\alpha^3}{4\pi (S-m^2)^2} \int_{W^2_{\min}}^{W^2_{\max}} \mathrm{d}W^2 \int_{m_e^2}^{(\sqrt{S}-m)^2} \mathrm{d}\hat s \int_{Q^2_{\min}}^{Q^2_{\max}} \frac{\mathrm{d}Q^2}{Q^2}\, \frac{1}{W^2+Q^2-m^2} \nonumber\\ &&{} \times \left\{ g_1 (x_B, Q^2) + \frac{4 m^2}{W^2+Q^2-m^2}\, g_2 (x_B, Q^2) \right\} \tilde{X}_2^A(\hat s, Q^2),\end{aligned}$$ where $\tilde{X}_2^A (\hat s, Q^2) = 2\pi\int \mathrm{d}\hat t\, X_2^A(\hat s, Q^2, \hat t)$, $W$ is the invariant mass of the produced hadronic system and $Q^2=-t$.
The limits of the integrations are the same as in the unpolarized case and can be found in [@pap1]. When $S \gg m^2$ and $\hat s \gg Q^2$, the cross section is approximated by a form similar to (\[epael\]) with $\gamma(x,x S)$ replaced by $\Delta \gamma(x,x S)$, which is the polarized equivalent photon distribution of the proton, and ${d \hat\sigma (\hat s, \hat t)\over d\hat t}$ replaced by ${d \Delta \hat \sigma (\hat s, \hat t)\over d\hat t}$, which is the polarized real photoproduction cross section. $\Delta \gamma(x,x S)$ has both elastic and inelastic components [@pap1; @pap3]. The elastic component of $(\Delta) \gamma$ is expressed in terms of the form factors, for which the well-known dipole parametrizations can be used [@kniehl; @gpr1]. The inelastic component is expressed in terms of the proton structure functions. This component is scale dependent and is our main concern here. NUMERICAL RESULTS ================= In this section, we show our numerical estimates of the QEDCS process for HERMES and eRHIC kinematics. QEDCS events can be selected by imposing the following constraints on the energies $E_e'$ and $E_\gamma'$ of the outgoing electron and photon respectively, and on their polar angles $\theta_e$, $\theta_\gamma$ (these constraints are similar to the ones used at HERA for unpolarized scattering): $$\begin{gathered} E_e',\, E_\gamma' > 4~\mathrm{GeV}; \\ 0.04 \le \theta_e,\, \theta_\gamma \le 0.2~~(\mathrm{HERMES}); \qquad 0.06 \le \theta_e,\, \theta_\gamma \le \pi - 0.06~~(\mathrm{eRHIC}); \\ \hat s > 1~\mathrm{GeV}^2; \qquad \hat s > Q^2. \label{cuts}\end{gathered}$$ For HERMES, the incident electron beam energy is $E_e=27.5$ GeV. For eRHIC, we have taken $E_e=10$ GeV and $E_p=250$ GeV. The constraints on the energies and the polar angles of the outgoing particles remove the initial and final state radiative events [@blu; @kessler] unrelated to QEDCS. The last two cuts select the kinematical region where the EPA is expected to hold.
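As a concrete check of the kinematics, the invariants of Eq. (\[invar\]) and the EPA-selection cuts can be evaluated for a hypothetical event. The four-momenta below are illustrative HERMES-like values (in GeV), not measured data.

```python
import numpy as np

# Sketch: evaluating S = (P+l)^2, s^ = (l+k)^2, t = k^2 and the cuts
# s^ > 1 GeV^2, s^ > Q^2 for an illustrative fixed-target configuration.

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

m = 0.938                                   # proton mass (GeV)
l = np.array([27.5, 0.0, 0.0, 27.5])        # incoming electron (massless)
P = np.array([m, 0.0, 0.0, 0.0])            # target proton at rest
k = np.array([0.3, 0.0, 0.0, -0.35])        # almost-real photon (illustrative)

S = mdot(l + P, l + P)                      # total invariant mass squared
s_hat = mdot(l + k, l + k)                  # subprocess energy squared
t = mdot(k, k)                              # photon virtuality, Q^2 = -t
Q2 = -t
x_gamma = s_hat / S                         # momentum fraction of the photon

assert t < 0                                # spacelike exchanged photon
assert s_hat > 1.0 and s_hat > Q2           # the last two cuts are satisfied
assert 0.0 < x_gamma < 1.0
```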
The asymmetry $A_{LL}$ is defined as $$A_{LL}={\sigma_{++}-\sigma_{+-} \over \sigma_{++}+\sigma_{+-}},$$ where the indices $+$ and $-$ refer to the helicities of the incident electron and proton respectively. Fig. 2(a) shows the asymmetry for HERMES kinematics in bins of $x_\gamma={l \cdot k\over P \cdot l}$, which is the fraction of the proton’s momentum carried by the photon. In the EPA, $x_\gamma=x$. The total (elastic+inelastic) asymmetry shows an excellent agreement with that calculated in the EPA (shown by the dot-dashed line) in all bins except the last one for higher $x_\gamma$. The expected statistical error in each bin is calculated using the formula $\delta A_{LL} \approx {1\over \mathcal{P}_e \mathcal{P}_p \sqrt {\mathcal {L}\sigma_{\mathrm{bin}}}}$, where $\mathcal {P}_e$ and $\mathcal {P}_p$ are the polarizations of the incident lepton and proton respectively, $\mathcal {L}$ is the integrated luminosity and $\sigma_{\mathrm{bin}}$ is the unpolarized cross section in the corresponding $x_\gamma$ bin. We have taken $\mathcal {P}_e=\mathcal {P}_p=0.7$ and $\mathcal {L}=1~\mathrm{fb}^{-1}$ for both HERMES and eRHIC. The asymmetry in the inelastic channel is also shown. The asymmetry is sizable and can give access to the polarized equivalent photon distribution at HERMES. Fig. 2(b) shows the asymmetry for eRHIC. Here events are observed over a broader range of $x_\gamma$; however, the asymmetry is very small in the lowest $x_\gamma$ bins and increases as $x_\gamma$ increases. As at HERMES, good agreement with the EPA is observed in all but the last bin, where the expected statistical error is also higher due to the smaller number of events. The cross section receives a major background contribution from virtual Compton scattering (VCS), in which the final-state photon is emitted from the proton side. Particularly important is the inelastic VCS, because it affects the determination of the inelastic component of $(\Delta) \gamma$.
The inelastic VCS was estimated in [@pp2; @pap3] in an ‘effective’ parton model (also valid at low $Q^2$). It was observed that both the polarized and unpolarized VCS contributions are suppressed in the region $\hat s < \hat S$, where $\hat S= {\hat t ( x_l-x_B)\over x_l}$ with $x_l= {-\hat t \over 2 P \cdot (l-l')}$. Both $\hat s$ and $\hat S$ are measurable quantities. The interference between QEDCS and VCS was found to be suppressed in this region at eRHIC, but not so much at HERMES. However, it changes sign when a positron beam is used instead of the electron beam, so a combination of electron and positron scattering data can eliminate this contribution. Finally, we point out that such an experiment can provide valuable information on the spin structure function $g_1(x_B, Q^2)$ in a kinematical region not well covered by fully inclusive experiments (in fact, the unpolarized structure function $F_2(x_B, Q^2)$ has been measured via the QED Compton peak at HERA [@lend; @thesis]) because of its different kinematics compared to inclusive deep inelastic scattering. $g_1(x_B, Q^2)$ can be accessed especially in the low $Q^2$, medium $x_B$ region at HERMES and over a very broad $x_B$, $Q^2$ range at eRHIC using the QED Compton process [@pap3]. Acknowledgements {#acknowledgements .unnumbered} ================ We warmly acknowledge E. Reya and M. Glück for initiating this study, as well as for many fruitful discussions. AM thanks the organizers of the 39th Rencontres de Moriond session on QCD and High Energy Hadronic Interactions for a wonderful and stimulating workshop. This work has been supported in part by the ‘Bundesministerium für Bildung und Forschung’, Berlin/Bonn. References {#references .unnumbered} ========== [99]{} J. Blümlein, G. Levman, H. Spiesberger, . A. De Rujula, W. Vogelsang, . A. Courau and P. Kessler, . V. Lendermann, H. C. Schultz-Coulon, D. Wegener, . A. Mukherjee, C. Pisano, . A. Mukherjee, C. Pisano, hep-ph/0402046, to appear in [*Eur. Phys. J.
C*]{}. M. Glück, C. Pisano, E. Reya, . M. Glück, C. Pisano, E. Reya, I. Schienbein, . A. Mukherjee, C. Pisano, hep-ph/0405079. B. Kniehl, Phys. Lett. [**B 254**]{}, 267 (1991). V. Lendermann, Ph.D. thesis, Univ. Dortmund, H1 collaboration, DESY-THESIS-2002-004 (2002).
--- abstract: 'We derive the spectral dependence of the non-linear susceptibility of any order, generalizing the common form of Sellmeier equations. This dependence is fully defined by the knowledge of the linear dispersion of the medium. This finding generalizes the Miller formula to any order of non-linearity. In the frequency-degenerate case, it yields the spectral dependence of non-linear refractive indices of arbitrary order.' author: - 'W. Ettoumi' - 'Y. Petit' - 'J. Kasparian' - 'J.-P. Wolf' title: Generalized Miller Formulæ --- Introduction ============ Non-linear optics [@Boyd] relies on the knowledge of the non-linear susceptibilities (or, alternatively, the non-linear indices) of the propagation media. This description is generally truncated to the first term, *i.e.* the second-order susceptibility in non-centrosymmetric media, or the third-order susceptibility in centrosymmetric ones. However, the increase of the available laser powers as well as the investigation of systems like optical fibers [@DudleyGC06] or photonic crystals [@Soljacic04] where the confinement of the light increases its intensity raise the need to consider higher-order processes. Recently, the non-linear refractive indices of O$_2$, N$_2$ were measured up to $n_8$ (*i.e.* the $9^{th}$-order susceptibility $\chi^{(9)}$) and in Ar up to $n_{10}$ (*i.e.* $\chi^{(11)}$) at 800 nm [@LoriotHFL09]. Furthermore, we demonstrated [@LoriotBHFLHKW09] that they must be considered in the description of the filamentation of ultrashort pulses [@BraunKLDSM95; @ChinHLLTABKKS05; @BergeSNKW07; @CouaironM07; @KasparianW08]. This result, which was unexpected, provides a clear illustration of this need, and of the associated requirement to evaluate these terms at any wavelength. But systematic measurements over the spectrum are out of reach of present experimental capabilities. 
Theoretical support is therefore required to extend the existing experimental results to any frequency, which would provide new insight into non-linear optics. Such a relation was provided by Miller in the case of the second-order susceptibility. He observed that in many crystals the knowledge of $\chi^{(2)}$ for one single triplet of frequencies $(\omega'_0;\omega'_1,\omega'_2)$ and the dispersion relation of the medium is sufficient to determine $\chi^{(2)}$ for any triplet of frequencies $(\omega_0;\omega_1,\omega_2)$ [@Miller64]: $$\frac{\chi^{(2)}(\omega_0;\omega_1,\omega_2)}{\chi^{(2)}(\omega_0';\omega'_1,\omega'_2)} = \frac{\chi^{(1)}(\omega_0)\chi^{(1)}(\omega_1)\chi^{(1)}(\omega_2)} {\chi^{(1)}(\omega_0')\chi^{(1)}(\omega'_1)\chi^{(1)}(\omega'_2)} \label{Miller}$$ This formula has been widely used, *e.g.*, to derive the second-order static susceptibility in semiconductors [@ScandoloB95]. Mizrahi and Shelton later suggested that the Miller formula could be extended to provide the spectral dependence of the non-linear index $n_2$, *i.e.* to the third-order frequency-degenerate non-linearity [@MizrahiS85]. However, they did not demonstrate this result in their article, but rather referred to an unpublished work [@Owyoung72]. Later, Bassani *et al.* demonstrated this relation for any order of non-linearity in the specific case of harmonic generation processes [@BassaniL98; @LucariniBPS03]. In this Letter, we provide an explicit expression of the spectral dependence of the non-linear susceptibility of any order, generalizing the common form of Sellmeier equations. We show that this spectral dependence is fully defined by the knowledge of the linear dispersion of the medium. As a consequence, the Miller formula (\[Miller\]) [@Miller64] can be extended to any order of non-linearity.
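As a toy illustration, Eq. (\[Miller\]) lets one propagate a single measured $\chi^{(2)}$ value to other frequency triplets once $\chi^{(1)}(\omega)$ is known. The sketch below assumes a lossless single-resonance Lorentz model for $\chi^{(1)}$; all numerical values (resonance frequency, oscillator strength, reference $\chi^{(2)}$) are hypothetical.

```python
# Miller's rule: chi2 at any triplet is fixed by one reference measurement
# and the product of the three chi1 factors. All numbers are illustrative.

w_e = 10.0            # resonance frequency (arbitrary units)
A = 2.0               # oscillator strength, playing the role of N e^2/(m eps0)

def chi1(w):
    """Lossless single-resonance linear susceptibility."""
    return A / (w_e**2 - w**2)

# Reference measurement: chi2 at the triplet (w0'; w1', w2'), w0' = w1' + w2'.
ref_triplet = (3.0, 1.0, 2.0)
chi2_ref = 1.0e-12    # hypothetical measured value

def chi2(w0, w1, w2):
    num = chi1(w0) * chi1(w1) * chi1(w2)
    den = chi1(ref_triplet[0]) * chi1(ref_triplet[1]) * chi1(ref_triplet[2])
    return chi2_ref * num / den

# Second-harmonic triplets (2w; w, w): chi2 grows as the frequencies
# approach the resonance, following the linear dispersion.
shg_low = chi2(2.0, 1.0, 1.0)
shg_high = chi2(8.0, 4.0, 4.0)
assert shg_high > shg_low > 0
```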
In the frequency-degenerate case, this finding yields the spectral dependence of non-linear refractive indices of arbitrary order, confirming and widely generalizing Mizrahi and Shelton’s statement about $\chi^{(3)}$ [@MizrahiS85]. Derivation of generalized Miller formulæ ======================================== Electrons in an anharmonic oscillator ------------------------------------- Elucidating the spectral dependence of the electric susceptibility requires considering the equation of motion of an electron located at $\vec{r}$ in a three-dimensional potential $V(\vec{r})$. $V$ can be expanded as a 3-dimensional Taylor series around the equilibrium position $\vec{r}=\vec{0}$: $$\displaystyle{V(\vec{r}) = V(\vec{0}) + \sum_{i+j+k \ge 2}{\frac{x^i y^j z^k}{i!j!k!} \left[\frac{\partial^{i+j+k} V}{\partial x^i \partial y^j \partial z^k}\right]_{\vec{r}=\vec{0}}}}$$ where the summation begins at $q\equiv i+j+k=2$ since $\left[\frac{\partial V}{\partial x}\right]_{x=0}=\left[\frac{\partial V}{\partial y}\right]_{y=0}=\left[\frac{\partial V}{\partial z}\right]_{z=0}=0$ by definition of the equilibrium position. As a consequence, the electron experiences a force equal to $$\vec{F}(\vec{r}) = -\vec{\nabla} V(\vec{r})= - \sum_{q \ge 2}{\begin{pmatrix} \frac{x^{i-1} y^j z^k}{(i-1)!j!k!} \\ \frac{x^{i} y^{j-1} z^k}{i!(j-1)!k!} \\ \frac{x^{i} y^j z^{k-1}}{i!j!(k-1)!} \end{pmatrix}} \left[\frac{\partial^{q} V}{\partial x^i \partial y^j \partial z^k}\right]_{\vec{r}=\vec{0}} \label{force}$$ The components of the macroscopic polarization along the axes $x$, $y$ and $z$ are respectively $P_x=-Nex$, $P_y=-Ney$ and $P_z=-Nez$, $N$ being the local density of electrons and $-e$ their charge.
Equation (\[force\]) therefore rewrites: $$\vec{F}(\vec{r}) = -\sum_{q \ge 2}{\left(\frac{-1}{Ne}\right)^{q-1} \begin{pmatrix} \frac{P_x^{i-1} P_y^j P_z^k}{(i-1)!j!k!} \\ \frac{P_x^{i} P_y^{j-1} P_z^k}{i!(j-1)!k!} \\ \frac{P_x^{i} P_y^j P_z^{k-1}}{i!j!(k-1)!} \end{pmatrix}} \left[\frac{\partial^{q} V}{\partial x^i \partial y^j \partial z^k}\right]_{\vec{r}=\vec{0}}$$ or, equivalently: $$\vec{F}(\vec{r}) = -\sum_{q \ge 1}\left(\frac{-1}{Ne}\right)^{q} \frac{P_x^{i} P_y^j P_z^k}{i!j!k!} %\nonumber \\ \begin{pmatrix} \left[\frac{\partial^{q+1} V}{\partial x^{i+1} \partial y^j \partial z^k}\right]_{\vec{r}=\vec{0}} \\ \left[\frac{\partial^{q+1} V}{\partial x^i \partial y^{j+1} \partial z^k}\right]_{\vec{r}=\vec{0}} \\ \left[\frac{\partial^{q+1} V}{\partial x^i \partial y^j \partial z^{k+1}}\right]_{\vec{r}=\vec{0}} \end{pmatrix}$$ We introduce the $(q+1)^{th}$ rank tensor $Q^{(q)}$, which elements are given by: $$\begin{pmatrix} Q^{(q)}_{x;x^{(i)},y^{(j)},z^{(k)}} \\ Q^{(q)}_{y;x^{(i)},y^{(j)},z^{(k)}} \\ Q^{(q)}_{z;x^{(i)},y^{(j)},z^{(k)}} \end{pmatrix} \equiv \frac{(-1)^{q}}{m \times i!j!k!} \begin{pmatrix} \left[\frac{\partial^{q+1} V}{\partial x^{i+1} \partial y^j \partial z^k}\right]_{\vec{r}=\vec{0}} \\ \left[\frac{\partial^{q+1} V}{\partial x^i \partial y^{j+1} \partial z^k}\right]_{\vec{r}=\vec{0}} \\ \left[\frac{\partial^{q+1} V}{\partial x^i \partial y^{j} \partial z^{k+1}}\right]_{\vec{r}=\vec{0}} \end{pmatrix}$$ where, for example, $x^{(i)}$ indicates that the coordinate $x$ appears $i$ times in the index list of the considered tensor element. 
As a consequence, the classical equation of motion of the electron becomes: $$\frac{\mathrm{d}^2 \vec{P}}{\mathrm{d}t^2} + \bar{\bar{\gamma}} \frac{\mathrm{d} \vec{P}}{\mathrm{d}t} + \bar{\bar{\omega}}_{e}^2 \vec{P} + N e \sum^{\infty}_{q=1} {Q^{(q)} : \bigotimes_{l=1}^{q} \frac{\vec{P}}{Ne}}=\frac{N e^2}{m} \vec{E}(t) \label{eqn:P_tensoriel}$$ where $:$ denotes the contracted product and $\bigotimes$ the tensorial product. $\vec{E}$ is the driving electric field, $\bar{\bar{\omega}}_{e}^2 = \begin{pmatrix} \omega_{e,x}^2&0&0\\ 0&\omega_{e,y}^2&0\\ 0&0&\omega_{e,z}^2 \end{pmatrix}$ is the eigenfrequency matrix of the considered medium, which is diagonal provided $x$, $y$, and $z$ are the principal axes of the optical frame of the medium (*e.g.* $\omega_{e,x}=\sqrt{\frac{1}{m}\frac{\partial^2V}{\partial x^2}}$), and the matrix $\bar{\bar{\gamma}}$ stands for the linear absorption. No multiphoton absorption is considered here. Note that a medium symmetry of at least K$\times$C$_2$ (*i.e.* a crystal with at least orthorhombic symmetry, or a statistically isotropic medium like air) allows one to simultaneously diagonalize $\bar{\bar{\omega}}_{e}^2$ and $\bar{\bar{\gamma}}$ in the optical frame [@BoulangerPSFMZA08]. A perturbative solution of equation (\[eqn:P\_tensoriel\]) is sought in the form: $$\displaystyle{\vec{P} = \sum^{\infty}_{l=1} \alpha^{l} \vec{P}^{(l)}}$$ where $\alpha \in\, ]0,1[$ is a free parameter. The series begins at $l=1$, assuming $\vec{P}^{(0)}=\vec{0}$.
Inserting it into equation (\[eqn:P\_tensoriel\]) and equating the terms in $\alpha^{q}$ yields a set of equations $$\frac{\mathrm{d}^2 \vec{P}^{(1)}}{\mathrm{d}t^2} + \bar{\bar{\gamma}} \frac{\mathrm{d} \vec{P}^{(1)}}{\mathrm{d} t} + \bar{\bar{\omega}}_{e}^2 \vec{P}^{(1)} = \frac{N e^2}{m} \vec{E}(t) \label{eqn:ordre1_vectoriel}\\$$ and, $\forall q \ge 2$, $$\frac{\mathrm{d}^2 \vec{P}^{(q)}}{\mathrm{d}t^2} + \bar{\bar{\gamma}} \frac{\mathrm{d} \vec{P}^{(q)}}{\mathrm{d} t} + \bar{\bar{\omega}}_{e}^2 \vec{P}^{(q)} = - N e Q^{(q)}: \bigotimes_{u=1}^{q} \frac{\vec{P}^{(1)}}{Ne} \label{eqn:systeme_tensoriel}$$ Note that on the right-hand side of Equation (\[eqn:systeme\_tensoriel\]), we have deliberately omitted the terms in $\bigotimes_{\sum{q'_u}=q}\left(P^{(q'_u)}\right)$, with $q>q'_u>1$. These terms involve non-linear polarizations in the construction of the considered higher-order non-linear polarization and hence correspond to cascaded frequency mixings, as *e.g.* in the generation of the third harmonic by frequency-doubling the fundamental wave and then mixing it with its second harmonic. Consistently, in the identification of the non-linear susceptibilities in Equation (\[P\_chik\]), only single-step mixing will be taken into account. Sellmeier equations ------------------- If $\bar{\bar{\Omega}}$ can be diagonalized (*i.e.* if the medium symmetry is at least K$\times$C$_2$ or absorption can be neglected), Equation (\[eqn:ordre1\_vectoriel\]) defining the linear polarization is purely vectorial and can be solved on each axis independently. Omitting the indices indicating the axis of the considered polarization, we write $P^{(1)}(t)=\int_{-\infty}^{+\infty}{P^{(1)}_0(\omega) e^{i \omega t}d\omega}$ and $E(t)=\int_{-\infty}^{+\infty}{E_0({\omega}) e^{i{\omega} t} d{\omega}}$, where, following the usual notation, positive frequencies denote incident ones and negative frequencies are emitted ones.
Introducing the spectral dependence parameter $\Omega(\omega) = {\omega_e}^2-\omega^2+i\omega\gamma$ yields $$\int_{-\infty}^{+\infty}{\Omega(\omega)P_0^{(1)}(\omega)e^{i\omega t}d\omega}=\frac{N e^2}{m}\int_{-\infty}^{+\infty}{E_0({\omega}) e^{i{\omega} t} d{\omega}} \label{OmegaP_E}$$ which implies, for any frequency $\omega_0$ of the emitted field except the resonance frequency: $$P^{(1)}_{0}(\omega_0) = \frac{N e^2}{m} \: \frac{E_0(\omega_0)}{\Omega(\omega_0)}$$ Hence, the linear susceptibility at frequency $\omega_0$ is: $$\chi^{(1)}(\omega_0) = \frac{1}{\Omega(\omega_0)} \frac{N e^2}{m \epsilon_0} = \frac{N e^2}{m \epsilon_0({\omega_e}^2-\omega_0^2+i\omega_0\gamma)} \label{Sellmeier1}$$ which defines the spectral dependence of $\chi^{(1)}$ on any principal axis. If absorption can be neglected ($\gamma\sim 0$) and $\chi^{(1)} \ll 1$, Equation (\[Sellmeier1\]) takes a form similar to a typical Sellmeier formula: $n^2-1=\frac{A}{B-1/\lambda^2}$, where $n^2=1+\chi^{(1)}$. Uniaxial generalized Miller formulæ ----------------------------------- Equation (\[eqn:systeme\_tensoriel\]) can be reduced to a scalar form provided the polarization is excited along one single principal axis. We first focus on this case, which simplifies the writing and helps focus the discussion on the physical aspects without altering the principle of the derivation.
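For concreteness, Equation (\[Sellmeier1\]) above is easy to evaluate numerically. The sketch below (illustrative only: the oscillator density and the placement of the resonance near 90 nm are assumed, air-like values rather than fitted data) computes $\chi^{(1)}$ and the resulting index $n=\sqrt{1+\chi^{(1)}}$ away from resonance.

```python
import math

# Physical constants (SI).
e = 1.602176634e-19      # elementary charge (C)
m = 9.1093837015e-31     # electron mass (kg)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
c = 2.99792458e8         # speed of light (m/s)

# Assumed, air-like oscillator parameters (illustrative, not a real medium):
N = 2.5e25                         # oscillator density (m^-3)
omega_e = 2 * math.pi * c / 90e-9  # resonance placed near 90 nm

def chi1(omega, gamma=0.0):
    """Linear susceptibility of Eq. (Sellmeier1)."""
    return N * e**2 / (m * eps0 * (omega_e**2 - omega**2 + 1j * omega * gamma))

def n0(lambda_m):
    """Index n = sqrt(1 + chi1) for negligible absorption (gamma ~ 0)."""
    omega = 2 * math.pi * c / lambda_m
    return math.sqrt(1 + chi1(omega).real)
```

With these assumed numbers, $n_0-1$ comes out at the $10^{-4}$ level in the visible, and the index grows toward the resonance, i.e. normal dispersion, as the Sellmeier form $n^2-1=A/(B-1/\lambda^2)$ predicts.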
Omitting the index corresponding to the axis, the scalar form of Equation (\[eqn:systeme\_tensoriel\]) reads, for any $q \ge 2$: $$\frac{\mathrm{d}^2 P^{(q)}}{\mathrm{d}t^2} + \gamma \frac{\mathrm{d} P^{(q)}}{\mathrm{d} t} + {\omega_e}^2 P^{(q)} = -NeQ^{(q)} \left(\frac{P^{(1)}}{N e}\right)^q \label{eqn:systeme}$$ Since $P^{(q)}(t)=\int_{-\infty}^{+\infty}{P^{(q)}_0(\omega) e^{i \omega t}d\omega}$, Equation (\[eqn:systeme\]) becomes: $$\begin{aligned} \int_{-\infty}^{+\infty}\Omega(\omega)P_0^{(q)}(\omega)e^{i\omega t}d\omega &=& -\frac{Q^{(q)}}{(Ne)^{q-1}} \left(\int_{-\infty}^{+\infty}{P_0^{(1)}(\omega)e^{i\omega t}d\omega}\right)^q \\ &=& -\frac{Q^{(q)}}{(Ne)^{q-1}} \iint..\iint \prod_{l=1}^q \left(P_0^{(1)} (\omega_l) e^{i\omega_l t}d\omega_l\right)\end{aligned}$$ Identifying the terms at an arbitrary frequency $\omega_0$ on both sides of the equation, we obtain: $$\begin{aligned} P^{(q)}_{0}(\omega_0) = -\frac{Q^{(q)}}{\Omega(\omega_0)(Ne)^{q-1}} \iint..\iint \delta\left(\sum_{l=0}^q{\omega_l}=0\right) \times \prod_{l=1}^q \left({P^{(1)}_0(\omega_l)d\omega_l}\right)\end{aligned}$$ $$P^{(q)}_{0}(\omega_0) = -Ne\left(\frac{e}{m}\right)^{q}\frac{Q^{(q)}}{\Omega(\omega_0)} \sum_{\sum_{l=0}^q{\omega_l}=0} \left(\prod_{l=1}^q{\frac{E_0(\omega_l)}{\Omega(\omega_l)}}\right) \label{eqn:polar}$$ Let us now consider the construction of a wave at $\omega_0$ from a set of $q' \leq q$ incident waves at frequencies $\omega_1$,...,$\omega_{q'}$, each algebraic frequency $\omega_l$ being involved $u_l$ times in the process: $\sum_{l=1}^{q'}{u_l}=q$, while energy conservation imposes $\sum_{l=1}^{q'}{u_l\omega_l}=\omega_0$.
In this case, $$P^{(q)}_{0}(\omega_0) = - Ne\left(\frac{e}{m}\right)^{q}\frac{Q^{(q)}}{\Omega(\omega_0)} C_q^{u_1,..,u_{q'}} \prod_{l=1}^{q'}{\left(\frac{E_0(\omega_l)}{\Omega(\omega_l)}\right)^{u_l}}$$ where $C_q^{u_1,..,u_{q'}}={q!}/{\left(u_1!\times..\times u_{q'}!\right)}$ is the number of combinations achievable from $q'$ sets of $u_1,..,u_{q'}$ objects, respectively. The terms of Equation (\[eqn:polar\]) corresponding to each combination of frequencies conserving energy can be identified with the expression of the non-linear polarization using the $(q+1)^{th}$-rank tensor. $$P^{(q)}_{0}(\omega_0) = \epsilon_0 C_q^{u_1,..,u_{q'}} \chi^{(q)}(\omega_0;\omega_1,...,\omega_q) \prod_{l=1}^{q'}{E_0^{u_l}(\omega_l)} \label{P_chik}$$ As stated above, consistently with Equation (\[eqn:systeme\]) we do not consider cascaded processes here. Therefore, the identification yields: $$\begin{aligned} \chi^{(q)}(\omega_0;\omega_1,...,\omega_q)&=&-Ne\left(\frac{e}{m}\right)^{q}\frac{Q^{(q)}}{\epsilon_0 \prod_{l=0}^q{\Omega(\omega_l)}} \label{chi_k_general} \\ &=&\frac{m\epsilon_0^q}{N^q e^{q+1}}Q^{(q)}\prod_{l=0}^q{\chi^{(1)}(\omega_l)} \label{chi_k_general_chi1}\end{aligned}$$ This expression provides a general description of the non-linear susceptibility of any order, provided the shape of the potential $V$ is known. In practice, this potential is rarely known, but it is independent of the excitation frequency. The spectral dependence of the above expression is therefore only driven by $\Omega$, *i.e.* the spectral dependence of the first-order susceptibilities.
As a consequence, in a given medium, *the knowledge of both the frequency dependence of the linear susceptibility and of the $q^{th}$-order susceptibility for a specific set of wavelengths is sufficient to extrapolate this susceptibility to any other set of wavelengths*, through the relation: $$\frac{\chi^{(q)}(\omega_0;\omega_1,...,\omega_q)}{\chi^{(q)}(\omega_0';\omega'_1,...,\omega'_q)} = \frac{\prod_{l=0}^q{\Omega(\omega'_l)}}{\prod_{l=0}^q{\Omega(\omega_l)}} = \frac{\prod_{l=0}^q{\chi^{(1)}(\omega_l)}}{\prod_{l=0}^q{\chi^{(1)}(\omega'_l)}} \label{Miller_general}$$ When applied to $\chi^{(2)}$, this generalized Miller formula immediately reduces to the original one of Equation (\[Miller\]) [@Miller64]. Three-dimensional Miller formulæ --------------------------------- While the one-dimensional treatment of Equations (\[eqn:systeme\])-(\[Miller\_general\]) keeps the derivation simple to write, the same sequence can be applied to solve Equation (\[eqn:systeme\_tensoriel\]) in the general three-dimensional case. Products simply have to be replaced by the appropriate tensorial products.
The identification of the spectral components of the Fourier expression of $\vec{P}^{(q)}$ and $\bigotimes \vec{P}^{(1)}$ yields a three-dimensional equivalent of Equation (\[eqn:polar\]): $$\bar{\bar{\Omega}}(\omega_0) \vec{P}^{(q)}_{0}(\omega_0) = - Ne \left(\frac{e}{m}\right)^{q} {{Q}^{(q)} : \left( \bigotimes_{l=1}^{q} \frac{\vec{P}^{(1)}(\omega_l)}{Ne}\right)}$$ In media with at least K$\times$C$_2$ symmetry, or if absorption can be neglected, $\bar{\bar{\Omega}}$ is diagonal so that this expression can easily be identified with the counterpart of Equation (\[P\_chik\]) expressing the non-linear polarization in terms of non-linear susceptibility: $$\vec{P}^{(q)}_{0}(\omega_0) = \epsilon_0 \chi^{(q)} : \bigotimes_{l=1}^{q} \vec{E}(\omega_l)$$ $$\begin{aligned} \frac{\chi^{(q)}_{v;x^{(i)},y^{(j)},z^{(k)}}(\omega_0;\omega_1,...,\omega_{q})}{\chi^{(q)}_{v;x^{(i)},y^{(j)},z^{(k)}}(\omega_0';\omega'_1,...,\omega'_{q})} &=& \frac{\prod_{l=0}^{q}{\Omega_{v_l}(\omega'_l)}}{\prod_{l=0}^{q}{\Omega_{v_l}(\omega_l)}} = \frac{\prod_{l=0}^{q}{\chi_{v_l}^{(1)}(\omega_l)}}{\prod_{l=0}^{q}{\chi_{v_l}^{(1)}(\omega'_l)}} \label{Miller_general_tenseur}\end{aligned}$$ where each index $v_l$ denotes either the $x$, $y$, or $z$ axis. The spectral dependence of $\chi^{(q)}$ only depends on the product of the $\Omega_{v_l}(\omega_l)$, meaning that all frequencies commute. This property, together with the fact that the $\Omega(\omega)$ are even functions as soon as absorption can be neglected, provides a direct proof, and a generalization to arbitrary orders, of the ABDP relations. These relations state that as soon as $\bar{\bar{\Omega}}$ can be diagonalized, $\omega_0$ commutes with any $\omega_l$ in the expression of $\chi^{(2)}$ and $\chi^{(3)}$ [@Armstrong62].
Application to non-linear refractive indices -------------------------------------------- In the case of frequency-degenerate interactions of odd order excited along one single principal axis, the susceptibility tensor reduces to $\chi^{(2p+1)}(\omega_0) \equiv \frac{2^{p+1}p!(p+1)!}{(2p+1)!}\frac{\left(n_0^2(\omega_0)\epsilon_0 c^2\right)^p}{n_0(\omega_0)}n_{2p}(\omega_0)$, which defines the $p^{th}$-order non-linear refractive index $n_{2p}$. Therefore, the knowledge of the dispersion curve of the linear susceptibility and the measurement of $\chi^{(2p+1)}$ at one single frequency $\omega'_0$ are sufficient to provide $\chi^{(2p+1)}$ at any frequency $\omega_0$ thanks to Equation (\[Miller\_general\]): $$\frac{\chi^{(2p+1)}(\omega_0)}{\chi^{(2p+1)}(\omega_0')} = \left(\frac{\Omega(\omega_0')}{\Omega(\omega_0)}\right)^{2p+2} = \left(\frac{\chi^{(1)}(\omega_0)}{\chi^{(1)}(\omega_0')}\right)^{2p+2} \label{Miller_general_degenere}$$ or equivalently, if absorption can be neglected, in terms of the real parts of the non-linear refractive indices: $$\frac{n_{2p}(\omega_0)}{n_{2p}(\omega_0')}= \left(\frac{n^{2}_{0}(\omega_0)-1}{n^{2}_{0}(\omega_0')-1}\right)^{2p+2} \label{miller_n_k}$$ As an illustration, Figure \[n\_i\_spectral\] displays the values of $n_2$ through $n_8$ for N$_2$, O$_2$ and Ar, based on the recent experimental measurements of Loriot *et al.* at 800 nm [@LoriotHFL09; @LoriotBHFLHKW09], extrapolated to the whole spectrum using Equation (\[miller\_n\_k\]) and the dispersion data of Zhang *et al.* [@ZhangLW08]. These dispersion data have been chosen because they have been measured consistently for all species considered. However, we checked that they agree within 2% with typical data available over the spectral range plotted [@PeckK66; @PeckR72; @Birch94].
![Spectral dependence of the non-linear refractive indices (a) $n_2$, (b) $n_4$, (c) $n_6$, and (d) $n_8$ of O$_2$, N$_2$, air and Ar at atmospheric pressure[]{data-label="n_i_spectral"}](Figure1a.eps "fig:"){width="6cm"} ![Spectral dependence of the non-linear refractive indices (a) $n_2$, (b) $n_4$, (c) $n_6$, and (d) $n_8$ of O$_2$, N$_2$, air and Ar at atmospheric pressure[]{data-label="n_i_spectral"}](Figure1b.eps "fig:"){width="6cm"} ![Spectral dependence of the non-linear refractive indices (a) $n_2$, (b) $n_4$, (c) $n_6$, and (d) $n_8$ of O$_2$, N$_2$, air and Ar at atmospheric pressure[]{data-label="n_i_spectral"}](Figure1c.eps "fig:"){width="6cm"} ![Spectral dependence of the non-linear refractive indices (a) $n_2$, (b) $n_4$, (c) $n_6$, and (d) $n_8$ of O$_2$, N$_2$, air and Ar at atmospheric pressure[]{data-label="n_i_spectral"}](Figure1d.eps "fig:"){width="6cm"} Generalization to mixtures and multiple resonance frequencies ========================================================== Up to now, we have only considered the case of a single type of oscillator. However, in actual media (*e.g.* in mixtures like air, where both N$_2$ and O$_2$ get polarized, or in crystals where several oscillation modes can be excited) several eigenfrequencies $\omega_{e,l}$ may contribute. In this case, the polarization $\vec{P}$ can be defined as $\sum{\vec{P}_l}$, where $l$ indexes the oscillator types, or species. If the coupling between the oscillators can be neglected, the above derivation applies to each oscillator type independently, and the resulting refractive index will be given by the Lorentz-Lorenz model. For example, away from the resonances, the refractive index of air is given by the Sellmeier equation [@ZhangLW08]: $$10^8 (n_0 - 1) = 8015.514 + \frac{2368616}{128.7459 - 1/\lambda^2} + \frac{19085.73}{50.01974 - 1/\lambda^2} \label{air}$$ where the wavelength $\lambda$ is expressed in $\mathrm{\mu}$m.
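The Sellmeier fit (\[air\]) and the scaling law (\[miller\_n\_k\]) can be combined in a few lines. The sketch below (written for this discussion; for simplicity it applies the scaling to the whole-air index, whereas, strictly, for a mixture the law should be applied to each species separately) evaluates $n_0$ of air and the extrapolation factor for $n_{2p}$ from a reference wavelength.

```python
def n0_air(lam_um):
    """Linear index of air from the Sellmeier fit of Eq. (air); lam_um in micrometers."""
    s = 1.0 / lam_um**2
    return 1.0 + 1e-8 * (8015.514
                         + 2368616.0 / (128.7459 - s)
                         + 19085.73 / (50.01974 - s))

def n2p_scaling(lam_um, ref_um, p):
    """Ratio n_{2p}(lam) / n_{2p}(ref) from Eq. (miller_n_k)."""
    return ((n0_air(lam_um)**2 - 1.0) / (n0_air(ref_um)**2 - 1.0)) ** (2 * p + 2)
```

For instance, scaling the 800 nm measurements to 400 nm increases $n_2$ ($p=1$) by roughly ten percent, and the higher orders by correspondingly larger factors, since the exponent $2p+2$ grows with $p$.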
Obviously, the terms of Equation (\[air\]) respectively correspond to N$_2$, with a resonance at $1/\lambda^2 \sim 128.7 \mu$m$^{-2}$ (*i.e.* $\lambda = 88$ nm): $$10^8 (n_{0,N_2} - 1) = 8736.28 + \frac{2398095.2}{128.7 - 1/\lambda^2} \label{N2}$$ and to O$_2$, with $1/\lambda^2 \sim 50 \mu$m$^{-2}$ (*i.e.* $\lambda = 141$ nm) [@ZhangLW08]: $$10^8 (n_{0,O_2} - 1) = 15532.45 + \frac{456402.97}{50.0 - 1/\lambda^2} \label{O2}$$ In such cases, the generalized Miller formulæ of Equation (\[Miller\_general\]) cannot be applied to the material or the mixture as a whole. Instead, they must be applied to each susceptibility order of each oscillator type. The non-linear susceptibilities of the whole material will then be deduced from those of the individual oscillator types through the Lorentz-Lorenz model. Figure \[n\_i\_spectral\] displays the result of this treatment in the case of air. Conclusion ========== In conclusion, we have explicitly derived a generalization of the common form of Sellmeier equations, in both frequency-degenerate and non-degenerate systems. This generalization provides the spectral dependence of the non-linear susceptibility of any order, which is fully defined by the knowledge of the linear dispersion of the medium. As a consequence, the Miller formula (\[Miller\]) [@Miller64] can be generalized to any order of non-linearity and any tensor element of non-linear susceptibilities as soon as the material has at least K$\times$C$_2$ symmetry and negligible absorption. In particular, the spectral dependence of non-linear refractive indices of any order can be obtained from their value at one single frequency and the dispersion curve of the medium. Such knowledge is of particular value in nonlinear optics involving confined light at very high intensities, as, *e.g.*, in fiber optics [@DudleyGC06], filamentation [@ChinHLLTABKKS05; @BergeSNKW07; @CouaironM07; @KasparianW08], or photonic crystals [@Soljacic04].
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the Swiss NSF (contracts 200021-116198 and 200021-125315). [10]{} R. W. Boyd, *Nonlinear optics*, Academic Press, 2008 J.M. Dudley, G. Genty, S. Coen, “Supercontinuum generation in photonic crystal fiber”, Rev. Modern Phys. **78**, 1135-1150 (2006) M. Soljačić, J.D. Joannopoulos, “Enhancement of nonlinear effects using photonics crystals”, Nature Materials **3**, 211-219 (2004) V. Loriot, E. Hertz, O. Faucher, B. Lavorel, “Measurement of high-order Kerr refractive index of major air components”, Opt. Express **17**, 13429-13434 (2009); Erratum in Opt. Express **18**, 3011-3012 (2010) P. Béjot, J. Kasparian, S. Henin, V. Loriot, T. Vieillard, E. Hertz, O. Faucher, B. Lavorel, J.-P. Wolf, “Higher-order Kerr terms allow ionization-free filamentation in air”, to be published in Phys. Rev. Lett. A. Braun, G. Korn, X. Liu, D. Du, J. Squier, G. Mourou, “Self-channeling of high-peak-power femtosecond laser pulses in air”, Opt. Lett. **20**, 73-75 (1995). S. L. Chin, S. A. Hosseini, W. Liu, Q. Luo, F. Theberge, N. Aközbek, A. Becker, V. P. Kandidov, O. G. Kosareva, H. Schroeder, “The propagation of powerful femtosecond laser pulses in optical media: physics, applications, and new challenges”, [Can. J. Phys.]{} **83**, 863-905 (2005). L. Bergé, S. Skupin, R. Nuter, J. Kasparian, J.-P. Wolf, “Ultrashort filaments of light in weakly-ionized, optically-transparent media”, [Rep. Prog. Phys.]{} **70**, 1633-1713 (2007). A. Couairon and A. Mysyrowicz, “Femtosecond filamentation in transparent media”, [Phys. Rep.]{} **441**, 47-189 (2007). J. Kasparian and J.-P. Wolf, “Physics and applications of atmospheric nonlinear optics and filamentation”, [Opt. Express]{} **16**, 466-493 (2008). R. C. Miller, “Optical second harmonic generation in piezoelectric crystals”, Appl. Phys. Lett. **5**, 17-19 (1964) S. Scandolo, F. Bassani, “Miller’s rule and the static limit for second-harmonic generation”, Phys.
Rev. B **51**, 6928-6931 (1995) V. Mizrahi and D. P. Shelton, “Dispersion of Nonlinear Susceptibilities of Ar, $N_2$, and $O_2$ Measured and Compared”, Phys. Rev. Lett. **55**, 696-699 (1985) A. Owyoung, Ph.D. thesis, California Institute of Technology, 1972 (unpublished). F. Bassani and V. Lucarini, “General properties of optical harmonic generation from a simple oscillator model”, Il Nuovo Cimento D **20**, 1117-1125 (1998) V. Lucarini, F. Bassani, K.-E. Peiponen, and J. J. Saarinen, “Dispersion theory and sum rules in linear and nonlinear optics”, Riv. Nuovo Cimento **26**, 1-120 (2003) B. Boulanger, Y. Petit, P. Segonds, C. Félix, B. Ménaert, J. Zaccaro, G. Aka, “Absorption and fluorescence anisotropies of monoclinic crystals: the case of Nd:YCOB”, Opt. Express **16**, 7997-8002 (2008) J.A. Armstrong, N. Bloembergen, J. Ducuing, and P.S. Pershan, “Interactions between Light Waves in a Nonlinear Dielectric”, Phys. Rev. **127**, 1918-1939 (1962) J. Zhang, Z. H. Lu, L. J. Wang, “Precision refractive index measurements of air, N$_2$, O$_2$, Ar, and CO$_2$ with a frequency comb”, Appl. Opt. **47**, 3143-3151 (2008) E. R. Peck and B. N. Khanna, “Dispersion of nitrogen”, J. Opt. Soc. Am. **56**, 1059-1063 (1966) E. R. Peck and K. Reeder, “Dispersion of air”, J. Opt. Soc. Am. **62**, 958-962 (1972) K.P. Birch and M. J. Downs, “Correction to the updated Edlén equation for the refractive index of air”, Metrologia **31**, 315-316 (1994)
--- author: - 'Kassie Archer[^1], Virginia Germany, C. Marin King, and L.-K. Lauderdale' bibliography: - 'lambdarefs.bib' title: Involutions and the Gelfand character --- #### Keywords: involutions, enumeration, $\lambda$-unimodal permutations, descents, Gelfand character Introduction ============ A permutation is *unimodal* provided its one-line notation is increasing, then decreasing. Given any composition $\lambda$ of the positive integer $n$, we say that a permutation is $\lambda$*-unimodal* if it is composed of contiguous unimodal segments whose lengths are determined by $\lambda$. These $\lambda$-unimodal permutations first appeared in the study of characters of the symmetric group and have since been the topic of research by numerous authors, although they are not often studied as purely combinatorial objects; see for example [@APR08; @AR2015; @A2015; @ER2014; @R1998; @R1997]. In [@A2016], the first author of this article investigated $\lambda$-unimodal cycles and their application to a specific character of the symmetric group. Here, we investigate $\lambda$-unimodal involutions, i.e., those $\lambda$-unimodal permutations that are their own algebraic inverse, and we use them to compute the Gelfand character. In [@APR08; @AR2015], it is shown that these involutions have a direct relationship to the Gelfand character, $\chi^G$, which is the character associated with the representation of ${\mathcal{S}}_n$ obtained by taking the multiplicity-free direct sum of the irreducible representations of ${\mathcal{S}}_n$. For example, see [@APR08].
Specifically, if ${\mathcal{I}}^\lambda$ denotes the set of $\lambda$-unimodal involutions and $\operatorname{des}_\lambda(\pi)$ denotes the number of $\lambda$-descents of a permutation $\pi$ (defined in Section \[sec:background\]), then $$\label{eq:char} \chi_\lambda^G = \sum_{\pi\in{\mathcal{I}}^\lambda}(-1)^{\operatorname{des}_\lambda(\pi)}.$$ The bulk of this paper is dedicated to enumerating $\lambda$-unimodal involutions via a recursive generating function. This can be further refined to a generating function for $\lambda$-unimodal involutions with a given number of $\lambda$-descents, which in turn gives a generating function for the Gelfand character (see Theorem \[main theorem 3\]). This yields a new way of computing the Gelfand character other than the Murnaghan-Nakayama rule; see [@M-1937; @N1940; @B2004; @R1997] for more. This also gives us an approach to address an open question in [@R2014], in which Roichman comments on the desirability of combinatorial proofs of the given character formulas, such as Equation (\[eq:char\]). In addition, we provide a combinatorial proof for a formula for the character of the regular representation of ${\mathcal{S}}_n$. This formula appears in [@AR2015]. In Section \[sec: regular\], we prove this character formula by showing that for $\lambda \vDash n$, $$\label{eq:reg char} \sum_{\pi\in{\mathcal{S}}^\lambda}(-1)^{\operatorname{des}_\lambda(\pi)} = \begin{cases}n! & \text{ if $\lambda= (1,1,\ldots,1)$} \\ 0 & \text{ otherwise,}\end{cases}$$ where ${\mathcal{S}}^\lambda$ denotes the set of $\lambda$-unimodal permutations; this coincides with the character of the regular representation. Background and Notation {#sec:background} ======================= Let ${\mathcal{S}}_n$ be the set of permutations on $[n] = \{1,2,\ldots,n\}$, and write $\pi \in {\mathcal{S}}_n$ in its one-line notation as $\pi=\pi_1\pi_2\ldots\pi_n=\pi(1)\pi(2)\ldots \pi(n)$.
A permutation $\pi \in {\mathcal{S}}_n$ is *unimodal* if there exists $i \in [n]$ such that $$\pi_1<\pi_2<\cdots<\pi_{i-1}<\pi_i>\pi_{i+1}>\cdots>\pi_{n-1}>\pi_n;$$ that is, $\pi$ is increasing, then decreasing. Similarly, any sequence or segment is unimodal if it is increasing, then decreasing. A *composition* of the integer $n$, denoted $\lambda \vDash n$, is a sequence of positive integers $\lambda=(\lambda_1,\lambda_2, \ldots, \lambda_k)$ such that $\sum \lambda_i=n$. Given a composition $\lambda=(\lambda_1,\lambda_2, \ldots, \lambda_k)\vDash n$, we say that $\pi \in {\mathcal{S}}_n$ is $\lambda$*-unimodal* provided $\pi$ is composed of $k$ contiguous segments, where the $i$-th segment is unimodal of length $\lambda_i$ for each $i\in[k]$. For example, the permutation $\pi = 129654873 \in {\mathcal{S}}_9$ is $(5,4)$-unimodal because the first five entries $12965$ and the last four entries $4873$ both form unimodal segments of $\pi$; the pictorial representation of this permutation can be seen in Figure \[fig:EXAMPLES\](a). The permutation $\pi \in {\mathcal{S}}_n$ has a *descent* at position $i$ if $\pi_i>\pi_{i+1}$. The *descent set* of $\pi$, denoted $\operatorname{Des}(\pi)$, is the set of descents and the *descent number* of $\pi$, denoted $\operatorname{des}(\pi)$, is the number of descents of $\pi$. If $\lambda=(\lambda_1,\lambda_2, \ldots, \lambda_k) \vDash n$, we say that $i$ is a $\lambda$*-descent* of $\pi$ if $i$ is a descent of $\pi$ that occurs strictly within one of the $k$ unimodal segments. In other words, we define the set of $\lambda$-descents of $\pi$, denoted $\operatorname{Des}_\lambda(\pi)$, to be the set $$\operatorname{Des}_\lambda(\pi)=\operatorname{Des}(\pi)\setminus\{\lambda_1, \lambda_1+\lambda_2, \ldots, \lambda_1+\lambda_2+\cdots+\lambda_{k-1}\}.$$ We let $\operatorname{des}_\lambda(\pi)$ denote the number of $\lambda$-descents of $\pi$.
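These definitions, together with Equation (\[eq:char\]), are straightforward to check by brute force for small $n$. The following sketch (plain Python written for this exposition, not part of the original development) implements them directly.

```python
from itertools import permutations

def descents(pi):
    """1-indexed descent positions of pi."""
    return [i for i in range(1, len(pi)) if pi[i - 1] > pi[i]]

def lam_descents(pi, lam):
    """Lambda-descents: descents minus the partial sums lam_1 + ... + lam_j."""
    bounds = {sum(lam[:j]) for j in range(1, len(lam))}
    return [i for i in descents(pi) if i not in bounds]

def is_unimodal(seg):
    peak = seg.index(max(seg))
    return all(seg[i] < seg[i + 1] for i in range(peak)) and \
           all(seg[i] > seg[i + 1] for i in range(peak, len(seg) - 1))

def is_lam_unimodal(pi, lam):
    s, ok = 0, True
    for part in lam:
        ok = ok and is_unimodal(list(pi[s:s + part]))
        s += part
    return ok

def is_involution(pi):
    return all(pi[pi[i] - 1] == i + 1 for i in range(len(pi)))

def gelfand(lam):
    """Right-hand side of Eq. (eq:char): signed count of lambda-unimodal involutions."""
    n = sum(lam)
    return sum((-1) ** len(lam_descents(p, lam))
               for p in permutations(range(1, n + 1))
               if is_involution(p) and is_lam_unimodal(p, lam))
```

For instance, the permutation $129654873$ from the example above has five descents, four of which are $\lambda$-descents for $\lambda=(5,4)$; and `gelfand((2, 1))` and `gelfand((3,))` return $0$ and $1$, matching the values $\chi^G$ takes on the transposition and $3$-cycle classes of ${\mathcal{S}}_3$ (namely $1+(-1)+0$ and $1+1+(-1)$ over the three irreducible characters).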
For example, if $\lambda=(5,4)$ and $\pi = 129654873 \in {\mathcal{S}}_9$, then we have $\operatorname{des}(\pi)=5$ and $\operatorname{des}_\lambda(\pi)=4$. In this example, the descent at position 5 is not a $\lambda$-descent. Finally, $\pi \in {\mathcal{S}}_n$ is an *involution* if it is a product of disjoint transpositions and fixed points. Equivalently, every involution is its own inverse and, in its pictorial representation, every involution is symmetric about the diagonal. The $(4,3,2)$-unimodal involution $476183259 \in {\mathcal{S}}_9$ is depicted in Figure \[fig:EXAMPLES\](b). We let ${\mathcal{S}}^\lambda$ denote the set of $\lambda$-unimodal permutations, and let ${\mathcal{I}}^\lambda$ denote the set of $\lambda$-unimodal involutions. For example, $129654873\in S^{(5,4)}$ and $129654873 \in {\mathcal{I}}^{(5,4)}$; also $476183259 \in {\mathcal{I}}^{(4,3,2)}$. Character of the regular representation on ${\mathcal{S}}_n$ {#sec: regular} ============================================================ Let $\chi^R$ denote the character of the regular representation of ${\mathcal{S}}_n$, and let $\chi_\lambda^R$ denote the value that this character takes on the conjugacy class $\lambda$. The following theorem appears in [@AR2015]. \[regular rep\] For $n\geq 1$ and $\lambda \vDash n$, $$\sum_{\pi\in{\mathcal{S}}^\lambda} (-1)^{\operatorname{des}_\lambda(\pi)} = \chi^R_\lambda.$$ In this section, we provide a combinatorial proof of this theorem using the following proposition concerning $\lambda$-unimodal permutations and $\lambda$-descents.
If $\lambda =(\lambda_1, \lambda_2, \ldots, \lambda_k)$, let ${n \choose \lambda}$ denote the multinomial coefficient given by $${n \choose \lambda}=\frac{n!}{\lambda_1!\lambda_2!\cdots\lambda_k!}.$$ \[enumerate lambda descents\] The number of $\lambda$-unimodal permutations in ${\mathcal{S}}_n$ with $d$ $\lambda$-descents is $${n \choose \lambda}{n-k \choose d},$$ where $\lambda =(\lambda_1, \lambda_2, \ldots, \lambda_k)$. The multinomial coefficient ${n \choose \lambda}$ counts the ordered partitions of $[n]$ into $k$ parts, the $i$-th of which has size $\lambda_i$. The $i$-th segment of the $\lambda$-unimodal permutation is unimodal and comprised of the $\lambda_i$ elements determined by this partition. Since the segment is unimodal, it is enough to specify which of its elements lie to the right of its maximum. Let $M\subseteq[n]$ be the set of $k$ elements that are the maximum in their part. Choose $d$ of the elements in $[n]\setminus M$ to lie to the right of the maximum in their respective parts; there are ${n-k \choose d}$ ways to do this, and each such choice produces exactly $d$ $\lambda$-descents. For example, suppose that $\lambda = (3,5,1)$ and the number of descents is $d=3$. If we take the partition of $[n]$ to be $\{\{1,6,8\},\{2,4,5,7,9\},\{3\}\}$, then $M=\{8,9,3\}$. If we choose the three elements from $[n]\setminus M$ to be $2, 6,$ and $7$, then we obtain the permutation $186459723$, which is a $(3,5,1)$-unimodal permutation with three $\lambda$-descents. The next corollary follows immediately from Proposition \[enumerate lambda descents\].
\[enumerate lambda\] For $\lambda=(\lambda_1,\lambda_2, \ldots, \lambda_k) \vDash n$, the number of $\lambda$-unimodal permutations in ${\mathcal{S}}^\lambda$ is $${n \choose \lambda_1,\lambda_2,\ldots,\lambda_k}2^{n-k}.$$ In our proof of Theorem \[regular rep\], we show that the number of $\lambda$-unimodal permutations with an even number of $\lambda$-descents minus the number of $\lambda$-unimodal permutations with an odd number of $\lambda$-descents coincides with the character of the regular representation on the conjugacy class $\lambda$. First, notice that when $\lambda=(1,1,\ldots,1)$, there are always exactly zero $\lambda$-descents. Also, every permutation is trivially a $\lambda$-unimodal permutation, so in the case where $\lambda = (1,1,\ldots,1)$, we have $$\sum_{\pi\in{\mathcal{S}}^\lambda} (-1)^{\operatorname{des}_\lambda(\pi)} = \sum_{\pi\in{\mathcal{S}}_n} 1=n!.$$ Now if $\lambda\neq(1,1,\ldots,1)$, then $$\sum_{\pi\in{\mathcal{S}}^\lambda} (-1)^{\operatorname{des}_\lambda(\pi)} ={n \choose \lambda_1,\lambda_2,\ldots,\lambda_k} \sum_{d=0}^{n-k} (-1)^d{n-k \choose d}.$$ Since $\lambda\neq(1,1,\ldots,1)$ forces $n-k\geq 1$, this alternating sum of binomial coefficients vanishes. Thus, $$\sum_{\pi\in{\mathcal{S}}^\lambda} (-1)^{\operatorname{des}_\lambda(\pi)} =\begin{cases}n! & \text{ when } \lambda=(1,1,\ldots,1) \\ 0 & \text{ otherwise,} \end{cases}$$ which exactly coincides with the character of the regular representation. $\lambda$-unimodal involutions {#sec3} ============================== In this section, we let $\Lambda_k$ denote the set of integer compositions into $k$ positive integer parts and let $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)\in \Lambda_k$. Let ${\boldsymbol{x}}$ denote the set of indeterminates $\{x_1, x_2, \ldots, x_k\}$. If $F$ is a generating function on variables ${\boldsymbol{x}}\setminus\{x_{i_1}, x_{i_2}, \ldots, x_{i_j}\}$, we write $F({\boldsymbol{x}}; \hat{x}_{i_1}, \hat{x}_{i_2}, \ldots, \hat{x}_{i_j})$.
For example, if $k=3$, then ${\boldsymbol{x}}=\{x_1, x_2,x_3\}$ and $F({\boldsymbol{x}}; \hat{x}_2)$ is a function on variables $x_1$ and $x_3$ only. Let $x^\lambda$ denote the monomial $x^\lambda=x_1^{\lambda_1}x_2^{\lambda_2}\cdots x_k^{\lambda_k}$. We first define three generating functions as follows. Recall that ${\mathcal{I}}^\lambda$ is the set of $\lambda$-unimodal involutions. Let ${\mathcal{I}}^\lambda_j$ be the set of $\lambda$-unimodal involutions, where either $\pi_1\leq \lambda_1+\cdots + \lambda_j$ and the first segment is decreasing or where $\pi_1>\lambda_1+\cdots + \lambda_j$. Finally, let ${\mathcal{D}}_i^\lambda$ be the set of $\lambda$-unimodal involutions, where $\pi_1\leq\lambda_1+\cdots + \lambda_i$ and the first segment is decreasing. We define the generating functions $L^k({\boldsymbol{x}}), L_j^k({\boldsymbol{x}}),$ and $D_i^k({\boldsymbol{x}})$ as follows: $$L^k({\boldsymbol{x}}) = \sum_{\lambda \in \Lambda_k}|{\mathcal{I}}^\lambda| x^\lambda, \, \, \, \, \, \, \, \, L_j^k({\boldsymbol{x}}) = \sum_{\lambda \in \Lambda_k}|{\mathcal{I}}^\lambda_j| x^\lambda, \quad \text{ and } \quad D_i^k({\boldsymbol{x}}) = \sum_{\lambda \in \Lambda_k}|{\mathcal{D}}^{\lambda}_i| x^\lambda.$$ We find recursive formulas for these generating functions below. 
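The enumerative results of Section \[sec: regular\] lend themselves to the same kind of brute-force confirmation. The sketch below (illustrative, written for this exposition) checks both Proposition \[enumerate lambda descents\] and Theorem \[regular rep\] on small symmetric groups.

```python
from itertools import permutations
from math import comb, factorial

def is_unimodal(seg):
    peak = seg.index(max(seg))
    return all(seg[i] < seg[i + 1] for i in range(peak)) and \
           all(seg[i] > seg[i + 1] for i in range(peak, len(seg) - 1))

def lam_descent_numbers(lam):
    """Yield des_lambda(pi) for every lambda-unimodal pi in S_n (brute force)."""
    n = sum(lam)
    bounds = {sum(lam[:j]) for j in range(1, len(lam))}
    for pi in permutations(range(1, n + 1)):
        s, ok = 0, True
        for part in lam:
            ok = ok and is_unimodal(list(pi[s:s + part]))
            s += part
        if ok:
            yield sum(1 for i in range(1, n)
                      if pi[i - 1] > pi[i] and i not in bounds)

def counts_by_descents(lam):
    out = {}
    for d in lam_descent_numbers(lam):
        out[d] = out.get(d, 0) + 1
    return out

def multinomial(lam):
    m = factorial(sum(lam))
    for part in lam:
        m //= factorial(part)
    return m

def signed_sum(lam):
    """Left-hand side of Theorem [regular rep]."""
    return sum((-1) ** d for d in lam_descent_numbers(lam))
```

For $\lambda=(3,2)$, for example, the exhaustive counts by number of $\lambda$-descents agree with ${5 \choose 3,2}{3 \choose d}$, and the signed sums vanish for every $\lambda\neq(1,1,\ldots,1)$ while giving $n!$ on the identity class, as the theorem asserts.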
\[main theorem\] We have $L^0({\boldsymbol{x}})=1$, $L^1({\boldsymbol{x}}) = \dfrac{x_1}{(1-x_1)^2}$ and for $k \geq 2$, $$\begin{aligned} L^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1}\bigg[x_1^2L^k_1({\boldsymbol{x}}) + (x_1+x_1^2)L^{k-1}({\boldsymbol{x}};\hat{x}_1) \\ & +\sum_{i=2}^k x_1x_i [2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)+L^{k-1}_{i-1}({\boldsymbol{x}}; \hat{x}_i) + L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + L_i^k({\boldsymbol{x}}) +L_{i-1}^k({\boldsymbol{x}})] \bigg];\end{aligned}$$ if $k \geq 1$, then $L_k^k({\boldsymbol{x}}) = D_k^k({\boldsymbol{x}})$ and for $1\leq j<k$, $$\begin{aligned} L_j^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1x_{j+1}}\bigg[D_j^k({\boldsymbol{x}}) + \sum_{i=j+2}^k x_1x_{i} L_{i-1}^k({\boldsymbol{x}}) \\ & + \sum_{i=j+1}^kx_1x_i[2L^{k-1}({\boldsymbol{x}}; \hat{x}_1) + L_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)+L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + L_i^k({\boldsymbol{x}})] \bigg];\end{aligned}$$ if $k\geq 1$, then $D_1^k({\boldsymbol{x}}) = \dfrac{x_1}{1-x_1}L^{k-1}({\boldsymbol{x}};\hat{x}_1)$ and for $1\leq i <k$, we have $$\begin{aligned} D_i^k({\boldsymbol{x}}) = & \, \, \frac{1}{1-x_1x_i}\bigg[D_{i-1}^k({\boldsymbol{x}}) +x_1x_i[D_{i-1}^k({\boldsymbol{x}}) +D_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)+L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + 2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)]\bigg].\end{aligned}$$ To prove Theorem \[main theorem\], we start with a few lemmas to establish the base cases and recurrences. For $\pi \in {\mathcal{S}}_n$ with $\pi=\pi_1\pi_2\ldots\pi_n$ and $\sigma \in {\mathcal{S}}_m$ with $\sigma=\sigma_1\sigma_2\ldots\sigma_m$, we let $\pi \oplus \sigma \in {\mathcal{S}}_{n+m}$ such that $$\pi \oplus \sigma = \pi_1\pi_2\ldots\pi_n(\sigma_1+n)(\sigma_2+n)\ldots(\sigma_m+n).$$ For example, if $\pi=312 \in {\mathcal{S}}_3$ and $\sigma=635421\in {\mathcal{S}}_6$, then $\pi \oplus \sigma=312968754 \in {\mathcal{S}}_9$. 
\[lem:L1\] If $n\geq 1$, then there are $n$ unimodal involutions in ${\mathcal{S}}_n$. Consequently, the generating function $L^1(x) =\displaystyle \sum_{n\geq 1}|{\mathcal{I}}^{(n)}|\, x^n$ is given by $$L^1(x) = \frac{x}{(1-x)^2}.$$ Clearly there is only one unimodal permutation of length 1 and it is an involution. We proceed by induction. For any $\pi\in {\mathcal{I}}^{(n)}$ with $n\geq 2$, either $\pi_1=1$ or $\pi_n=1$. Notice that if $\pi_1=1$, then $\pi\in {\mathcal{I}}^{(n)}$ if and only if $\pi=1\oplus \sigma$ with $\sigma \in {\mathcal{I}}^{(n-1)}$. If $\pi_n=1$, then necessarily $\pi_1=n$, and thus $\pi$ is the decreasing permutation, which is indeed an involution. Therefore, $|{\mathcal{I}}^{(n)}|=|{\mathcal{I}}^{(n-1)}|+1$, which in turn implies that $|{\mathcal{I}}^{(n)}|=n$, and thus the result follows. Clearly, the number of unimodal involutions in ${\mathcal{S}}_n$ that are strictly decreasing is 1 for each $n$ and thus $D_1^1({\boldsymbol{x}})=\dfrac{x_1}{1-x_1}$. By the definition of $L_k^k({\boldsymbol{x}}),$ it is clear that we must have $L_k^k({\boldsymbol{x}}) = D_k^k({\boldsymbol{x}})$ for all $k\geq 1$. \[lem:D1k\] For any $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)\vDash n$, the number of $\lambda$-unimodal involutions for which the first $\lambda_1$ elements are decreasing and $\pi_1\leq \lambda_1$ is equal to the number of $\lambda'$-unimodal involutions, where $\lambda'=(\lambda_2, \ldots, \lambda_k)\vDash n-\lambda_1$. Consequently, the generating function for these permutations is given by $$D_1^k({\boldsymbol{x}}) = \dfrac{x_1}{1-x_1}L^{k-1}({\boldsymbol{x}};\hat{x}_1).$$ If the first $\lambda_1$ elements of the $\lambda$-unimodal involution $\pi$ form a decreasing sequence and $\pi_1\leq \lambda_1$, then $\pi_i=\lambda_1-i+1$ for all $i \in [\lambda_1]$. 
For any $\sigma \in {\mathcal{I}}^{\lambda'}$, we can obtain a $\lambda$-unimodal involution with the necessary property by taking $\pi = \delta_{\lambda_1} \oplus \sigma$, where $\delta_{\lambda_1}$ is the decreasing permutation of length $\lambda_1$. The result now follows. With the base cases of Theorem \[main theorem\] established, we can now prove the recurrences given in Theorem \[main theorem\]. For convenience, we establish the following notation. If $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k) \vDash n$, then let $\bar{\lambda} = (\lambda_1-1, \lambda_2, \ldots, \lambda_k)$; when $\lambda_1 =1$, we implicitly assume that $\bar{\lambda} = (\lambda_2, \lambda_3, \ldots, \lambda_k)$. Also, let $\bar{\lambda}^1=(\lambda_1-2, \lambda_2, \ldots, \lambda_k)$, and for $i \in \{2,3,\ldots,k\}$, we let $$\bar{\lambda}^i = (\lambda_1-1, \lambda_2, \ldots, \lambda_{i-1}, \lambda_{i}-1, \lambda_{i+1},\ldots, \lambda_k),$$ where again if $\lambda_1=1$ or if $\lambda_i=1$, we omit these terms from $\bar{\lambda}^i$ altogether. For example, if $\lambda=(4,3,1,2)$, then $\bar{\lambda}=(3,3,1,2)$, $\bar{\lambda}^1 = (2,3,1,2)$, $\bar{\lambda}^2 = (3,2,1,2)$, and $\bar{\lambda}^3=(3,3,2)$. Finally, let $$s_\lambda^i = \lambda_1+\lambda_2+\cdots+\lambda_i.$$ For example, if $\lambda=(4,2,6,1)$, then $s_\lambda^1=4$, $s_\lambda^2=6$, $s_\lambda^3=12$, and $s_\lambda^4=13$. 
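These reductions and partial sums are mechanical; the small sketch below (helper names are ours, not the paper's) reproduces the worked examples above.

```python
def bar(lam, i=None):
    """Reduced compositions: bar(lam) removes 1 from the first part,
    bar(lam, 1) removes 2 from the first part, and bar(lam, i) for i >= 2
    removes 1 from both the first and the i-th part; zero parts are omitted."""
    lam = list(lam)
    if i is None:
        lam[0] -= 1
    elif i == 1:
        lam[0] -= 2
    else:
        lam[0] -= 1
        lam[i - 1] -= 1
    return tuple(p for p in lam if p > 0)

def s(lam, i):
    """Partial sum s_lambda^i = lam_1 + ... + lam_i."""
    return sum(lam[:i])

# the examples from the text
assert bar((4, 3, 1, 2)) == (3, 3, 1, 2)
assert bar((4, 3, 1, 2), 1) == (2, 3, 1, 2)
assert bar((4, 3, 1, 2), 2) == (3, 2, 1, 2)
assert bar((4, 3, 1, 2), 3) == (3, 3, 2)
assert [s((4, 2, 6, 1), i) for i in (1, 2, 3, 4)] == [4, 6, 12, 13]
```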
\[lem:Lk\] For $k\geq 2$, the generating function $L^k({\boldsymbol{x}})$ satisfies the recurrence $$\begin{aligned} L^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1}\bigg[x_1^2L^k_1({\boldsymbol{x}}) +(x_1+x_1^2)L^{k-1}({\boldsymbol{x}};\hat{x}_1) \\ & +\sum_{i=2}^k x_1x_i [2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)+L^{k-1}_{i-1}({\boldsymbol{x}}; \hat{x}_i) + L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + L_i^k({\boldsymbol{x}}) +L_{i-1}^k({\boldsymbol{x}})] \bigg].\end{aligned}$$ We will establish the following equivalent formula $$\begin{aligned} L^k({\boldsymbol{x}}) = & \,x_1L^k({\boldsymbol{x}}) +x_1^2L^k_1({\boldsymbol{x}})+ (x_1+x_1^2)L^{k-1}({\boldsymbol{x}};\hat{x}_1) \\ & +\sum_{i=2}^k x_1x_i [2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)+L^{k-1}_{i-1}({\boldsymbol{x}}; \hat{x}_i) + L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + L_i^k({\boldsymbol{x}}) +L_{i-1}^k({\boldsymbol{x}})].\end{aligned}$$ Suppose that $\pi \in {\mathcal{I}}^\lambda$, and consider the position of $1$ in the permutation $\pi$. Since $\pi$ is $\lambda$-unimodal, 1 must occur at the beginning or end of some unimodal segment. That is, if $\pi_j=1$, then $j \in \{s_\lambda^i : i\in [k]\} \cup\{s_\lambda^{i}+1 : i\in [k-1]\} \cup \{1\}$. Two cases follow. First, assume that $1$ lies in the first segment (of length $\lambda_1$) and consider the following four possible subcases, pictured in Figure \[fig:Lk first segment\]. If $\lambda_1=1$, then we have $\pi_1=1$ and $\pi = 1\oplus \sigma$, where $\sigma$ is a $\bar{\lambda}$-unimodal involution. This contributes $x_1L^{k-1}({\boldsymbol{x}}; \hat{x}_1)$ to the sum. If $\lambda_1>1$ and $\pi_1=1$, then $\pi=1\oplus \sigma$, where $\sigma$ is a $\bar\lambda$-unimodal involution, which contributes $x_1L^k({\boldsymbol{x}})$ to the sum. If $\lambda_1=2$ and $\pi_2=1$, then we necessarily have that $\pi_1=2$ since $\pi$ is an involution. Hence we must have $\pi = 21\oplus \sigma$, where $\sigma$ is a $\bar\lambda^1$-unimodal involution. 
This contributes $x_1^2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)$ to the sum. Finally, if $\lambda_1>2$ and $\pi_{\lambda_1} = 1$, then we must have $\pi_1 = \lambda_1$ and in turn $\pi = \lambda_1\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^1$-unimodal involution with the added condition that either $\sigma_1>\bar{\lambda}^1_1$, or $\sigma_1\leq \bar{\lambda}^1_1$ and $\sigma_1\ldots\sigma_{\bar{\lambda}^1_1}$ is decreasing. This contributes $x_1^2L^k_1({\boldsymbol{x}})$ to the sum. Now assume that $1$ lies in the $i$-th segment with $i>1$, and consider the six subcases pictured in Figure \[fig: Lk other segments\]. If $\lambda_1 =\lambda_i=1$ and $\pi({s_{\lambda}^i})=1$, then we must have $\pi = s_\lambda^i\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^i$-unimodal involution. This contributes $x_1x_i L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i)$ to the sum for each $i>1$. If $\lambda_1>1$ and $\lambda_i=1$ (and thus $\pi({s_\lambda^i}) = 1$), then we must have that $\pi = s_\lambda^i\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^i$-unimodal involution with the added condition that either $\sigma_1>s_{\bar{\lambda}^i}^{i-1}$, or $\sigma_1\leq s_{\bar{\lambda}^i}^{i-1}$ and $\sigma_1\ldots\sigma_{\bar{\lambda}^i_1}$ is decreasing. Thus, this contributes $x_1x_i L^{k-1}_{i-1}({\boldsymbol{x}}; \hat{x}_i)$ to the sum for each $i>1$. If $\lambda_1=1$ and $\lambda_i>1$, then either $\pi(s_\lambda^i) =1$ or $\pi(s_\lambda^{i-1}+1) = 1$. In the former case, we have that $\pi = s_\lambda^i\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^i$-unimodal involution and in the latter case, we must have that $\pi = (s_\lambda^{i-1}+1)\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^i$-unimodal involution. 
Together, these contribute $2x_1x_iL^{k-1}({\boldsymbol{x}}; \hat{x}_1)$ to the sum for each $i>1$. Finally, we have the subcase when $\lambda_1>1$ and $\lambda_i>1$. In this instance, we must have either $\pi(s_\lambda^{i-1}+1) = 1$ or $\pi(s_\lambda^i) =1$. If $\pi(s_\lambda^{i-1}+1) = 1$, then $\pi = (s_\lambda^{i-1}+1)\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^i$-unimodal involution with the added condition that either $\sigma_1>s_{\bar{\lambda}^i}^{i-1}$, or $\sigma_1\leq s_{\bar{\lambda}^i}^{i-1}$ and $\sigma_1\ldots\sigma_{\bar{\lambda}^i_1}$ is decreasing. If $\pi(s_\lambda^i) =1$, then $\pi = s_\lambda^i\alpha1\beta$, where the permutation $\sigma=\alpha\beta$ is order-isomorphic to a $\bar{\lambda}^i$-unimodal involution with the added condition that either $\sigma_1>s_{\bar{\lambda}^i}^{i}$, or $\sigma_1\leq s_{\bar{\lambda}^i}^{i}$ and $\sigma_1\ldots\sigma_{\bar{\lambda}^i_1}$ is decreasing. Together, these contribute $x_1x_i[L_{i-1}^k({\boldsymbol{x}}) + L_{i}^k({\boldsymbol{x}})]$ to the sum for each $i>1$. 
\[lem:Lkj\] For $1\leq j< k$, the generating function $L^k_j({\boldsymbol{x}})$ satisfies the recurrence $$\begin{aligned} L_j^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1x_{j+1}}\bigg[D_j^k({\boldsymbol{x}}) + \sum_{i=j+2}^k x_1x_{i} L_{i-1}^k({\boldsymbol{x}}) \\ & + \sum_{i=j+1}^kx_1x_i[L_i^k({\boldsymbol{x}}) + 2L^{k-1}({\boldsymbol{x}}; \hat{x}_1) + L_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)+L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i)] \bigg].\end{aligned}$$ We will establish the equivalent formula $$\begin{aligned} L_j^k({\boldsymbol{x}}) = & \, D_j^k({\boldsymbol{x}}) + \sum_{i=j+1}^k x_1x_{i}[ L_{i-1}^k({\boldsymbol{x}})+L_i^k({\boldsymbol{x}}) + 2L^{k-1}({\boldsymbol{x}}; \hat{x}_1) + L_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)+L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i)].\end{aligned}$$ The proof is similar to the one for Lemma \[lem:Lk\], so we omit some of the details; the associated figures are also very similar to those in Figure \[fig: Lk other segments\]. Let $\pi\in{\mathcal{I}}^\lambda$, where either $\pi_1\leq s_\lambda^j$ and $\pi_1\ldots\pi_{\lambda_1}$ is decreasing, or $\pi_1>s_\lambda^j$. In the first case, we get exactly those permutations enumerated using the generating function $D_j^k({\boldsymbol{x}})$. Otherwise, we must have that $\pi_1>s_\lambda^j$, which implies that if $\pi_m=1$, then $m>s_\lambda^j$. Thus the element $1$ lies either at the beginning or at the end of the $i$-th unimodal segment for some $i\geq j+1$. Again, there are six cases, as pictured in Figure \[fig: Lk other segments\]. The case when $\lambda_1=\lambda_i=1$ contributes $x_1x_iL^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i)$ to the sum for each $i\geq j+1$. In the case where $\lambda_1=1$ and $\lambda_i>1$, we can either have $\pi(s_\lambda^i)=1$ or $\pi(s_\lambda^{i-1}+1)=1$. This contributes $2x_1x_iL^{k-1}({\boldsymbol{x}}; \hat{x}_1)$ to the sum for each $i\geq j+1$. 
The case when $\lambda_1>1$ and $\lambda_i=1$ contributes $x_1x_iL_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)$ for each $i\geq j+1$. In the case when $\lambda_1>1$ and $\lambda_i>1$, we can either have $\pi(s_\lambda^i)=1$ or $\pi(s_\lambda^{i-1}+1)=1$. This contributes $x_1x_{i}[ L_{i-1}^k({\boldsymbol{x}})+L_i^k({\boldsymbol{x}})]$ to the sum for each $i\geq j+1$. \[lem:Dki\] For $1<i\leq k$, the generating function $D_i^k({\boldsymbol{x}})$ satisfies the recurrence $$D_i^k({\boldsymbol{x}}) = \frac{1}{1-x_1x_i}\bigg[D_{i-1}^k({\boldsymbol{x}})+x_1x_i[D_{i-1}^k({\boldsymbol{x}}) +D_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)+L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + 2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)]\bigg].$$ We will establish the equivalent formula $$D_i^k({\boldsymbol{x}}) = D_{i-1}^k({\boldsymbol{x}})+x_1x_i[D_i^k({\boldsymbol{x}}) +D_{i-1}^k({\boldsymbol{x}}) +D_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)+L^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) + 2L^{k-1}({\boldsymbol{x}}; \hat{x}_1)].$$ Again, this proof is very similar to the proof of Lemma \[lem:Lk\]. The associated figures can be found in Figure \[fig:Dki\]. Let $\pi\in{\mathcal{I}}^\lambda$, where $\pi_1\leq s_\lambda^i$ and $\pi_1\ldots\pi_{\lambda_1}$ is decreasing. In the case when $\pi_1\leq s_\lambda^{i-1}$, we have exactly those permutations enumerated by $D_{i-1}^k$. If $s_\lambda^{i-1}<\pi_1\leq s_\lambda^i$, then we must have that $\pi_1=s_\lambda^i$ or $\pi_1 =s_\lambda^{i-1}+1$ since $\pi$ is an involution and $\pi_{s_\lambda^{i-1}+1}\ldots\pi_{s_\lambda^i}$ is a unimodal segment. First suppose that $\lambda_1=\lambda_i=1$. Then $\pi = s_{\lambda}^i\alpha 1\beta$ where $\alpha\beta$ is order-isomorphic to a $\bar\lambda^i$-unimodal permutation, as seen in the top left-most pictures in Figure \[fig:Dki\]. Thus, this contributes $x_1x_iL^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i)$ to the sum. Now suppose $\lambda_1=1$ and $\lambda_i>1$. 
In this case, either $\pi_1=s_\lambda^i$ or $\pi_1 =s_\lambda^{i-1}+1$. In the first case we have that $\pi=s_\lambda^i\alpha1\beta$ where $\alpha\beta$ is order-isomorphic to a $\bar\lambda^i$-unimodal permutation and in the second case we have that $\pi=(s_\lambda^{i-1}+1)\alpha1\beta$ where $\alpha\beta$ is order-isomorphic to a $\bar\lambda^i$-unimodal permutation. These two cases can be seen in the second and third pictures of the first row of Figure \[fig:Dki\]. Therefore, this contributes $2x_1x_iL^{k-1}({\boldsymbol{x}}; \hat{x}_1)$ to the sum. Now consider the case when $\lambda_1>1$ and $\lambda_i=1$. We must have $\pi= s_{\lambda}^i\alpha 1\beta$ where $\alpha\beta$ is order-isomorphic to a $\bar\lambda^i$-unimodal permutation $\pi'$ whose first $\bar\lambda^i_1$ elements are decreasing and for which $\pi_1'\leq s_{\bar\lambda^i}^{i-1}$. See the first picture in the second row of Figure \[fig:Dki\]. This contributes $x_1x_iD_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)$ to the sum. Finally, consider the case when $\lambda_1>1$ and $\lambda_i>1$. There are two possibilities: either $\pi_1=s_\lambda^i$ or $\pi_1 =s_\lambda^{i-1}+1$. In the first subcase we have that $\pi=s_\lambda^i\alpha1\beta$ where $\alpha\beta$ is order-isomorphic to a $\bar\lambda^i$-unimodal permutation and in the second subcase we have that $\pi=(s_\lambda^{i-1}+1)\alpha1\beta$ where $\alpha\beta$ is order-isomorphic to a $\bar\lambda^i$-unimodal permutation. In both cases, we must have that the first $\bar\lambda^i_1$ elements are decreasing. In addition, writing $\pi'$ for the $\bar\lambda^i$-unimodal permutation in question, we must have that $\pi_1'\leq s_{\bar\lambda^i}^{i}$ in the first subcase and that $\pi_1'\leq s_{\bar\lambda^i}^{i-1}$ in the second subcase. See the second and third pictures in the second row of Figure \[fig:Dki\]. Thus this contributes $x_1x_i[D_i^k({\boldsymbol{x}}) +D_{i-1}^k({\boldsymbol{x}})]$ to the sum. 
$\lambda$-descents and involutions ================================== The results of Section \[sec3\] can quickly be extended to include $\lambda$-descents. Let ${\mathcal{I}}^\lambda(d)$ denote the set of permutations $\pi\in{\mathcal{I}}^\lambda$ such that $\operatorname{des}_\lambda(\pi) = d$. Similarly, let ${\mathcal{I}}_j^\lambda(d)$ denote the set of permutations $\pi\in{\mathcal{I}}_j^\lambda$ such that $\operatorname{des}_\lambda(\pi) = d$, and let $\mathcal{D}_i^\lambda(d)$ denote the set of permutations $\pi\in\mathcal{D}_i^\lambda$ such that $\operatorname{des}_\lambda(\pi) = d$. Define $$L^k({\boldsymbol{x}},t) = \sum_{\lambda \in \Lambda_k}\sum_{d\geq0}|{\mathcal{I}}^\lambda(d)| x^\lambda t^{d}, \quad \quad \, L_j^k({\boldsymbol{x}},t) = \sum_{\lambda \in \Lambda_k}\sum_{d\geq0}|{\mathcal{I}}_j^\lambda(d)| x^\lambda t^{d},$$ $$\text{ and } \quad D_i^k({\boldsymbol{x}},t) =\sum_{\lambda \in \Lambda_k}\sum_{d\geq0} |\mathcal{D}_i^\lambda(d)| x^\lambda t^{d}.$$ With this notation, we obtain the following theorem. 
\[main theorem 2\] We have $L^0({\boldsymbol{x}},t)=1$, $L^1({\boldsymbol{x}},t) = \dfrac{x_1}{(1-x_1)(1-x_1t)}$ and for $k \geq 2$, $$\begin{aligned} L^k({\boldsymbol{x}},t) = & \, \frac{1}{1-x_1}\bigg[(x_1+x_1^2t)L^{k-1}({\boldsymbol{x}},t;\hat{x}_1) +x_1^2t(L^k_1({\boldsymbol{x}},t) +(t-1)D_1^k({\boldsymbol{x}},t)) \\ & +\sum_{i=2}^k x_1x_i [(t+1)L^{k-1}({\boldsymbol{x}},t; \hat{x}_1)+ L_{i-1}^{k-1}({\boldsymbol{x}},t; \hat{x}_i)+(t-1)D_{i-1}^{k-1}({\boldsymbol{x}},t;\hat{x}_i) \\ &+ L^{k-2}({\boldsymbol{x}},t; \hat{x}_1, \hat{x}_i) + t(L_i^k({\boldsymbol{x}},t)+(t-1)D_i^k({\boldsymbol{x}},t)) +L_{i-1}^k({\boldsymbol{x}},t) +(t-1)D_{i-1}^k({\boldsymbol{x}},t)] \bigg];\end{aligned}$$ if $k \geq 1$, then $L_k^k({\boldsymbol{x}},t) = D_k^k({\boldsymbol{x}},t)$ and for $1\leq j<k$, $$\begin{aligned} L_j^k({\boldsymbol{x}},t) = & \, \frac{1}{1-x_1x_{j+1}}\bigg[D_j^k({\boldsymbol{x}},t) + \sum_{i=j+2}^k x_1x_{i} L_{i-1}^k({\boldsymbol{x}},t) \\ & + \sum_{i=j+1}^k x_1x_i [(t+1)L^{k-1}({\boldsymbol{x}},t; \hat{x}_1)+ L_{i-1}^{k-1}({\boldsymbol{x}},t; \hat{x}_i)+(t-1)D_{i-1}^{k-1}({\boldsymbol{x}},t;\hat{x}_i) \\ &+ L^{k-2}({\boldsymbol{x}},t; \hat{x}_1, \hat{x}_i) + t(L_i^k({\boldsymbol{x}},t)+(t-1)D_i^k({\boldsymbol{x}},t)) +(t-1)D_{i-1}^k({\boldsymbol{x}},t)] \bigg];\end{aligned}$$ if $k\geq 1$, then $D_1^k({\boldsymbol{x}},t) = \dfrac{x_1}{1-x_1t}L^{k-1}({\boldsymbol{x}},t;\hat{x}_1)$ and for $1< i \leq k$, we have $$\begin{aligned} D_i^k({\boldsymbol{x}},t) = & \, \frac{1}{1-x_1x_it^2}\bigg[D_{i-1}^k({\boldsymbol{x}},t)\\ & +x_1x_i[tD_{i-1}^k({\boldsymbol{x}},t) +tD_{i-1}^{k-1}({\boldsymbol{x}},t; \hat{x}_i)+L^{k-2}({\boldsymbol{x}},t; \hat{x}_1, \hat{x}_i) + (t+1)L^{k-1}({\boldsymbol{x}},t; \hat{x}_1)]\bigg].\end{aligned}$$ We can keep track of descents by considering where we add new terms in the proofs of Lemmas \[lem:L1\], \[lem:D1k\], \[lem:Lk\], \[lem:Lkj\], and \[lem:Dki\]. 
First consider Lemma \[lem:L1\], where the unimodal involutions are enumerated. In this case, if $\pi_1=1$, then $\pi=1\oplus\alpha$, where $\alpha$ is a unimodal involution with the same number of $\lambda$-descents as $\pi$. If $\pi_n=1$, then there are $n-1$ descents. Thus $L^1({\boldsymbol{x}},t) = \dfrac{x_1}{(1-x_1)(1-x_1t)}$. Now consider Lemma \[lem:D1k\]. If we have a $\lambda$-unimodal involution where $\pi_1\pi_2\ldots \pi_{\lambda_1}$ is decreasing and $\pi_1\leq \lambda_1$, then we must have $\pi = \delta_{\lambda_1}\oplus\alpha$ where $\alpha$ is a $\lambda'$-unimodal involution (with $\lambda'=(\lambda_2, \ldots, \lambda_k)$). In this case, $\pi$ has $\lambda_1-1$ more $\lambda$-descents than $\alpha$. Therefore, $D_1^k({\boldsymbol{x}},t) = \dfrac{x_1}{1-x_1t}L^{k-1}({\boldsymbol{x}},t;\hat{x}_1)$. Next, consider Lemma \[lem:Lk\] to compute $L^k({\boldsymbol{x}},t)$. It is clear from the pictures in Figures \[fig:Lk first segment\] and \[fig: Lk other segments\] where $\lambda$-descents are added. Consider Figure \[fig:Lk first segment\]. In the first three subcases, no $\lambda$-descents would be added. However, in the last subcase, we would get an extra two $\lambda$-descents if the first segment (of length $\lambda_1$) were decreasing (one descent in position 1 and one descent in position $\lambda_1-1$). Now consider Figure \[fig: Lk other segments\]: subcases 1 and 3 add no $\lambda$-descents, subcase 6 adds two $\lambda$-descents, and each of the remaining subcases adds one $\lambda$-descent. This gives us the recurrence for $L^k({\boldsymbol{x}},t)$. For Lemma \[lem:Lkj\], to compute $L_j^k({\boldsymbol{x}},t)$, we need only consider Figure \[fig: Lk other segments\]. The result is similar to that for $L^k({\boldsymbol{x}},t)$. Finally, to compute $D_i^k({\boldsymbol{x}},t)$, consider the proof of Lemma \[lem:Dki\] and in particular, Figure \[fig:Dki\]. In subcases 1 and 3, no $\lambda$-descents would be added. 
In subcases 2, 4, and 6, one $\lambda$-descent would be added. Two $\lambda$-descents would be added in subcase 5. The Gelfand character ===================== In this section, we state the main theorem of the paper. Define $$G^k({\boldsymbol{x}}) = \sum_{\lambda\in \Lambda_k} \chi_\lambda^G x^\lambda,$$ where $\chi^G$ is the Gelfand character mentioned in the introduction, $\Lambda_k$ is the set of integer compositions of length $k$, and $\chi_\lambda^G$ is the value the character takes on the conjugacy class given by $\lambda$. Notice that by Equation (\[eq:char\]), $$L^k({\boldsymbol{x}},-1) = \sum_{\lambda\in\Lambda_k} \chi_\lambda^G x^\lambda.$$ This provides us with a way of computing the generating function for the Gelfand character. In particular, $G^k({\boldsymbol{x}})$ can be computed recursively from itself and two other series, namely $G_j^k$ and $H_i^k$, both defined in the next theorem. \[main theorem 3\] We have $G^0({\boldsymbol{x}})=1$, $G^1({\boldsymbol{x}}) = \dfrac{x_1}{1-x_1^2}$ and for $k \geq 2$, $$\begin{aligned} G^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1}\bigg[(x_1-x_1^2)G^{k-1}({\boldsymbol{x}};\hat{x}_1) -x_1^2(G^k_1({\boldsymbol{x}}) -2H_1^k({\boldsymbol{x}})) +\sum_{i=2}^k x_1x_i [G^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) \\ & +G_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)-2H_{i-1}^{k-1}({\boldsymbol{x}};\hat{x}_i)-G_i^k({\boldsymbol{x}})+2H_i^k({\boldsymbol{x}}) +G_{i-1}^k({\boldsymbol{x}}) -2H_{i-1}^k({\boldsymbol{x}})] \bigg];\end{aligned}$$ if $k \geq 1$, then $G_k^k({\boldsymbol{x}}) = H_k^k({\boldsymbol{x}})$ and for $1\leq j<k$, $$\begin{aligned} G_j^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1x_{j+1}}\bigg[H_j^k({\boldsymbol{x}}) + \sum_{i=j+2}^k x_1x_{i}G_{i-1}^k({\boldsymbol{x}}) + \sum_{i=j+1}^k x_1x_i [G^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i) \\ & +G_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)-2H_{i-1}^{k-1}({\boldsymbol{x}};\hat{x}_i)-G_i^k({\boldsymbol{x}})+2H_i^k({\boldsymbol{x}}) -2H_{i-1}^k({\boldsymbol{x}})] 
\bigg];\end{aligned}$$ if $k\geq 1$, then $H_1^k({\boldsymbol{x}}) = \dfrac{x_1}{1+x_1}G^{k-1}({\boldsymbol{x}};\hat{x}_1)$ and for $1< i \leq k$, we have $$\begin{aligned} H_i^k({\boldsymbol{x}}) = & \, \frac{1}{1-x_1x_i}\bigg[H_{i-1}^k({\boldsymbol{x}})+x_1x_i[G^{k-2}({\boldsymbol{x}}; \hat{x}_1, \hat{x}_i)-H_{i-1}^k({\boldsymbol{x}}) -H_{i-1}^{k-1}({\boldsymbol{x}}; \hat{x}_i)]\bigg].\end{aligned}$$ This theorem follows immediately from Equation (\[eq:char\]) and Theorem \[main theorem 2\]. Below are the first few terms of $G^k({\boldsymbol{x}})$ for $k\in \{1,2,3\}$ as computed from the formulas in Theorem \[main theorem 2\]. $$\begin{aligned} G^1(x) =& \, \, x+x^3+x^5+x^7+ x^9 + x^{11} +x^{13} + x^{15}+ x^{17} + x^{19} +x^{21}+ x^{23} + x^{25} +\cdots \\ G^2(x,y) =& \, \, 2xy + xy^3 +2x^2y^2+ x^3y + xy^5 + 4x^3y^3 + x^5y + xy^7 + x^3y^5+4x^4y^4 +\cdots \\ G^3(x,y,z) =& \, \, 4xyz + 2xyz^3 + 2xy^2z^2 +2x^2yz^2 + 2x^2y^2z + 2x^3yz + 2xy^3z + 4x^3yz^3+ \cdots \\\end{aligned}$$ Notice that for each $k\geq1$, $G^k({\boldsymbol{x}})$ is symmetric in its $k$ variables. This must happen since Equation (\[eq:char\]) holds for any ordering of the composition $\lambda$. Therefore, if we would like to compute $\chi^G_{(1,1,3)}$, we can take either the coefficient of $xyz^3$, $xy^3z$, or $x^3yz$ in $G^3(x,y,z)$ as our answer. In each case, we find that $\chi^G_{(1,1,3)}=2$. Discussion ========== It remains to determine the computational complexity of computing the Gelfand character using this method and to compare it with known methods (as in [@M-1937; @N1940; @R1997]). In addition, several other characters can be realized by studying certain properties of $\lambda$-unimodal permutations. 
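As a small-$n$ baseline for the complexity comparison just mentioned, the character values can be cross-checked in two independent ways: by summing $(-1)^{\operatorname{des}_\lambda}$ over $\lambda$-unimodal involutions, and by counting square roots of a fixed permutation of cycle type $\lambda$ (every irreducible character of ${\mathcal{S}}_n$ has Frobenius-Schur indicator $1$, so the Gelfand character at $\sigma$ counts the $w$ with $w^2=\sigma$). The brute-force sketch below is ours, not the paper's; it takes a $\lambda$-descent to be a descent not located at a segment boundary.

```python
from itertools import permutations

def is_unimodal(seq):
    """True if seq increases to its maximum and decreases afterwards."""
    p = seq.index(max(seq))
    return all(seq[i] < seq[i + 1] for i in range(p)) and \
           all(seq[i] > seq[i + 1] for i in range(p, len(seq) - 1))

def chi_from_involutions(lam):
    """Sum of (-1)^{des_lambda} over the lambda-unimodal involutions,
    counting only descents interior to a segment."""
    n = sum(lam)
    cuts = [0]
    for part in lam:
        cuts.append(cuts[-1] + part)
    borders = set(cuts[1:-1])
    total = 0
    for pi in permutations(range(1, n + 1)):
        if any(pi[pi[i] - 1] != i + 1 for i in range(n)):
            continue  # not an involution
        if not all(is_unimodal(pi[cuts[j]:cuts[j + 1]])
                   for j in range(len(lam))):
            continue  # not lambda-unimodal
        des = sum(1 for i in range(1, n)
                  if pi[i - 1] > pi[i] and i not in borders)
        total += (-1) ** des
    return total

def chi_from_square_roots(lam):
    """Number of w in S_n with w*w = sigma, for a fixed sigma of cycle type lam."""
    n = sum(lam)
    sigma, start = list(range(n)), 0
    for part in lam:  # build sigma as disjoint cycles on consecutive blocks
        for j in range(part):
            sigma[start + j] = start + (j + 1) % part
        start += part
    return sum(1 for w in permutations(range(n))
               if all(w[w[i]] == sigma[i] for i in range(n)))

for lam in [(3,), (1, 2), (1, 1, 3), (3, 1, 1)]:
    assert chi_from_involutions(lam) == chi_from_square_roots(lam)
```

In particular, `chi_from_involutions((1, 1, 3))` and `chi_from_square_roots((1, 1, 3))` both return 2, matching the coefficient computation above.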
In particular, a set $B(n)\subseteq \mathcal{S}_n$ is a *fine set* (see for example, [@AR2015]) if $$\chi_\lambda=\sum_{\pi \in B(n)\cap \mathcal{L}^\lambda} (-1)^{\operatorname{des}_\lambda(\pi)},$$ where $\chi$ is a character of some representation of $\mathcal{S}_n$ and $\mathcal{L}^\lambda$ is the set of $\lambda$-unimodal permutations. Fine sets include conjugacy classes and their unions, Knuth classes, Coxeter length, and more. It should be possible to employ techniques similar to the ones found in [@A2016; @GR1993] and this article to find combinatorial proofs for the formulas of these characters. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank the University of Texas at Tyler’s Office of Sponsored Research and Center for Excellence in Teaching and Learning for their support in conducting this research. The awards from these offices supported the research conducted for this paper by two faculty members, Kassie Archer and L.-K. Lauderdale, together with undergraduate student Marin King and graduate student Virginia Germany. We would also like to thank UT Tyler students Angela Gay, Thomas Lupo, and Francesca Rossi for their contributions to Proposition \[enumerate lambda descents\] as part of a class project. [^1]: email: `[email protected]`
The physics of interacting planar electrons in a partially filled lowest Landau level (LLL) in a strong magnetic field $B$ is surprisingly rich. For example, at the filling factor $\nu=1/\tilde{\phi}$, depending on whether $\tilde{\phi}$ is an odd or even integer, the system may be either in an incompressible [@Lau; @Halp; @Hald] or compressible [@HLR] fluid state, exhibiting the quantized [@FQHE] or unquantized [@Half] Hall effect respectively. The corresponding ground states are known to be described by numerically well-tested [*trial*]{} wave functions, such as the Laughlin [@Lau] and Rezayi-Read [@RR] ones. Physically, these wave functions can be understood in terms of the composite fermion (CF) scenario [@Jain; @Read1]: Namely at fillings near $\nu=1/\tilde{\phi}$, with $\tilde{\phi}$ even, a quasiparticle is an electron bound to a $\tilde{\phi}$-fold [*vortex*]{} in the electron fluid [@HalWu], called a CF, and the strongly correlated fluid itself can be described as a collection of such [*weakly interacting*]{} quasiparticles in a [*weaker*]{} effective magnetic field $\Delta B= B-B_{\tilde{\phi}}$ with $B_{\tilde{\phi}}\equiv \tilde{\phi} n_{e} \phi_0$. (Here $n_{e}$ is the average electron density, $\phi_0$ the flux quantum.) For example, if the residual interactions are ignored, the ground state at $\nu=1/2$ corresponds to a Fermi sea of CF’s, while that at $\nu=1/3$ to a completely filled LLL of CF’s in $\Delta B$. In this Letter, we will report on a local field theoretical approach that allows a systematic improvement of these trial ground states, which emerge as the [*mean-field*]{} ground states. Our approach is based on a full realization of the CF scenario, in which the CF has a [*finite vortex core*]{}, improving the usual Chern-Simons fermion (CSF) theory [@LF; @HLR; @KalZh] in which an infinitesimally thin $\tilde{\phi}$-flux is attached to each electron. 
The inclusion of a finite vortex core is implemented by a [*non-unitary*]{} transformation, which makes CF states with definite momenta [*non-orthogonal*]{} to each other and the mean-field and perturbed Hamiltonians [*non-Hermitian*]{}. Despite these seemingly troublesome features, it is shown possible to formulate a [*consistent*]{} perturbation theory, in which the mean-field ground state, a Fermi sea of CF’s, at $\nu=1/\tilde{\phi}$ is perturbatively stable. We have also computed the density-current response functions in the random phase approximation (RPA), and explicitly verified that for small wave vectors they indeed agree with those in the CSF theory[@HLR]. New physics due to the finite vortex core is expected to show up for larger wave vectors. Generalizing a recent composite-boson construction[@Raj], we introduce the CF field operators by (with $\tilde{\phi}$ [*even*]{}) $$\begin{aligned} \Phi(\vec{x}) =e^{-J_{\tilde\phi} (\vec{x})}\psi(\vec{x}), \;\;\; \Pi(\vec{x}) =\psi^\dagger(\vec{x}) e^{J_{\tilde\phi}(\vec{x})}, \label{PhiPi}\end{aligned}$$ where $\psi(\vec{x})$ is the (spinless) electron field operator, and $$J_{\tilde\phi}(\vec{x})= {\tilde\phi}\int d^2x' \; \rho(\vec{x}')\log(z-z') -|z|^2/4l^2_ {\tilde\phi}, \label{J}$$ with $l_{\tilde{\phi}}$ the magnetic length in $B_{\tilde{\phi}}$ and $z=x+iy$. The usual CSF transform[@HLR] contains only the imaginary part of $\tilde{\phi} \log(z-z')$, which describes the phases due to a vortex with vorticity $\tilde{\phi}$ bound to an electron at $z'$. We have included the real part too, incorporating a finite vortex core and making the transformation [*non-unitary*]{} [@MaZh]. 
Using $$\begin{aligned} e^{-J_{\tilde\phi}(\vec{x})}\psi(\vec{x}') &=&(z-z')^{\tilde\phi}\psi(\vec{x}') e^{-J_{\tilde\phi}(\vec{x})},\nonumber\\ \psi^\dagger(\vec{x}')e^{-J_{\tilde\phi}(\vec{x})} &=&(z-z')^{\tilde\phi}e^{-J_{\tilde\phi}(\vec{x})} \psi^\dagger(\vec{x}'), \label{2.12}\end{aligned}$$ it is easy to verify $\{\Phi(\vec{x}),\Phi(\vec{x}')\} =\{\Pi(\vec{x}),\Pi(\vec{x}')\}=0$, and $\{\Phi(\vec{x}), \Pi(\vec{x}')\}=\delta^{(2)}(\vec{x}-\vec{x}')$. Obviously $\Phi$ and $\Pi$ are not Hermitian conjugates of each other: $\Pi=\Phi^\dagger e^{J_{\tilde\phi} +J_{\tilde\phi}^\dagger}$. Notice that $\rho(\vec{x})= \psi^\dagger(\vec{x})\psi(\vec{x}) =\Pi(\vec{x})\Phi(\vec{x})$ and $[\int d^2x'\, \rho(\vec{x}'), \Pi(\vec{x})]=\Pi(\vec{x})$. Thus the CF density is the same as the electron density, and $\Pi$ creates a CF while $\Phi$ annihilates one. In terms of CF, the usual electron Hamiltonian reads $$\begin{aligned} H&=&-\frac{1}{2m_b} \int d^2x\, \Pi(\vec{x}) (\nabla+i{\vec A}-i{\vec v}_{\tilde\phi})^2 \, \Phi(\vec{x})\nonumber\\ &&\;\;\;+\,\frac{1}{2}\int d^2x d^2x'\, \delta\rho(\vec{x})\, V(\vec{x}-\vec{x}') \, \delta\rho(\vec{x}'), \label{CFHam}\end{aligned}$$ where $m_b$ is the electron band mass; $\delta\rho =\Pi\Phi - n_e$, and ${\vec v}_{\tilde\phi}(\vec{x}) \equiv i\nabla J_{\tilde\phi} ={\vec a}(\vec{x})+i\hat{n}\times{\vec a} (\vec{x})-i{\vec x}/2l_{\tilde\phi}^2$; $\hat{n}$ is a unit vector perpendicular to the plane and $\vec{a}$ is the usual Chern-Simons gauge field, given by $${\vec a}(\vec{x})= \tilde{\phi}\, \int d^2x'\, \rho(\vec{x}') \; (\hat{n}\times (\vec{x}-\vec{x}'))/ |\vec{x}-\vec{x}'|^2 \label{CS}$$ in the gauge $\nabla\cdot {\vec a}=0$, satisfying $b=\nabla\times {\vec a}=2\pi{\tilde\phi}\rho$. In deriving ${\vec v}_{\tilde\phi}$, we have used $\nabla({\rm Re}\log z)=\nabla ({\rm Im}\log z)\times \hat{n}$. 
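The last identity is immediate in polar coordinates: writing $\log z=\ln r+i\theta$, one has $$\nabla({\rm Re}\log z)=\nabla \ln r=\frac{\hat{r}}{r}, \;\;\;\; \nabla ({\rm Im}\log z)\times \hat{n}=\frac{\hat{\theta}\times\hat{n}}{r}=\frac{\hat{r}}{r},$$ since $\nabla\theta=\hat{\theta}/r$ and $\hat{\theta}\times\hat{n}=\hat{r}$ for $\hat{n}$ normal to the plane.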
The physical justification for including the real part of $\tilde{\phi}\log (z-z')$ in the CF transformation (\[J\]) lies in the fact that the resulting mean-field states give rise to numerically well-tested wave functions. To see this, we note that at the mean-field level, one ignores the fluctuations of the Chern-Simons field $\vec{a}$ and, for fillings at or close to $1/\tilde{\phi}$, takes $\vec{a}$ to be a classical field determined by eq. (\[CS\]) with a uniform density $n_{e}$, as in CSF theory: $$\begin{aligned} \bar{\rho}(\vec{x})=n_{e}, \;\; \bar{\vec a}(\vec{x})=(B_{\tilde{\phi}}/2)\hat{n}\times {\vec x}={\vec A_{\tilde{\phi}}}(\vec{x}). \label{MFsol}\end{aligned}$$ Here one has $\hat{n}\times\bar{\vec a}= {\vec x}/2l^2_{\tilde{\phi}}$, which results in $\bar J+\bar J^\dagger=0$. Substituting eq. (\[MFsol\]) into eq. (\[CFHam\]), we get the mean-field Hamiltonian describing free CF’s in an effective field $\Delta B$: $$H_{MF}=-\frac{1}{2m_b}\int d^2x\; \Pi(\vec{x}) (\nabla+i \Delta {\vec A})^2 \Phi(\vec{x}). \label{MFHam}$$ Once we get the CF wave function for a mean-field state, $\chi (\vec{x}_1,...,\vec{x}_N) \equiv \langle 0|\Phi(\vec{x}_1) \ldots{}\Phi(\vec{x}_N) |MF\rangle$, the corresponding electron wave function can be easily read off as $$\begin{aligned} &&\psi_{MF}(\vec{x}_1,...,\vec{x}_N)\equiv \langle 0|\psi(\vec{x}_1)\ldots{} \psi(\vec{x}_N)|MF\rangle\nonumber\\ &&\;\;\;=\langle 0|e^{J(\vec{x}_1)}\Phi(\vec{x}_1)\ldots{} e^{J(\vec{x}_N)}\Phi(\vec{x}_N)|MF\rangle\nonumber\\ &&\;\;\;= \prod_{i<j}(z_i-z_j)^{\tilde\phi} \exp[-\frac{1}{4l^2_{\tilde\phi}} \sum_i|z_i|^2] \; \chi (\vec{x}_1,...,\vec{x}_N)\, , \label{MFwf}\end{aligned}$$ recovering Jain’s rule for trial wave functions[@Jain]. Here we used the identity (\[2.12\]) to move all $e^{J(\vec{x}_i)}$ to the left, which act on the vacuum $\langle 0|$, yielding the Gaussian factor. The mean-field CSF theory missed the factor $\prod_{i<j} |z_i-z_j|^{\tilde\phi}$, whose presence in eq. 
(\[MFwf\]) is due to the inclusion of the real part of $\tilde{\phi} \log (z-z')$ in our CF transformation (\[J\]). At exactly $\nu=1/\tilde{\phi}$, CF’s are in zero effective magnetic field $\Delta B=0$. The mean-field Hamiltonian (\[MFHam\]) requires that the ground state be a filled Fermi sea of CF’s, with Fermi wave vector $k_F=(2\pi\tilde\phi n_e)^{1/2} =1/l_{\tilde\phi}$. In terms of the CF field in momentum space, the Fermi-sea state ket is $$|G_0\rangle = \prod _{k<k_F} \Pi(\vec{k}) \; |0\rangle. \label{FSKet}$$ Thus, with $\chi_{0}= \langle 0|\Phi(\vec{x}_1) \ldots{}\Phi(\vec{x}_N)|G_{0}\rangle = \det (e^{i\vec{k}_i\cdot\vec{x}_j})$, eq. (\[MFwf\]) reproduces the (unprojected) Rezayi-Read trial wave function[@RR]. According to eq. (\[MFHam\]), the mean-field quantum Hall state at $\nu=p/(\tilde{\phi}p+1)$ is the state with CF’s completely filling $p$ Landau levels in $\Delta B$. For $p=1$, $$\chi_1(\vec{x}_1,\ldots,\vec{x}_N)=\prod_{i<j}(z_i-z_j) \exp[-\frac{1}{4l^2_{\Delta B}}\sum_i |z_i|^2]; \label{wfchi}$$ the mean-field electron wave function (\[MFwf\]) just gives the Laughlin trial wave function [@Lau] (since $l^{-2}_{\tilde{\phi}}+l^{-2}_{\Delta B} = l^{-2}_{B}$) $$\prod_{i<j}(z_i-z_j)^{\tilde{\phi}+1} \, \exp[-\frac{1}{4l^2_B}\sum_i|z_i|^2]. \label{3.15}$$ To systematically improve the mean-field theory, we do perturbation theory to include the effects of fluctuations of the Chern-Simons field $\vec{a}$. For definiteness, we restrict to the filling $\nu=1/\tilde{\phi}$, for which $\Delta A=0$. Instead of eq. (\[CFHam\]), we consider the following [*Hermitian*]{} Hamiltonian: $$\begin{aligned} H_{0}&+&H_{1}= \frac{-1}{2m_b}\int d^2x \,\Pi(\vec{x})[ \nabla - i(\delta{\vec a} +i\hat{n}\times \delta{\vec a})]^{2}\,\Phi(\vec{x})\nonumber\\ &&+\frac{1}{8\pi^{2} \tilde{\phi}^{2}} \int d^2x d^2x'\; \delta b (\vec{x})\, V(\vec{x}-\vec{x}')\, \delta b (\vec{x}'), \label{IntHam}\end{aligned}$$ with $H_0=H_{MF}$ and $\delta \vec{a} = \vec{a} - \bar{\vec{a}}$. 
We go from the Schrödinger to the interaction picture by a similar but [*non-unitary*]{} transformation ($H_0$ is not Hermitian): $$|\psi(t)\rangle_I =e^{iH_0t}|\psi(t)\rangle_S, \; \hat{O}_I(t)=e^{iH_0 t}\, \hat{O}_S\, e^{-iH_0t}. \label{IntPic}$$ The CF operators $\Pi_{I}(\vec{x},t)$ and $\Phi_{I}(\vec{x},t)$ satisfy the usual canonical equal-time anti-commutation relations. The evolution operator, $U(t,t')\equiv e^{iH_0 t}e^{-iH(t-t')} e^{-i H_0 t'}$, is no longer unitary, but we still have $U(t,t)=1$, and $U(t,t'')U(t'',t')=U(t,t')$, $U^{-1}(t,t')=U(t',t)$. Also the Schrödinger equation and the Dyson formula for $U(t,t')$ are formally the same as before: $$U(t',t)=T\,\exp\biggl(-i\int_t^{t'} d\tau H_1(\tau) \biggr), \label{Dyson}$$ where $H_1(t)\equiv e^{iH_0 t}H_1e^{-iH_0t}$, and $T$ does time-ordering. To proceed, first we need an appropriate basis to evaluate the matrix elements of eq. (\[Dyson\]). The base kets and bras for CF’s with definite momenta [@comm1] are given by $$|\{\vec{k}_i\}\rangle\equiv \prod_i \Pi (\vec{k}_i)\, |0\rangle, \;\;\; \tilde{\langle \{\vec{k}_i\}|} \equiv \langle 0|\, \prod_i \Phi (\vec{k}_i).$$ They are eigenvectors of $H_{0}$ and have orthonormal overlaps. (Note that the bras $\langle \{\vec{k}_i\}|$ do not have orthonormal overlaps with the kets $|\{\vec{k}_j\}\rangle$!) Thus, corresponding to the Fermi-sea ket (\[FSKet\]), the bra describing the CF Fermi sea, that has a unit overlap with it, is given by $$\langle \tilde{G}_0| =\langle 0| \prod _{k<k_F}\Phi (\vec{k}),\;\; \langle \tilde{G}_0|G_0\rangle=1 . \label{3.19}$$ Both the ket $|G_0\rangle$ and the bra $\langle \tilde{G}_0|$ are eigenvectors of $H_0$ with the same energy $\epsilon_0=\sum_{k<k_F}k^2/2m_b$, while $\langle G_0|$ is not. As a rule, corresponding to usual expectation values in the unperturbed ground state, we always consider the matrix elements between $\langle \tilde{G}_0|$ and $|G_0\rangle$. 
The free CF propagator is defined as $$G_0(\vec{x},t;\vec{x}',t')=-i\tilde{\langle G_0|} T(\Phi_{I}(\vec{x},t) \Pi_{I}(\vec{x}',t')) |G_0\rangle, \label{4.11}$$ whose Fourier transform is the same as that of a free electron: $$\begin{aligned} G_0({\vec k},\omega)= \frac{\theta(k-k_F)}{\omega-\epsilon_{\vec{k}}+i0^+}+ \frac{\theta(k_F-k)}{\omega-\epsilon_{\vec{k}}-i0^+}, \label{4.13}\end{aligned}$$ where $\theta(k)$ is the step function and $\epsilon_{\vec{k}}=k^2/2m_b$. Introducing $a_{0}$ to implement the field-density constraint, a Lagrangian approach for the Chern-Simons propagator $$D^0_{\mu\nu}(\vec{x},t;0,0) =-i\langle \tilde{G}_0|T(\delta a_\mu(x,t) \delta a_\nu(0,0))|G_0\rangle$$ leads to the one in usual CSF theory: $$\begin{aligned} D^0_{\mu\nu}(\vec{q}, \omega)= \delta (\omega) U_{\mu\nu}(\vec{q}), \begin{array}{ccc} U=\left(\begin{array}{cc}{v({\vec q})}&{\displaystyle \frac{2\pi i\tilde\phi}{q}}\\ {\displaystyle-\frac{2\pi i\tilde\phi}{q}}&{0} \end{array}\right). \end{array} \label{CSPro}\end{aligned}$$ Here we have adopted the $2\times 2$ matrix formalism[@HLR] with $\mu,\nu = 0,1$; $0$ stands for the time component, $1$ for the space component transverse to $\vec{q}$. Eq. (\[IntHam\]) implies a [*complex*]{} CF-Chern-Simons coupling $\rho \delta a_{0}+ (\vec{j}-i\hat{n}\times \vec{j})\cdot \delta \vec{a}$, with $\vec{j} = (-i/2m_{b})[(\nabla \Pi) \Phi -\Pi \nabla \Phi]$, resulting in the CF-CF-$\delta a$ vertex (Fig. 1a) $$g_\mu (\vec{k}+\vec{q},\vec{k})=\biggl(1, \frac{(2\vec{k}+\vec{q})\times{\hat q} +i(2\vec{k}+\vec{q})\cdot{\hat q} }{2m_b}\biggr), \label{4.16}$$ where $\hat{q}=\vec{q}/q$ is a unit vector. Note that the second term in the $\mu=1$ component is absent in the CSF theory. Moreover, in view of $(\delta {\vec a} +i\hat{n}\times\delta {\vec a})^2=0$, there is no CF-CF-$\delta a$-$\delta a$ vertex (Fig. 1b) in our Feynman rules. This makes the structure of the Feynman diagrams more like that of a theory with a two-body potential than of a usual gauge theory.
Using Wick’s theorem and the above Feynman rules, one can calculate as usual the matrix elements of the evolution operator (\[Dyson\]) between free CF states. To make contact with physics, we have managed to prove a generalized Gell-Mann-Low theorem, which relates these matrix elements to correlation functions: $$\begin{aligned} && \langle G|T O_{1,H} (\vec{x}_1,t_1) \cdots O_{n,H} (\vec{x}_n,t_n) |G\rangle \nonumber\\ = && \frac{\langle \tilde{G_0}|T \prod_{i} O_{i,I} (\vec{x}_i,t_i) U (\infty, -\infty) |G_0 \rangle} {\langle \tilde{G_0}| U (\infty, -\infty) |G_0 \rangle}\, , \label{GGL}\end{aligned}$$ where $|G\rangle$ is the true ground state, and $O_{i,H} (\vec{x}_i,t_i)$ are local operators in terms of $\Phi_H$ and $\Pi_H$ in the Heisenberg picture. The result, central to our paper, implies a consistent perturbation theory with usual diagrammatic techniques, despite the nonunitarity of our CF transformation (\[PhiPi\]). The proof follows the same steps as in usual many-body perturbation theory [@Noz], but we have to be careful about problems due to the non-Hermiticity of $H_0$ and $H_1$. The basis of the theorem (\[GGL\]) lies in the following lemma: The state obtained from the free CF Fermi-sea $|G_0\rangle$ by adiabatically switching on $H_{1}(t)$, $$|G\rangle\equiv C \lim_{\eta\to 0+} \frac{U_{\eta}(0,-\infty)\, |G_0\rangle} {\langle\tilde{G_{0}}|U_{\eta}(0,-\infty)|G_0\rangle} \label{GPhys}$$ is an eigenstate of $H_{0}+H_{1}$ which, by the adiabatic hypothesis, is assumed to be the true ground state of the system. Here $C$ is a normalization constant and $U_{\eta}(t,t')$ is the operator (\[Dyson\]) with $H_{1}(\tau) \to e^{-\eta|\tau|}H_{1}(\tau)$. For this definition to make sense, the limit on the right side of eq. (\[GPhys\]) has to exist.
We first note that, by purely combinatoric considerations as usual, $U_{\eta}=U_{\eta L}\exp(U_{\eta 0c})$, where $U_{\eta L}$ is the linked part of $U_{\eta}$, while $U_{\eta 0c}$ is the sum of the contributions to $\langle \tilde{G}_0|U_{\eta}|G_0\rangle$ from all unlinked connected diagrams. It can be shown that as $\eta \to 0+$, $U_{\eta L}(0,-\infty)$ is regular, while $U_{\eta 0c}(0, -\infty)$ diverges as $1/\eta$ due to the integration of $e^{\eta t_i}$ in eq. (\[Dyson\]): $$\begin{aligned} U_{\eta 0c}(0,-\infty)=iA/\eta+\ln C, \label{4.20}\end{aligned}$$ where $A$ is $\eta$-independent. The potential problem lies in the possibility that $A$ may have a non-zero imaginary part; then the state $U_\eta(0,-\infty)|G_0\rangle$ would have either zero or divergent norm as $\eta\to 0+$, rendering the limit in eq. (\[GPhys\]) meaningless. Usually $U_{\eta}$ is unitary, so $A$ is real. However, this argument does not apply in our case, since our $H_{1}$ contains a complex Chern-Simons coupling for CF’s. We have managed to check explicitly up to three loops, and to give an argument for any number of loops, that diagram by diagram the contribution to $A$ is real. Thus eq. (\[4.20\]) gives rise to a harmless divergent phase factor in the numerator of eq. (\[GPhys\]), which is cancelled by the denominator, and ensures the perturbative stability of the mean-field Fermi-sea ground state $|G_0\rangle$. Moreover, we have been careful to make sure that the usual proof that the state (\[GPhys\]) is an eigenstate of $H_{0}+H_{1}$ goes through in our case: Only the commutation relations are needed here; whether $H_0$ and $H_1$ are Hermitian is irrelevant.
Similarly, by adiabatically switching off $H_1(t)$, we get $$\langle G| \equiv C \lim_{\eta\to 0+} \frac{\langle\tilde{G_0}|\, U_{\eta}(\infty,0)} {\langle\tilde{G_{0}}|U_{\eta}(\infty,0)|G_0\rangle}\, , \label{GPhys2}$$ and $U_\eta (\infty, -\infty)\, |G_0\rangle = \exp (i 2A/\eta) |G_0\rangle $, again ensuring the perturbative stability of $|G_0\rangle$. With these results, one easily arrives at the generalized Gell-Mann-Low theorem. In addition to the formal development, we have calculated the density-current response functions, $K_{\mu\nu} (\vec{q},\omega)$, in the RPA in appropriate limits, to explicitly check that our formalism indeed gives reasonable results. The relevant Feynman diagrams are given in Fig. 2. We note that our Feynman rules, compared to usual CSF theory, have modified the Chern-Simons but not the electromagnetic couplings. Besides, the RPA equation for our $K_{\mu\nu}$ is different from usual CSF theory. Explicit calculations show that the effects of these two differences cancel for $K_{\mu\nu}$ both in the static ($\omega=0$) and high-frequency ($\omega \gg qk_F/m_b$) limits for small wavevector $q\ll k_F$. We may summarize the differences between our theory and usual CSF theory by rewriting the RPA equation for our $K_{\mu\nu}$ in the following form: $$K=K^0-K^0[K^0+\Delta K -U^{-1}]^{-1}K^0, \label{Diff}$$ where $K^0$ is the response function of the noninteracting CF system (see Fig. 2), governed by $H_0$; the matrix $\Delta K$, absent in usual CSF theory, is a diagonal $2\times 2$ matrix $$\Delta K = {\rm diag}\biggl(0,\hat{K}^0_{11} -K^0_{11}-(\hat{K}^0_{01})^2/\hat{K}^0_{00}\biggr).$$ Here $\hat{K}^0$ is the response function of non-interacting CF’s with two Chern-Simons vertices (Fig. 2); in deriving eq. (\[Diff\]) we exploited relations between the explicit expressions of the one-loop blocks in Fig. 2.
Straightforward calculation shows that both in the static ($\omega=0$) and high-frequency ($\omega \gg qk_F/m_b$) limits for small wavevector $q\ll k_F$, $\Delta K$ just happens to vanish: $$\hat{K}^0_{11}- (\hat{K}^0_{01})^2/\hat{K}^0_{00}= K^0_{11}. \label{NoDiff}$$ Therefore, for long-wavelength fluctuations either in the static or high-frequency limit, there is no physical difference between our theory and CSF theory for the linear-response functions. This provides a consistency check of our perturbation theory: On one hand, the size of the vortex core in a CF is of the order of the magnetic length, so a probe with wavelength much bigger than that should not be able to see the core; on the other hand, the high-frequency behavior of the response function should be constrained by Kohn’s theorem[@Kohn] (with mass $m_b$), which is indeed satisfied by our RPA results. Thus, the experimental predictions for probes with long wavelength discussed by Halperin, Lee and Read[@HLR] remain unchanged. Furthermore, when higher-order contributions are included, the “Fermi-liquid corrections” [@SSH] are needed and can be incorporated as well. The details will be given elsewhere[@WuYu]. Nevertheless, there is no reason to believe that eq. (\[NoDiff\]) holds generally. To uncover the new physics due to the finite vortex core in a CF, which is expected to show up for shorter wavelengths or larger wave vectors, one needs to evaluate the one-loop integrals with parameters in a certain intermediate range. Work is in progress. To conclude, several comments are in order. First, our perturbation formalism can be easily generalized to study the incompressible fractional quantum Hall fluids either from the CF scenario or from the composite boson scenario; in the latter the integer $\tilde{\phi}$ in eq. (\[J\]) is taken to be odd, resulting in a canonical boson pair $\Phi$ and $\Pi$ [@Raj]. Here we have considered an infinite, homogeneous system without boundary.
It would be interesting to generalize the present formulation to a finite compact geometry, say a sphere or a torus, which may be helpful for clarifying some consequences of the non-orthogonality of CF states, such as fractional and mutual exclusion statistics between quasiholes and quasielectrons [@Hald2; @Wu]. It is also worthwhile to study the effects due to a finite vortex core for quasiparticles on the edge of a finite system with boundary. Y.S.W. would like to thank F.D.M. Haldane for discussions. The work was supported in part by grant NSF PHY-9309458. R. B. Laughlin, Phys. Rev. Lett. [**50**]{}, 1395 (1983); B.I. Halperin, Helv. Phys. Acta [**56**]{}, 75 (1983); F.D.M. Haldane, Phys. Rev. Lett. [**51**]{}, 605 (1983). B.I. Halperin, P.A. Lee and N. Read, Phys. Rev. [**B47**]{}, 7312 (1993). D.C. Tsui, H.L. Stormer and A.C. Gossard, Phys. Rev. Lett. [**48**]{}, 1559 (1982). R. L. Willett, J. P. Eisenstein, H. L. Stormer, D. C. Tsui, A. C. Gossard, and J. H. English, Phys. Rev. Lett. [**59**]{}, 1776 (1987). E.I. Rezayi and N. Read, Phys. Rev. Lett. [**71**]{}, 900 (1994). J. K. Jain, Phys. Rev. Lett. [**63**]{}, 199 (1989); Phys. Rev. [**B40**]{}, 8079 (1989); [**B41**]{}, 7653 (1990). A lucid account is given in N. Read, Semicond. Sci. Technol. [**9**]{}, 1859 (1994). The vortex interpretation of the Laughlin quasiparticle wave function was noticed, e.g., in F.D.M. Haldane and Y.S. Wu, Phys. Rev. Lett. [**55**]{}, 2887 (1985). A. Lopez and E. Fradkin, Phys. Rev. [**B44**]{}, 1297 (1991); Phys. Rev. [**B47**]{}, 7080 (1993). V. Kalmeyer and S. C. Zhang, Phys. Rev. [**B46**]{}, 9889 (1992). R. Rajaraman and S. L. Sondhi, cond-mat/9601125 (to be published). A similar, non-unitary transformation has been proposed before at the first-quantized level by M. Ma and F. C. Zhang, Phys. Rev. Lett. [**66**]{}, 1769 (1991). In this paper we consider an infinite, homogeneous bulk system without disorder. See, e.g., P. Nozieres, [*Theory of Interacting Fermi Systems*]{}, (W. A.
Benjamin, Inc., New York, 1964). W. Kohn, Phys. Rev. [**123**]{}, 1242 (1961). A. Stern and B. I. Halperin, Phys. Rev. [**B52**]{}, 5890 (1995); S. H. Simon and B. I. Halperin, Phys. Rev. [**B50**]{}, 1807 (1994); S. H. Simon, A. Stern and B. I. Halperin, cond-mat/9604103 (to be published). Y.S. Wu and Yue Yu, in preparation. F.D.M. Haldane, Phys. Rev. Lett. [**67**]{}, 937 (1991). Y.S. Wu, Phys. Rev. Lett. [**73**]{}, 922 (1994).
--- author: - 'V. Lora' - 'A. C. Raga' - 'A. Esquivel' date: 'Received date / Accepted date' title: The angular momentum of condensations within elephant trunks --- Introduction ============ The radiation from newly born stars photoionises and erodes the parental cloud, producing structures such as the so-called elephant trunks. At the head of an elephant trunk, the interaction of the shock (driven by the photoevaporation process) with previously existing density perturbations leads to the formation of dense clumps. Some of these clumps might have enough mass to be self-gravitating, and will eventually form young stars that eject bipolar outflows. We describe observed examples of this kind of configuration. Bally & Reipurth (2003) discovered HH objects in the molecular cloud associated with the Pelican Nebula, including . This outflow emerges from the tip of a long elephant trunk, providing direct evidence of ongoing star formation in this region. The outflow axis of is approximately perpendicular to the elephant trunk, which is aligned with the direction to the photoionising source. Another example of this kind of configuration is the outflow in the Carina nebula. also emerges from close to the tip of an elephant trunk, and its axis is almost perpendicular to the direction towards $\eta$ Carinae (Smith, Bally & Brooks 2004). An HST image of this region (Bally, Reipurth & Davis 2007) shows a second jet emerging from a nearby elephant trunk, with a direction almost parallel to the outflow. A final example is provided by . This jet emerges from the tip of an elephant trunk within the complex Trifid nebula (). It is a single-sided jet with measured radial velocities (Rosado et al. 1999) and proper motions (Yusef-Zadeh et al. 2005) that indicate it has the kinematical properties of a standard HH jet. Again has an outflow direction approximately perpendicular to the direction to the ionising source. Reach et al.
(2009) presented observations of the elephant trunk of globule . They detected outflow activity from a number of young stars in the region. However, it is impossible to determine the outflow axes from these observations. Even though the number of four outflows (two in the region, see above) observed to be emerging from tips of elephant trunks is quite small, the alignment approximately perpendicular to the direction to the ionising photon source might be indicative of a systematic alignment. This alignment implies that the angular momenta of the low mass star+disk systems producing outflows from stellar sources in the tip of elephant trunks are more or less perpendicular to the direction of the ionising photon field (produced by the massive stars giving rise to the photoionised nebulae and elephant trunk structures). These angular momenta presumably preserve the direction of the rotation axes of the dense clumps that collapsed to form the outflow sources. In the present paper, we explore the interaction between an ionising photon field and an environment with density perturbations. This interaction produces elongated structures reminiscent of elephant trunks, with dense, embedded clumps. In particular, we focus on whether or not these dense clumps have angular momenta preferentially oriented perpendicular to the direction towards the photoionising source. Mellema et al. (2006) carried out 3D, radiation gasdynamic simulations of an H II region expanding in an ISM with power-law density perturbations. They find that this configuration naturally leads to the formation of dense, radially elongated structures, which resemble elephant trunks. Also, Gahm et al. (2006) study the role of magnetic fields in the formation of elephant trunks. Finally, Gritschneder et al. (2009) carried out a simulation of an initially plane ionising front travelling into a structured medium. Our work emulates the approach of Mellema et al. (2006) and Gritschneder et al. (2009). 
We focus on a small region of the edge of an expanding H II region, and carry out a 3D radiation gasdynamic simulation (including the self-gravity of the gas) of the formation of a dense, neutral structure. We then identify high density clumps within this “elephant trunk”, and compute their angular momenta. Finally, we study the mass distribution of the clumps, and the distributions of the orientation and magnitude of their angular momenta. The paper is organized as follows. In Sect. 2, we describe the gasdynamic code and the parameters used for the numerical simulation. The results from the simulation and the clump statistics are presented in Sect. 3. Finally, our results are summarised in Sect. 4. Code & settings =============== Code ---- We carried out a 3D simulation with a code that solves the 3D gasdynamic equations, the Poisson equation for the gravitational field, and a rate equation for neutral hydrogen, including the transfer of ionising photons at the Lyman limit. The gas is initially atomic, and the models do not consider the photodissociation of molecular material by an FUV radiation field. This code was described by Raga et al. (2008). We modified the code of Raga et al. (2008) to include the “two temperature” equation of state described by Esquivel & Raga (2007, hereafter E07). This equation of state assigns temperatures between 10 K (for neutral gas) and $10^4$ K (for gas with fully ionised H) with a linear dependence on the H ionisation fraction. Therefore, instead of solving an energy equation with the appropriate heating and cooling terms (see Raga et al. 2008), we replace it with this two-temperature equation of state. We also included the self-gravity of the gas. We use a successive over-relaxation (SOR) method to solve the Poisson equation for the gravitational potential, and then include the gravitational force in the momentum and energy equations. We do not include a treatment of the diffuse, ionising photon field.
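The SOR solution of Poisson’s equation can be sketched as follows. This is a minimal 2D illustration with zero boundary values, not the 3D solver with mixed transmission/periodic boundaries used in the simulation; the over-relaxation parameter and tolerance are assumed values.

```python
import numpy as np

def sor_poisson(rho, dx, G=6.674e-8, omega=1.8, tol=1e-6, max_iter=20000):
    """Solve nabla^2 phi = 4 pi G rho by successive over-relaxation.

    Illustrative 2D version with phi = 0 on the boundary; omega and
    tol are assumptions, not values taken from the paper."""
    ny, nx = rho.shape
    phi = np.zeros_like(rho)
    src = 4.0 * np.pi * G * rho * dx * dx   # source term times dx^2
    for _ in range(max_iter):
        max_dphi = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                # Gauss-Seidel estimate of phi[j, i], over-relaxed by omega
                new = (1.0 - omega) * phi[j, i] + omega * 0.25 * (
                    phi[j + 1, i] + phi[j - 1, i]
                    + phi[j, i + 1] + phi[j, i - 1] - src[j, i])
                max_dphi = max(max_dphi, abs(new - phi[j, i]))
                phi[j, i] = new
        if max_dphi < tol:                  # largest update small: converged
            break
    return phi
```

The gravitational acceleration that enters the momentum and energy equations would then follow from finite differences of `phi`.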
Settings -------- The computational domain has a size of $(3.0,1.5,1.5)\times 10^{18}$ cm (along the $x$-, $y$- and $z$-axes, respectively), which is resolved with a uniform grid of $256\times 128\times 128$ grid points. We impose transmission boundaries in the $x$-direction and periodic boundaries along the $y$- and $z$-directions. The periodic conditions are imposed in the gasdynamic equations, in Poisson’s equation (for the gravitational field), and in the radiative transfer equations. We start with an inhomogeneous density structure with a power-law power-spectrum index of $-11/3$ (i.e. $P[k]\propto k^{-11/3}$, where $k$ is the wave-number), as described in Esquivel et al. (2003). The initial density structure does not have any motion. To simulate the edge of an H[II]{} region, the computational domain is divided into two portions with a dividing line at $x=4\times 10^{17}$ cm from the left edge of the domain. The portion to the left is filled with an ionised medium (with a temperature of $10^4~\mathrm{K}$), and the portion to the right is filled with a neutral medium (with a temperature of $10~\mathrm{K}$). The average density in the neutral medium is a factor of $100$ higher than the one in the ionised medium, and the transition between the two (also in terms of temperature and ionisation fraction) follows a $\tanh$ profile with a width of $\sim$ 10 pixels. The resulting neutral structure has a mass of $228~\mathrm{M}_{\odot}$. To calculate the gravitational field, we only consider the gravitational force resulting from the density perturbations. In other words, we subtract a density $\rho_0=3.51\times10^{-24}$ g cm$^{-3}$ (corresponding to the lower density regions in the initial distribution of neutral material) from the density used in Poisson’s equation. In this way, we avoid a generalized collapse of the dense slab structure that fills the computational domain.
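An initial perturbation field with the power-law spectrum quoted above can be generated with FFTs. A 2D numpy sketch (the actual initial conditions are 3D and follow Esquivel et al. 2003; the random seed and the unit normalisation are illustrative choices):

```python
import numpy as np

def powerlaw_field(n, index=-11.0 / 3.0, seed=0):
    """Random n x n field with power spectrum P(k) ~ k**index.

    2D sketch of the 3D initial conditions; returns a zero-mean field
    normalised to unit standard deviation."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                        # avoid division by zero at k = 0
    amplitude = k ** (index / 2.0)       # |F(k)| ~ sqrt(P(k))
    amplitude[0, 0] = 0.0                # zero-mean perturbation
    phase = np.exp(2j * np.pi * rng.random((n, n)))  # random phases
    field = np.fft.ifft2(amplitude * phase).real
    return field / field.std()
```

Scaling and offsetting this zero-mean field would give density structures of the kind used for the ionised and neutral portions of the domain.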
We also ran a simulation in which the gravitational force was “turned off” to illustrate the effect of the self-gravity of the gas. A plane-parallel ionising photon field $F_0=8.8\times10^{10}$ cm$^{-2}$ s$^{-1}$ is incident on the computational domain along the $x$-axis. This photon flux corresponds to a star with an ionising photon rate $S_*=10^{48}$ s$^{-1}$, located at a distance $D=9.5\times10^{17}$ cm from the edge of the computational domain in the $-x$ direction. ![Time evolution of the $xy$ mid-plane density stratification without self-gravity. The three frames are labeled with the corresponding elapsed times. The density stratifications are shown with the logarithmic greyscale given (in g cm$^{-3}$) by the top right bar. In the three frames we also show the contour corresponding to an H ionisation fraction of 50%, which indicates the position of the ionisation front. The $x$ and $y$-axes are labeled in cm.](12238fg1.eps){width="8cm"} ![Same as Fig. 1, but for the simulation that includes the self-gravity of the gas.](12238fg2.eps){width="8cm"} ![Number of neutral clumps as a function of time, obtained for three different density cutoffs. The results correspond to the simulation which includes self-gravity (see Fig. 2).](12238fg3.eps){width="6cm"} ![Fraction of the clumps (obtained for a cutoff density $\rho_c=3\times10^{-18}$ g cm$^{-3}$) with different orientations $\alpha$ (between the $x$-axis and the $xy$-projections of the angular momenta of the clumps). The panels are labeled with the elapsed time corresponding to the time-frames from which the three angular distributions were obtained. The results correspond to the simulation that includes self-gravity (see Fig. 2).](12238fg4.eps){width="5cm"} ![The top panel shows the specific angular momentum $(L/M)$ of each clump as a function of the clump’s mass.
The black line shows the angular momentum associated with the outer orbit of an accretion disc of radius $r_{D}=50$ AU around a $M=2M_{\odot}$ star ($L/M\sim5\times10^{10}$ km$^{2}$ s$^{-1}$, see the text). The bottom panel shows the orientation $\alpha$ (between the $x$-axis and the $xy$-projection of the angular momentum) for each clump, as a function of its mass. The triangles show the clumps found for an elapsed time of $95$ kyr and the crosses for an elapsed time of $195$ kyr. The clumps were obtained using a $\rho_c=3\times 10^{-18}$ g cm$^{-3}$ density cutoff, using the results from the simulation of Fig. 2.](12238fg5.eps){width="6cm"} ![Time evolution of the most massive neutral clump obtained from the simulation with self-gravity (see Fig. 2). The top panel shows the mass, and the central panel shows the ratio of the clump mass to the Jeans mass as a function of time. The bottom panel shows the time evolution of the orientation $\alpha$ of the angular momentum of this clump (where $\alpha$ is the angle between the $x$-axis and the $xy$-plane projection of the angular momentum).](12238fg6.eps){width="5.5cm"} ![Density stratification and flow field in a region around the centre of mass of the most massive clump at $t=200$ kyr (see Fig. 6). The top panel shows the flow on a $xy$-cut and the bottom panel the flow on a $xz$-cut. The origin of the coordinate system coincides with the centre of mass of the clump (obtained with a $\rho_c=3\times 10^{-19}$ g cm$^{-3}$ density cutoff, which corresponds to a number density of $1.4\times 10^5$ cm$^{-3}$). The colour scale shows the logarithmic density distribution (given in cm$^{-3}$ by the bar on the top right). The arrows show the flow velocities on the planes of the two cuts for densities $\rho>10^{-19}$ g cm$^{-3}$.
An arrow with a length equal to the separation between successive arrows corresponds to a flow velocity of 0.5 km s$^{-1}$.](12238fg7.eps){width="5.5cm"} Results ======= We allowed the model to run from the initial conditions described in Sect. 2 to a $t=200$ kyr evolutionary time. Figure 1 shows the time evolution of the mid-plane density stratification without including the self-gravity of the gas. Figure 2 shows the same simulation but adding the force that arises from self-gravity. From both figures, it is evident that the ionisation front becomes highly corrugated with dense condensations at the tip of a number of protruding “fingers”. At $t=100$ kyr, the effect of self-gravity is only to produce denser condensations at the tip of the fingers. At $t=200$ kyr, however, the density structures obtained without (Fig. 1) and with self-gravity (Fig. 2) are quite different. In the self-gravitating simulation, a dense, central structure (detached from the ionisation front and absent in the non-gravitating simulation) is produced. Given the important differences found when including self-gravity, we present an analysis of clump formation only for the flow obtained from the self-gravitating simulation. Interestingly, if one repeats the analysis for the non-gravitating simulation, similar results are found (these results are not shown in the present paper). To quantify the number of clumps produced, following E07 we calculate the number of spatially connected neutral structures with densities above a specified cutoff density $\rho_c$. In particular, we choose cutoff values of $\rho_c=3\times 10^{-18}$, $10^{-19}$ and $10^{-20}$ g cm$^{-3}$. The number of clumps (fragments) obtained for different density cutoffs is shown as a function of time in Fig. 3. To determine the number of clumps, we consider time intervals of 20 kyr (corresponding to the width of the bins in the histograms of Fig. 3).
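The clump census (spatially connected cells with densities above $\rho_c$) can be sketched with a flood fill over the density grid. This is a 6-connectivity sketch that ignores the periodic $y$/$z$ boundaries of the simulation; `scipy.ndimage.label` would serve equally well:

```python
import numpy as np

def find_clumps(rho, rho_c, dx):
    """Label spatially connected cells with rho > rho_c (6-connectivity)
    and return the number of clumps and the mass of each.

    Minimal sketch; periodic boundaries are not handled."""
    above = rho > rho_c
    labels = np.zeros(rho.shape, dtype=int)
    n_clumps = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(above)):
        if labels[seed]:                  # cell already assigned to a clump
            continue
        n_clumps += 1
        stack = [seed]
        labels[seed] = n_clumps
        while stack:                      # flood fill from the seed cell
            i, j, k = stack.pop()
            for di, dj, dk in offsets:
                p = (i + di, j + dj, k + dk)
                if all(0 <= c < s for c, s in zip(p, rho.shape)) \
                        and above[p] and not labels[p]:
                    labels[p] = n_clumps
                    stack.append(p)
    masses = np.array([rho[labels == l].sum() * dx**3
                       for l in range(1, n_clumps + 1)])
    return n_clumps, masses
```

Applying such a labelling to each simulation output, for the three cutoff densities, yields clump counts of the kind shown in Fig. 3.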
We then calculate the number of clumps in 8 outputs within each of these time intervals, and then compute an average number of clumps for each time interval. For the lowest cutoff density ($\rho_c=10^{-20}$), the initial density distribution has $\sim 80$ clumps, and the number of clumps first decreases with time, stabilizes at $\sim 5$ for $40<t<170$ kyr, and then increases slightly at $t>170$ kyr (see Fig. 3). For the intermediate cutoff density ($\rho_c=10^{-19}$), the initial distribution has $\sim 500$ clumps and the number of clumps first decreases and then remains approximately constant as a function of time (with a value of $\sim 20$). The initial density distribution has no clumps with densities above the highest chosen cutoff density, $\rho_c=3\times 10^{-18}$ g cm$^{-3}$ (see above). Interconnected structures of sufficiently high density only start to appear at $t\approx 30$ kyr, and their number grows monotonically with time, stabilizing at a number of $\sim 20$ for $t>110$ kyr (see Fig. 3). For each of the detected clumps, we first compute the position of the centre of mass: $$\textbf{R}_{CM}={1\over M}\int_V \rho \textbf{r}~d^{3}\textbf{x}\,, \label{r}$$ where $V$ is the contiguous volume of the clump and $$M=\int_V \rho~d^{3}\textbf{x}$$ is its mass. We then compute the angular momentum with respect to the centre of mass of each clump: $$\textbf{L}=\int_V \rho~\left(\textbf{r}-\textbf{R}_{CM}\right) \times \textbf{v}~d^{3}\textbf{x}\,. \label{l}$$ We assume that we observe the computed flow along the $z$-axis (i. e., that the $xy$-plane of the computational domain is parallel to the plane of the sky). The angle $$\alpha = |\arctan{(L_y/L_x)}|\,, \label{al}$$ (with $\vec{L}$ given by equation \[l\]) then corresponds to the orientation angle (with respect to the direction of the ionising photon field) of the angular momentum of the clumps projected onto the plane of the sky. As discussed in Sect.
1, these directions correspond to the directions in which bipolar outflows will eventually be ejected when (and if) the clumps form star+accretion disk systems. In Fig. 4, we show histograms indicating the fraction of clumps (obtained for a cutoff density $\rho_c=3\times 10^{-18}$ g cm$^{-3}$) with different orientations $\alpha$, for the three different elapsed times ($t=95$, 145, and 200 kyr). For early times, we find that the $\alpha$ values of the clumps are randomly distributed (between $\sim 40$ and $180^\circ$). For $t=200$ kyr, $\approx 36$% of the clumps have $70^\circ < \alpha < 100^\circ$, and more than $\approx 55\%$ of the clumps have $60^\circ < \alpha<100^\circ$. From this result, we conclude that the dense clumps being formed have angular momenta preferentially aligned in directions perpendicular to the direction of the incident ionising photon field (which is parallel to the $x$-axis). The bottom panel of Fig. 5 shows the projected orientation $\alpha$ of the angular momentum as a function of clump mass for all of the clumps obtained with the $\rho_c=3\times 10^{-18}$ g cm$^{-3}$ cutoff density, for elapsed times $t=95$ and 200 kyr. We see that at $t=95$ kyr most of the clumps (triangles) have masses $M_c<0.3$M$_\odot$ and angular momenta with all $\alpha$ orientations. For $t=200$ kyr (crosses, see bottom panel of Fig. 5), we see that all of the clumps with $1$M$_\odot<M_c<60$M$_\odot$ have angular momenta with orientation angles $70^\circ<\alpha< 120^\circ$. The lower mass clumps (with $M_c<0.3$M$_\odot$) have angular momenta with more widely distributed orientations. We computed the moduli of the specific angular momenta (i.e., the angular momentum per unit mass) of the clumps. The values of $L/M$ for $t=95$ and 195 kyr are shown as a function of the clump mass $M$ in the top panel of Fig. 5. For $t=95$ kyr, we see that the clumps with $M_c<0.3$M$_\odot$ have $L/M<6\times 10^{10}$ km$^{2}$ s$^{-1}$.
The more massive clumps (with $M_c>1$M$_\odot$) have $L/M>6\times 10^{10}$ km$^{2}$ s$^{-1}$. For $t=200$ kyr, we see that the $M_c<0.02$M$_\odot$ clumps have $L/M<6\times 10^{10}$ km$^{2}$ s$^{-1}$, while the clumps with $M_c>0.02$M$_\odot$ have $L/M>6\times 10^{10}$ km$^{2}$ s$^{-1}$. We now evaluate whether or not these specific angular momenta have values comparable to those observed in young star systems. Typical T Tauri stars have masses $M\approx 2M_\odot$ and accretion disks with radii $r_D\approx 50$ AU. The outer Keplerian orbit of the disk then has a specific angular momentum $(L/M)_D\approx 5\times 10^{10}$ km$^{2}$ s$^{-1}$. This outer orbit is determined by the material with the highest angular momentum in the core from which the star+disk system was formed (see, e. g., Ulrich 1976). This value of $(L/M)_D$ is shown with a horizontal line in the top panel of Fig. 5. It is clear that many of the clumps formed in our simulation have specific angular momenta that are substantially higher than those deduced from the radii of disks around T Tauri stars. From this, we conclude that the angular momenta of the clumps generated in our simulation are substantial. A relevant question is whether the clumps obtained in our simulations are resolved well enough for the calculations of angular momenta to be meaningful. As an example, we consider the clumps found for the $\rho_c=3\times 10^{-18}$ g cm$^{-3}$ cutoff density at time $t=200$ kyr. The lower mass clumps (see Fig. 5), of $M\approx 0.01$ M$_\odot$, are resolved with $\sim 5$ grid points. The clumps of $M\approx 0.1$ M$_\odot$ are resolved with $\sim 50$ grid points. The clumps with $1<M<100$ M$_\odot$ are resolved with $\sim 500$ to 5000 grid points. Therefore, for clumps with $M> 0.1$ M$_\odot$, the resolution of the internal structure of the clumps (with 50 grid points, corresponding to $\sim 4$ grid points along each axis) appears to be appropriate for obtaining a meaningful estimate of the angular momentum.
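The disk value quoted above follows from the specific angular momentum of a circular Kepler orbit, $(L/M)_D=\sqrt{GMr_D}$; a quick check in cgs units (with rounded physical constants):

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g
AU = 1.496e13         # cm

def keplerian_l_over_m(mass_g, radius_cm):
    """Specific angular momentum (cm^2 s^-1) of a circular Kepler orbit."""
    return math.sqrt(G * mass_g * radius_cm)

# Fiducial T Tauri values from the text: M = 2 Msun, r_D = 50 AU.
l_over_m = keplerian_l_over_m(2.0 * M_SUN, 50.0 * AU) * 1e-10  # -> km^2 s^-1
```

This gives $\approx 4.5\times 10^{10}$ km$^{2}$ s$^{-1}$, consistent with the $(L/M)_D\approx 5\times 10^{10}$ km$^{2}$ s$^{-1}$ quoted in the text.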
From the number of grid points $N_c$ within the clumps, we can estimate the characteristic radii of the clumps to be $R_c=0.012\,{\rm pc}\, \times N_c^{1/3}/2$, where 0.012 pc is the resolution of the computational cells in our numerical simulation. From the values of $N_c$ given in the previous paragraph, we then see that the clumps obtained from our simulation have characteristic radii $R_c\approx 0.01$, 0.02, 0.05, and 0.1 pc, for clump masses of $M\approx 0.01$, 0.1, 1, and 100 M$_\odot$, respectively. Finally, we study the evolution of the most massive, neutral clump (detected with $\rho_c=3\times 10^{-18}$ g cm$^{-3}$). As seen in Fig. 6, this clump has a mass that grows monotonically from 0.6 M$_\odot$ at $t=38$ kyr, to 60 M$_\odot$ at $t=200$ kyr. The orientation angle $\alpha$ (on the plane of the sky) of its angular momentum stabilizes rapidly at $\alpha \approx 90^\circ$ for $t>70$ kyr. We compute the Jeans mass of this clump as $$M_{J}=\frac{1}{6} \pi \bar{\rho} \left( \frac{\pi c_{s}^{2}}{G\bar{\rho}}\right) ^{\frac{3}{2}}\,,$$ (see, e. g., E07) where $G$ is Newton’s constant, $c_s$ is the sound speed of the neutral medium (see Sect. 2), and $\bar{\rho}=M_c/V$, with $V$ the volume of the clump. We show the ratio $M_c/M_J$ for the most massive clump as a function of time in the central panel of Fig. 6. It is clear that this clump is Jeans unstable for $\sim 160$ kyr, which is a long enough timescale for the formation of a low mass star. Figure 7 shows the density and flow velocity distributions in the $xy$- and $xz$-planes, within a ($2\times 10^{17}$ cm)$^2$ region centred on the centre of mass of the most massive clump at $t=200$ kyr. In the two cuts that are shown, we see that the region with densities higher than the $\rho_c=3\times 10^{-19}$ g cm$^{-3}$ density cutoff (which corresponds to a number density of $1.4\times 10^5$ cm$^{-3}$) has a number of density maxima, none of which coincides with the centre of mass of the structure.
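The Jeans mass formula above is easy to evaluate numerically. A short sketch (cgs units; the sound speed below is an illustrative assumption, not the value given in Sect. 2):

```python
import math

G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33  # solar mass [g]

def jeans_mass(rho, c_s):
    """Jeans mass M_J = (pi/6) rho (pi c_s^2 / (G rho))^(3/2),
    with rho in g cm^-3 and c_s in cm s^-1; returns grams."""
    return math.pi / 6.0 * rho * (math.pi * c_s**2 / (G * rho))**1.5

# Illustrative values: the cutoff density used in the text, and an
# assumed cold-gas sound speed of 0.2 km/s
rho_c = 3e-18  # [g cm^-3]
c_s = 2e4      # [cm s^-1]
mj_solar = jeans_mass(rho_c, c_s) / M_SUN
```

With these numbers $M_J\approx 0.4$ M$_\odot$, so clumps of a few tenths of a solar mass and above at this density are plausibly Jeans unstable, in line with the discussion of the most massive clump.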
The $xz$-plane (bottom panel) shows the velocity field that gives rise to the angular momentum of the clump. Conclusions =========== We have presented the results of numerical simulations of a neutral structure with power-law density perturbations that is photoevaporated by an incident, plane-parallel, ionising photon field, with and without the self-gravity of the gas. In this interaction, a number of dense, neutral clumps are produced. Our simulations are similar to those presented by Gritschneder et al. (2009). The main difference is that while they started their simulations in a medium with turbulent motions, our simulations begin in a stationary medium with density perturbations. In our simulations, the velocity field that develops is therefore mainly the result of the interaction with the ionising photon field. Defining clumps as contiguous structures above a cutoff density $\rho_c$, we compute the statistics of the number of clumps as a function of elapsed time (for different values of $\rho_c$). We then fix the cutoff density at $\rho_c=3\times 10^{-18}$ g cm$^{-3}$, to focus on the denser clumps appearing at later elapsed times. For these clumps, we compute the vector angular momenta, from which we obtain the direction of the rotation axes (projected on the plane of the sky) and the specific angular momenta. We find that, with increasing evolutionary time, the orientations become increasingly aligned perpendicular to the direction of the incident, ionising photon field. We find that the most massive clump has a mass that increases across the range $\approx 0.6$-60 M$_\odot$ (during the $t=38\to 200$ kyr period), and that the orientation angle $\alpha$ of its angular momentum eventually stabilizes at $\alpha\approx 90^\circ$ (i. e., the direction perpendicular to the direction of the incident photon field). We use an estimate of the Jeans mass of the clump to show that it is Jeans unstable throughout the $t=38\to 200$ kyr period.
This timespan is long enough for a low mass star to form within the most massive clump. However, at the resolution of our simulation (with a grid spacing of $\approx 800$ AU), we naturally do not succeed in forming a star+disk system. If we analyse our non-gravitating simulation (see Fig. 1), we obtain qualitatively similar results. Regardless of whether we consider the self-gravity of the gas or not, we produce clumps with angular momenta preferentially aligned perpendicular to the direction of the incident ionising photon field. Even though we cannot provide a full explanation of this alignment, a qualitative explanation is possible. During the interaction of the ionising photon field with a perturbed density structure, a corrugated ionisation front is produced. This ionisation front pushes a shock into the neutral gas, producing a sheared velocity field that is preferentially aligned with the $x$-axis (i. e., with the direction of the ionising photon field). This sheared velocity field eventually produces vortical motions that are perpendicular to both the $x$-axis and the direction of the shear. This motion is seen in the $xz$-plane velocity field around the most massive clump in the $t=200$ kyr frame shown in the bottom panel of Fig. 7. We have shown that the dense clumps that form as the result of the photoevaporation of a dense, neutral structure in the ISM have angular momenta preferentially aligned in a direction perpendicular to the external ionising photon field. This result provides a natural explanation of the orientations observed in the outflows of Bally & Reipurth (2003), Smith, Bally & Brooks (2004), Rosado et al. (1999), and Yusef-Zadeh et al. (2005), which emerge from elephant trunks in directions approximately perpendicular to the body of the trunks.
Future observations of HH flows emerging from externally photoionised, neutral structures will show whether or not this kind of orientation is a general property of these outflows. We note again that we have simulated an ionisation front travelling into an initially steady, neutral medium with density perturbations. In this way, our simulations follow the dynamics produced by the propagating ionisation front and associated shock waves, which result in the production of clumps with angular momenta preferentially aligned perpendicular to the direction of the ionising photon source. In the real ISM, a medium with density perturbations also has associated motions, and an initial vorticity field that will influence the angular momenta of clumps that might form (e. g., in the interaction with an ionisation front). If the initial vorticity field is strong enough, it will probably hide the effect of the vorticity generated by the shocks associated with the ionisation front, and the angular momentum alignment effect described in this paper will not be present. An evaluation of whether or not the vorticity generated by the ionisation front will be hidden by the initial vorticity field of the cloud (present before the perturbations associated with the approaching ionisation front) can be made on the basis of observations of the rotation of dense clumps in molecular clouds. For example, Ohashi et al. (1997) observed the kinematics of a number of NH$_3$ cores in IRAS 04169+2702 and computed their specific angular momenta. They find that cores with radii in the $0.02\to 0.1$ pc range have specific angular momenta $L/M\sim (0.3\to 3)\times 10^{11}$ km$^2$s$^{-1}$ (clumps with larger radii having specific angular momenta up to an order of magnitude higher for a $\sim 1$ pc clump radius). In our simulation, the clumps with radii in the $0.02\to 0.1$ pc range (corresponding to clump masses in the $0.1\to 100$ M$_\odot$ range, see Sect.
3), have angular momenta $L/M\sim (0.4\to 20)\times 10^{11}$ km$^2$s$^{-1}$ (see Fig. 5). Therefore, our clumps have angular momenta with values ranging from the lower $L/M$ values of the cores observed by Ohashi et al. (1997), up to a factor of $\sim 10$ higher than the observed values. This result indicates that if the initial specific angular momentum of the structure in the cloud were comparable to that of IRAS 04169+2702, the passage of an ionisation front would generate clumps of considerably higher specific angular momentum, and therefore the angular momentum alignment effect described in this paper would indeed be present (at least for the more massive, higher angular momentum clumps). As a final point, we note that in the simulations presented in this paper we consider only the photoionisation of a neutral structure. In the case of the interaction of the radiation of an O star with a molecular cloud, it is unavoidable that the region outside the ionisation front will be affected by the FUV radiation from the star, which at least partially photodissociates the initially molecular material. Gorti & Hollenbach (2002) computed models of the photodissociation of dense clumps, and concluded that clumps with central column densities $<2\times 10^{22}$ cm$^{-2}$ (for an assumed cold-to-dissociated gas sound speed ratio of $\sim 1/3$) will be rapidly photodissociated, and disappear as local density enhancements. In our simulations, the clumps that are produced have central column densities of $\sim (4,9,23,47)\times 10^{22}$ cm$^{-2}$ for clump masses of 0.01, 0.1, 1, and 100 M$_\odot$, respectively (these central column densities are estimated by multiplying the clump radii given in Sect. 3 by the cutoff density of $\sim 1.5\times 10^6$ cm$^{-3}$). Therefore, in all cases the clumps have high enough column densities to avoid their dissipation by the incident FUV field.
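The column density estimate just described, $N\sim R_c\,n_c$, is simple enough to reproduce directly from the clump radii of Sect. 3 (a sketch):

```python
PC = 3.086e18  # one parsec in cm

def column_density(r_pc, n_cutoff):
    """Central column density estimate N ~ R * n, in cm^-2,
    for a clump of radius r_pc (pc) at number density n_cutoff (cm^-3)."""
    return r_pc * PC * n_cutoff

# Clump radii from Sect. 3 and the cutoff number density ~1.5e6 cm^-3
n_c = 1.5e6
cols = [column_density(r, n_c) for r in (0.01, 0.02, 0.05, 0.1)]
```

This reproduces the quoted values of $\sim(4,9,23,47)\times 10^{22}$ cm$^{-2}$, all safely above the $2\times 10^{22}$ cm$^{-2}$ photodissociation threshold of Gorti & Hollenbach (2002).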
From the results of Gorti & Hollenbach (2002), we therefore conclude that the photodissociation caused by the FUV field will not destroy the clumps produced in our simulations. However, the early evolution of the flow (in which high density structures have not yet formed) might indeed be modified by the presence of a FUV field. It will therefore be interesting to carry out a future exploration of the formation of clumps within elephant trunks in the presence of both a photodissociating and a photoionising photon field. We acknowledge support from the CONACyT grant 61547. VL acknowledges the CONACyT scholarship 194595. We thank an anonymous referee for helpful suggestions. We thank Malcolm Walmsley for pointing out that the observations of angular momenta of cores are relevant for the present work (giving rise to the four last paragraphs of Sect. 4).

[99]{}

Bally, J., Reipurth, B., Davis, C. J. 2007, Protostars and Planets V, eds. B. Reipurth, D. Jewitt and K. Keil (Univ. of Arizona Press), p. 215-230

Bally, J., Reipurth, B. 2003, AJ, 126, 893

Esquivel, A., Lazarian, A., Pogosyan, D., Cho, J. 2003, MNRAS, 342, 325

Esquivel, A., Raga, A. C. 2007, MNRAS, 377, 383 (E07)

Gahm, G. F., Carlqvist, P., Johansson, L. B., Nikolić, S. 2006, A&A, 454, 201

Gorti, U., Hollenbach, D. 2002, ApJ, 573, 215

Gritschneder, M., Naab, T., Walch, S., Burkert, A., Heitsch, F. 2009, ApJ, 694, L26

Mellema, G., Arthur, S. J., Henney, W. J. et al. 2006, ApJ, 647, 397

Ohashi, N., Hayashi, M., Ho, P. T. P., Momose, M., Tamura, M., Hirano, N., Sargent, A. 1997, ApJ, 488, 317

Raga, A. C., Henney, W., Vasconcelos, J., Cerqueira, A., Esquivel, A., Rodríguez-González, A. 2008, MNRAS, in press

Reach, W. T., Faied, D., Rho, J., Boogert, A., Tappe, A., Jarrett, T., Morris, P., Cambrésy, L., Palla, F., Valdettaro, R. 2009, ApJ, 690, 683

Rosado, M., Esteban, C., Lefloch, B., Cernicharo, J., García López, R. J. 1999, AJ, 118, 2962

Smith, N., Bally, J., Brooks, K. 2004, AJ, 127, 2793

Ulrich, R. K. 1976, ApJ, 210, 377

Yusef-Zadeh, F., Biretta, J., Wardle, M. 2005, ApJ, 624, 246

\[lastpage\]
--- abstract: 'In this work, we present an application of domain randomization and generative adversarial networks (GAN) to train a near real-time object detector for industrial electric parts, entirely in a simulated environment. Labelled real-world data is typically scarce and difficult to obtain at scale in many industrial settings. Here, only a few hundred unlabelled real images are used to train a Cyclic-GAN network, in combination with various degrees of domain randomization. We demonstrate that this enables robust translation of synthetic images to the real world domain. We show that a combination of the original synthetic (simulation) and GAN translated images, when used to train a Mask-RCNN object detection network, achieves greater than 0.95 mean average precision in detecting and classifying a collection of industrial electric parts. We evaluate the performance across different combinations of training data.' author: - | Fernando Camaro Nogues, Andrew Huie, Sakyasingha Dasgupta\ Ascent Robotics, Inc. Japan\ [{fernando, andrew, sakya}@ascent.ai]{} bibliography: - 'egbib.bib' title: Object Detection using Domain Randomization and Generative Adversarial Refinement of Synthetic Images --- Introduction ============ Successful applications of deep learning require large amounts of manually annotated data, which can be prohibitively expensive for many applications, even when starting from a pre-trained model in another domain and only fine-tuning in the target domain. An effective way to eliminate this annotation cost is to train the model within a simulated environment where the annotations can also be generated automatically. However, the problem with this approach is that the generated samples (in our case images) may not follow the same distribution as the real domain, resulting in what is known as the reality-gap. Several approaches exist that try to reduce this apparent gap.
One such method is domain randomization ([@domain_rand], [@sadeghi2017]). Here, several rendering parameters of the scene are randomized, such as object colors, textures, and lights, effectively exposing the model to a very wide distribution during training, so that the real distribution appears as just one variation within it. Another approach that directly tries to minimize this reality-gap is to refine the synthetic images so that they look more realistic. One possible way to build such a refiner is by using a generative adversarial training framework [@Shrivastava2017]. An alternative and more indirect approach to reduce the negative effect of this reality-gap is to again use the GAN framework, but in this case directly on the features of some of the last layers of the network being trained for the specific target task [@Ganin2016].\ In this work we present an experimental use case of an object detector in a real industrial application setting, which is trained with different combinations of synthetic images and refined synthetic images (synthetic images refined to look more realistic). We evaluate our method robustly across various combinations of training data. Synthetic Image Generation with Domain Randomization ==================================================== The architecture to produce the synthetic images for our experiments is composed of two main parts. First, the physics simulation engine Bullet[^1] is used to place the objects in a physically consistent configuration after letting them fall from a random position. Second, the ray-tracing rendering library POV-Ray[^2] is used to render an image based on this configuration. In POV-Ray we introduce domain randomization by randomizing several parameters, namely, the number of lights and their color, the color and texture of each part of the target objects and the scene floor plane, as well as the camera position.
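The sampling of one randomized scene configuration can be sketched as follows. This is a minimal illustration, not the paper's actual code: the number of lights and the camera-prism dimensions (taken from the camera description in the next section) are the only ranges stated in the text, and the remaining ranges are assumptions.

```python
import random

def sample_scene_params(n_object_parts, rng=random):
    """Draw one randomized rendering configuration: lights, per-part
    colors, floor color, and camera position. All numeric ranges are
    illustrative assumptions except the camera prism (see text)."""
    n_lights = rng.randint(1, 4)  # assumed range
    return {
        # RGB color per light, each channel in [0, 1]
        "lights": [[rng.random() for _ in range(3)] for _ in range(n_lights)],
        # one randomized color per part of the target objects
        "part_colors": [[rng.random() for _ in range(3)]
                        for _ in range(n_object_parts)],
        "floor_color": [rng.random() for _ in range(3)],
        # camera: uniform in a rectangular prism 10 cm above the floor,
        # with a 20 cm x 20 cm base and 10 cm of height; the renderer
        # then points it at the origin with no roll
        "camera_pos": (rng.uniform(-10.0, 10.0),   # x [cm]
                       rng.uniform(-10.0, 10.0),   # y [cm]
                       rng.uniform(10.0, 20.0)),   # z [cm]
    }
```

Each training image would be rendered from one such draw, so the detector never sees the same scene configuration twice.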
The camera position is drawn from a uniform distribution in a rectangular prism that is 10 cm above the floor plane, with a square base of side 20 cm and a height of 10 cm. Although the camera location was drawn uniformly, the camera always pointed at the global coordinate origin with no roll angle. This variation of the camera position was intended to achieve robustness against different positions of the camera in the real world. Refinement of synthetic images by adversarial training ====================================================== An alternative way we consider to reduce the reality-gap is to use the GAN framework to refine the synthetic images to look more realistic. Here, we selected the Cyclic-GAN [@cyclicgan] architecture since it only requires two sets of unpaired examples, one for each domain, the synthetic and the real one. The original synthetic images of size 1024x768 were too large for the training of our Cyclic-GAN model; as such, instead of resizing the images, we opted for training on random crops of size 256x256. This way we can train at the original pixel density and exploit the fact that our generators are fully convolutional networks, such that during the inference phase we can still input the original full-size image. ![Left: example of synthetic image. Right: corresponding synthetic image after translation to real domain. The USB socket has gained a more realistic reflection, and the switch has gained a realistic surface texture and color.[]{data-label="fig:sim2real"}](myfiles/sim.png "fig:"){width="0.45\linewidth"} ![Left: example of synthetic image. Right: corresponding synthetic image after translation to real domain.
The USB socket has gained a more realistic reflection, and the switch has gained a realistic surface texture and color.[]{data-label="fig:sim2real"}](myfiles/sim2real.png "fig:"){width="0.45\linewidth"} \ We notice that after training, one particular target object lost its color and turned gray, while the remaining objects were refined in a realistic manner without losing their original color. We think that this was mainly due to the particular architecture of the discriminators. The discriminator model's final layer consisted of a spatial grid of discriminator neurons whose receptive field with respect to the input image was too small to capture that object. To solve this, we added more convolutional layers to the discriminator models, which effectively increased the receptive field size. Furthermore, instead of substituting one grid of discriminators for another, we preferred to maintain both: one with a small receptive field, intended to discriminate details of the objects, and another with a large receptive field that can assess the objects as a whole (Fig. \[fig:2disclayers\] in Appendix). The final loss was computed as the mean of all individual discriminator units over both of these two layers. This small modification enabled us to maintain the color of all the objects. The Cyclic-GAN model was trained using 10K synthetic images and 256 real images. Fig. \[fig:sim2real\] shows an example of the resulting image with our model that translates from the synthetic domain to the real domain; see Fig. \[fig:moresim2real\] in Appendix for more examples. Experiments =========== In this section we compare different combinations of training data and their impact on the mAP for object detection with a Mask-RCNN model [@maskrcnn]. As a test dataset we have used 100 real images.\ The different types of datasets used for training were: $S_{fix}$: synthetic images with fixed object colors, no texture, and a white background.
$S_{fix\ \rightarrow real}$: translated images from $S_{fix}$ to the real domain. $S_{rand-tex}$: synthetic images with objects and background with randomized colors but without texture. $S_{rand+tex}$: synthetic images with objects and background with randomized colors and texture. See Fig. \[fig:training\_arch\] in the appendix for a general overview of the training architecture and Fig. \[fig:typesimages\] for some examples of the different types of images employed.\ The target objects to be detected consisted of 12 tiny electronic parts for which accurate 3D CAD models were available (Fig. \[fig:parts\]). In all the experiments we used 10K training samples, the same number of training iterations and the same hyperparameters. The object detection performance for the different combinations of datasets used in the experiments is presented in Table \[table:results\]. Using a training set made purely of one type of data resulted in a mAP below 0.9 in most cases, with the exception of the case with $S_{rand+tex}$. Overall, the best detection results were obtained when the refined synthetic image set ($S_{fix\rightarrow real}$) was combined with high variation randomized data ($S_{rand+tex}$). The results indicate that neither domain randomization nor GAN-based refinement is sufficient on its own. In combination, they reduce the reality-gap effectively, resulting in a significant boost in performance (*see the real-time object detection video at <https://youtu.be/Q-WeXSSnZ0U>*). Refer to Fig. \[fig:training\_curves\] for the training curves associated with the different experiments, and to Fig. \[fig:detection\_result\] for some detection result images.
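The 0.5 IoU threshold used for the mAP metric decides whether a predicted box counts as a true positive. A minimal sketch of that matching criterion (our illustration, not the evaluation code used in the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, gt, thresh=0.5):
    """A detection matches a ground-truth box when IoU >= thresh."""
    return iou(pred, gt) >= thresh
```

Average precision is then computed per class from the precision-recall curve over detections ranked by confidence, and mAP is the mean over the 12 part classes.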
  Training data                                          mAP (0.5 IoU)
  ------------------------------------------------------ ---------------
  100% $S_{fix}$                                         0.812
  100% $S_{fix\rightarrow real}$                         0.874
  100% $S_{rand-tex}$                                    0.867
  100% $S_{rand+tex}$                                    0.911
  20% $S_{fix}$ and 80% $S_{rand+tex}$                   0.914
  20% $S_{fix\rightarrow real}$ and 80% $S_{rand+tex}$   0.955
  50% $S_{fix\rightarrow real}$ and 50% $S_{rand+tex}$   0.950

  : Performance of the Mask-RCNN network for the different training datasets.[]{data-label="table:results"}

[9]{}

Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel. *Domain Randomization for Transferring Deep Neural Networks from Simulation*. In the proceedings of the 30th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, October 2017.

Sadeghi, Fereshteh and Levine, Sergey. *CAD2RL: Real Single-Image Flight without a Single Real Image*. Robotics: Science and Systems (RSS), 2017.

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang and Russell Webb. *Learning from Simulated and Unsupervised Images through Adversarial Training*. 2017 [IEEE]{} Conference on Computer Vision and Pattern Recognition, [CVPR]{} 2017, Honolulu, HI, USA, July 21-26, 2017.

Ganin, Yaroslav et al. *Domain-adversarial Training of Neural Networks*. The Journal of Machine Learning Research, 2016.

Kaiming He, Georgia Gkioxari, Piotr Doll[á]{}r and Ross B. Girshick. *Mask R-CNN*. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.

Jun[-]{}Yan Zhu, Taesung Park, Phillip Isola and Alexei A. Efros. *Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks*. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.

Appendix {#appendix .unnumbered} ======== In Fig. \[fig:training\_arch\] we provide a schematic overview of the object detection training data generation pipeline.
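The receptive-field growth obtained by adding convolutional layers to the discriminators (Sect. 3) follows standard convolution arithmetic: each layer with kernel $k$ adds $(k-1)$ times the accumulated stride. A sketch, with an illustrative PatchGAN-style layer stack rather than the paper's exact architecture:

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output unit of a stack
    of conv layers, each given as a (kernel_size, stride) pair."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s             # accumulated stride between output units
    return rf

# Illustrative 70x70-PatchGAN-style stack: kernel 4, three stride-2
# layers followed by two stride-1 layers
base = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```

Here `receptive_field(base)` is 70, and appending a single extra `(4, 1)` layer already grows it to 94, which illustrates why a few added layers suffice to cover a whole object.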
![General architecture for training the object detector.[]{data-label="fig:training_arch"}](myfiles/general_architecture_final.pdf) ![Discriminator network with two grid layers of discriminator cells, one with small receptive field and the other with bigger receptive field.[]{data-label="fig:2disclayers"}](myfiles/2disc.pdf){width="1\linewidth"} ![image](myfiles/maskrcnn/all_experiments_training_loss.pdf){width="0.8\linewidth"} ![image](myfiles/example_of_detections.png){width="0.8\linewidth"} [^1]: https://pybullet.org/wordpress/ [^2]: http://www.povray.org/
--- abstract: 'We compute the complete $RO(G)$-graded coefficients of “ordinary" cohomology with coefficients in $\Z/2$ for $G=(\Z/2)^n$.' author: - John Holler and Igor Kriz title: 'On $RO(G)$-graded equivariant “ordinary" cohomology where $G$ is a power of $\Z/2$' --- Introduction ============ The notion of a cohomology theory graded by elements of the real representation ring ($RO(G)$-graded cohomology) is a key concept of equivariant stable homotopy theory of a finite or compact Lie group $G$. Like much of stable homotopy theory, perhaps one of the first known examples was K-theory. Atiyah and Singer [@as] introduced equivariant K-theory for a compact Lie group $G$ and proved that it is naturally $RO(G)$-graded. In fact, Bott periodicity identifies many of the “dimensions" in $RO(G)$, and relates others to “twistings" (see Karoubi [@kar] and, for a more recent treatment, Freed, Hopkins and Teleman [@fht]). Pioneered by Adams and Greenlees [@green], the general $RO(G)$-graded stable homotopy theory found firm foundations in the fundamental book of Lewis, May and Steinberger [@lms]. Despite the clear importance of the concept, beyond K-theory, calculations of $RO(G)$-graded cohomology are few and far between. Perhaps the most striking case is “ordinary" $RO(G)$-graded cohomology. Bredon [@bredon] discovered $\Z$-graded $G$-equivariant cohomology associated with a [*coefficient system*]{}, which is “ordinary" in the sense that the cohomology of a point is concentrated in a single dimension. It was later discovered ([@lmm]) that such a theory becomes $RO(G)$-graded when the coefficient system enjoys the structure of a [*Mackey functor*]{} [@dress], which means that it allows building in an appropriate concept of [*transfer*]{}. Strikingly, the $RO(G)$-graded coefficients were not known in any single non-trivial case.
Complete calculations of $RO(\Z/2)$-graded coefficients, however, are important in Real-oriented stable homotopy theory, because they exhibit the analogy with the complex-oriented case. Real orientation was, once again, discovered first by Atiyah in the case of $K$-theory [@atiyah], and was subsequently extended to cobordism by Landweber [@land]. $RO(\Z/2)$-graded cohomology with coefficients in the Burnside ring Mackey functor was calculated by Stong [@sgl]. A systematic pursuit of Real-oriented homotopy theory was started by Araki [@araki], and developed further by Hu and Kriz [@hk] with many calculations, including a complete calculation of the $RO(G)$-graded coefficients of Landweber’s Real cobordism spectrum. In the process, [@hk] also calculated the $RO(\Z/2)$-graded ordinary cohomology of the “constant" Mackey functors $\Z$ and $\Z/2$ (i.e. the Mackey functors uniquely extending the constant coefficient systems). A major development was the work of Hill, Hopkins and Ravenel [@hhr], who partially extended the calculations of [@hk] to $\Z/(2^k)$ (with special interest in $k=3$), and applied this to solving the Kervaire invariant problem by showing the non-existence of manifolds of Kervaire invariant $1$ in dimensions $>126$. A still more complete calculation of $RO(G)$-graded ordinary equivariant cohomology of the constant Mackey functors for $G=\Z/(2^k)$ was more recently given in [@hkem]. Still, no calculations of $RO(G)$-graded cohomology beyond K-theory were known for groups other than primary cyclic groups. In a spin-off [@hkherm] of their joint solution with Ormsby [@hko] of Thomason’s homotopy limit problem for Hermitian K-theory, Hu and Kriz computed the $RO(G)$-graded coefficients of topological Hermitian cobordism, which has $G=\Z/2\times \Z/2$. However, this is a rather special case, where many periodicities occur.
The purpose of the present paper is to calculate the $RO(G)$-graded coefficients of the ordinary equivariant cohomology of the “constant" $\Z/2$ Mackey functor for $G=(\Z/2)^n$. There are several reasons to focus on this case. The group $(\Z/2)^n$ has an exceptionally simply described real representation ring, thus eliminating the need to handle representation-theoretical exceptions such as distinguishing between real and complex (let alone quaternionic) representations. The coefficients $\Z/2$ are more convenient than $\Z$, since they eliminate the need to consider extensions. Despite all this, the complete answer is complicated, however, and in general, we are only able to present it in the form of the cohomology of an $n$-stage chain complex. Our method is based on [*isotropy separation*]{}, a term coined by Greenlees and May [@gmsur] to mean considering separately the contributions of subgroups of $G$. An isotropy separation spectral sequence was developed in [@abk], but we use a different spectral sequence here. The reason is that in [@abk], we are not concerned with $RO(G)$-graded coefficients, but rather with computing the complete $\Z$-graded coefficients of equivariant complex cobordism of a finite abelian group $G$ as a ring. Generalizing the method of [@kriz] for $G=\Z/p$, one can, in the case of $\Z$-graded equivariant complex cobordism, set up a spectral sequence of rings which collapses to $E^2$ in a single filtration degree. This means that the complete ring structure can be recovered, which is a special property of complex cobordism. It is worth mentioning that the spectral sequence of [@abk] contains many “completed" (i.e., for example, uncountable) terms. The case of ordinary $RO(G)$-graded equivariant cohomology is quite different, however, in that the spectral sequence fails to collapse to a single degree.
Even for $G=\Z/p$, we observe that a part of the coefficients are in filtration degree $0$ and a part in filtration degree $-1$ (graded homologically). This caused us to give up, at least for now, calculating the complete ring structure, and use a spectral sequence which is more amenable to calculations instead. Another key ingredient in our computation is the concept of [*geometric fixed points*]{} of an $RO(G)$-graded equivariant cohomology theory. This concept was introduced (using a different terminology) by tom Dieck [@td], who calculated the geometric fixed points of equivariant complex cobordism. As far as we know, the term geometric fixed points was coined by Greenlees and May, and is recorded in [@lms]. Unlike actual fixed points, the geometric fixed point coefficients are [*periodic*]{} with respect to all non-trivial irreducible real representations of $G$. Thus, instead of $RO(G)$-graded, the geometric fixed points are, again, only $\Z$-graded. This is a big advantage in expressing the answer. Note that the ring $RO((\Z/2)^n)$ is huge: it is the free abelian group on $2^n$ generators! On the downside, the term “geometric" fails to carry the expected implications in the case of ordinary equivariant cohomology: we know of no geometry that would help calculating them. Still, in the case $G=(\Z/2)^n$, a complete calculation of the geometric fixed point ring of $H\Z/2$ is possible using spectral sequence methods. This is our Theorem \[t1\]. The main method of this paper is, basically, setting up another spectral sequence which enables the calculation of the coefficients of $H\Z/2_{(\Z/2)^n}$ by investigating how they differ from the coefficients of the geometric fixed points. There results a spectral sequence, which, in a fairly substantial range of $RO(G)$-graded dimensions, collapses to $E^2$ in degree $0$. 
More precisely, the range is, graded homologically, suspensions by elements of $RO(G)$ where summands of non-trivial irreducible representations occur with [*non-positive*]{} coefficients. Alternatively, graded cohomologically, this is the range of suspensions by actual representations, possibly minus a trivial representation. (As it turns out, however, in this case, when the trivial representation has a negative coefficient, the cohomology group is $0$.) In this case, we can recover the complete ring structure, since the ring embeds into the ring of geometric fixed points tensored with $RO(G)$. We also have a nice concise formula for the Poincaré series in this case (Theorem \[t2\]). In the case of completely general $RO(G)$-dimension with $G=(\Z/2)^n$, we are only able to give a spectral sequence in $n$ filtration degrees, which collapses to $E^2$ and calculates the $RO(G)$-graded coefficient group of $H\Z/2_G$. Thus, this gives an algebraically defined chain complex whose homology are the desired groups (Theorem \[t3\]). We give an example of a complete calculation of the Poincaré series of the $RO(G)$-graded coefficients of $H\Z/2_{\Z/2\times\Z/2}$ (the case $n=2$), which clearly shows that the answer gets complicated, and additional complications arise for $n\geq 3$. The present paper is organized as follows: In Section \[s2\], we introduce the necessary conventions and notation. In Section \[s3\], we compute the geometric fixed points. In Section \[s4\], we compute the coefficients in dimensions involving elements of $RO(G)$ where non-trivial irreducible representations have non-positive coefficients (graded homologically). In Section \[s5\], we calculate the chain complex computing the complete $RO(G)$-graded coefficients of $H\Z/2_G$ for $G=(\Z/2)^n$. In Section \[s6\], we treat the example of $n=2$. The authors apologize to the readers for not stating their theorems in the Introduction.
Even in the prettiest cases, the theorems involve quite a lot of notation and technical prerequisites. We prefer to state them properly in the text. [**Recent developments: Odd primes, and hyperplane arrangements.**]{} While this paper was under review, several developments took place. A generalization of the present result to $(\Z/p)^n$ for $p$ an odd prime was found by Holler. The authors also found out that the ring described in Theorem \[t1\] is a previously known object in algebraic geometry, related to a certain compactification of complements of hyperplane arrangements referred to as [*the reciprocal plane*]{}. More concretely, for a set $S=\{z_\alpha\}$ of equations of hyperplanes through $0$ in an affine space $Spec(F[u_1,\dots,u_n])$ over a field $F$, one considers the subring $R_S$ of [$$\label{efff1}(\prod_{\alpha\in S} z_\alpha)^{-1}F[u_1,\dots,u_n]$$]{} generated by the elements $z_\alpha^{-1}$ (which correspond to our elements $x_\alpha$). The ring was first described by Terao [@terao], and a particularly nice presentation was found by Proudfoot and Speyer [@ps]. In the case of an odd prime $p$, one deals analogously with the subring $\Xi_S$ of [$$\label{efff2}(\prod_{\alpha\in S} z_\alpha)^{-1}F[u_1,\dots,u_n]\otimes_F \Lambda(du_1,\dots,du_n)$$]{} generated by $z_\alpha^{-1}$ and $d\log(z_\alpha)$, which are topologically in dimensions $2$ and $1$, respectively. The analogues of the constructions of [@ps; @terao] in this graded-commutative case, and the reciprocal plane compactification, were recently worked out by S. Kriz [@sk]. Our emphasis is quite different from that of the authors of [@ps; @terao], who, doing classical algebraic geometry, were mostly interested in characteristic $0$. Their arguments, however, work in general. The ring described in Theorem \[t1\] (and its $\Z/p$ analogue discovered by Holler, i.e.
the geometric fixed point ring of $H\Z/p_{G}$ where $G=(\Z/p)^n$) is related to the hyperplane arrangement of [*all*]{} hyperplanes through $0$ in the $n$-dimensional affine space over $\Z/p$. It follows, however, from the description of [@ps; @terao; @sk] that for a subset $S^\prime$ of a hyperplane arrangement $S$, the ring $R_{S^\prime}$ (resp. $\Xi_{S^\prime}$) is a subring of $R_{S}$ (resp. $\Xi_{S}$). It follows in turn that for [*every*]{} hyperplane arrangement in $G=(\Z/p)^n$, the $\Z$-graded part of the coefficient ring of the spectrum $$\bigwedge_{\alpha\in S} S^{\infty\alpha}\wedge H\Z/p_G$$ is $R_S$ for $p=2$, and $\Xi_S$ for an odd prime $p$. Conventions and notation {#s2} ======================== Throughout this paper, let $G=(\Z/2)^n$. Then the real representation ring of $G$ is canonically identified as $$RO(G)=\Z[G^*]$$ where $G^*=Hom(G,\Z/2)$. Recall [@lms] that for $H\subseteq G$, we have the family $\mathcal{F}[H]$ consisting of all subgroups $K\subset G$ with $H\nsubseteq K$. (In the case of $H=G$, $\mathcal{F}[G]$ is simply the family $\mathcal{P}$ of proper subgroups of $G$.) Recall further that for any family $\mathcal{F}$ (a set of subgroups of $G$ closed under subconjugation, which is the same as closed under subgroups, as $G$ is commutative), we have a cofibration sequence $$E\mathcal{F}_+\r S^0 \r \widetilde{E\mathcal{F}}$$ where $E\mathcal{F}$ is a $G$-CW-complex whose $K$-fixed point set is contractible when $K\in\mathcal{F}$ and empty otherwise. For our choice of $G$, we may then choose a model [$$\label{egeom+}\widetilde{E\mathcal{F}[H]}=\bigwedge_{\alpha\in G^*, \;\alpha|H\neq 0}S^{\infty\alpha}.$$]{} Here $S^{\infty\alpha}$ is the direct limit of $S^{n\alpha}$ with respect to the inclusions [$$\label{egeom*}S^0\r S^\alpha$$]{} given by sending the non-base point to $0$. The other construction we use is the family $\mathcal{F}(H)$ of all subgroups of a subgroup $H\subseteq G$.
We will write simply $$EG/H=E\mathcal{F}(H).$$ The cardinality of a finite set $S$ will be denoted by $|S|$. We will also adopt a convention from [@hk] where, for an $RO(G)$-graded spectrum $E$, $E_*$ denotes the $\Z$-indexed coefficients (=homotopy groups) of $E$, while the $RO(G)$-indexed coefficients will be denoted by $E_\star$. As is customary, we will also denote by $S(V)$ the unit sphere of a representation $V$, while by $S^V$ we denote the $1$-point compactification of $V$. The $RO(G)$-graded dimension of a homogeneous element $x\in E_\star$ will be denoted by $|x|$. The geometric fixed points {#s3} ========================== In this section, we compute the coefficients of the geometric fixed point spectrum $\Phi^GH\Z/2$. We have [$$\label{egeom1+}\Phi^GH\Z/2=(\widetilde{E\mathcal{F}[G]}\wedge H\Z/2)^G.$$]{} By [(\[egeom+\])]{}, suspension of $H\Z/2$ by any non-trivial irreducible real representation of $G$ gives an isomorphism on coefficients, so the coefficients $(\Phi^G\Sigma^?H\Z/2)_*$ are only $\Z$-graded, not $RO(G)$-graded. More specifically, we have a cofibration sequence [$$\label{egeom++}EG/Ker(\alpha)_+\r S^0\r S^{\infty\alpha},$$]{} so smashing over all non-trivial $1$-dimensional representations $\alpha$, using [(\[egeom+\])]{}, we may represent $$\widetilde{E\mathcal{F}[G]}\wedge H\Z/2$$ as the iterated cofiber of a $2^n-1$-dimensional cube of the form [$$\label{egeom1}H\Z/2\wedge\bigwedge_{0\neq \alpha\in G^*}(EG/Ker(\alpha)_+\r S^0).$$]{} Taking coefficients in [(\[egeom1\])]{} then gives a spectral sequence converging to $\Phi^GH\Z/2_*$. Now also note that [$$\label{egeom2}EG/H_1\times\dots \times EG/H_k\simeq EG/(H_1\cap\dots\cap H_k).$$]{} From this, we can calculate the spectral sequence associated with the iterated cofiber of the cube [(\[egeom1\])]{}. Let us grade the spectral sequence homologically, so the term $H\Z/2_*=\Z/2$ is in $E^{1}_{0,0}$. 
The rest of the $E^1$-term is then given as [$$\label{egeom2a}E^{1}_{p,*}=\bigoplus_{S\in \mathcal{S}_p} Sym_{\Z/2}((G/\cap\{Ker (\alpha)\mid \alpha\in S\})^*)\cdot y_S$$]{} where $\mathcal{S}_p$ is the set of all subsets of $G^*\smallsetminus\{0\}$ of cardinality $p$. (The last factor $y_S$ of [(\[egeom2a\])]{} is only a generator written to distinguish the summands.) Now the $E^2$-term can also be calculated using the following \[l1\] Consider the differential $\partial$ on $$Q_n=\Z/2\{y_S\mid S\subseteq (\Z/2)^n\smallsetminus \{0\}\}$$ given by [$$\label{epartial}\partial(y_S)=\sum_{s\in S,\;\langle {S\smallsetminus\{s\}}\rangle=\langle S\rangle} y_{S\smallsetminus \{s\}}.$$]{} Then the homology is the $\Z/2$-vector space (freely) generated by a set $F_n$ described inductively as follows: $$F_1=\{y_\emptyset, y_{\{(1)\}}\},$$ $$F_n=F_{n-1}\cup \{y_{S\cup \{x\}}\mid S\in F_{n-1}, x\in (\Z/2)^{n-1}\times\{1\}\}.$$ In other words, $F_n$ consists of the basis elements $y_S$ where $S$ are all the $\Z/2$-linearly independent (in $G^*$) subsets in (not necessarily reduced) row echelon form with respect to reversed order of columns (so the first pivot is in the last possible column etc.). Consider a differential on $Q_n$ given by [$$\label{eddd1}d(y_S)=\sum_{s\in S} y_{S\smallsetminus \{s\}}.$$]{} Then the homology is $0$ for $n>0$ and $\Z/2$ for $n=0$. Now consider an increasing filtration on $Q_n$ by making the filtration degree $\gamma(S)$ of a basis element $y_S$ equal to $rank\langle S\rangle$, the rank of the $\Z/2$-vector space generated by $S$. Then the $E^1$-term is what we are trying to calculate. On the other hand, in the answer $C=\Z/2(F_n)$ suggested in the statement of the Lemma (which, note, consists of elements of $E^1$), the formula for $d^1$ is the same as the formula [(\[eddd1\])]{} for $d$. 
We claim that [$$\label{epartial1}H_*(C,d)=0.$$]{} To see this, note that for any fixed non-empty set $S$ in row echelon form, the subcomplex $C_S$ generated by the $y_{S^\prime}$ for subsets $S^\prime\subseteq S$ is just a tensor product of copies of [$$\label{egeom2b}\diagram \Z/2\rto^\cong & \Z/2,\enddiagram$$]{} and hence satisfies $$H_*(C_S,d)=0.$$ On the other hand, $C$ for $n>0$ is a sum of the complexes $C_S$ where $S$ ranges over [*maximal*]{} linearly independent subsets of $(\Z/2)^n$ in row echelon form (i.e. those which have exactly $n$ elements), while the intersection of any subset of those complexes $$C_{S_1\cap\dots \cap S_k}=C_{S_1}\cap\dots\cap C_{S_k}$$ has zero homology because $$(1,0,\dots,0)\in S_1\cap\dots \cap S_k$$ and hence $S_1\cap\dots\cap S_k\neq \emptyset$. This implies [(\[epartial1\])]{}. Now the statement follows by induction on $n$ using comparison theorems for spectral sequences. More concretely, if we denote by $C^\prime\subset C$ the subcomplex generated by linearly independent subsets $S$ with $|S|<n$, and $Q^\prime\subset Q_n$ the subcomplex generated by sets $S$ which span a subspace of dimension $<n$, then the induction hypothesis (given that an intersection of vector subspaces is a vector subspace) shows that the embedding $C\subset Q_n$ restricts to a quasi-isomorphism [$$\label{eccc}C^\prime \subset Q^\prime.$$]{} Since the homologies of both $C$ and $Q_n$ are $0$, we see that the homomorphism on degree $n$ subcomplexes must induce an isomorphism on homology, thus implying that the degree $n$ part of the $E^1$ term of our spectral sequence for $Q_n$ is just the degree $n$ part of $C$ (which is, of course, isomorphic to $\Z/2$). Now by Lemma \[l1\], the $E^2$-term of the spectral sequence of the cube [(\[egeom1\])]{} is [$$\label{egeom2aa}E^2=\bigoplus_{S\in F_n}Sym_{\Z/2}((G/\cap\{Ker (\alpha)\mid \alpha\in S\})^*)\cdot y_S$$]{} (where we identify $G^*\cong (\Z/2)^n$).
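Lemma \[l1\] can also be confirmed by brute force for small $n$. The inductive description of $F_n$ gives $|F_n|=\prod_{i=1}^{n}(1+2^{i-1})$, and the following sketch (Python, not part of the argument; the helper names are ours) computes the dimensions of the homology of $(Q_n,\partial)$ directly from [(\[epartial\])]{}:

```python
from itertools import combinations, product

def span(vectors, n):
    # F_2-span: since char = 2, closing under translation by each
    # generator in turn produces the spanned subspace
    s = {(0,) * n}
    for v in vectors:
        s |= {tuple((a + b) % 2 for a, b in zip(u, v)) for u in s}
    return frozenset(s)

def rank_mod2(rows):
    # rank over F_2 of a matrix whose rows are integer bitmasks
    pivots = {}
    for r in rows:
        while r:
            b = r.bit_length() - 1
            if b in pivots:
                r ^= pivots[b]
            else:
                pivots[b] = r
                break
    return len(pivots)

def homology_dims(n):
    # dim over F_2 of H_p(Q_n, d), where d(y_S) is the sum of the
    # y_{S - {s}} over s in S with span(S - {s}) = span(S)
    nonzero = [v for v in product(range(2), repeat=n) if any(v)]
    basis = {p: [frozenset(S) for S in combinations(nonzero, p)]
             for p in range(len(nonzero) + 1)}
    index = {p: {S: i for i, S in enumerate(basis[p])} for p in basis}
    rank = {p: 0 for p in range(len(nonzero) + 2)}
    for p in range(1, len(nonzero) + 1):
        rows = []
        for S in basis[p]:
            row = 0
            for s in S:
                T = S - {s}
                if span(T, n) == span(S, n):
                    row ^= 1 << index[p - 1][T]
            rows.append(row)
        rank[p] = rank_mod2(rows)
    return [len(basis[p]) - rank[p] - rank[p + 1]
            for p in range(len(nonzero) + 1)]

# n = 2: six classes y_S with S in row echelon form,
# in homological degrees 0, 1, 2
print(homology_dims(2))  # -> [1, 3, 2, 0]
```

For $n=2$ the surviving classes are $y_\emptyset$; $y_{\{10\}}$, $y_{\{01\}}$, $y_{\{11\}}$; and $y_{\{10,01\}}$, $y_{\{10,11\}}$, matching the description of $F_2$.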
Now consider, for $0\neq \alpha:G\r\Z/2$, the map [$$\label{egeommap1}f_\alpha:\Phi^{G/Ker(\alpha)}H\Z/2_*=\Phi^{G/Ker(\alpha)}(H\Z/2)^{Ker(\alpha)}_{*} \r \Phi^G H\Z/2_*.$$]{} It is fairly obvious that for $n=1$ the spectral sequence associated with the ($1$-dimensional) cube [(\[egeom1\])]{} collapses to $E^1$ and that in fact [$$\label{egeommap2}\Phi^{G/Ker\alpha}H\Z/2_*=\Z/2[x_\alpha]$$]{} where in the spectral sequence, the element $x_\alpha$ has filtration degree $1$ and is represented by the set $\{(1)\}$ if we identify $G/Ker(\alpha)\cong \Z/2$. We will also denote the image under [(\[egeommap1\])]{} $$f_\alpha(x_\alpha)\in \Phi^G H\Z/2$$ by $x_\alpha$. \[t1\] We have [$$\label{ephi1}\begin{array}{l}\Phi^{G}_{*}H\Z/2 = \\ \Z/2[x_\alpha\mid \alpha\in G^*\smallsetminus \{0\}]/ (x_\alpha x_\beta+x_\alpha x_\gamma+x_\beta x_\gamma\mid \alpha+\beta+\gamma=0) \end{array}$$]{} where the classes $x_\alpha$ are in dimension $1$. Before proving the theorem, it is useful to record the following algebraic fact: \[p1\] Let $\{\alpha_1,\dots,\alpha_k\}$ be a minimal $\Z/2$-linearly dependent subset of $G^*\smallsetminus \{0\}$, $k\geq 3$. Then the ring $R_n$ on the right hand side of [(\[ephi1\])]{} satisfies [$$\label{esymm1}\sigma_{k-1}(x_{\alpha_1},\dots,x_{\alpha_k})=0.$$]{} (Here $\sigma_i$ denotes the $i$’th elementary symmetric polynomial.) We will proceed by induction on $k$. For $k=3$, this is by definition. Suppose $k>3$ and suppose the statement is true with $k$ replaced by $k-1$.
Compute in $R_n$, denoting $\beta=\alpha_{k-1}+\alpha_k$: [$$\label{ecomputrn}\begin{array}{l} \sigma_{k-1}(x_{\alpha_1},\dots,x_{\alpha_k})=\\ (x_{\alpha_k}+x_{\alpha_{k-1}})(x_{\alpha_1}\cdot\dots\cdot x_{\alpha_{k-2}})+x_{\alpha_k}x_{\alpha_{k-1}} \sigma_{k-3}(x_{\alpha_1},\dots,x_{\alpha_{k-2}})=\\ (x_{\alpha_k}+x_{\alpha_{k-1}})(x_{\alpha_1}\cdot\dots\cdot x_{\alpha_{k-2}}) +(x_{\alpha_k}+x_{\alpha_{k-1}})x_\beta \sigma_{k-3}(x_{\alpha_1},\dots,x_{\alpha_{k-2}})=\\ (x_{\alpha_k}+x_{\alpha_{k-1}})\sigma_{k-2}(x_\beta,x_{\alpha_1},\dots,x_{\alpha_{k-2}}) \end{array}$$]{} Now $\{\beta,\alpha_1,\dots,\alpha_{k-2}\}$ is also a minimal linearly dependent set (note that minimality is equivalent to the statement that $\alpha_1,\dots,\alpha_{k-1}$ are linearly independent and $\alpha_1+\dots +\alpha_k=0$). Therefore, the right hand side of [(\[ecomputrn\])]{} is $0$ in $R_n$ by the induction hypothesis. [*Proof of Theorem \[t1\]:*]{} We know that $\Phi^{G}_{*}H\Z/2 $ is a ring, since $\Phi^G H\Z/2$ is an $E_\infty$-ring spectrum. By [(\[egeommap1\])]{}, we know that the $x_\alpha$’s represent elements of $\Phi^{G}_{*}H\Z/2 $, and hence polynomials in the $x_\alpha$’s do as well. Now it is important to note that [(\[egeom2aa\])]{} is not a spectral sequence of rings. However, there are maps arising from smashing $n$ cubes [(\[egeom1\])]{} (over $H\Z/2$) for $n=1$, and from this, it is not difficult to deduce that for $S$ linearly independent, a monomial of the form [$$\label{emonom1}\prod_{s\in S}x_{s}^{r_s}, \; r_s\geq 1$$]{} is represented in [(\[egeom2aa\])]{} by [$$\label{emonom2}S\cdot \prod_{s\in S} x_{s}^{r_s-1}.$$]{} (Note that by Lemma \[l1\], for $S$ not linearly independent, [(\[emonom2\])]{} does not survive to $E^2$.) By Lemma \[l1\], we know that such elements generate the $E^2$-term as a $\Z/2$-module, so we have already proved that the spectral sequence associated with the cube [(\[egeom1\])]{} collapses to $E^2$. 
Now counting basis elements in filtration degree $2$ shows that $\Phi^{G}_{*}H\Z/2 $ must have a quadratic relation among the elements $x_\alpha$, $x_\beta$, $x_\gamma$ when $$\alpha+\beta+\gamma=0.$$ (It suffices to consider $n=2$.) The relation must be symmetric and homogeneous for reasons of dimensions, so the possible candidates for the relation are [$$\label{erel1}x_\alpha x_\beta +x_\alpha x_\gamma +x_\beta x_\gamma=0$$]{} or [$$\label{erel1alt}x_\alpha x_\beta +x_\alpha x_\gamma +x_\beta x_\gamma+x_{\alpha}^2+x_{\beta}^{2} +x_{\gamma}^{2}=0.$$]{} We will prove the theorem by finding a basis of the monomials [(\[emonom1\])]{} of the ring on the right hand side of [(\[ephi1\])]{} and matching them, in the form [(\[emonom2\])]{}, with the $E^2$-term [(\[egeom2aa\])]{}. Before determining which of the relations [(\[erel1\])]{}, [(\[erel1alt\])]{} is correct, we observe (by induction) that the ring $R_n$ given by the relation [(\[erel1\])]{} satisfies (identifying $G^*\cong (\Z/2)^n$) [$$\label{egen1}R_n=R_{n-1}\otimes \Z/2[x_{(0,\dots,0,1)}] + \sum_{\alpha\in ((\Z/2)^{n-1}\smallsetminus\{0\})\times\{1\}} R_{n-1}\otimes x_\alpha\cdot\Z/2[x_\alpha]$$]{} and that the ring $R_{n}^{\prime}$ obtained from the relations [(\[erel1alt\])]{} satisfies a completely analogous statement with $R_i$ replaced by $R_{i}^{\prime}$. By the identification between [(\[emonom1\])]{} and [(\[emonom2\])]{}, we see that we obtain a $\Z/2$-module of the same rank as the $E^2$ term of the spectral sequence of [(\[egeom1\])]{} in each dimension if and only if the sum [(\[egen1\])]{} for each $n$ is a direct sum (and similarly for the case of $R_{n}^\prime$). Since we already know that the spectral sequence collapses to $E_2$, we know that this direct sum must occur for whichever relation [(\[erel1\])]{} or [(\[erel1alt\])]{} is correct, and also that the “winning" relation [(\[erel1\])]{} (resp. 
[(\[erel1alt\])]{}), ranging over all applicable choices of $\alpha$, $\beta$ and $\gamma$, generates all the relations in $\Phi^{G}_{*}H\Z/2 $. We will complete the proof by showing that [(\[erel1alt\])]{} generates a spurious relation, and hence is eliminated. This cannot be done for $n=2$, as we actually have $R_2\cong R_{2}^{\prime}$ via the (non-functorial) isomorphism replacing the generators $x_\alpha, x_\beta,x_\gamma$ with $x_\alpha+x_\beta$, $x_\alpha+x_\gamma$, $x_\beta+x_\gamma$. We therefore must resort to $n=3$. Let $\alpha_1=(1,0,0)$, $\alpha_2=(0,1,0)$, $\alpha_3=(0,0,1)$, $\alpha_4=(1,1,1)$, and let $\beta=\alpha_3+\alpha_4=\alpha_1+\alpha_2=(1,1,0)$. Applying the computation [(\[ecomputrn\])]{} in the proof of Proposition \[p1\] to compute $\sigma_3(x_{\alpha_1},x_{\alpha_2},x_{\alpha_3},x_{\alpha_4})$ in the ring $R_{3}^{\prime}$, we obtain $$\begin{array}{l}\sigma_3(x_{\alpha_1},x_{\alpha_2},x_{\alpha_3},x_{\alpha_4})=\\ (x_{\alpha_1}+x_{\alpha_2})(x_{\alpha_3}^{2}+x_{\alpha_4}^{2}+x_{\beta}^2) +(x_{\alpha_3}+x_{\alpha_4})(x_{\alpha_1}^{2}+x_{\alpha_2}^{2}+x_{\beta}^{2}). \end{array}$$ As this is clearly not symmetric in $x_{\alpha_1}$, $x_{\alpha_2}$, $x_{\alpha_3}$, $x_{\alpha_4}$, by permuting (say, using a $4$-cycle) and adding both relations, we obtain a spurious relation in dimension $3$ and filtration degree $2$, which shows that the analog of [(\[egen1\])]{} with $R_i$ replaced by $R_{i}^{\prime}$ fails to be a direct sum for $n=3$, thereby excluding the relation [(\[erel1alt\])]{}, and completing the proof. From the fact that [(\[egen1\])]{} is a direct sum, we obtain the following \[cor1\] The Poincare series of the ring $R_n$ is $$\frac{1}{(1-x)^n}\prod_{i=1}^{n}(1+(2^{i-1}-1)x).$$ The coefficients of $H\Z/2$ suspended by a $G$-representation {#s4} ============================================================= In this section, we will compute explicitly the coefficients of $H\Z/2$ suspended by [$$\label{erefv}V=\sum_{\alpha\in G^*\smallsetminus \{0\}} m_\alpha \alpha$$]{} with $m_\alpha\geq 0$.
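For $n=2$, the series of Corollary \[cor1\] is $(1+x)/(1-x)^2=\sum_d(2d+1)x^d$, which can be checked against the presentation [(\[ephi1\])]{} by linear algebra over $\Z/2$. A minimal sketch (Python, helper names ours), with the three variables standing for $x_\alpha$, $x_\beta$, $x_\gamma$, where $\alpha+\beta+\gamma=0$ are the three non-trivial characters of $(\Z/2)^2$:

```python
def rank_mod2(rows):
    # rank over F_2 of a matrix whose rows are integer bitmasks
    pivots = {}
    for r in rows:
        while r:
            b = r.bit_length() - 1
            if b in pivots:
                r ^= pivots[b]
            else:
                pivots[b] = r
                break
    return len(pivots)

def monomials(deg, k):
    # exponent vectors of total degree deg in k variables
    if k == 1:
        return [(deg,)]
    return [(i,) + rest
            for i in range(deg + 1)
            for rest in monomials(deg - i, k - 1)]

# the single relation x1*x2 + x1*x3 + x2*x3 in F_2[x1, x2, x3]
RELATION = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]

def dim_R2(deg):
    # dimension of the degree-deg part of
    # F_2[x1, x2, x3] / (x1*x2 + x1*x3 + x2*x3):
    # subtract the rank of the span of all multiples of the relation
    mons = monomials(deg, 3)
    index = {m: i for i, m in enumerate(mons)}
    rows = []
    for m in (monomials(deg - 2, 3) if deg >= 2 else []):
        row = 0
        for q in RELATION:
            row ^= 1 << index[tuple(a + b for a, b in zip(m, q))]
        rows.append(row)
    return len(mons) - rank_mod2(rows)

# coefficients of (1+x)/(1-x)^2
print([dim_R2(d) for d in range(6)])  # -> [1, 3, 5, 7, 9, 11]
```

Since the relation is an irreducible quadric, multiplication by it is injective, so the ranks here are forced, and the dimensions indeed come out to $2d+1$.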
\[t2\] 1. For $m_\alpha\geq 0$, $G^*\cong (\Z/2)^n$, recalling [(\[erefv\])]{}, the Poincare series of $${\Sigma}^{V}H\Z/2_*$$ is [$$\label{eserr1} \frac{1}{(1-x)^n}\left( \sum_{(\Z/2)^k\cong H\subseteq G^*} (-1)^k\left(\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x)\right) x^{\displaystyle k+\sum_{\alpha\in H\smallsetminus \{0\}}m_\alpha}\right).$$]{} 2. For $m_\alpha\geq 0$, the canonical map $${\Sigma}^{V}H\Z/2\r\widetilde{E\mathcal{F}[G]} \wedge H\Z/2$$ (given by the smash product of the inclusions $S^{m_\alpha \alpha}\r S^{\infty \alpha}$) induces an injective map on $\Z$-graded homotopy groups. We need the following purely combinatorial result. Let $$\left[\begin{array}{c}n\\k\end{array} \right]=\frac{(2^n-1)\cdot (2^{n-1}-1)\cdot\dots\cdot (2^{n-k+1}-1)}{ (2^k-1)\cdot (2^{k-1}-1)\cdot \dots\cdot (2^1-1)}.$$ Note that this is the number of $k$-dimensional $\Z/2$-vector subspaces of $(\Z/2)^n$. The following statement amounts to part 1. of Theorem \[t2\] for $m_\alpha=0$. \[lcombo\] We have $$\sum_{k=0}^{n}(-1)^k \left[\begin{array}{c}n\\k\end{array} \right] x^k\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x)=(1-x)^n.$$ Induction on $n$. We have $$\left[\begin{array}{c}n\\k\end{array} \right]= \left[\begin{array}{c}n-1\\k\end{array} \right]+2^{n-k} \left[\begin{array}{c}n-1\\k-1\end{array} \right],$$ so by the induction hypothesis, $$\begin{array}{l} \displaystyle\sum_{k=0}^{n}(-1)^k \left[\begin{array}{c}n\\k\end{array} \right] x^k\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x)=\\[3ex] \displaystyle\sum_{k=0}^{n}(-1)^k \left(\left[\begin{array}{c}n-1\\k\end{array} \right]+2^{n-k} \left[\begin{array}{c}n-1\\k-1\end{array} \right]\right) x^k\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x). 
\end{array}$$ Splitting the right hand side into two sums, we get $$\begin{array}{l} \displaystyle\sum_{k=0}^{n-1}(-1)^k \left[\begin{array}{c}n-1\\k\end{array} \right] x^k\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x)+\\[3ex] \displaystyle\sum_{k=1}^{n}(-1)^k2^{n-k} \left[\begin{array}{c}n-1\\k-1\end{array} \right] x^k\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x)=\\[3ex] \displaystyle (1-x)^n+\sum_{k=0}^{n-1}(-1)^k \left[\begin{array}{c}n-1\\k\end{array} \right] x^k\prod_{i=1}^{n-k-1}(1+(2^{i-1}-1)x)2^{n-k-1}+\\[3ex] \displaystyle\sum_{k=1}^{n}(-1)^k2^{n-k} \left[\begin{array}{c}n-1\\k-1\end{array} \right] x^k\prod_{i=1}^{n-k}(1+(2^{i-1}-1)x)=(1-x)^n. \end{array}$$ [*Proof of Theorem \[t2\]:*]{} We will proceed by induction on $n$. Assume 1. and 2. are true for lower values of $n$. Then, for the given $n$, we proceed by induction on $$\ell=|\{\alpha\in G^*\mid m_\alpha>0\}|.$$ For $\ell=0$, 1. follows from Lemma \[lcombo\] and 2. is obvious (by ring structure of $\Phi^GH\Z/2$). Suppose $\ell\geq 1$ and 1., 2. are true for lower values of $\ell$. Setting [$$\label{erefvl}V_\ell=\sum_{i=1}^{\ell} m_{\alpha_i} \alpha_i,$$]{} we will study the effect on coefficients $(?)_*$ of the cofibration sequence [$$\label{e1t2}\diagram S(m_\ell\alpha_\ell)_+\wedge {\Sigma}^{V_{\ell-1}}H\Z/2\dto\\ {\Sigma}^{V_{\ell-1}}H\Z/2\dto\\ {\Sigma}^{V_\ell}H\Z/2. \enddiagram$$]{} First, we observe that the first map factors through the top row of the diagram [$$\label{e2t2}\diagram (EG/Ker \alpha_\ell)_+\wedge {\Sigma}^{V_{\ell-1}}H\Z/2\rto\dto & {\Sigma}^{V_{\ell-1}}H\Z/2\dto\\ (EG/Ker\alpha_\ell)_+\wedge \widetilde{E\mathcal{F}[G]}\wedge H\Z/2\rto& \widetilde{E\mathcal{F}[G]}\wedge H\Z/2. \enddiagram$$]{} Next, the right column of [(\[e2t2\])]{} is injective on $(?)_*$ by 2. for $\ell-1$, and hence the top row, and hence also the first map of [(\[e1t2\])]{}, is $0$ on $(?)_*$.
Now the Poincare series of [$$\label{e3t2}(S(m_\ell\alpha_\ell)_+\wedge {\Sigma}^{V_{\ell-1}}H\Z/2)^{G}_{*}$$]{} is $$\frac{1-x^{m_\ell}}{1-x}$$ times the Poincare series of [$$\label{e4t2} ({\Sigma}^{V_{\ell-1}}H\Z/2)^{Ker\alpha_\ell}_{*},$$]{} which, when multiplied by $x$ and added to the Poincare series of $$( {\Sigma}^{V_{\ell-1}}H\Z/2)^{G}_{*},$$ is [(\[eserr1\])]{} by the induction hypothesis. This proves 1. To prove 2., we observe that the elements of [(\[e3t2\])]{} are generated by powers of $x_{\alpha_\ell}$ multiplied by elements of [(\[e4t2\])]{}, so again, we are done by the induction hypothesis. The complex calculating $RO(G)$-graded coefficients {#s5} =================================================== To calculate the $RO(G)$-graded coefficients of $H\Z/2_G$ in dimensions given by virtual representations, we introduce another spectral sequence. In fact, we will again use the cofibration sequence [(\[egeom++\])]{}, but we will rewrite it as [$$\label{egeom+++}S^0\r S^{\infty\alpha}\r \Sigma EG/Ker(\alpha)_+.$$]{} We will smash the second maps of [(\[egeom+++\])]{} over all $\alpha\in G^*\smallsetminus\{0\}$, to obtain a cube [$$\label{egeom**}\bigwedge_{\alpha\in G^*\smallsetminus \{0\}}(S^{\infty\alpha}\r \Sigma EG/Ker(\alpha)_+)$$]{} whose iterated fiber is $S^0$. Our method is to smash with $H\Z/2_G$ and take $RO(G)$-graded coefficients: [$$\label{egeom**1}(\bigwedge_{\alpha\in G^*\smallsetminus \{0\}}(S^{\infty\alpha}\r \Sigma EG/Ker(\alpha)_+)\wedge H\Z/2)_\star,$$]{} thus yielding a spectral sequence calculating $H\Z/2_\star$. However, there is a key point to notice which drastically simplifies this calculation. 
Namely, smashing [(\[egeom++\])]{} with $EG/Ker(\alpha)_+$, the first morphism becomes an equivalence, thus showing that [$$\label{ezero1}EG/Ker(\alpha)_+\wedge S^{\infty\alpha} \simeq *.$$]{} Together with [(\[egeom2\])]{}, this shows that the only non-zero vertices of the cube [(\[egeom\*\*\])]{} are those for which the $\alpha$’s where we take the term $S^{\infty\alpha}$ in [(\[egeom\*\*\])]{} are precisely those [*not vanishing*]{} on some subgroup $A\subseteq G$, while the $\alpha$’s where we take the term $\Sigma EG/Ker(\alpha)_+$ are the non-zero elements of $G^*$ which [*do vanish*]{} on $A$, i.e. the non-zero elements of $(G/A)^*$. The corresponding vertex of [(\[egeom\*\*\])]{} is then a suspension of [$$\label{evert1}gr_A(S^0)=EG/A_+\wedge \widetilde{E\mathcal{F}[A]}.$$]{} We also put $$gr_A(H\Z/2)=gr_A(S^0)\wedge H\Z/2.$$ Because of the high number of zero terms, the spectral sequence may be regraded by $rank_{\Z/2}(A)$, thus having only $n$, instead of $2^n-1$, filtration degrees. (Note that the cube [(\[egeom\*\*\])]{} may be reinterpreted as a “filtration" of the spectrum $S^0$; from this point of view, we have simply observed that many of the filtered parts coincide.) It is now important, however, to discuss the grading seriously. Since we index coefficients homologically, we will write the spectral sequence in homological indexing. Additionally, we want the term $gr_G(S^0)$ to be in filtration degree $0$ (since that is where the unit is). Thus, the (homologically indexed) filtration degree of [(\[evert1\])]{} is $$p=rank(A)-n$$ (a non-positive number).
Thus, $$\pi_k({\Sigma}^{\sum_{\alpha\in G^*\smallsetminus \{0\}}m_\alpha \alpha}gr_A(H\Z/2))\subseteq E^{1}_{rank(A)-n,k+n-rank(A)}$$ or, put differently, for a given choice of the $m_\alpha$’s, [$$\label{estare1}E^{1}_{p,q}=\bigoplus_{rank(A)=n+p}\pi_{q+p-\sum m_\alpha \alpha}gr_A(H\Z/2),\; p=-n,\dots,0.$$]{} We will next describe explicitly the differential [$$\label{estard1}d^1:E^{1}_{p,q}\r E^{1}_{p-1,q}.$$]{} Let us first introduce some notation. To this end, we need to start out by describing the $E^1$-term more explicitly. In effect, we can calculate $gr_A(H\Z/2)_\star$ by taking first the $A$-fixed points using Theorem \[t1\] with $G$ replaced by $A$, and then applying the Borel homology spectral sequence for $G/A$. This spectral sequence collapses because there exists a splitting [$$\label{esplitting}\diagram A\rto^\subseteq\drto_= & G\dto^r\\ &A. \enddiagram$$]{} However, the splitting is not canonical, and this is reflected by the choice of generators we observe. More explicitly, the splitting determines for each representation $$0\neq \beta:A\r \Z/2$$ an extension $$\widetilde{\beta}:G\r\Z/2.$$ One difficulty with describing Borel homology is that it does not naturally form a ring.
Because of that, it is more convenient to describe first the coefficients of [$$\label{egamma1}\gamma_{A}(H\Z/2):=F(EG/A_+,\widetilde{E\mathcal{F}[A]})\wedge H\Z/2.$$]{} This is an ($E_\infty$-) ring spectrum, and its ring of coefficients is given by [$$\label{estargamma}\begin{array}{l}\gamma_A(H\Z/2)_\star=\\ (\Z/2[x_{\widetilde{\beta}},u^{\pm1}_{\widetilde{\beta}}, u^{\pm1}_{\widetilde{\beta}+\alpha} \mid \beta\in A^*\smallsetminus \{0\},\; \alpha\in (G/A)^*\smallsetminus \{0\}]/\\ (x_{\widetilde{\alpha}}x_{\widetilde{\beta}}+x_{\widetilde{\alpha}}x_{\widetilde{\gamma}} +x_{\widetilde{\beta}}x_{\widetilde{\gamma}}\mid \alpha+\beta+\gamma=0)) [(y_\alpha u^{-1}_{\alpha})^{\pm 1}]\\ {}[[y_\alpha\mid \alpha\in (G/A)^*\smallsetminus \{0\}]]/ (y_{\alpha+\alpha^\prime}-y_{\alpha}-y_{\alpha^\prime}) \end{array}$$]{} where the $RO(G)$-graded dimensions of the generators are $$|u_\gamma|=-\gamma,\; |x_\gamma|=1, \;|y_\gamma|=-1.$$ We may then describe $gr_A(H\Z/2)_\star$ as the $dim_{\Z/2}(G/A)$’th (=only non-trivial) local cohomology module of the ring $\gamma_A(H\Z/2)$ with respect to the ideal generated by the $y_\alpha$’s. Note that after taking $A$-fixed points first, this is the usual computation of $G/A$-Borel homology from the corresponding Borel cohomology. Recall that $H^*_I(R)$ for a finitely generated ideal $I$ of a commutative ring $R$ is obtained by choosing finitely many generators $y_1,\dots, y_\ell$ of $I$, tensoring, over $R$, the cochain complexes $$R\r y_i^{-1}R$$ (with $R$ in degree $0$) and taking cohomology. It is, canonically, independent of the choice of generators. In the present case, we are simply dealing with the power series ring $R$ in $dim_{\Z/2}(G/A)$ generators over a $\Z/2$-algebra, and the augmentation ideal. Taking the defining generators of the power series ring, we see immediately that only the top local cohomology group survives. 
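For example, in one variable, with $R=k[[y]]$ over a field $k$ and $I=(y)$, the complex in question is $k[[y]]\r y^{-1}k[[y]]=k((y))$, so $$H^0_{(y)}(k[[y]])=0,\qquad H^1_{(y)}(k[[y]])=k((y))/k[[y]]\cong y^{-1}k[y^{-1}],$$ and only the top local cohomology group is non-zero; the general case is the tensor product, over $R$, of $dim_{\Z/2}(G/A)$ such two-term complexes.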
We note that the basic philosophy of our notation is [$$\label{ephil}``y_\alpha = x_{\alpha}^{-1}".$$]{} As a first demonstration of this philosophy, let us investigate the effect of a change of the splitting [(\[esplitting\])]{}. Writing metaphorically [$$\label{emeta1}x_{\widetilde{\beta}+\alpha}x_{\widetilde{\beta}}+ x_{\widetilde{\beta}+\alpha}x_{{\alpha}}+x_{\widetilde{\beta}} x_\alpha=0,$$]{} we get [$$\label{emeta2}x_{\widetilde{\beta}+\alpha}x_{\widetilde{\beta}}y_\alpha+ x_{\widetilde{\beta}+\alpha}+x_{\widetilde{\beta}}=0,$$]{} from which we calculate [$$\label{emeta3}x_{\widetilde{\beta}+\alpha}=x_{\widetilde{\beta}}(1+x_{\widetilde{\beta}}y_\alpha)^{-1} =\sum_{k=0}^{\infty} x_{\widetilde{\beta}}^{k+1}y_{\alpha}^{k}.$$]{} This formula is correct in $\gamma_A(H\Z/2)_\star$ and hence can also be used in the module $gr_A(H\Z/2)_\star$. Next, we will describe the $d^1$ of [(\[estare1\])]{}. These connecting maps will be the sums of maps of degree $-1$ of the form [$$\label{emetad1}d^{AB}:gr_A(H\Z/2)_\star\r gr_B(H\Z/2)_\star$$]{} where $B\subset A$ is a subgroup with quotient isomorphic to $\Z/2$. Let $\beta:A\r\Z/2$ be the unique non-trivial representation which vanishes when restricted to $B$. The key point is to observe that the canonical map [$$\label{emetad11}\diagram EG/A_+\wedge \widetilde{E\mathcal{F}[B]}\wedge S^{\infty\widetilde{\beta}} \rto^(.55)\sim & EG/A_+\wedge \widetilde{E\mathcal{F}[A]} \enddiagram$$]{} is an equivalence, and hence [(\[emetad1\])]{} can be calculated by smashing with $H\Z/2$ the connecting map [$$\label{emetad12}EG/A_+\wedge \widetilde{E\mathcal{F}[B]}\wedge S^{\infty\widetilde{\beta}} \r \Sigma EG/B_+\wedge \widetilde{E\mathcal{F}[B]}.$$]{} Consequently, [(\[emetad1\])]{} is a homomorphism of $\gamma_AH\Z/2_\star$-modules, and is computed, just like in dimension $1$, by replacing $$x_{\widetilde{\beta}}\mapsto y_{\widetilde{\beta}}^{-1}$$ and multiplying by $y_{\widetilde{\beta}}$.
(Note that independence of the splitting $\widetilde{\beta}$ at this point follows from topology; it is a non-trivial fact to verify purely algebraically.) We have thereby finished describing the $d^1$ of the spectral sequence [(\[estare1\])]{}. The main result of the present section is the following \[t3\] The spectral sequence [(\[estare1\])]{} collapses to $E^2$. We will first prove some auxiliary results. \[t3l1\] The Borel homology spectral sequence of any cell $H\Z/2_G$-module with cells [$$\label{et3+}\Sigma^?G_+\wedge H\Z/2$$]{} collapses to $E^2$. Taking $G$-fixed points, we obtain a cell $H\Z/2$-module with one cell for each cell [(\[et3+\])]{}. Now the homotopy category of $H\Z/2$-modules is equivalent to the derived category of $\mathbb{F}_2$-vector spaces, and a chain complex of $\mathbb{F}_2$-modules is isomorphic to a sum of an acyclic module and suspensions of $\mathbb{F}_2$. \[ltensor\] Let $G$, $H$ be finite groups, let $X$ be a $G$-cell spectrum and let $Y$ be an $H$-cell spectrum (all indexed over the complete universe). Then $$(H\Z/2_{G\times H} \wedge i_\sharp X\wedge j_\sharp Y)^{G\times H} \simeq (H\Z/2_G\wedge X)^G\wedge_{H\Z/2} (H\Z/2_H\wedge Y)^H.$$ Here on the left hand side, $i_\sharp$ is the functor introducing trivial $H$-action on a $G$-spectrum and pushing forward to the complete universe, while $j_\sharp$ is the functor introducing trivial $G$-action on an $H$-spectrum and pushing forward to the complete universe. First consider $Y=S^0$. Then we have the forgetful map $$(i_\sharp X\wedge H\Z/2)^{G\times H}\r (X\wedge H\Z/2)^G$$ which is an equivalence because it is true on cells.
In general, we have a map $$Z^\Gamma\wedge T^\Gamma \r (Z\wedge T)^\Gamma,$$ so take the composition $$\begin{array}{l} (X\wedge H\Z/2)^G\wedge (Y\wedge H\Z/2)^H=\\ (i_\sharp X\wedge H\Z/2)^{G\times H}\wedge (j_\sharp Y\wedge H\Z/2)^{G\times H}\r\\ (i_\sharp X\wedge H\Z/2\wedge j_\sharp Y\wedge H\Z/2)^{G\times H}\r\\ (i_\sharp X\wedge j_\sharp Y\wedge H\Z/2)^{G\times H} \end{array}$$ (the last map coming from the ring structure on $H\Z/2$). Then again this map is an equivalence on cells, and hence an equivalence. \[t3l2\] Recalling again the notation [(\[erefv\])]{}, (a) the spectral sequence [(\[egeom\*\*1\])]{} for [$$\label{et3l21*}\pi_*{\Sigma}^{V}H\Z/2$$]{} with all $m_\alpha\geq 0$ collapses to the $E^2$-term in filtration degree $0$. \(b) Let $m_\alpha\leq 0$ for all $\alpha$ and let $$S=\{\alpha\in G^*\smallsetminus\{0\}\mid m_\alpha\neq 0\}.$$ Suppose the subgroup of $G^*$ spanned by $S$ has rank $m$. Then the spectral sequence [(\[egeom\*\*1\])]{} for [(\[et3l21\*\])]{} collapses to $E^2$ in filtration degree $-m$. Recall the notation [(\[erefvl\])]{}. Let $G^*\smallsetminus \{0\}=\{\alpha_1,\dots,\alpha_{2^n-1}\}$. When $\alpha_k$ is linearly independent of $\alpha_1,\dots,\alpha_{k-1}$, we have [$$\label{et3l21} \begin{array}{l} \displaystyle \pi_*{\Sigma}^{V_k}H\Z/2\cong\\[4ex] \displaystyle \pi_*\left({\Sigma}^{V_{k-1}}H\Z/2\right)^G \otimes \pi_*(\Sigma^{m_{\alpha_k} \gamma} H\Z/2)^{\Z/2} \end{array}$$]{} where $\gamma$ is the sign representation of $\Z/2$, by Lemma \[ltensor\]. Note that in the case (b), we may, without loss of generality, assume $m=n$ (i.e. that $S$ spans $G^*$) and that what we just said occurs for $k=1,\dots,n$ and additionally that $m_{\alpha_i}<0$ for $i=1,\dots,n$.
When $\alpha_k$ is a linear combination of $\alpha_1,\dots,\alpha_{k-1}$, and $m_{\alpha_k}\neq 0$, we use the cofibration sequence [$$\label{et3l22}\begin{array}{l}\displaystyle S(m_{\alpha_k}\alpha_k)_+\wedge {\Sigma}^{V_{k-1}}H\Z/2\r\\[4ex]\displaystyle {\Sigma}^{V_{k-1}}H\Z/2\r {\Sigma}^{ V_k}H\Z/2 \end{array}$$]{} in the case (a) and [$$\label{et3l23}\begin{array}{l}\displaystyle {\Sigma}^{V_k}H\Z/2\r\\[4ex]\displaystyle {\Sigma}^{V_{k-1}}H\Z/2\r DS(-m_{\alpha_k}\alpha_k)_+\wedge {\Sigma}^{ V_{k-1}}H\Z/2 \end{array}$$]{} in the case (b). If we denote each of these cofibration sequences symbolically as $$A\r B\r C,$$ then in the case (a), [(\[et3l22\])]{} gives a short exact sequence of the form [$$\label{et3l22a}0\r E^1 A\r E^1 B\r E^1C\r 0$$]{} of the spectral sequence of [(\[egeom\*\*1\])]{} where in the $A$-term, we replace $G$ by $Ker (\alpha_k)$ and $H\Z/2$ by $S(m_{\alpha_k}\alpha_k)_+\wedge H\Z/2$. By the induction hypothesis, however, the homology of $E^1A$ is concentrated in the top filtration degree, which is $-1$ from the point of view of $G$, and the homology of $E^1B$ is concentrated in filtration degree $0$, so the long exact sequence in homology gives [$$\label{et3l22b}0\r E^2B\r E^2C\r \Sigma E^2A\r 0$$]{} which is all in filtration degree $0$, so our statement follows. In the case (b), by our assumptions, we have $k>n$. Additionally, [(\[et3l23\])]{} gives a short exact sequence [$$\label{et3l23a}0\r \Sigma^{-1}E^1C\r E^1 A\r E^1 B\r 0,$$]{} but by the induction hypothesis (using the fact that a set of generators of $G^*$ projects to a set of generators of the factor group $Ker(\alpha_k)^*$), the homology of the first and last term is concentrated in filtration degree $-n$, so [(\[et3l23a\])]{} translates to the same short exact sequence with $E^1$ replaced by $E^2$, which is entirely in filtration degree $-n$, and the statement follows.
To continue the proof of Theorem \[t3\], let again $$G^*\smallsetminus \{0\}=\{\alpha_1,\dots,\alpha_{2^n-1}\}.$$ Consider [$$\label{et3p*}{\Sigma}^{V_{2^n-1}} H\Z/2$$]{} and let, this time, without loss of generality, $$m_{\alpha_1},\dots,m_{\alpha_q}<0,$$ $$m_{\alpha_{q+1}},\dots,m_{\alpha_{2^n-1}}\geq 0.$$ Let $A=Ker(\alpha_1)\cap\dots\cap Ker(\alpha_q).$ We will consider the sequence of cofibrations [(\[et3l22\])]{} with $q\leq k< 2^n-1$. Resolving this recursively, we may consider this as a cell object construction in the category of $H\Z/2_G$-modules, with “cells” of the form of suspensions (by an integer) of [$$\label{et3p+}\begin{array}{l} \displaystyle G/(Ker(\alpha_{j_1})\cap\dots\cap Ker(\alpha_{j_p}))_+\wedge {\Sigma}^{V_q} H\Z/2,\\[2ex] q<j_1<\dots <j_p\leq 2^n-1. \end{array}$$]{} By the [*degree*]{} of a cell $c$, we shall mean the number $$deg(c)=n-rank(Ker(\alpha_{j_1})\cap\dots\cap Ker(\alpha_{j_p})),$$ and by the [*$A$-relative degree*]{} of $c$, we shall mean $$\begin{array}{l}deg_A(c)=rank(G/A)-\\ rank(Ker(\alpha_{j_1})\cap\dots\cap Ker(\alpha_{j_p})/Ker(\alpha_{j_1})\cap\dots\cap Ker(\alpha_{j_p})\cap A). \end{array}$$ We see easily from the construction that cells of a given degree are attached to cells of strictly lower degree, and that cells of a given $A$-relative degree are attached to cells of lesser or equal $A$-relative degree. (Roughly speaking, “more free” cells are attached to “less free” ones.) \[t3l3\] The spectral sequence arising from the cube [(\[egeom\*\*1\])]{} with $H\Z/2$ replaced by the complex formed by our “cells” of $A$-relative degree $d$ collapses to $E^2$, concentrated in filtration degree $d-rank(G/A)$. Within a given $A$-relative degree $d$, attaching cells of each consecutive degree results in a short exact sequence of the form [(\[et3l22a\])]{} where the first two terms collapse to $E^2$ in filtration degree $d-rank(G/A)-1$ and $d-rank(G/A)$, respectively.
Thus, there results a short exact sequence of the form [(\[et3l22b\])]{} in filtration degree $d-rank(G/A)$, as claimed. [*(The rest of) the proof of Theorem \[t3\]:*]{} Filtering cells of [(\[et3p\*\])]{} by $A$-relative degree, we obtain a spectral sequence $\mathcal{E}$ converging to $E^2$ of the spectral sequence of the cube [(\[egeom\*\*1\])]{} for [(\[et3p\*\])]{}. By Lemma \[t3l3\], all the terms will be of the same [(\[egeom\*\*1\])]{}-filtration degree $-rank(G/A)$, which is the complementary degree of $\mathcal{E}$. (Note that in this discussion, we completely ignore the original topological degree.) Thus, being concentrated in one complementary degree, $\mathcal{E}$ collapses to $E^2$ in that complementary degree. However, by precisely the same arguments, we can write a variant $\widetilde{\mathcal{E}}$ of the spectral sequence $\mathcal{E}$ in homotopy groups (rather than [(\[egeom\*\*1\])]{} $E^1$-terms) of the filtered pieces of [(\[et3p\*\])]{} by $A$-relative degree. By Lemma \[t3l3\], $\widetilde{\mathcal{E}}^1\cong \mathcal{E}^1$, and $d_{\widetilde{\mathcal{E}}}^{1}$, $d_{\mathcal{E}}^{1}$ have the same rank (since they are computed by the same formula). It follows that $\widetilde{\mathcal{E}}^2\cong \mathcal{E}^2$, both collapsing to a single complementary degree. Therefore, it follows that $E^2$ (of the spectral sequence associated with [(\[egeom\*\*1\])]{} for [(\[et3p\*\])]{}) is isomorphic to the homotopy of [(\[et3p\*\])]{}, and hence the spectral sequence collapses to $E^2$ by a counting argument. Example: $n=2$ {#s6} ============== In the case $n=2$, there are only three sign representations $\alpha$, $\beta$, $\gamma$ which play a symmetrical role and satisfy [$$\label{eex+}\alpha+\beta+\gamma=0\in G^*,$$]{} which means that the Poincare series of the homotopy [$$\label{eex*}\pi_*(\Sigma^{k\alpha+\ell\beta+m\gamma}H\Z/2)$$]{} can be written down explicitly. 
First recall that by Theorem \[t2\], for $k,\ell,m\geq 0$, the Poincare series is [$$\label{eex1}\frac{1}{(1-x)^2}(1+x-x^{1+k}-x^{1+\ell}-x^{1+m}+x^{2+k+\ell+m}).$$]{} If $k,\ell<0,m\leq 0$, by the proof of Lemma \[t3l2\], the formula [(\[eex1\])]{} is still valid when multiplied by $x^{-2}$ (since all the homotopy classes are in filtration degree $-2$). If $k,\ell<0,m>0$, in the proof of Theorem \[t3\], $A=0$, so the $A$-relative degree and the degree coincide. Further, by [(\[eex+\])]{} and our formula for the differential $d^1$ of the spectral sequence of [(\[egeom\*\*1\])]{}, the differential $d^{1}_{\mathcal{E}}$ has maximal possible rank (i.e. “everything that can cancel dimension-wise will”). We conclude that the $E^2$ is concentrated in filtration degrees $-1$ and $-2$. By the cancellation principle we just mentioned, the Poincare series can still be recovered from the formula [(\[eex1\])]{}. If we write the expression [(\[eex1\])]{} as [$$\label{eex**}P_+(x)-P_-(x)$$]{} where $P_+(x)$ (resp. $-P_-(x)$) is the sum of monomial summands with a positive coefficient (resp. with a negative coefficient) then the correct Poincare series in this case is $$x^{-2}P_+(x)+x^{-1}P_-(x),$$ the two summands of which represent classes in filtration degree $-2$ and $-1$, respectively. Similarly, one shows that if $k,\ell \geq 0$, $m<0$, the $E^2$ collapses to filtration degrees $0$ and $-1$, and the Poincare series in this case is $$P_+(x)+x^{-1}P_-(x).$$ All other cases are related to these by a symmetry of $(\Z/2)^2$. [**Remark:**]{} It might seem natural to conjecture that the classes of different filtration degrees in $E^2$ may be of different dimensions, with a gap between them (evoking the “gap condition” which was proved for $\Z/2$ in [@hk], and made famous for the group $\Z/8$ by the Hill-Hopkins-Ravenel [@hhr] work on the Kervaire invariant $1$ problem). 
However, one easily sees that for $n\geq 3$, classes of different filtration degrees may occur in the same dimension. For example, by Lemma \[ltensor\] and by what we just proved, such a situation always occurs for $\pi_*\Sigma^{4\alpha+4\beta-2\gamma+4\delta}H\Z/2$ where $\alpha, \beta,\gamma$ are the three sign representations of $\Z/2\times\Z/2\times \Z/2$ factoring through the projections to the first two copies of $\Z/2$, and $\delta$ is the sign representation which factors through the projection onto the last $\Z/2$.

[99]{}

W. Abram and I. Kriz: The equivariant complex cobordism ring of a finite abelian group, preprint, 2012

S. Araki: Orientations in $\tau$-cohomology theories, [*Japan J. Math.*]{} 16 (1) (1978) 363-416

M.F. Atiyah: K-Theory and Reality, [*Quart. J. Math.*]{} (1966) 367-386

M.F. Atiyah, I.M. Singer: The index of elliptic operators. I, [*Ann. of Math.*]{} (2) 87 (1968) 484-530

G. Bredon: [*Equivariant cohomology theories*]{}, Springer Lecture Notes in Mathematics, no. 34 (1967)

P. Donovan, M. Karoubi: Graded Brauer groups and K-theory with local coefficients, [*Publ. Math. IHES*]{} 38 (1970) 5-25

A.W.M. Dress: Notes on the theory of representations of finite groups. Part I: The Burnside ring of a finite group and some AGN-applications

D.S. Freed, M.J. Hopkins, C. Teleman: Loop groups and twisted K-theory I, [*J. Topol.*]{} 4 (2011), no. 4, 737-798

J.P.C. Greenlees: [*Adams spectral sequences in equivariant topology*]{}, Thesis, Cambridge University (1985)

J.P.C. Greenlees, J.P. May: Equivariant stable homotopy theory, [*Handbook of algebraic topology*]{}, 277-323, North-Holland, Amsterdam, 1995

M. Hill, M.J. Hopkins, D. Ravenel: On the non-existence of elements of Kervaire invariant one, arXiv:0908.3724 (2009)

P. Hu, I. Kriz: Real-oriented homotopy theory and an analogue of the Adams-Novikov spectral sequence, [*Topology*]{} 40 (2001), no. 2, 317-399

P. Hu, I. Kriz: Topological Hermitian Cobordism, arXiv:1110.5608, to appear in [*J. Homotopy Relat. Struct.*]{}

P. Hu, I. Kriz: Coefficients of the constant Mackey functor over cyclic p-groups, preprint, 2010

P. Hu, I. Kriz, K. Ormsby: The homotopy limit problem for Hermitian K-theory, equivariant homotopy theory and motivic real cobordism, [*Adv. Math.*]{} 228 (2011), no. 1, 434-480

I. Kriz: The $\Z/p$-equivariant complex cobordism ring, Homotopy invariant algebraic structures, [*Contemp. Math.*]{} 239, Amer. Math. Soc. (1999) 217-223

S. Kriz: Equivariant cohomology and the super reciprocal plane of a hyperplane arrangement, preprint, 2015

P. Landweber: Conjugations on complex manifolds and equivariant homotopy of MU, [*Bull. AMS*]{} 74 (1968) 271-274

L.G. Lewis: The $RO(G)$-graded equivariant ordinary cohomology of complex projective spaces with linear $\Z/p$ actions, [*Algebraic topology and transformation groups*]{} (Göttingen, 1987), 53-122, Lecture Notes in Math. 1361, Springer, Berlin, 1988

L.G. Lewis, J.P. May, J. McClure: Ordinary RO(G)-graded cohomology, [*Bull. Amer. Math. Soc.*]{} (N.S.) 4 (1981), no. 2, 208-212

L.G. Lewis, J.P. May, M. Steinberger, J.E. McClure: [*Equivariant stable homotopy theory*]{}, Lecture Notes in Mathematics 1213 (1986)

N.J. Proudfoot, D. Speyer: A broken circuit ring, [*Beiträge Algebra Geom.*]{} 47 (2006), no. 1, 161-166

H. Terao: Algebras generated by reciprocals of linear forms, [*J. Algebra*]{} 250 (2002), no. 2, 549-558

T. tom Dieck: Bordism of $G$-manifolds and integrability theorems, [*Topology*]{} 9 (1970) 345-358
---
abstract: 'We revisit the additive model learning literature and adapt a *penalized spline* formulation due to Eilers and Marx [@eilers2002generalized] to train additive classifiers efficiently. We also propose two new embeddings based on two classes of *orthogonal bases with orthogonal derivatives*, which can also be used to efficiently learn additive classifiers. This paper follows the popular theme in the current literature where kernel SVMs are learned much more efficiently using an approximate embedding and a linear machine. In this paper we show that spline bases are especially well suited for learning additive models because of their sparsity structure and the ease of computing the embedding, which enables one to train these models in an online manner without incurring the memory overhead of precomputing and storing the embeddings. We show interesting connections between the B-Spline basis and the histogram intersection kernel, and show that for a particular choice of regularization and degree of the B-Splines, our proposed learning algorithm closely approximates the histogram intersection kernel SVM. This enables one to learn additive models with *almost no memory overhead* compared to a fast linear solver, such as LIBLINEAR, while being only $5-6\times$ slower on average. On two large scale image classification datasets, `MNIST` and Daimler Chrysler pedestrians, the proposed additive classifiers are as accurate as the kernel SVM, while being two orders of magnitude faster to train.'
author:
- |
  Subhransu Maji\
  Department of Computer Science\
  University of California at Berkeley\
  `[email protected]`
bibliography:
- 'embeddings.bib'
title: Linearized Additive Classifiers
---

Introduction
============

Non-parametric models for classification have become attractive since the introduction of kernel methods like the Support Vector Machine (SVM) [@boser1992training].
The complexity of the learned models scales with the data, which gives them desirable asymptotic properties. However, from an estimation point of view, parametric models can offer significant statistical and computational advantages. Recent years have seen a shift of focus from non-parametric to semi-parametric models for learning classifiers. This includes the work of Rahimi and Recht [@rahimi2008random], who compute an approximate feature map $\phi$ for shift invariant kernels, $K(|x-y|) \sim \phi(x)'\phi(y)$, and solve the kernel problem approximately using a linear problem. This line of work has become extremely attractive with the advent of several algorithms for training linear classifiers efficiently (e.g., `LIBLINEAR` [@fan2008liblinear], `PEGASOS` [@shalev2007pegasos]), including online variants which have very low memory overhead. Additive models, i.e., functions that decompose over dimensions $\left( f(x) = \sum_i f_i(x_i) \right)$, are a natural extension of linear models, and arise naturally in many settings. In particular, if the kernel is additive, i.e., $K(x,y) = \sum_i K_i(x_i,y_i)$, then the learned SVM classifier is also additive. A large number of useful kernels used in computer vision are based on comparing distributions of low level features on images and are additive, e.g., the histogram intersection and $\chi^2$ kernels [@maji2009max]. This one dimensional decomposition allows one to compute approximate feature maps independently for each dimension, leading to very compact feature maps and making estimation efficient. This line of work has been explored by Maji and Berg [@maji2009max], who construct approximate feature maps for the $\min$ kernel and learn piecewise linear functions in each dimension. For $\gamma$-homogeneous additive kernels, Vedaldi and Zisserman [@vedaldi2010efficient] propose to use the closed form features of Hein and Bousquet [@hein2005hilbertian] to construct approximate feature maps.
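The shift-invariant construction of Rahimi and Recht mentioned above fits in a few lines; the following is a minimal sketch (the Gaussian kernel, the feature count `D`, and the tolerance are our illustrative choices, not values from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000  # input dimension, number of random features (our choices)

# Random Fourier features for the Gaussian kernel K(x, y) = exp(-||x-y||^2 / 2):
# frequencies are drawn from the kernel's spectral density (standard normal here).
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, D)

def phi(x):
    # Approximate feature map: K(x, y) ~ phi(x)' phi(y).
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
approx = phi(x) @ phi(y)
assert abs(exact - approx) < 0.1  # Monte Carlo error is O(1/sqrt(D))
```

Once such an explicit map is available, any fast linear solver can be used in place of a kernel machine, which is the theme this paper pursues for additive kernels.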
Smoothing splines [@wahba1990spline] are another way of estimating additive models, and are well known in the statistical community. Ever since Generalized Additive Models (GAMs) were introduced by Hastie and Tibshirani [@hastie1990generalized], many practical approaches to training such models for regression have emerged, for example the P-Spline formulation of Eilers and Marx [@eilers2002generalized]. However, these algorithms do not scale to the extremely large datasets and high-dimensional features typical of image or text classification. In this work we show that the spline framework can be used to derive embeddings to train additive classifiers efficiently as well. We propose two families of embeddings which have the property that the underlying additive classifier can be learned directly by estimating a linear classifier in the embedded space. The first family of embeddings is based on the Penalized Spline (“P-Spline") formulation of additive models (Eilers and Marx [@eilers2002generalized]), where the function in each dimension is represented using a uniformly spaced spline basis and the regularization penalizes the difference between adjacent spline coefficients. The second class of embeddings is based on a generalized Fourier expansion of the function in each dimension. This work ties together the literature on additive model regression and linear SVMs to develop algorithms for training additive models in the classification setting. We discuss how our additive embeddings are related to additive kernels in Section \[sec:add\_rkhs\]. In particular, our representations include those of [@maji2009max] as a special case arising from a particular choice of B-Spline basis and regularization. An advantage of our representations is that they allow explicit control of the smoothness of the functions and the choice of basis functions, which may be desirable in certain situations.
Moreover, the sparsity of some of our representations leads to efficient training algorithms for smooth fits of functions. We summarize the previous work in the next section.

Previous Work
=============

The history of learning additive models goes back to [@hastie1990generalized], who proposed the “backfitting algorithm" to estimate additive models. Since then many practical approaches have emerged, the most prominent of which is the Penalized Spline formulation (“P-Spline") proposed by [@eilers2002generalized], which consists of modeling the one dimensional functions using a large number of uniformly spaced B-Spline bases. Smoothness is ensured by penalizing the differences between adjacent spline coefficients. We describe the formulation in detail in Section \[sec:p-spline\]. A key advantage of this formulation was that the whole problem could be solved using a linear system. Given data $(x^k,y^k), k = 1,\ldots,m$ with $x^k \in R^D$ and $y^k \in \{-1,+1\}$, discriminative training of functions often involves an optimization such as: $$\min_{f \in F} \sum_k l\left(y^k, f(x^k)\right) +\lambda R(f)$$ where $l$ is a loss function and $R(f)$ is a regularization term. In the classification setting a commonly used loss function $l$ is the hinge loss: $$l\left(y^k,f(x^k)\right) = \max\left (0, 1 - y^k f(x^k)\right)$$ For various kernel SVMs the regularization penalizes the norm of the function in the implicit Reproducing Kernel Hilbert Space (RKHS) of the kernel. Approximating the RKHS of these additive kernels provides a way of training additive kernel SVM classifiers efficiently. For shift invariant kernels, Rahimi and Recht [@rahimi2008random] derive features based on Bochner’s theorem. Vedaldi and Zisserman [@vedaldi2010efficient] propose to use the closed form features of Hein and Bousquet [@hein2005hilbertian] to train additive kernel SVMs efficiently for many commonly used additive kernels which are $\gamma$-homogeneous.
For the $\min$ kernel, Maji and Berg [@maji2009max] propose an approximation and an efficient learning algorithm, and our work is closely related to this. In the additive modeling setting, a typical regularization is to penalize the norm of the $d$th order derivative of the function in each dimension, i.e., $R(f) = \sum_i \int_{-\infty}^{\infty}f_i^{d}(t)^2\,dt$. Our features are based on encodings that enable efficient evaluation and computation of this regularization. For further discussion we assume that the features $x^k$ are one dimensional. Once the embeddings are derived for the one dimensional case, we note that the overall embedding is the concatenation of the embeddings in each dimension, as the classifiers are additive.

Spline Embeddings {#sec:p-spline}
=================

Eilers and Marx [@eilers2002generalized] proposed a practical modeling approach for GAMs. The idea is based on representing the function in each dimension using a relatively large number of uniformly spaced B-Spline bases. The smoothness of these functions is ensured by penalizing the first or second order differences between the adjacent spline coefficients. Let $\boldsymbol \phi(x^k)$ denote the vector with entries $\phi_i(x^k)$, the projection of $x^k$ on to the $i$th basis function. The P-Spline optimization problem for the classification setting with the hinge loss function consists of minimizing $c(\mathbf{w})$: $$c(\mathbf{w}) = \frac{\lambda}{2} \mathbf{w'}D_d'D_d\mathbf{w} + \frac{1}{n}\sum_k \max \left(0, 1 - y^k\left(\mathbf{w}'\boldsymbol \phi(x^k)\right)\right)$$ The matrix $D_d$ constructs the $d$th order differences of $\boldsymbol \alpha$: $$D_d\boldsymbol \alpha = \Delta^d \boldsymbol \alpha$$ The first difference of $\boldsymbol \alpha$, $\Delta^1 \boldsymbol \alpha$, is a vector with elements $\alpha_i - \alpha_{i+1}$. Higher order difference matrices can be computed by repeated differencing.
For an $n$-dimensional basis, the difference matrix $D_1$ is a $(n-1)\times n$ matrix with $d_{i,i} = 1$, $d_{i,i+1}=-1$ and zeros everywhere else. The matrices $D_1$ and $D_2$ are as follows: $$D_1 = \left( \begin{array}{rrrrr} 1 & -1 & & & \\ & 1 & -1 & & \\ & & \ldots & & \\ & & & 1 & -1 \\ \end{array} \right) ; D_2 = \left( \begin{array}{rrrrrr} 1 & -2 &1 & & &\\ & 1 & -2 & 1& &\\ & & \ldots & & & \\ & & & 1 & -2 & 1 \\ \end{array} \right)$$ *To enable a reduction to the linear case we propose a slightly different difference matrix $D_1$. We let $D_1$ be an $n\times n$ matrix with $d_{i,i} = 1, d_{i,i-1} = -1$*. This is the same as the first order difference matrix proposed by Eilers and Marx, with one more row added on top. The resulting difference matrices $D_1$ and $D_2 = D_1^2$ are both $n\times n$ matrices: $$D_1 = \left( \begin{array}{rrrrr} 1 & & & & \\ -1 & 1 & & & \\ & -1& 1 & & \\ & & \ldots & & \\ & & -1& 1 & \\ & & & -1 & 1 \\ \end{array} \right); D_2 = \left( \begin{array}{rrrrrr} 1 & & & & &\\ -2 & 1 & & & &\\ 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & & \ldots & & \\ & & & 1& -2 & 1 \\ \end{array} \right)$$ The first row in $D_1$ has the effect of penalizing the norm of the first coefficient of the spline basis, which plays the role of regularization in the linear setting (e.g. ridge regression, linear SVMs, etc.). Alternatively, one can think of this as an additional basis function at the leftmost point with its coefficient set to zero. *The key advantage is that the matrix $D_1$ is invertible and has a particularly simple form which allows us to linearize the whole system*. We will also show in Section \[sec:add\_rkhs\] that the derived embeddings also approximate the learning problem of a kernel SVM classifier using the $\min$ kernel ($K_{\min}$) for a particular choice of spline basis.
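The modified difference matrices are easy to build and sanity-check numerically; a short sketch (the helper name `diff_matrix` is ours):

```python
import numpy as np

def diff_matrix(n, d=1):
    """Modified n x n difference matrix of the text: D_1 has d_{i,i} = 1 and
    d_{i,i-1} = -1, and higher orders are powers, D_d = D_1^d (our helper)."""
    D1 = np.eye(n) - np.eye(n, k=-1)
    return np.linalg.matrix_power(D1, d)

D1, D2 = diff_matrix(5, 1), diff_matrix(5, 2)

# D_2 is the square of D_1 ...
assert np.allclose(D2, D1 @ D1)
# ... and D_1 is unit lower triangular, hence invertible (det = 1),
# which is the property that makes the reduction to a linear problem possible.
assert np.isclose(np.linalg.det(D1), 1.0)
```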
$$K_{\min}(\mathbf{x},\mathbf{y}) = \sum_i \min(x_i,y_i)$$ Given the choice of the regularization matrix $D_d$, which is invertible, one can linearize the whole system by re-parametrizing $\mathbf{w}$ by $D_d^{-1}\mathbf{w}$, which results in: $$c(w) = \frac{\lambda}{2} \mathbf{w}'\mathbf{w} + \frac{1}{n}\sum_k \max \left(0, 1 - y^k\left(\mathbf{w}'D_d^{'-1}\boldsymbol \phi(x^k)\right)\right)$$ Therefore the whole classifier is linear in the features $\boldsymbol \phi^d(x^k) = D_d^{'-1}\boldsymbol \phi(x^k)$, i.e. the optimization problem is equivalent to $$c(w) = \frac{\lambda}{2} \mathbf{w}'\mathbf{w} + \frac{1}{n}\sum_k \max \left(0, 1 - y^k\left(\mathbf{w}' \boldsymbol \phi^d(x^k)\right)\right)$$ The inverse matrices $D_1^{'-1}$ and $D_2^{'-1}$ are both upper triangular matrices. The matrix $D_1^{'-1}$ has entries $d_{i,j} = 1, j\geq i$ and $D_2^{'-1} = D_1^{'-2}$ has entries $d_{i,j} = j-i+1, j\geq i$, and they look like: $$D_1^{'-1} = \left( \begin{array}{cccccc} 1 & 1 &1 & \ldots & 1 & 1 \\ & 1 & 1 & \ldots & 1 & 1\\ & & 1 & \ldots & 1 & 1\\ & & \ldots & \ldots & \ldots & \\ & & & & 1 & 1\\ & & & & &1 \\ \end{array}\right); D_2^{'-1} = \left( \begin{array}{cccccc} 1 & 2 &3 & \ldots & n-1 & n \\ & 1 & 2 & \ldots & n-2 & n-1\\ & & 1 & \ldots & n-3 & n-2\\ & & \ldots & \ldots & \ldots & \\ & & & & 1 & 2\\ & & & & &1 \\ \end{array} \right)$$ We refer the reader to [@eilers2005splines] for an excellent review of additive modeling using splines. Figure \[fig:bsplinebasis\] shows $\boldsymbol \phi^d$ for various choices of the regularization degree $d=0,1,2$ and of the B-Spline basis: linear, quadratic and cubic.
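With the re-parametrization above, training reduces to a plain linear SVM on $\boldsymbol \phi^d$. A minimal end-to-end sketch on a one dimensional toy problem (helper names, hyper-parameters and the toy data are ours; the paper uses dedicated solvers such as LIBLINEAR rather than this simple subgradient loop):

```python
import numpy as np

N = 20  # number of spline intervals on [0, 1) (our choice)

def hat_features(x):
    """Linear B-spline (hat) encoding phi(x) with N+1 uniformly spaced bases."""
    Phi = np.zeros((len(x), N + 1))
    r = np.floor(N * x).astype(int)
    a = N * x - r
    Phi[np.arange(len(x)), r] = 1 - a
    Phi[np.arange(len(x)), r + 1] = a
    return Phi

# D_1'^{-1} is upper triangular with ones, so phi^1 = D_1'^{-1} phi is a
# cumulative ("unary") encoding; the problem becomes a standard linear SVM.
D1 = np.eye(N + 1) - np.eye(N + 1, k=-1)
assert np.allclose(np.linalg.inv(D1.T), np.triu(np.ones((N + 1, N + 1))))

rng = np.random.default_rng(0)
x = rng.random(500)
y = np.where((x > 0.3) & (x < 0.7), 1.0, -1.0)   # not linearly separable in x
Phi1 = hat_features(x) @ np.linalg.inv(D1)       # rows are phi^1(x_k)'

w, lam = np.zeros(N + 1), 1e-3
for t in range(1, 2001):                         # subgradient descent on c(w)
    viol = y * (Phi1 @ w) < 1
    grad = lam * w - (y[viol, None] * Phi1[viol]).sum(axis=0) / len(x)
    w -= 0.5 / np.sqrt(t) * grad

acc = np.mean(np.sign(Phi1 @ w) == y)
assert acc > 0.9  # the additive classifier fits the nonlinear band
```

The learned $f(x) = \mathbf{w}'\boldsymbol \phi^1(x)$ is piecewise linear with knots at the spline centers, which is exactly the class of functions the embedding can represent.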
![\[fig:bsplinebasis\] Local basis functions for linear (left), quadratic (middle) and cubic (right) B-Splines for various regularization degrees $d$. In each figure $\boldsymbol \phi^d$ refers to the dense features $D_d^{'-1}\boldsymbol \phi$. When $d=0$, the functions shown are the local B-Spline basis. When $d=r+1$, where $r$ is the degree of the B-Spline basis, $\boldsymbol \phi^d$ is the truncated polynomial basis $(x-\tau_i)_+^r$ (see Section \[sec:add\_rkhs\]).](fig/linear_bspline_basis.eps "fig:"){width="0.3\linewidth"} ![\[fig:bsplinebasis\]](fig/quadratic_bspline_basis.eps "fig:"){width="0.3\linewidth"} ![\[fig:bsplinebasis\]](fig/cubic_bspline_basis.eps "fig:"){width="0.3\linewidth"}

Generalized Fourier Embeddings
------------------------------

Generalized Fourier expansion provides an alternative way of fitting additive models.
Let $\psi_1(x), \psi_2(x),\ldots, \psi_n(x)$ be an orthogonal basis system in the interval $[a,b]$ with respect to a weight function $w(x)$, i.e. we have $\int_a^b \psi_i(x) \psi_j(x) w(x)dx = 0, i\neq j$. Given a function $f(x)= \sum_i a_i \psi_i(x)$, the regularization can be written as: $$\int_{a}^bf^{d}(x)^2 w(x) dx = \int_{a}^b \left( \sum_i a_i \psi_i^d(x) \right)^2w(x)dx = \int_{a}^b \left( \sum_{i,j} a_i a_j \psi_i^d(x)\psi_j^d(x) \right)w(x)dx$$ *Consider an orthogonal family of basis functions which are differentiable and whose derivatives are also orthogonal*. One can normalize the basis such that $\int_a^b \psi_i^d(x) \psi_j^d(x) w(x)dx = \delta_{ij}$. In this case the regularization has a simple form: $$\int_{a}^bf^{d}(x)^2 w(x) dx = \int_{a}^b \left( \sum_{i,j} a_i a_j \psi_i^d(x)\psi_j^d(x) \right)w(x)dx = \sum_i a_i^2$$ Thus the overall regularized additive classifier can be learned by learning a linear classifier in the embedded space $\psi(x)$. In practice one can approximate the scheme by using a small number of basis functions. We propose two practical ones with closed form embeddings:

#### Fourier basis.

The classic Fourier basis functions $\{1,\cos(\pi x),\sin(\pi x),\cos(2\pi x),\sin(2\pi x),\ldots\}$ are orthogonal in $[-1, 1]$ with respect to the weight function $w(x) = 1$. The derivatives are also in the same family (except for the constant basis function), hence are also orthonormal. The normalized feature embeddings for $d=1,2$ are shown in Table \[table:fourier\_features\].

#### Hermite basis.

Hermite polynomials are also an orthogonal basis system with orthogonal derivatives with respect to the weight function $e^{-x^2/2}$. Using the identity: $$\int_{-\infty}^{\infty} H_m(x) H_n(x) e^{-x^2/2} dx = \sqrt{2\pi}n!\delta_{mn}$$ and the property that $H_n' = nH_{n-1}$ (Appell sequence), one can obtain closed form features for $d=1,2$ as shown in Table \[table:fourier\_features\].
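The orthogonality that makes this work is easy to verify numerically; below is a sketch (the function name and the normalization details are our own) that builds the $d=1$ Fourier embedding and checks that the basis functions are orthogonal on $[-1,1]$:

```python
import numpy as np

def fourier_embedding(x, n_basis=3, d=1):
    """Fourier features phi_n^d on [-1, 1]: cos/sin pairs scaled by 1/n^d so
    that the derivative penalty becomes the squared norm of the linear weights.
    The constant basis has zero derivative and carries no penalty (our helper)."""
    x = np.asarray(x, dtype=float)
    feats = [np.ones_like(x)]
    for n in range(1, n_basis + 1):
        feats.append(np.cos(n * np.pi * x) / n**d)
        feats.append(np.sin(n * np.pi * x) / n**d)
    return np.stack(feats, axis=-1)

# Check orthogonality of the basis on [-1, 1] with trapezoidal quadrature.
xs = np.linspace(-1.0, 1.0, 4001)
wts = np.full_like(xs, xs[1] - xs[0])
wts[0] = wts[-1] = 0.5 * (xs[1] - xs[0])
F = fourier_embedding(xs)
G = F.T @ (F * wts[:, None])          # Gram matrix of pairwise inner products
assert np.abs(G - np.diag(np.diag(G))).max() < 1e-4
```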
It is also known that any family of polynomial basis functions which is orthogonal and whose derivatives are also orthogonal belongs to one of three families: Jacobi, Laguerre or Hermite [@webster1935orthogonal]. The extended support of the weight function of the Hermite basis makes it well suited for additive modeling. Although both these bases are complete, for practical purposes one has to use the first few basis functions. The quality of approximation depends on how well the underlying function can be approximated by the chosen basis; for example, low degree polynomials are better represented by the Hermite basis.

| Fourier | Hermite |
|---|---|
| $x \in [-1,1]$, $w(x) = 1$ | $x \sim N(0,1)$, $w(x) = e^{-x^2/2}$ |
| $\phi_n^1(x) = \{ \frac{\cos(n\pi x)}{n}, \frac{\sin(n\pi x)}{n} \}$ | $\phi_n^1(x) = \frac{H_n(x)}{\sqrt{nn!}}$ |
| $\phi_n^2(x) = \{ \frac{\cos(n\pi x)}{n^2}, \frac{\sin(n\pi x)}{n^2} \}$ | $\phi_1^2(x) = \phi_1^1(x)$, $\phi_n^2(x) = \frac{H_n(x)}{\sqrt{n(n-1)n!}}$, $n > 1$ |

Additive Kernel Reproducing Kernel Hilbert Space & Spline Embeddings {#sec:add\_rkhs}
====================================================================

We begin by showing the close resemblance of the spline embeddings to the $\min$ kernel. To see this, let the features in $[0,1)$ be represented with $N+1$ uniformly spaced linear spline bases centered at $0, \frac{1}{N}, \frac{2}{N},\ldots, 1$. Let $r = \lfloor Nx \rfloor$ and let $\alpha = Nx - r$. Then the features $\boldsymbol \phi(x)$ are given by $\phi_r(x) = 1-\alpha, \phi_{r+1}(x) = \alpha$, and the features $\boldsymbol \phi^1(x)$ for the $D_1$ matrix are given by $\phi^1_i(x) = 1$ if $i \leq r$ and $\phi^1_{r+1}(x) = \alpha$. It can be seen that these features closely approximate the $\min$ kernel, i.e.
$$\frac{1}{N}\boldsymbol \phi^1(x)'\boldsymbol \phi^1(y) \approx \min(x,y) + 1$$ The features $\boldsymbol \phi^1(x) = D_1^{'-1} \boldsymbol \phi(x)$ construct a unary-like representation where the number of ones equals the position of the bin of $x$. One can verify that for a B-Spline basis of degree $r$ ($r=1,2,3$), the following holds: $$\frac{1}{N}\boldsymbol \phi^1(x)' \boldsymbol \phi^1(y) = \min(x,y) + \frac{r+1}{2}, \mbox{ if } |x-y| \geq \frac{r}{N}$$ Define $K^r_{d}$ to be the kernel corresponding to a B-Spline basis of degree $r$ and regularization matrix $D_d$: $$K^r_{d}(x,y) = \frac{1}{N}\boldsymbol \phi^d(x)' \boldsymbol \phi^d(y) - \frac{r+1}{2} = \frac{1}{N}\boldsymbol \phi(x)' D_d^{-1}D_d^{'-1} \boldsymbol \phi(y) - \frac{r+1}{2}$$ Figure \[fig:approx\_min\_kernel\] shows $K^r_1$ for $r=1,2,3$, corresponding to a linear, quadratic and cubic B-Spline basis. In a recent paper, Maji and Berg [@maji2009max] proposed using a linear spline basis and $D_1$ regularization to train approximate intersection kernel SVMs, which in turn approximate arbitrary additive classifiers. Our features can be seen as a generalization of this work which allows arbitrary spline bases and regularizations. B-Splines are closely related to the truncated polynomial kernel [@wahba1990spline; @pearce2006penalized], which consists of uniformly spaced knots $\tau_1, \ldots, \tau_n$ and truncated polynomial features: $$\phi_i(x) = (x-\tau_i)^p_+$$ However, these features are not as numerically stable as the B-Spline basis (see [@eilers2005splines] for an experimental comparison). Truncated polynomials of degree $k$ correspond to a B-Spline basis of degree $k$ and $D_{k+1}$ regularization, i.e., the same as the $K^k_{k+1}$ kernel, when the knots are uniformly spaced. This is because B-Splines are derived from the truncated polynomial basis by repeated application of the difference matrix $D_1$ [@de2001practical].
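The unary-like construction above is easy to reproduce numerically. The sketch below (function names ours) builds the linear spline features and their $D_1^{'-1}$ transform; with the scaling used here the scaled inner product tracks $\min(x,y)$ up to an additive term that vanishes as $N$ grows (the exact constant offset depends on the chosen normalization):

```python
import numpy as np

def linear_spline_features(x, N):
    """phi(x): linear spline basis on [0,1) with N+1 knots at i/N."""
    r = int(np.floor(N * x))
    alpha = N * x - r
    phi = np.zeros(N + 1)
    phi[r], phi[r + 1] = 1.0 - alpha, alpha
    return phi

def d1_transform(phi):
    """phi^1 = D_1'^{-1} phi: suffix sums, yielding a unary-like vector."""
    return np.cumsum(phi[::-1])[::-1]

N = 100
rng = np.random.default_rng(0)
for _ in range(20):
    x, y = rng.random(2) * 0.99
    k = d1_transform(linear_spline_features(x, N)) @ d1_transform(linear_spline_features(y, N)) / N
    assert abs(k - min(x, y)) <= 2.0 / N    # min-kernel behavior up to O(1/N)
```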
As noted by the authors in [@eilers2005splines], one of the advantages of the P-Spline formulation is that it decouples the order of the regularization from the B-Spline basis. Typically $D_1$ regularization provides sufficient smoothness in our experiments.

![image](fig/kernel_min.eps){width="0.135\linewidth"} ![image](fig/kernel_linear_d1.eps){width="0.135\linewidth"} ![image](fig/kernel_deltalinear_d1.eps){width="0.135\linewidth"} ![image](fig/kernel_quadratic_d1.eps){width="0.135\linewidth"} ![image](fig/kernel_deltaquadratic_d1.eps){width="0.135\linewidth"} ![image](fig/kernel_cubic_d1.eps){width="0.135\linewidth"} ![image](fig/kernel_deltacubic_d1.eps){width="0.135\linewidth"}

$K_{\min}$ $K^1_{1}$ $K_{\min}-K^1_{1}$ $K^2_{1}$ $K_{\min}-K^2_{1}$ $K^3_{1}$ $K_{\min}-K^3_{1}$

Optimizations for Efficient Learning for B-Spline embeddings {#sec:sparse_embeddings} ============================================================ For the B-Spline basis one can exploit sparsity to speed up linear solvers. The classification function is based on evaluating $\mathbf{w}'D_d^{'-1}\phi(x)$.
Most methods for training linear classifiers are based on evaluating the classifier and updating it when the classification is incorrect. Since the number of such evaluations is larger than the number of updates, it is much more efficient to maintain $\mathbf{w}_d=D_d^{-1}\mathbf{w}$ and use sparse vector multiplication. Updates to the weight vector $\mathbf{w}$ and $\mathbf{w}_d$ for various gradient descent algorithms look like: $$\mathbf{w}^t \leftarrow \mathbf{w}^{t-1} - \eta D_d^{'-1}\phi(x^k),~~~~\mathbf{w}_d^t = \mathbf{w}_d^{t-1} - \eta L_d \phi(x^k)$$ where $\eta$ is the step size and $L_d = D_d^{-1}D_d^{'-1}$. Unlike the matrix $D_d'D_d$, the matrix $L_d$ is dense, and hence the updates to $\mathbf{w}_d$ may change all the entries of $\mathbf{w}_d$. However, one can compute $L_d\boldsymbol \phi(x)$ in $2dn$ steps instead of $n^2$ steps by exploiting the simple form of $D_1^{'-1}$. Initialize $a_i = \phi_i(x)$, then repeat step A $d$ times, followed by step B $d$ times, to compute $L_d\phi(x)$. $$\begin{aligned} \mbox{step A} &:& a_i = a_i + a_{i+1}, i = n-1 \mbox{ to } 1\\ \mbox{step B} &:& a_i = a_i + a_{i-1}, i = 2 \mbox{ to } n \end{aligned}$$ Experiments =========== Often, on large datasets consisting of very high dimensional features, to avoid the memory bottleneck one may compute the encodings in the inner loop of the training algorithm. We refer to this as the “online" method. Our solver is based on `LIBLINEAR`, but can easily be used with any other solver. The custom solver allows us to exploit the sparsity of the embeddings (Section \[sec:sparse\_embeddings\]). A practical regularization with the B-Spline embeddings is $D_0 = I$, where $I$ is the identity matrix, which leads to sparse features. This makes it difficult to estimate the weights on basis functions which have no data, but one can use a higher-order B-Spline basis to somewhat mitigate this problem.
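The step A / step B recursion for $L_d\boldsymbol\phi(x)$ given in Section \[sec:sparse\_embeddings\] can be sanity-checked against a dense computation of $L_d = D_d^{-1}D_d^{'-1}$; in this sketch $D_1$ is taken to be the first-order finite-difference matrix (our assumption about its exact form):

```python
import numpy as np

def fast_Ld_phi(phi, d):
    """Compute L_d @ phi in O(d*n) via suffix (step A) and prefix (step B) sums."""
    a = np.array(phi, dtype=float)
    n = len(a)
    for _ in range(d):                      # step A: a_i += a_{i+1}, i = n-1 .. 1
        for i in range(n - 2, -1, -1):
            a[i] += a[i + 1]
    for _ in range(d):                      # step B: a_i += a_{i-1}, i = 2 .. n
        for i in range(1, n):
            a[i] += a[i - 1]
    return a

n, d = 8, 2
D1 = np.eye(n) - np.eye(n, k=-1)            # first-order difference matrix (assumed form)
Dd = np.linalg.matrix_power(D1, d)
Ld = np.linalg.inv(Dd) @ np.linalg.inv(Dd.T)
phi = np.random.default_rng(1).random(n)
assert np.allclose(fast_Ld_phi(phi, d), Ld @ phi)
```

Since $D_1^{-1}$ is a cumulative sum (and $D_1^{'-1}$ a reverse cumulative sum), each pass costs $n$ additions, giving the $2dn$ total quoted above.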
We present image classification experiments on two image datasets, `MNIST` [@lecun1998mnist] and Daimler Chrysler (`DC`) pedestrians [@munder2006experimental]. On these datasets, SVM classifiers based on the histogram intersection kernel outperform a linear SVM classifier [@maji2009max; @maji09fast] when used with features based on a spatial pyramid of histograms of oriented gradients [@dalal2005histograms; @lazebnik2006beyond]. We obtain the features from the authors’ website for our experiments. The `MNIST` dataset has $60,000$ instances and the features are $2172$ dimensional and dense, leading to $130,320,000$ non-zero entries. The `DC` dataset has three training sets and two test sets. Each training set has $19,800$ instances and the features are $656$ dimensional and dense, leading to $12,988,800$ entries. These sizes are typical of image datasets, and training kernel SVM classifiers often takes several hours on a single machine. #### Toy Dataset. The points are sampled uniformly on a 2D grid $[-1,1]\times[-1,1]$, with points satisfying $x^2 + y^2 \leq 1$ in the positive class and the rest in the negative class. Figure \[fig:toy-dataset\](b) shows the fits to the data along the $x$ (or $y$) dimension using $4$ uniformly spaced B-Spline basis functions of various degrees and regularizations. The quadratic and cubic splines offer smoother fits of the data. Figure \[fig:toy-dataset\](c,d) shows the learned functions using Fourier and Hermite embeddings of various degrees, respectively.
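The toy dataset described above can be regenerated in a few lines (the grid resolution is our own choice):

```python
import numpy as np

g = np.linspace(-1.0, 1.0, 41)                      # uniform grid on [-1,1] x [-1,1]
X = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
y = np.where((X ** 2).sum(axis=1) <= 1.0, 1, -1)    # unit disk -> positive class
```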
![image](fig/toy_data.eps){height="1.2in"} ![image](fig/toy_bspline_fit.eps){height="1.2in"} ![image](fig/toy_fourier_fit.eps){height="1.2in"} ![image](fig/toy_hermite_fit.eps){height="1.2in"}

(a) (b) (c) (d)

#### Effect of B-Spline parameter choices. Table \[tab:DCped\] shows the accuracy and training times as a function of the number of bins, the regularization degree ($D_r, r=0,1,2$) and the B-Spline basis degree ($d=1,2,3$) on the first split of the DC pedestrian dataset. We set $C=1$ and the bias term $B=1$ for training all the models. On this dataset we find that $r=0,1$ is more accurate than $r=2$ and is significantly faster. *In further experiments we only include the results of $r=0,1$ and $d=1,3$.* In addition, setting the regularization to zero ($r=0$) leads to very sparse features and can be used directly with any linear solver which can exploit this sparsity. The training time for B-Splines scales sub-linearly with the number of bins, hence better fits of the functions can be obtained without much loss in efficiency.
  -------- ------------------------------- ---------------------------------------- -----------------------------------------
  Degree   $D_0$                           $D_1$                                    $D_2$
  1        $\mathbf{06.60s}$ $(89.55\%)$   $\mathbf{20.27s}$ $(89.68\%)$            $\mathbf{041.60}s$ $(89.93\%)$
  2        $08.74s$ $(\mathbf{90.45\%})$   $30.47s$ $(\mathbf{90.20}\%)$            $080.25s$ $(\mathbf{89.94}\%)$
  3        $11.68s$ $(90.03\%)$            $49.85s$ $(89.93\%)$                     $143.50s$ $(88.57\%)$
  1        $\mathbf{05.61s}$ $(90.42\%)$   $\mathbf{23.06}s$ $(90.86\%)$            $\mathbf{077.99}s$ $(\mathbf{89.43}\%)$
  2        $08.10s$ $(\mathbf{90.69}\%)$   $29.97s$ $(\mathbf{90.73\%})$            $126.03s$ $(89.23\%)$
  3        $11.59s$ $(90.48\%)$            $42.26s$ $(90.67\%)$                     $193.47s$ $(89.14\%)$
  1        $\mathbf{05.96s}$ $(90.23\%)$   $\mathbf{32.43}s$ $(\mathbf{91.20\%})$   $\mathbf{246.87}s$ $(\mathbf{89.06}\%)$
  2        $07.26s$ $(90.34\%)$            $34.99s$ $(91.10\%)$                     $328.32s$ $(88.89\%)$
  3        $10.08s$ $(\mathbf{90.39\%})$   $42.88s$ $(91.00\%)$                     $429.57s$ $(88.92\%)$
  -------- ------------------------------- ---------------------------------------- -----------------------------------------

#### Effect of Fourier embedding parameter choices. Table \[tab:dcped\_fourier\] shows the accuracy and training times for various Fourier embeddings on the `DC` dataset. Before computing the generalized Fourier features, we first normalize the data in each dimension to $[-1,1]$ using: $$x \leftarrow \frac{x -\mu}{\delta}, \mbox{where, } \mu = \frac{x_{\max} + x_{\min}}{2}, \delta = \frac{x_{\max} - x_{\min}}{2}$$ We precompute the features and use `LIBLINEAR` to train various models, since it is relatively more expensive to compute the features online. In this case the training times are similar to those of the B-Spline models. However, precomputing and storing the features may not be possible on very large scale datasets.
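A minimal sketch of the normalization and the generalized Fourier embedding just described (function names ours; only the cosine/sine features of Table \[table:fourier\_features\] are kept):

```python
import numpy as np

def normalize(X):
    """Affine map of each dimension to [-1, 1], as in the text."""
    mu = (X.max(axis=0) + X.min(axis=0)) / 2
    delta = (X.max(axis=0) - X.min(axis=0)) / 2
    return (X - mu) / delta

def fourier_embed(X, n_basis=4, d=1):
    """Generalized Fourier features {cos(n pi x)/n^d, sin(n pi x)/n^d}."""
    cols = []
    for n in range(1, n_basis + 1):
        cols.append(np.cos(n * np.pi * X) / n ** d)
        cols.append(np.sin(n * np.pi * X) / n ** d)
    return np.concatenate(cols, axis=1)

X = np.random.default_rng(2).random((100, 5)) * 7 - 3
Z = normalize(X)
F = fourier_embed(Z, n_basis=4, d=1)
assert np.allclose(Z.min(axis=0), -1) and np.allclose(Z.max(axis=0), 1)
assert F.shape == (100, 5 * 2 * 4)          # 2 features per frequency per dimension
```

The resulting matrix `F` can be fed directly to any linear solver such as `LIBLINEAR`.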
  -------- -------------------- ------------------ -------------------- ------------------ -------------------- ------------------ -------------------- ------------------
  Degree   Accuracy             Time               Accuracy             Time               Accuracy             Time               Accuracy             Time
  1        $88.94\%$            $\mathbf{07.0s}$   $88.94\%$            $\mathbf{07.0s}$   $84.17\%$            $\mathbf{02.8s}$   $84.17\%$            $\mathbf{02.8s}$
  2        $89.59\%$            $10.2s$            $89.64\%$            $10.2s$            $88.01\%$            $04.6s$            $88.01\%$            $04.6s$
  3        $88.99\%$            $12.7s$            $89.77\%$            $12.8s$            $88.22\%$            $07.7s$            $88.70\%$            $09.9s$
  4        $\mathbf{89.77}\%$   $16.0s$            $\mathbf{89.84}\%$   $15.9s$            $\mathbf{89.00\%}$   $12.6s$            $\mathbf{89.05\%}$   $11.9s$
  -------- -------------------- ------------------ -------------------- ------------------ -------------------- ------------------ -------------------- ------------------

#### Comparison of various additive models. Table \[tab:dcped\_full\] shows the accuracy and training times of various additive models compared to linear and the more expensive $\min$ kernel SVM on all the $6$ training and test set combinations of the `DC` dataset. The optimal parameters were found on the first training and test set. The additive models are up to $50\times$ faster to train and are as accurate as the $\min$ kernel SVM. The B-Spline additive models significantly outperform a linear SVM on this dataset at the expense of a small additional training time. Table \[tab:mnist\_full\] shows the accuracies and training times of various additive models on the `MNIST` dataset. We train one-vs-all classifiers for each digit, and the classification scores are normalized by passing them through a logistic function. During testing, each example is given the label of the classifier with the highest response. The optimal parameters for training were found using 2-fold cross validation on the training set. Once again the additive models significantly outperform the linear classifier and closely match the accuracy of the $\min$ kernel SVM, while being $50\times$ faster.
  Method                       Test Accuracy               online            batch
  ---------------------------- --------------------------- ----------------- ----------------------------
  SVM (linear) + `LIBLINEAR`   $81.49$ ($1.29$)
  SVM ($\min$) + `LIBSVM`      $89.05$ ($1.42$)
  B-Spline $(r=0,d=1,n=05)$    $88.51$ ($1.35$)            $\mathbf{5.9}$s   -
  B-Spline $(r=0,d=3,n=05)$    $89.00$ ($1.44$)            $10.8$s           -
  B-Spline $(r=1,d=1,n=10)$    $\mathbf{89.56}$ ($1.35$)   $17.2$s           -
  B-Spline $(r=1,d=3,n=10)$    $89.25$ ($1.39$)            $19.2$s           -
  Fourier $(r=1,d=2)$          $88.44$ ($1.43$)            $159.9$s          $12.7$s ($4\times$ memory)
  Hermite $(r=1,d=4)$          $87.67$ ($1.26$)            $35.5$s           $12.6$s ($4\times$ memory)

  Method                       Test Error          Training Time
  ---------------------------- ------------------- -----------------
  SVM (linear) + `LIBLINEAR`   $1.44\%$            $6.2$s
  SVM ($\min$) + `LIBSVM`      $0.79\%$            $\sim2.5$ hours
  B-Spline $(r=0,d=1,n=20)$    $0.88\%$            $31.6$s
  B-Spline $(r=0,d=3,n=20)$    $0.86\%$            $51.6$s
  B-Spline $(r=1,d=1,n=40)$    $\mathbf{0.81\%}$   $157.7$s
  B-Spline $(r=1,d=3,n=40)$    $0.82\%$            $244.9$s
  Hermite $(r=1,d=4)$          $1.06\%$            $358.6$s

Conclusion ==========
--- abstract: 'We investigate the response of the bound state structure of a two-boson system, within a Yukawa model with a scalar boson exchange, to the inclusion of the cross-ladder contribution to the ladder kernel of the Bethe-Salpeter equation. The equation is solved by means of the Nakanishi integral representation and light-front projection. The valence light-front wave function and the elastic electromagnetic form factor beyond the impulse approximation, with the inclusion of the two-body current generated by the cross-ladder kernel, are computed. The valence wave function and electromagnetic form factor, considering both ladder and ladder plus cross-ladder kernels, are studied in detail. Their asymptotic forms are found to be quite independent of the inclusion of the cross-ladder kernel, for a given binding energy. The asymptotic decrease of the form factor agrees with the counting rules. This analysis can be generalized to fermionic systems, with a wide application in the study of the meson structure.' address: - 'Laboratório de Física Teórica e Computacional - LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo, Brazil' - | Instituto Tecnológico de Aeronáutica, DCTA, 12228-900,\ São José dos Campos, Brazil - 'Instituto de Física Teórica, UNESP, 01156-970, São Paulo, Brazil' - 'Lebedev Physical Institute, Leninsky Prospekt 53, 119991 Moscow, Russia' author: - 'V. Gigante' - 'J. H. Alvarenga Nogueira' - 'E. Ydrefors' - 'C. Gutierrez' - 'V.A. Karmanov' - 'T. Frederico' title: Bound state structure and electromagnetic form factor beyond the ladder approximation --- Relativistic bound states, Bethe-Salpeter equation, Minkowski space, light-front wave function, electromagnetic form factor The investigation of fundamental interactions faces the challenge of obtaining theoretically the properties of relativistic bound systems in Minkowski space.
One relevant present example is the introduction of quasi-parton distributions calculated with moving hadrons in Euclidean lattice QCD at large longitudinal momentum, to match with parton distribution functions (PDFs) in the infinite momentum frame [@XJiPRL13]. On the other hand, recent tools are being introduced to investigate the spectrum and the Minkowski space structure of composite systems within the continuum approach to bound states in field theory, without resorting to the Wick rotation in the Bethe-Salpeter (BS) equation. One technique to solve bound and scattering problems within the BS approach relies on the Nakanishi integral representation (NIR) [@nakanishi]. This method was introduced about two decades ago in [@KusPRD95], and further developed in [@KarEPJA06; @CarEPJA06], where the projection onto the light-front (LF) was used as an essential step to simplify the formalism. Later on, it was extended to scattering states [@FrePR12], and computations using a convenient polynomial basis expansion were provided in [@FrePRD14; @FreEPJC15]. Efforts were also undertaken to invert the NIR for the Euclidean BS amplitude, in order to find the Nakanishi weight function and use it to reconstruct the BS amplitude in Minkowski space [@FCGK_LC_2015]. Applications to bound fermionic systems were also made [@CarEPJA10; @dPaPRD16]. The method has been shown to be reliable for studying the spectrum and the Minkowski space structure of a relativistic two-boson system in the ladder approximation [@GutPLB16]. Impact parameter space amplitudes (see e.g. [@BurIJMPA03]) can be derived from the LF valence wave function of the ground and excited states. Together with the asymptotic form of the LF wave function at large transverse momentum, the impact parameter space representation at large distances was studied within the ladder approximation in [@GutPLB16]. The Minkowski space approach has been extended beyond the ladder exchange in Ref.
[@CarEPJA06], where the cross-ladder contribution to the kernel of the BS equation for the two-boson bound state was considered. The binding energy as a function of the coupling constant was also computed in the Euclidean space approach, within the Feynman-Schwinger framework of the Yukawa model for two-boson bound states [@NiePRL96], where all possible cross-ladder diagrams were taken into account. This work shows quite clearly the extra attraction provided by adding the infinite set of diagrams to the kernel of the corresponding BS equation. Indeed, the consideration of only the lowest-order cross-ladder diagram gives a considerable net attraction in the two-boson bound state (see e.g. [@CarEPJA06]). Therefore, it is natural to expect that the dynamics beyond the ladder exchange is reflected not only in the binding energy but also in the Minkowski space structure of the bound state. This is modified by the higher-order contributions to the kernel of the BS equation, even if the binding energy is kept fixed by changing the coupling constant. The interesting question is how the dynamics beyond the ladder exchange contributes quantitatively to the asymptotic behavior of both the valence LF wave function, associated with the PDFs, and the elastic electromagnetic (EM) structure. In this paper we study the two-boson bound state structure in the Yukawa model, considering the ladder and ladder plus cross-ladder kernels, to investigate quantitatively the LF wave function and the elastic EM form factor. To satisfy gauge invariance, the cross-ladder graph must also contribute to the EM current of the bound pair as a two-body current, which is also considered in this work. These observables are intrinsically connected with the Minkowski space structure of the bound state.
The solution of the homogeneous two-boson BS equation in Minkowski space is found numerically by transforming it into a non-singular integral equation for the weight function provided by the NIR of the BS amplitude, using the technique proposed in [@KarEPJA06; @CarEPJA06] and further developed in [@FrePRD14]. To put our work in a broader perspective, we recall that in the realm of the quark counting rules [@MatLNC73; @BrodPRL73] within perturbative QCD, the leading asymptotic large-momentum form of amplitudes for exclusive processes was derived, in particular for elastic form factors [@LepPRD80; @BrodPRL85]. Higher-twist contributions to the associated amplitudes are subleading [@LepPRD80]. These ideas applied to a spin-1 two-fermion bound state resulted in the “universal ratios” [@Brodsky:1992px] between the leading asymptotic contributions to the elastic EM form factors. Later on, subleading power corrections were considered in the EM form factors of the deuteron [@KobPRD94; @KobPAN95] and the $\rho$-meson [@MelPLB16], and in exclusive processes [@Carson2003], consistent with the LF angular conditions. Furthermore, the quark counting rules were generalized in Ref. [@XJiPRL03] for the leading hard transverse momentum dependence of the Fock components of the hadronic LF wave function in terms of the parton number, the orbital angular momentum along the z-direction and the hadron helicity. The present study is a preparation for future applications to explore the nonperturbative physics of QCD. We address the issue of how the asymptotic behavior from counting rules is formed, qualitatively and quantitatively, considering the NIR of the BS amplitude, both in the valence LF wave function and in the EM form factor. In addition, we study quantitatively the elastic two-body current associated with a cross-ladder term, which is a higher-twist contribution to the form factor.
Our aim is to determine how it is damped with respect to the leading term in the present nonperturbative calculation of the bound state. Another aspect analyzed here is the question of how the asymptotic behavior of the form factor and LF wave function changes with the modified kernel. For this study we consider a fixed binding energy, which keeps the low momentum behavior of these quantities quite independent of the kernel choice, allowing us to focus on the high momentum region independently of the binding energy. This work is organized as follows. In Sect. \[sect:BSNIR\], the BS equation and the NIR of the BS amplitude are briefly introduced. In Sect. \[sect:LFWF\] the valence LF wave function is studied in detail with respect to its asymptotic form and the role of the ladder exchange in forming the leading large momentum behavior. In Sect. \[sect:SLEMFF\] the space-like EM form factor is introduced and current conservation is discussed. The numerical results for the impulse and two-body current contributions to the form factor are presented, and the dominance of the ladder exchange is discussed quantitatively. The asymptotic behavior of the form factor is derived in Sect. \[sect:ASYFF\], where we also illustrate it numerically. In Sect. \[sect:SUMOUT\], we provide the summary and an outlook for future developments of our work. Bethe-Salpeter Equation and Nakanishi Integral Representation {#sect:BSNIR} ============================================================== The BS equation in Minkowski space, for two spinless particles, reads: $$\label{bs} \Phi(k,p)=S\left(\frac{p}{2}+k\right)\,S\left(\frac{p}{2}-k\right) \int \frac{d^4k'}{(2\pi)^4} \, iK(k,k',p)\Phi(k',p) \, ,$$ where the Feynman propagator is $S(k)=i \left[k^2-m^2+i\epsilon\right]^{-1}$. The interaction kernel $K$ is given by the sum of irreducible Feynman diagrams. The ladder kernel is considered in most works, but here we also incorporate the cross-ladder contribution.
The BS amplitude is found in the form of the NIR [@nakanishi; @KusPRD95]: $$\label{bsint} \Phi(k,p)=-{i}\int_{-1}^1dz\int_0^{\infty}d\gamma \frac{g(\gamma,z)}{D^3(\gamma,z;k,p)}\, ,$$ where the Nakanishi denominator is: $$\label{dennaka} D(\gamma,z;k,p)=\gamma+m^2-\frac{1}{4}M^2-k^2-p\cdot k\; z-i\epsilon \, .$$ The weight function $g(\gamma,z)$ itself is not singular, whereas the singularities of the BS amplitude are fully reproduced by this integral. The BS amplitude in the form (\[bsint\]) is substituted into the BS equation (\[bs\]) and, after some mathematical transformations [@KarEPJA06], one obtains the following integral equation for $g(\gamma,z)$: $$\label{bsnew} \int_0^{\infty}\frac{g(\gamma',z)d\gamma'}{\Bigl[\gamma'+\gamma +z^2 m^2+(1-z^2)\kappa^2\Bigr]^2} = \int_0^{\infty}d\gamma'\int_{-1}^{1}dz'\;V(\gamma,z,\gamma',z') g(\gamma',z'),$$ where for bound states $\kappa^2 = m^2- \frac{1}{4}M^2 > 0$ and $V$ is expressed via the kernel $K$. The ladder and ladder plus cross-ladder kernels in Eq. (\[bsnew\]) were worked out in detail in Refs. [@KarEPJA06] and [@CarEPJA06], respectively. The numerical method to solve Eq. (\[bsnew\]) was described in detail in [@FrePRD14; @GutPLB16], where a basis expansion in Laguerre polynomials for the noncompact variable and Gegenbauer polynomials for the compact one was proposed. It is noteworthy that the $s$-wave valence LF wave function is written in the form (we follow the convention of our previous paper [@FCGK_LC_2015]): $$\label{LFWF} \psi_{LF}(\gamma,\xi)=\frac{1-z^2}{4 }\,\int_0^{\infty}\frac{g(\gamma',z)d\gamma'}{\Bigl[\gamma'+\gamma +z^2 m^2+(1-z^2)\kappa^2\Bigr]^2} \, ,$$ where the transverse momentum is $k_\perp=\sqrt{\gamma}$ and the LF momentum fraction is $\xi=(1-z)/2$ with $0<\xi<1$.
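Eq. (\[LFWF\]) is a one-dimensional integral over $\gamma'$ and can be evaluated by simple quadrature once a weight function is given. The sketch below uses an illustrative toy weight $g(\gamma',z)=e^{-\gamma'}$ (our choice; it is not a solution of Eq. (\[bsnew\])) and sets $m=\kappa=1$:

```python
import numpy as np

def psi_lf(gamma, xi, g, m=1.0, kappa=1.0, gmax=60.0, npts=6001):
    """Valence LF wave function, Eq. (LFWF), by trapezoidal quadrature."""
    z = 1.0 - 2.0 * xi                       # from xi = (1 - z)/2
    gp = np.linspace(0.0, gmax, npts)
    integrand = g(gp, z) / (gp + gamma + z ** 2 * m ** 2 + (1 - z ** 2) * kappa ** 2) ** 2
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(gp)) / 2
    return (1 - z ** 2) / 4 * integral

g_toy = lambda gp, z: np.exp(-gp)            # toy Nakanishi weight (illustrative only)

vals = [psi_lf(ga, 0.5, g_toy) for ga in (0.1, 1.0, 10.0)]
assert vals[0] > vals[1] > vals[2] > 0       # monotone fall-off in gamma

# gamma^{-2} asymptotics of the wave function:
r = psi_lf(500.0, 0.5, g_toy) * 500.0 ** 2 / (psi_lf(1000.0, 0.5, g_toy) * 1000.0 ** 2)
assert abs(r - 1.0) < 0.05
```

Any non-singular $g(\gamma',z)$ with sufficient fall-off reproduces the $\gamma^{-2}$ tail, since for large $\gamma$ the denominator factorizes out of the integral.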
The physical or normal solutions of the BS equation are the ones for which the weight function has the symmetry property $g(\gamma,z)=g(\gamma,-z)$, which is reflected in the expected symmetry of the valence wave function for two identical bosons.

  $B/m$   $\mu/m$   $\alpha^{(L+CL)}$   $\alpha^{(L)}$   $\alpha^{(L)}/\alpha^{(L+CL)}$   $\psi^{(L)}_{LF}/\psi^{(L+CL)}_{LF}$
  ------- --------- ------------------- ---------------- -------------------------------- --------------------------------------
  1.5     0.15      4.1399              6.2812           1.5172                           1.5774
          0.50      5.1568              7.7294           1.4988                           1.5395
  1.0     0.15      3.5515              5.3136           1.4961                           1.5508
          0.50      4.5453              6.7116           1.4766                           1.5094
  0.5     0.15      2.5010              3.6106           1.4436                           1.4805
          0.50      3.4436              4.9007           1.4231                           1.4405
  0.1     0.15      1.1052              1.4365           1.2997                           1.2763
          0.50      1.9280              2.4980           1.2956                           1.2694

  : Comparison between the ratio of the coupling constants, given in terms of $\alpha=g^2/(16\pi m^2)$, corresponding to ladder (L) and ladder plus cross-ladder (L+CL) kernels, with the ratio of the LF wave functions in the asymptotic limit, namely for a large value of $\gamma=\,500m^2$, and with the particular choice $\xi$ = 1/2. For this analysis we use the normalization $\psi^{(L)}_{LF}(0,1/2)=\psi^{(L+CL)}_{LF}(0,1/2)=1$. []{data-label="tab:table1"}

Valence light-front wave function {#sect:LFWF} ================================== As is known, the cross-ladder kernel is attractive [@CarEPJA06], and therefore the coupling constant decreases to keep the same binding energy, as illustrated by the values of $\alpha^{(L)}$ and $\alpha^{(L+CL)}$ presented in Table \[tab:table1\] for a given $B$ and $\mu$. The momentum dependence of the valence wave function is discussed in what follows. We choose a strongly bound situation with $B=1.5\,m$ to show the effect of changing the interaction kernel from ladder to ladder plus cross-ladder at a fixed binding energy. ![LF wave function vs.
$\gamma$ for $\xi=1/2$ with ladder (L) (dashed lines) and ladder plus cross-ladder (L+CL) (solid lines) interaction kernels for $B=1.5\, m$ and $\mu=0.15\,m$ (left frame) and $\mu=0.5\,m$ (right frame). []{data-label="fig:wflf1"}](1a.pdf "fig:") ![LF wave function vs. $\gamma$ for $\xi=1/2$ with ladder (L) (dashed lines) and ladder plus cross-ladder (L+CL) (solid lines) interaction kernels for $B=1.5\, m$ and $\mu=0.15\,m$ (left frame) and $\mu=0.5\,m$ (right frame). []{data-label="fig:wflf1"}](1b.pdf "fig:") The result for the wave function is shown in Fig. \[fig:wflf1\]. At relatively low momentum, $\sqrt{\gamma}\lesssim 3\, m$, the wave function is practically the same for the ladder and ladder plus cross-ladder kernels. That happens because this momentum region is determined by the binding energy, which gives the behavior of the wave function at large distances. In the present case one should expect that the momentum region determined mainly by the binding energy is of the order of $\sqrt{\gamma}\sim B= 1.5 \, m$, which seems to be the case. At large momentum, we observe that the ladder and ladder plus cross-ladder results for the wave function are essentially proportional. According to the general discussion on the asymptotic behavior of the LF wave function [@LepPRD80], the large momentum tail should be dominated by the ladder exchange, which is common to both calculations. In [@GutPLB16], it was found, for ground and excited states, that for $\gamma\to \infty$: $$\label{cxi} \psi_{LF}(\gamma,\xi) \to \alpha \ \gamma^{-2}\,C(\xi) \, ,$$ where $\alpha$ is factorized and $\psi_{LF}(0,1/2)=1$ is chosen to get $C(\xi)$ in Fig. \[fig:cxi\]. The fact that the kernel can be enlarged to include the cross-ladder allows us to check how $C(\xi)$ changes for a given binding energy, considering that the coupling constant has to be modified for the two kernels to keep $B$ fixed and that the ladder exchange dominates the large momentum region.
Table \[tab:table1\] illustrates how the asymptotic wave function scales with $\alpha$ for the ladder and ladder plus cross-ladder kernels with $\mu=0.15 \, m$ and $0.5 \, m$. We considered the coupling constants for different binding energies, and the ratio of the wave functions ($\psi^{(L)}_{LF}/\psi^{(L+CL)}_{LF}$) at $\gamma=500\,m^2$ and $\xi=1/2$ ($z=0$). The Table also illustrates that the ratio between the values of $\alpha$ is about the same as the ratio of the wave functions, namely $\alpha^{(L)}/\alpha^{(L+CL)}\approx\psi^{(L)}_{LF}/\psi^{(L+CL)}_{LF}$. This makes clear the motivation for factorizing $\alpha$ in Eq. (\[cxi\]). ![ Asymptotic function $C(\xi)$ defined from the LF wave function for $\gamma\to\infty$ (\[cxi\]) computed for the ladder kernel, $C^{(L)}(\xi)$ (dashed line), and ladder plus cross-ladder kernel, $C^{(L+CL)}(\xi)$ (solid line), with exchanged boson mass of $\mu=0.15\,m$. Calculations are performed for $B=1.5\, m$ (left frame) and $B=0.118\, m$ (right frame). A comparison with the analytical forms of $C(\xi)$ valid for the Wick-Cutkosky model for $B=2m$ (full box) and $B\to 0$ (dash-dotted line) both arbitrarily normalized.[]{data-label="fig:cxi"}](2a.pdf "fig:")![ Asymptotic function $C(\xi)$ defined from the LF wave function for $\gamma\to\infty$ (\[cxi\]) computed for the ladder kernel, $C^{(L)}(\xi)$ (dashed line), and ladder plus cross-ladder kernel, $C^{(L+CL)}(\xi)$ (solid line), with exchanged boson mass of $\mu=0.15\,m$. Calculations are performed for $B=1.5\, m$ (left frame) and $B=0.118\, m$ (right frame). A comparison with the analytical forms of $C(\xi)$ valid for the Wick-Cutkosky model for $B=2m$ (full box) and $B\to 0$ (dash-dotted line) both arbitrarily normalized.[]{data-label="fig:cxi"}](2b.pdf "fig:") The asymptotic form in Eq.
(\[cxi\]) is also found in the Wick-Cutkosky (WC) model, where the valence ground state wave function is [@HwaNPB04]: $$\label{lfwvwc} \psi^{(WC)}_{LF}(\gamma,\xi)=\frac{C^{(WC)}(\xi)}{2\sqrt{\pi}(\gamma+m^2-\xi(1-\xi)M^2)^2}$$ with $C^{(WC)}(\xi)=\xi(1-\xi)g^{(WC)}(1-2\xi)$. In the two extreme limits of the binding energy, the strongly and the weakly bound states, this function is found analytically: it is given by $$\label{wcs} C^{(WC)}(\xi)=[\xi(1-\xi)]^2\, ,$$ for $B=2\:m$, and by $$\label{wcw} C^{(WC)}(\xi)=\xi(1-\xi)\left(\frac12-\big|\frac12-\xi\big|\right)$$ for $B\to 0$. The normalization of $C^{(WC)}(\xi)$ presented above is chosen arbitrarily. The asymptotic functions $C(\xi)$, defined from the LF wave function for $\gamma\to\infty$ (\[cxi\]), obtained with the ladder and ladder plus cross-ladder kernels are shown in Fig. \[fig:cxi\], for the weak and strong binding energies $B=0.118\,m$ and $B=1.5\,m$, respectively. We choose the case of an exchanged boson mass of $\mu=0.15\,m$. As mentioned, for this study the normalization of the wave function is chosen as $\psi_{LF}(0,1/2)=1$. First, we observe a rather weak sensitivity of the form of $C(\xi)$ to $B$ for the values we use, while the Wick-Cutkosky model in the extreme limits of the binding energy has quite different $C(\xi)$, as given by Eqs. (\[wcs\]) and (\[wcw\]). The noticeable difference between the weak and strong binding cases is the magnitude of $C(\xi)$, which from $B=1.5\,m$ to $0.118\,m$ decreases by a factor of 10, considering that the normalization $\psi_{LF}(0,1/2)=1$ is fixed in both cases. This is the expected behavior, as the wave function in the strong binding case spreads out to larger momentum than in the weak binding situation. Our results are closer to the analytical form of $C(\xi)$ obtained in the Wick-Cutkosky model for $B=2m$. This comparison suggests that $C(\xi)$ is well approximated by $[\xi(1-\xi)]^\lambda$ with $\lambda$ close to 2 for small $\mu$.
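The two analytic Wick-Cutkosky limits, Eqs. (\[wcs\]) and (\[wcw\]), are straightforward to tabulate for a shape comparison (the normalizations are arbitrary, as noted above):

```python
import numpy as np

def c_wc_strong(xi):                 # Eq. (wcs), B = 2m
    return (xi * (1 - xi)) ** 2

def c_wc_weak(xi):                   # Eq. (wcw), B -> 0
    return xi * (1 - xi) * (0.5 - np.abs(0.5 - xi))

xi = np.linspace(0.0, 1.0, 201)
for c in (c_wc_strong, c_wc_weak):
    v = c(xi)
    assert np.allclose(v, c(1 - xi))                   # symmetric under xi <-> 1-xi
    assert v.argmax() == 100 and v[0] == v[-1] == 0    # peaked at xi=1/2, vanishing at end points
```

Both forms share the symmetry of the physical solutions under $\xi\leftrightarrow 1-\xi$; they differ in the end-point power, which is what the $\lambda$ fit above probes.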
In the extreme case of $\mu=\infty$ the asymptotic form of the LF wave function changes to $\gamma^{-1}$, while $C(\xi)=[\xi(1-\xi)]^2$. We recall that the end-point behavior of the LF wave function is directly related by Eq. (\[LFWF\]) to the behavior of the Nakanishi weight function $g(\gamma,z)$ at $z\to\pm 1$. The quadratic form at the end point of $C(\xi)$ comes from a linear damping of $g(\gamma,z)\sim (1-|z|)$ for $|z|\to 1$. This property will be used later on to study analytically the asymptotic form of the EM form factor and to show the consistency of the formulation with the counting rules. Our study of the structure of the bound state continues now with the analysis of the elastic EM form factor. We check the effect of the addition of the cross-ladder to the kernel by comparing results at a fixed binding energy. We explore the low and high momentum transfer regions, with the aim of verifying the asymptotic behavior and the dominance of the ladder exchange for large momentum. Furthermore, the ladder plus cross-ladder kernel offers the opportunity to study the effect of the two-body current on the form factor, and we show analytically and quantitatively its faster decay with momentum transfer, as expected for a higher-twist contribution to the form factor [@LepPRD80]. Space-like Electromagnetic Form factor {#sect:SLEMFF} ====================================== For a spinless system, in the general case the e.m. current (not necessarily elastic or conserved) is given by $$\label{current} J_\mu = (p_\mu+p'_\mu) F_{1}(Q^2) + (p_\mu-p'_\mu) F_2(Q^2)\, ,$$ where $Q^2=-(p-p')^2>0$. In the elastic case current conservation implies that $F_2=0$; only $F_1$ survives and represents the virtual photon absorption amplitude by the composite system.
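The decomposition in Eq. (\[current\]) and the statement $F_2=0$ can be illustrated with a short numerical sketch. The kinematics below (Breit frame, with assumed values of $M$ and $Q$) are chosen purely for convenience; the key point is that $(p+p')\cdot(p-p')=p^2-p'^2=0$ in the elastic case, so the two structures in Eq. (\[current\]) are orthogonal and the form factors can be projected out independently:

```python
# Illustration (values of M and Q assumed): decompose a spinless EM current
#   J = (p + p') F1 + (p - p') F2
# and recover F1, F2 by contractions, using (p+p').(p-p') = p^2 - p'^2 = 0
# for elastic kinematics.

def dot(u, v):                      # Minkowski product, metric (+,-,-,-)
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

M, Q = 2.0, 3.0                     # assumed bound-state mass and momentum transfer
p0 = (M**2 + Q**2 / 4.0) ** 0.5
p  = (p0, 0.0, 0.0, -Q / 2.0)       # Breit frame: spatial momenta are opposite
pp = (p0, 0.0, 0.0, +Q / 2.0)

s = tuple(a + b for a, b in zip(p, pp))   # p + p'
d = tuple(a - b for a, b in zip(p, pp))   # p - p'
assert abs(dot(s, d)) < 1e-12             # orthogonal for elastic kinematics

F1, F2 = 0.7, 0.0                   # elastic case: current conservation forces F2 = 0
J = tuple(F1 * a + F2 * b for a, b in zip(s, d))

F1_rec = dot(J, s) / dot(s, s)      # dot(s, s) = 4 M^2 + Q^2
F2_rec = dot(J, d) / dot(d, d)      # dot(d, d) = -Q^2
assert abs(F1_rec - F1) < 1e-12 and abs(F2_rec) < 1e-12
```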
For our kernel, considered up to the cross-ladder, gauge invariance of the EM coupling implies two irreducible contributions to the photon absorption amplitude, which leads to two parts of the form factor $$F_{1}(Q^2)=F_{I}(Q^2)+F_{X}(Q^2),$$ where $F_{I}$ denotes the impulse contribution, obtained from the triangle diagram, Fig. \[triangle\] (left), and $F_{X}$ is the two-body current contribution to the form factor, which is computed from the virtual photon absorption amplitude diagrammatically depicted in Fig. \[triangle\] (right). ![Diagrammatic representation of the photon absorption amplitude: impulse (left) and two-body current contribution (right).[]{data-label="triangle"}](3a.pdf "fig:") ![Diagrammatic representation of the photon absorption amplitude: impulse (left) and two-body current contribution (right).[]{data-label="triangle"}](3b.pdf "fig:") Impulse contribution to the form factor --------------------------------------- The impulse contribution to the form factor is represented diagrammatically in Fig. \[triangle\]. For a system composed of two spinless particles, the EM vertex can be expressed in terms of the BS amplitude by the formula $$\label{ff} (p+p')^{\mu} F_{I}(Q^2) = i\int \,\frac{d^4k}{(2\pi)^4} (p+p'-2k)^{\mu}\,(k^2-m^2)\, \Phi \left(\frac{p}{2} -k,p\right)\Phi \left(\frac{p'}{2} -k,p'\right).$$ We contract both sides of (\[ff\]) with $(p+p')_{\mu}$ and substitute on its right-hand side the BS amplitude in terms of the NIR given in Eq. (\[bsint\]): $$\begin{gathered} \label{J} F_{I}(Q^2)=\frac{i }{(2\pi)^4} \int_{0}^\infty d\gamma \int_{-1}^1 dz\,\int_{0}^\infty d\gamma'\int_{-1}^1dz'\,\int d^4k\left[1-\frac{2k\cdot (p+p')}{(p+p')^2}\right]\\ \times \frac{(m^2-k^2) \,g(\gamma,z)g(\gamma',z')} {D^3(\gamma,z;\frac{p}{2}-k,p)\,D^3(\gamma',z';\frac{p'}{2}-k,p')}\, ,\end{gathered}$$ where $D$ is defined in (\[dennaka\]). The loop integral in $d^4k$ is calculated analytically by means of the Feynman parametrization.
This procedure is described in detail in Ref. [@CarEPJA09]. In this way, one finds the exact formula in terms of the weight function $g(\gamma,z)$: $$\begin{gathered} \label{ffM} F_{I}(Q^2)=\\=\frac{1}{ 2^7\pi^3}\int_0^\infty d\gamma\int_{-1}^1 dz\, g(\gamma,z) \int_0^\infty d\gamma' \int_{-1}^1 dz'\, g(\gamma',z') \int_0^1 dy\,y^2(1-y)^2 \frac{f_{num}}{f_{den}^4},\end{gathered}$$ where $$\begin{gathered} \label{C} f_{num}=(6 \eta-5)m^2 + [\gamma' (1 - y) + \gamma y] (3 \eta -2) + 2M^2 \eta(1-\eta) +\\ + \frac{1}{4} Q^2 (1 - y) y (1+z) (1+z') \\ f_{den}= m^2 + \gamma' (1 - y) + \gamma y - M^2 (1 - \eta ) \eta + \frac{1}{4}Q^2 (1 - y) y (1+z) (1+ z'),\end{gathered}$$ with $ 2\,\eta=(1 + z)y + (1+z')(1-y). $ Two-body current contribution to the form factor ------------------------------------------------ Next, we sketch the computation of the form factor for the two-body current represented by the diagram shown in the right of Fig. \[triangle\], where the photon vertex is given by $-i(p_4+p_3)^{\mu}$. The form of the two-body current in terms of the BS amplitudes of the final and initial state is written as: $$\begin{gathered} \label{formfac-2} F_{X}(Q^2)= - i \frac{g^4}{ (2 \pi)^{12} }\int d^4p_{2} d^4p_{8} d^4p_{9} \, \left[1- 2\frac{ (p+p')\cdot(p_9 + p_2 -p_8)}{(p+p')^2} \right] \\ \times \left[\prod_{i=3,\,i\neq 5}^8\frac{1}{p_i^2-m_i^2+i\epsilon}\right] \,\Phi \left(\frac{p}{2}-p_2,p\right)\, \Phi \left(\frac{p'}{2}-p_9,p'\right)\, \, ,\end{gathered}$$ where $p_3=p-p_9-p_2+p_8$, $p_4=p'-p_9-p_2+p_8$, $p_6=p_2-p_8$, $p_7=p_9-p_8$, $m_3=m_4=m$ and $m_6=m_7=\mu$. After substituting the BS amplitude by the NIR, Eq. 
(\[bsint\]), in the above formula, and using six Feynman parametric integrations, only one denominator remains, and by standard integrations over the three loops one obtains $$\begin{gathered} \label{formfac-5} F_{X}(Q^2) = -\frac{3 \alpha^2 m^4}{(2 \pi)^5} \int_0^\infty d\gamma \int_{-1}^1 dz \int_0^\infty d\gamma'\int_{-1}^1 dz' g(z',\gamma') g(z,\gamma)\\ \times \prod_{i=1}^6\int_{0}^{1} dy_i \Theta\left(1-\sum_{j=i+1;i<4}^4y_j\right) (1-y_5)^2 y_5^2 (1-y_6)^2 y_6^3 \frac{f^X_{num}}{\left[f^X_{den}\right]^5}, \end{gathered}$$ where the functions $f^X_{num}$ and $f^X_{den}$ depend on $m$, $y_i$, $\gamma$, $z$, $\gamma'$, $z'$, $p'$ and $p$. They do not contain any singularity, but are too lengthy to be shown explicitly here. The above formula is used for the calculation of the form factor. [*Current conservation.*]{} The expression for the elastic EM vertex is symmetric relative to the permutation $p \leftrightarrow p'$ both for the impulse as well as for the two-body current contributions. Hence, the second (antisymmetric) term in (\[current\]) cannot appear in the elastic EM vertex, and therefore $F_2(Q^2) \equiv 0$. That follows from the contraction of the EM vertices associated with the impulse and two-body current terms, diagrammatically shown in Fig. \[triangle\], with $(p-p')^{\mu}$, which results in zero for any BS amplitude in the elastic case. In this case current conservation is automatically fulfilled for any particular contribution to the current. However, $J_\mu $ is an operator, and current conservation $J\cdot q=0$ means that all matrix elements of this operator must vanish. For the elastic form factor we considered only one (diagonal) matrix element. The non-diagonal (transition) matrix elements, bound $\to$ excited state, must also vanish. The above symmetry will not hold in this case and the zero value of $J\cdot q$ should appear as a subtle cancellation of different contributions both in the kernel and in the EM vertex.
Finding this cancellation numerically would indeed be a powerful test, as in Ref. [@CKtransit], where such a cancellation was demonstrated numerically for the transition form factor associated with the EM breakup process: bound $\to$ scattering state. In the present work we restrict ourselves to the elastic case. The inelastic transitions, and current conservation in that case, will be the subject of a forthcoming paper. ![Form factor as a function of $Q^2$. Calculations performed with the BS amplitude from the ladder plus cross-ladder kernel. The solid curve is the full form factor. The dashed curve is the impulse contribution ($F_I$). The double-dotted dashed curve is the two-body current ($F_X$) contribution to the EM vertex. Results for: $B=0.1\, m$ and $\mu=0.15\,m$ (upper-left frame), $B=0.1\, m$ and $\mu=0.5\,m$ (upper-right frame), $B=1.5\, m$ and $\mu=0.15\,m$ (lower-left frame), $B=1.5\, m$ and $\mu=0.5\,m$ (lower-right frame).[]{data-label="fig:ff1"}](4a.pdf "fig:") ![Form factor as a function of $Q^2$. Calculations performed with the BS amplitude from the ladder plus cross-ladder kernel. The solid curve is the full form factor. The dashed curve is the impulse contribution ($F_I$). The double-dotted dashed curve is the two-body current ($F_X$) contribution to the EM vertex. Results for: $B=0.1\, m$ and $\mu=0.15\,m$ (upper-left frame), $B=0.1\, m$ and $\mu=0.5\,m$ (upper-right frame), $B=1.5\, m$ and $\mu=0.15\,m$ (lower-left frame), $B=1.5\, m$ and $\mu=0.5\,m$ (lower-right frame).[]{data-label="fig:ff1"}](4b.pdf "fig:") ![Form factor as a function of $Q^2$. Calculations performed with the BS amplitude from the ladder plus cross-ladder kernel. The solid curve is the full form factor. The dashed curve is the impulse contribution ($F_I$). The double-dotted dashed curve is the two-body current ($F_X$) contribution to the EM vertex.
Results for: $B=0.1\, m$ and $\mu=0.15\,m$ (upper-left frame), $B=0.1\, m$ and $\mu=0.5\,m$ (upper-right frame), $B=1.5\, m$ and $\mu=0.15\,m$ (lower-left frame), $B=1.5\, m$ and $\mu=0.5\,m$ (lower-right frame).[]{data-label="fig:ff1"}](4c.pdf "fig:") ![Form factor as a function of $Q^2$. Calculations performed with the BS amplitude from the ladder plus cross-ladder kernel. The solid curve is the full form factor. The dashed curve is the impulse contribution ($F_I$). The double-dotted dashed curve is the two-body current ($F_X$) contribution to the EM vertex. Results for: $B=0.1\, m$ and $\mu=0.15\,m$ (upper-left frame), $B=0.1\, m$ and $\mu=0.5\,m$ (upper-right frame), $B=1.5\, m$ and $\mu=0.15\,m$ (lower-left frame), $B=1.5\, m$ and $\mu=0.5\,m$ (lower-right frame).[]{data-label="fig:ff1"}](4d.pdf "fig:") Results for the impulse and two-body current form factors ---------------------------------------------------------- In Fig. \[fig:ff1\], we present the impulse ($F_I$) and two-body current contributions ($F_X$) to the form factor, diagrammatically depicted in Fig. \[triangle\], and computed with Eqs. (\[ffM\]) and (\[formfac-5\]), respectively. The calculations are performed for two representative binding energies $B= 0.1 \, m$ and $1.5 \, m$, namely weak and strong binding cases, respectively. For both cases, the calculations are carried out with the BS amplitude obtained with the ladder plus cross-ladder kernel in Eq. (\[bsnew\]) for exchanged boson mass of $\mu=0.15 \, m$ and $\mu=0.5 \, m$. The solid curve everywhere is the total form factor, normalized to one at $Q^2=0$. The total form factor is the sum of $F_I$ (dashed curve) and the $F_X$ (double dot-dashed curve) contributions to the EM vertex. We see that the relative contribution of $F_X$ increases when $\mu$ decreases for a given $B$, as the overlap between the two-body current operator and the BS amplitude increases, once the size of the state is fixed essentially by $B$. 
The same reasoning explains why the magnitude of the two-body current contribution to the form factor increases with the binding energy. Indeed, among the cases presented, the maximal contribution of $F_X$ at $Q^2=0$ (about 15% of the total form factor) is achieved for $\mu=0.15 \, m$ and $B=1.5 \, m$. This indicates that the two-body current operator contributes to short distance physics, as one could expect. Another feature one can extract by inspecting Fig. \[fig:ff1\] is the role of the ladder exchange in shaping the large momentum region of $F_I$ for $Q^2>\mu^2\, , m^2$ (later on we will discuss in more detail the asymptotics of the form factors). For a given binding energy, the change of $\mu$ modifies considerably the form factor, which is essentially dominated by the impulse contribution in the large momentum region, as, for instance, in the case of $B=0.1 \, m$ presented in the upper frames of Fig. \[fig:ff1\]. In addition, the dominance of the ladder exchange in forming the tail of the form factor is evident, and for $Q^2/m^2=20$ one sees a scaling with $\alpha$, which changes by about a factor of two when $\mu$ goes from $0.15 \, m$ to $0.5 \, m$ (see Table 1). This feature at large momentum is independent of the binding energy, as exemplified in the lower frames of Fig. \[fig:ff1\] for $B=1.5 \, m$. The same property is found at large transverse momentum for the LF wave function as given by Eq. (\[cxi\]). It is important to point out that the binding energy is fixed, which shapes the low momentum region of the wave function, and also to some extent the form factor. In Fig. \[fig:ff2\] we study the sensitivity of the form factor to the dynamics, namely using the BS amplitude computed with the ladder or with the ladder plus cross-ladder kernel, for a fixed binding energy of $B=1.5 \, m$ and exchanged boson masses of $\mu=0.15 \, m$ and $ 0.5 \, m$. We choose the momentum transfer interval of $0\leq Q^2/m^2 \leq 50$.
We start by comparing the ladder results for the form factor with the one obtained for the ladder plus cross-ladder kernel, considering the full current in both cases. At low momentum, below $m$, we observe very similar slopes, reflecting close values of the charge radius and of the bound-state size, which is determined by the same binding energy. This finding is independent of the mass of the exchanged boson, as one can verify by inspecting the right and left panels of Fig. \[fig:ff2\]. Comparing both frames, one observes that, while the slopes remain similar when $\mu$ is changed, the form factor at large momentum approximately scales with $\alpha$, as already discussed for Fig. \[fig:ff1\], reflecting the dominance of the ladder exchange in the structure of the state at large momentum. We also compare the impulse contribution for the ladder and ladder plus cross-ladder kernels in Fig. \[fig:ff2\], which are represented by the dot-dashed lines. For that purpose both are normalized to one at zero momentum transfer. We notice two interesting features: ([*i*]{}) the slope is the same for $Q\lesssim m$; ([*ii*]{}) at large momentum, once properly normalized, the impulse contribution dominates. The first point ([*i*]{}) comes from the fact that the binding energy essentially fixes the structure at low momentum; the second point ([*ii*]{}) comes from the fact that the two-body current decreases much faster than the impulse contribution, as the former is a higher-twist contribution to the photon absorption process. Indeed, for large momentum the two-body current decays as $Q^{-2}$ with respect to the impulse contribution, which will be shown in detail in what follows. ![Form factor as a function of $Q^2$. The dot-dashed curve is the form factor calculated with the BS amplitude found for ladder (L) kernel. The dashed curve is the impulse contribution to the form factor computed with the BS amplitude obtained with the ladder plus cross-ladder (L + CL) kernel.
The solid curve is the full form factor obtained from the BS amplitude calculated with L + CL kernel. The binding energy is $B=1.5\,m$, with the mass of the exchanged boson $\mu=0.15\,m$ (left-frame) and $\mu=0.5\,m$ (right-frame). All curves are normalized to 1 at $Q^2=0$.[]{data-label="fig:ff2"}](5a.pdf "fig:") ![Form factor as a function of $Q^2$. The dot-dashed curve is the form factor calculated with the BS amplitude found for ladder (L) kernel. The dashed curve is the impulse contribution to the form factor computed with the BS amplitude obtained with the ladder plus cross-ladder (L + CL) kernel. The solid curve is the full form factor obtained from the BS amplitude calculated with L + CL kernel. The binding energy is $B=1.5\,m$, with the mass of the exchanged boson $\mu=0.15\,m$ (left-frame) and $\mu=0.5\,m$ (right-frame). All curves are normalized to 1 at $Q^2=0$.[]{data-label="fig:ff2"}](5b.pdf "fig:") Asymptotic behavior of the form factor {#sect:ASYFF} ====================================== The leading behavior of the impulse and two-body current contributions to the form factors for $Q^2\to\infty$ can be obtained by using standard counting rules [@LepPRD80]. In order to find the leading power law behavior of the form factors represented in Figs. \[fig:ff1\] and \[fig:ff2\], one has to count the number of propagators, in which the large virtual photon momentum flows between the emission and absorption by the constituents in the bound state. This counting is provided, of course, by our formalism and it results in $$\label{ffas} F_I(Q^2)\sim Q^{-4} \,\,\,\text{ and }\,\,\, F_X(Q^2)\sim Q^{-6},$$ apart from logarithmic corrections (see e.g. [@HwaNPB04]). The two-body current is identified with a higher twist contribution and decreases faster than the impulse term by a $Q^{-2}$ factor. To illustrate in a transparent and analytical way how such asymptotic behavior of the form factors arises, we analyze it using directly Eqs. 
(\[J\]) and (\[formfac-2\]) in the following. $F_I(Q^2)$ at large $Q^2$ -------------------------- We work in the Breit reference frame, where $\vec{p}=-\vec{p'}\equiv\vec{n}p_v$, $p_0=p'_0=\sqrt{M^2+p_v^2}$, $Q^2=-(p'-p)^2= 4p_v^2$ and $\vec{n}$ is the direction of the incident momentum $\vec{p}$. Hence $p_v=\frac{1}{2}Q$, $p_0=p'_0=\sqrt{M^2+\frac{1}{4}Q^2}$ in Eq. (\[J\]). We also denote $|\vec{k}|=k_v$. Substituting these expressions into the functions ${D}(\gamma,z;\frac{p}{2}-k,p)$ appearing in the denominator of Eq. (\[J\]), at large $Q$ we get: $${D}(\gamma,z;\frac{p}{2}-k,p)\approx (k_0-\vec{n}\cdot\vec{k})(1+z)\frac{Q}{2}+\gamma-k_0^2+k_v^2+m^2-i\epsilon -\frac{1}{2}M^2(1+z),$$ and similarly for ${D}(\gamma',z';\frac{p'}{2}-k,p')$. Omitting a factor, we can represent the denominators in (\[J\]) as: $$\label{DDp} {D}(\gamma,z;\frac{p}{2}-k,p)\propto (1+z)Q+\delta,\quad {D}(\gamma',z';\frac{p'}{2}-k,p') \propto (1+z')Q+\delta',$$ where $\delta,\,\delta'$ do not depend on $Q$. Hence $$\begin{gathered} \label{J2} F_{I}(Q^2)\propto \int_{-1}^1\frac{g(z)dz}{[(1+z)Q+\delta]^3}\int_{-1}^1\frac{g(z')dz'}{[(1+z')Q+\delta']^3}=\\ = \frac{1}{Q^6}\int_{-1}^1\frac{g(z)dz}{\left(1+z+\frac{\delta}{Q}\right)^3}\,\int_{-1}^1\frac{g(z')dz'}{\left(1+z'+\frac{\delta'}{Q}\right)^3}\,,\end{gathered}$$ where the variables $\gamma,\gamma'$ and the integration over $k$ are omitted, since they give finite corrections with no influence on the asymptotic behavior of the form factor. If we put $\frac{\delta}{Q}=0$ in Eq. (\[J2\]), we get a divergent integral at $z=-1$. This means that the decrease of the factor $\frac{1}{Q^6}$ can be compensated by the growth of the integrals at finite $Q^2$. Indeed, for $g(z)\equiv 1$, the integral has the form: $$\int_{-1}^1\frac{dz}{\left(1+z+\frac{\delta}{Q}\right)^3}\sim \frac{Q^2}{2\delta^2}.$$ For $g(z)\equiv 1$ it gives the asymptotic form factor as $F_I(Q^2)\propto 1/Q^2$.
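This growth of the integral is easy to verify numerically; the following sketch (illustrative, with $\varepsilon=\delta/Q$) checks the closed form of the $g(z)\equiv 1$ integral against a crude quadrature and confirms the $Q^2/(2\delta^2)$ scaling:

```python
# Numerical check (illustrative) of the compensation for g(z) = 1:
# I(eps) = \int_{-1}^{1} dz / (1 + z + eps)^3, with eps = delta/Q,
# has the exact antiderivative -1/(2 (1 + z + eps)^2), so that
# I(eps) = 1/(2 eps^2) - 1/(2 (2 + eps)^2) ~ Q^2 / (2 delta^2) as eps -> 0.

def I_exact(eps):
    return 0.5 / eps**2 - 0.5 / (2.0 + eps) ** 2

# a crude midpoint quadrature agrees with the closed form at eps = 0.01
N, eps = 200_000, 0.01
h = 2.0 / N
I_num = sum(h / (h * (i + 0.5) + eps) ** 3 for i in range(N))
assert abs(I_num / I_exact(eps) - 1.0) < 1e-4

# the integral indeed grows like Q^2/(2 delta^2) as eps = delta/Q -> 0
for eps in (1e-2, 1e-3, 1e-4):
    assert abs(I_exact(eps) * 2.0 * eps**2 - 1.0) < 1e-3
```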
However, the function $g(z)$ tends linearly to zero as $z\to -1$: $g(z)\sim (1+z)$. This weakens the compensation, therefore: $$\int_{-1}^1\frac{g(z)dz}{\left(1+z+\frac{\delta}{Q}\right)^3}=\int_{-1}^1\frac{(1+z)dz}{\left(1+z+\frac{\delta}{Q}\right)^3} \sim \frac{Q}{2\delta}.$$ This provides the asymptotic behavior: $$\label{fia} F_{I}(Q^2)\propto Q^{-4}.$$ We can summarize the origin of this result as follows: the denominator of each propagator, containing $p$ or $p'$, according to (\[DDp\]), contributes the factor $\sim Q$ (if the limit $Q\to\infty$ does not create a divergence). In (\[J\]) we have two such propagators, each raised to the third power ($\sim \frac{1}{D^3{D'}^3}$). This gives the factor $\sim\frac{\delta^6}{Q^6}$ in (\[J2\]). In the case of divergence (at $Q\to \infty$), like $$\int_{-1}^1\frac{dz}{(1+z)^2}=\left.-\frac{1}{1+z}\right|_{z\to -1},$$ a large but finite value of $Q$ eliminates the divergence, automatically replacing the limit $z=-1$ by the cutoff $z=-1+\frac{\delta}{Q}$. The integral becomes finite but large: $\sim \frac{Q}{\delta}$. In (\[J2\]) the product of two such integrals results in the factor $\sim \frac{Q^2}{\delta^2}$. This softens the falloff $\sim \frac{1}{Q^6}$ to $F(Q^2)\propto \frac{1}{Q^4}$ in (\[J\]). Except for the term $\sim \log\left(\frac{Q^2}{m^2}\right)$ (which is beyond the precision of this consideration), the asymptotic behavior $F_I(Q^2) \propto \frac{1}{Q^4}$ coincides with the form factor fall-off found in Eq. (28) of Ref. [@HwaNPB04] for the Wick-Cutkosky model. The agreement of Eq. (\[fia\]) with the asymptotic behavior found in [@HwaNPB04] confirms the validity of the present consideration. Below we will apply this method to the two-body current contribution. $F_X(Q^2)$ at large $Q^2$ -------------------------- The two-body current contribution to the EM form factor is shown in Fig. \[triangle\]. As independent integration variables we choose the four-momenta $p_2,p_8,p_9$.
The other momenta are expressed as: $p_1=p-p_2$, $p_5=p'-p_9$, $p_6=p_2-p_8$, $p_7=p_9-p_8$, $p_3=p-p_2-p_9+p_8$, $p_4=p'-p_2-p_9+p_8$. The arguments of the BS amplitudes are: $k=\frac{1}{2}(p_1-p_2)=\frac{1}{2}p-p_2$, $k'=\frac{1}{2}(p_5-p_9)=\frac{1}{2}p'-p_9$. Then the two-body current in the form factor is given by Eq. (\[formfac-2\]). It should be noticed that the arguments of the BS amplitudes in this equation are $\frac{p}{2}-p_2$ and $ \frac{p'}{2}-p_9$. At large $Q^2$ we omit the factors $(p'+p)^2$ and $[ (p'+p)^2 - 2(p+p')\cdot( p_9 + p_2 - p_8)] $. We can also omit the propagators carrying the momenta $p_6$, $p_7$ and $p_8$. The first two (cubic) factors in (\[formfac-2\]) coming from the NIR of the two BS amplitudes have the same form as the corresponding factors in (\[J\]). Applying to them the analysis performed for $F_I$, we find that their product results in $\sim \frac{1}{Q^4}$. However, Eq. (\[formfac-2\]) contains two additional propagators with $p$ and $p'$, associated with $p_3$ and $p_4$. They result in an asymptotic behavior similar to (\[DDp\]), but without the factors $(1+z)$, $(1+z')$. Hence, each of them adds one extra factor $\frac{1}{Q}$. Together they give two extra powers of momentum $\sim \frac{1}{Q^2}$. Hence, the power $\frac{1}{Q^4}$ is replaced by $\sim \frac{1}{Q^6}$. We conclude that the two-body current has the asymptotic behavior $F_{X}(Q^2)\propto Q^{-6}$, consistent with the counting rules. We stress that the asymptotic forms depend crucially on the end-point behavior of the weight function, which is immediately translated to the valence wave function, as seen in Eq. (\[LFWF\]). ![EM form factor for the case $\mu=0.15\,m$ and $B=0.1\,m$ obtained with the ladder plus cross-ladder kernel. In the left-frame the two contributions of the form factor are displayed.
In the right-frame the asymptotic behaviors of the corresponding contributions are analyzed.[]{data-label="fig:ff3"}](6a.pdf "fig:") ![EM form factor for the case $\mu=0.15\,m$ and $B=0.1\,m$ obtained with the ladder plus cross-ladder kernel. In the left-frame the two contributions of the form factor are displayed. In the right-frame the asymptotic behaviors of the corresponding contributions are analyzed.[]{data-label="fig:ff3"}](6b.pdf "fig:") Form factors at large $Q$: some numerical results ------------------------------------------------- The asymptotic behavior of the form factors given in (\[ffas\]) is illustrated in Fig. \[fig:ff3\]. The calculations are done with the ladder plus cross-ladder kernel with $\mu=0.15\,m$ and $B=0.1\,m$. The results for $F_I$ and $F_X$ normalized according to Fig. \[fig:ff1\] are shown. We perform an extensive exploration at very large momentum transfers to check as well the leading log corrections to the form factors. We have not derived these corrections for the form factor; we simply use the form suggested by the Wick-Cutkosky model, as derived in [@HwaNPB04]. We perform four studies devoted to singling out the asymptotic behavior of $F_I$ and $F_X$: ([*i*]{}) $Q^4 \, F_I$, ([*ii*]{}) $Q^6 \, F_X$, ([*iii*]{}) $Q^4 /\left[1+(\alpha/2\pi)\log(Q/m)^2\right]\, F_I$, and ([*iv*]{}) $Q^6/\left[1+(\alpha/2\pi)\log(Q/m)^2\right]\, F_X$. We first observe that the asymptotic region is established for $Q/m\sim 30$, which seems reasonable as all involved scales, masses and binding energy, are of order $m$. Second, the products ([*i*]{}) and ([*ii*]{}) are slowly decreasing, while ([*iii*]{}) and ([*iv*]{}), which include the leading log correction (distinguishable over the large momentum-transfer interval presented in Fig. \[fig:ff3\]), come closer to the expected flat behavior at large momentum.
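The flattening role of the leading-log factor in diagnostics ([*iii*]{}) and ([*iv*]{}) can be illustrated on a toy tail (this is a synthetic model written for this text, not the computed form factor; in the actual numerics the residual drift of the uncorrected products is a slow decrease rather than the increase seen here):

```python
import math

# Toy illustration (not the paper's numerical data): take a model tail
#   F(Q) = [1 + (alpha / 2 pi) * log((Q/m)^2)] / Q^4,
# i.e. a Q^-4 power law dressed with a leading-log factor.
# Diagnostic (i), Q^4 F, keeps a logarithmic drift, while diagnostic (iii),
# Q^4 F / [1 + (alpha/2 pi) log(Q/m)^2], is flat by construction.
alpha, m = 1.0, 1.0
c = alpha / (2.0 * math.pi)

def F(Q):
    return (1.0 + c * math.log((Q / m) ** 2)) / Q**4

Qs = [30.0, 100.0, 300.0]
raw = [Q**4 * F(Q) for Q in Qs]
flat = [r / (1.0 + c * math.log((Q / m) ** 2)) for r, Q in zip(raw, Qs)]

assert raw[-1] > raw[0] > 1.0            # the log drift survives in (i)
assert max(flat) - min(flat) < 1e-12     # (iii) removes it
```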
Summary and outlook {#sect:SUMOUT} ==================== The response of the Minkowski space structure of a two-boson bound state, within a Yukawa model with a scalar boson exchange, to the inclusion of the cross-ladder contribution to the ladder kernel of the BS equation was investigated quantitatively. The NIR, combined with the LF projection, was used to solve numerically the BS equation in Minkowski space. We computed both the valence wave function and the elastic electromagnetic form factor, including the two-body current contribution to the electromagnetic vertex. We have discussed in detail the role of the ladder exchange in building the asymptotic behavior of the valence wave function and form factor, for a fixed binding energy, considering both ladder and ladder plus cross-ladder kernels. This allowed us to single out the dominance of the ladder exchange, by comparing results for a fixed binding energy and using the two interaction kernels. The valence wave function at low transverse momentum is independent of the kernel, being determined just by the given binding energy. We also studied quantitatively the factorization of the valence wave function in terms of the transverse and longitudinal momenta at large transverse momentum [@GutPLB16]. In this case, as expressed by Eq. (\[cxi\]), once $\alpha$ is factorized out, with both the binding energy and the normalization fixed, the dependence of the wave function on the longitudinal momentum fraction is quite universal. In the case of $B/m > 0.1$, we found that the functional form approaches the Wick-Cutkosky solution $[\xi(1-\xi)]^2$ obtained for $B=2m$ in Ref. [@HwaNPB04]. Our conjecture is that the form and magnitude of $C(\xi)$ and the wave function at low transverse momentum, for the normalization $\psi_{LF}(0,1/2)=1$ and a given binding energy, are to a great extent independent of the inclusion of irreducible cross-ladder contributions at higher order in the kernel.
We can then turn our attention to the work of Ref. [@NiePRL96], where the problem with the generalized ladder kernel was solved by means of the Feynman-Schwinger representation. Making use of that, one can speculate on the form of the valence wave function when an infinite set of cross-ladder diagrams is included in the kernel. The electromagnetic current in the case of the cross-ladder kernel includes, besides the impulse term, a two-body current obtained by gauging the cross-ladder kernel. We note that, due to the symmetry of the elastic virtual photo-absorption amplitude, the impulse and two-body amplitudes conserve current independently, which is not the case in an inelastic transition. Our numerical results show that, for a given binding energy, the two-body current becomes more relevant as the exchanged boson mass becomes lighter; it also grows as the binding energy becomes larger. This is easy to understand if one considers that in both cases the overlap between the bound state and the two-body current increases, either by increasing the range of the interaction or by decreasing the size of the bound state. At zero momentum transfer, where the two-body current is most relevant, and for a strongly bound system, the contribution is about 15 $\%$ of the normalization. The form factor in the large momentum region was studied in detail, and the power-law decrease, as expected from the counting rules applied to our model, was derived using the adopted Nakanishi integral representation of the Bethe-Salpeter amplitude. The leading contribution to the dependence on the large momentum transfer comes from the ladder exchange, which was illustrated by comparing the impulse term from ladder and ladder plus cross-ladder kernels, where the proportionality of the tail to $\alpha$ was singled out for a fixed binding energy. The crucial role of the end-point behavior of the Nakanishi weight function in the power-law falloff was pointed out.
Although the present study is focused on the two-boson problem, the analysis can be extended to fermionic systems, for which the Bethe-Salpeter amplitude has been obtained by means of the Nakanishi representation [@CarEPJA10; @dPaPRD16]. It has of course wide applications to the study of meson structure. [*Acknowledgments.*]{} We thank the support from Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) of Brazil. J.H.A.N. acknowledges the support of the grant \#2014/19094-8 and V.A.K. of the grant \#2015/22701-6 from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP). [18]{} X. Ji, Phys. Rev. Lett. [**110**]{} (2013) 262002. N. Nakanishi, Phys. Rev. [**130**]{} (1963) 1230; N. Nakanishi, Prog. Theor. Phys. Suppl. [**43**]{} (1969) 1; Graph Theory and Feynman Integrals (Gordon and Breach, New York, 1971). K. Kusaka and A. G. Williams, Phys. Rev. D [**51**]{} (1995) 7026; K. Kusaka, K. Simpson and A. G. Williams, Phys. Rev. D [**56**]{} (1997) 5071. V. A. Karmanov and J. Carbonell, Eur. Phys. J. A [**27**]{} (2006) 1. J. Carbonell, V. A. Karmanov, Eur. Phys. J. A [**27**]{} (2006) 11. T. Frederico, G. Salmè and M. Viviani, Phys. Rev. D [**85**]{} (2012) 036009. T. Frederico, G. Salmè and M. Viviani, Phys. Rev. D [**89**]{} (2014) 016010. T. Frederico, G. Salmè and M. Viviani, Eur. Phys. J. C [**75**]{} (2015) 398. T. Frederico, J. Carbonell, V. Gigante and V.A. Karmanov, Few-Body Syst. [**56**]{} (2016) 549. J. Carbonell, V.A. Karmanov, Eur. Phys. J. A [**46**]{} (2010) 387. W. de Paula, T. Frederico, G. Salmè, M. Viviani, Phys. Rev. D [**94**]{} (2016) 071901. C. Gutierrez, V. Gigante, T. Frederico, G. Salmè, M. Viviani, L. Tomio, Phys. Lett. B [**759**]{} (2016) 131. M. Burkardt, Int. J. Mod. Phys. A [**18**]{} (2003) 173. T. Nieuwenhuis and J. A. Tjon, Phys. Rev. Lett. [**77**]{} (1996) 814. V. A. Matveev, R. M. Muradyan and A. N. Tavkhelidze, Lett. Nuovo Cim.
[**7**]{} (1973) 719. S. J. Brodsky and G. R. Farrar, Phys. Rev. Lett. [**31**]{} (1973) 1153. G. P. Lepage and S. J. Brodsky, Phys. Rev. D [**22**]{} (1980) 2157. S. J. Brodsky, C.-R. Ji, Phys. Rev. Lett. [**55**]{} (1985) 2257. S. J. Brodsky and J. R. Hiller, Phys. Rev. D [**46**]{} (1992) 2141. A. P. Kobushkin and A. I. Syamtomov, Phys. Rev. D [**49**]{} (1994) 1637. A. P. Kobushkin and A. I. Syamtomov, Phys. Atom. Nucl. [**58**]{} (1995) 1477. J.P.B.C. de Melo, Chueng-Ryong Ji, T. Frederico, Phys. Lett. B [**763**]{} (2016) 87. C. E. Carlson and C.-R. Ji, Phys. Rev. D [**67**]{} (2003) 116002. X. Ji, J.-P. Ma, and F. Yuan, Phys. Rev. Lett. [**90**]{} (2003) 241601. D. S. Hwang and V.A. Karmanov, Nucl. Phys. B [**696**]{} (2004) 413. J. Carbonell, V. A. Karmanov and M. Mangin-Brinet, Eur. Phys. J. A [**39**]{} (2009) 53. J. Carbonell and V.A. Karmanov, Phys. Rev. D [**91**]{} (2015) 076010. [^1]: Corresponding author: [email protected]
--- abstract: 'We introduce a numerical framework that enables unprecedented direct numerical studies of the electropermeabilization effects of a cell aggregate at the meso-scale. Our simulations qualitatively replicate the shadowing effect observed in experiments and reproduce the time evolution of the impedance of the cell sample in agreement with the trends observed in experiments. This approach sets the scene for performing homogenization studies for understanding the effect of tissue environment on the efficiency of electropermeabilization. We employ a forest of Octree grids along with a Voronoi mesh in a parallel environment that exhibits excellent scalability. We exploit the electric interactions between the cells through a nonlinear phenomenological model that is generalized to account for the permeability of the cell membranes. We use the Voronoi Interface Method (VIM) to accurately capture the sharp jump in the electric potential on the cell boundaries. The case study simulation covers a volume of $(1\ mm)^3$ with more than $27,000$ well-resolved cells with a heterogeneous mix of morphologies that are randomly distributed throughout a spheroid region.' address: - 'Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106-5070' - 'Department of Computer Science, University of California, Santa Barbara, CA 93106-5110' - 'Team MONC, INRIA Bordeaux-Sud-Ouest, Institut de Mathématiques de Bordeaux, CNRS UMR 5251 & Université de Bordeaux, 351 cours de la Libération, 33405 Talence Cedex, France. 
' author: - Pouria Mistani - Arthur Guittet - Clair Poignard - Frederic Gibou bibliography: - 'references\_electroporation.bib' title: 'A parallel Voronoi-based approach for mesoscale simulations of cell aggregate electropermeabilization' --- Level-Set Method, Voronoi Mesh, Finite Volume Method, Quad/Oc-tree Grids, Mathematical Biology, Electropermeabilization Introduction {#sec::introduction} ============ Electropermeabilization (also called electroporation) is a significant increase in the electrical conductivity and permeability of the cells’ membrane that occurs when pulses of large amplitude (a few hundred volts per centimeter) are applied. The physical basis of this phenomenon lies in the fact that, since membranes are mainly composed of phospholipids and proteins, they behave like a capacitor in parallel with a resistor. The applied electric field is then dramatically enhanced in the vicinity of the membrane, leading to a jump of the electric potential. This locally varying transmembrane potential difference (TMP) can overcome the cell membrane barrier in regions where it surpasses the electroporation threshold. This phenomenon has attracted increasing attention due to its capacity to facilitate targeted drug delivery of non-permeant cytotoxic molecules such as bleomycin or cisplatin [@belehradek1994electropermeabilization]. DNA vaccination and gene therapy are other promising applications of electropermeabilization, which enables non-viral gene transfection [@mcmahon2001]. However, despite extensive scrutiny of this phenomenon, no substantial evidence of the elementary mechanism of electropermeabilization has been obtained. The most widely accepted theory posits the creation of pores in the membrane as a consequence of a large transmembrane voltage; however, these pores have not yet been observed. 
One important reason behind this inability is that, in the absence of cell imaging techniques at the nanometer scale, almost all experiments that have studied the electroporation effect have used tissue-scale samples to infer the underlying molecular-level processes. Such inferences have led to the advent of different theoretical models, with membrane pore density approaches being among the most popular. Developments in this avenue have been carried out in the work of DeBruin and Krassowska [@DEBRUIN] and have been augmented in [@KRASSOWSKA2007404] and [@LI201110] to incorporate the spatio-temporal evolution of the speculated pore radii. Other attempts have been made to model the tissue-scale behavior of electropermeabilization [@Langus2016]. Recently, Leguèbe [*et al. *]{}[@LEGUEBE201483] have proposed a phenomenological approach to model this effect at the single-cell scale in terms of a nonlinear partial differential equation. Their description determines the local behavior of each cell membrane under the influence of its surrounding electric potential in a continuous manner. Remarkably, this representation lends itself to a multi-scale characterization of electropermeabilization. However, we note that in practice these models embody calibrations of free parameters that are tuned by experimenting on populations of cells and extending these measurements to the single-cell scale, overlooking the multi-scale nature of electropermeabilization in the experiments. Such approximations are inevitable in the absence of numerical tools to adjust these models in accordance with experiments. Recent attempts have been made in the work of Voyer [*et al. *]{}[@VOYER201898] to theoretically extend this model to the tissue scale. We emphasize that the predictability of any such model must be assessed at the cell-aggregate scale in order to corroborate these results. 
However, such comparisons with available experimental results have so far been prohibitively difficult in the case of electropermeabilization, partially due to the enormous computational costs involved as well as the complexity of the molecular events that regulate membrane electropermeabilization. To facilitate the accurate modeling of these molecular processes, there has been an emerging incentive to overcome the hindering computational difficulties. In the wake of the aforementioned arguments, the advent of “direct” tissue-scale simulations seems necessary. Such simulations not only promote a better understanding of the molecular processes involved, but will also aid in developing semi-analytic models of the overall permeabilization of the tissue under different circumstances. Such endeavors require a complete characterization of the relevant physical parameters, from cell-scale physics to tissue-scale configurations. Quite recently, significant progress has been made in this direction by Guittet [*et al. *]{}[@guittet2017voronoi]. They have proposed a novel Voronoi Interface Method (VIM) to capture the irregular cell interface and accurately impose the sharp TMP jump. The VIM utilizes a Voronoi mesh to capture the irregular interface before applying the dimension-by-dimension Ghost Fluid Method [@FEDKIW1999457; @Kang2000; @liu2003convergence]. This is aimed at directing the fluxes normal to the interface, where there is a discontinuity. This refitting of the mesh around the interface guarantees the convergence of the solution’s gradients. Also, only the right-hand side is affected by the TMP jump, which simplifies the computational treatment. We also note that an alternative framework would be to use adaptive Chimera grids as proposed by English [*et al. *]{}[@english2013adaptive]. In their proposed method, English [*et al. *]{}used multiple Cartesian grids in different regions of the domain that are coupled on their boundaries by generating a Voronoi mesh. 
In the case of electroporation, one could also use finer Cartesian grids near the cell membrane that are coupled on the cell boundary with a Voronoi extension. Guittet [*et al. *]{}[@guittet2017voronoi] have derived a finite volume discretization for this phenomenon and implemented it in a serial framework. Their numerical results are in agreement with experimental expectations. However, the computational costs of solving the involved discretization prohibited the consideration of tissue-scale simulations. Here, we build on the method proposed by Guittet [*et al. *]{}[@guittet2017voronoi] and generalize their approach to a parallel environment. This parallelization enables simulations of the single-cell model of Leguèbe, Poignard [*et al. *]{}[@LEGUEBE201483] at the tissue scale, hence providing a framework to validate or improve the understanding of cell electroporation. The structure of this paper is as follows. We introduce the mathematical model for our simulations in section \[sec::cell\_membrane\_model\] and the computational strategy that we develop in section \[sec::parallelAdaptiveStrategy\]. Then we present the performance of our implementation as well as some preliminary demonstrations of the numerical results in section \[sec::NumericalResults\]. In section \[sec::emergent\] we illustrate the emergence of macro-level properties in the cell aggregate. We conclude with a summary of our main results in section \[sec::Conclusion\]. Cell membrane model {#sec::cell_membrane_model} =================== Geometric representation {#subsec::Geometric_rep} ------------------------ The cell cytoplasm $\Omega^c$ and the extracellular matrix $\Omega^e$ are separated by a thin and resistive membrane denoted by $\Gamma$. The outward normal to $\Omega^c$ is denoted by $\mathbf{n}$. Figure \[fig::geometry\] illustrates the geometry in the case where a single cell is considered. The entire domain is denoted by $\Omega = \Omega^e \cup \Gamma \cup \Omega^c$. 
We denote the conductivities of the materials by $\sigma^c$ and $\sigma^e$ for the cell and the extracellular matrix respectively. Electrical model {#subsec::Elect_model} ---------------- For simulating the electropermeabilization process, we solve the boundary value problem defined in equations \[eq::Laplace\]–\[eq::IC\]. The electric potential field $u$ in the computational domain is governed by the Laplace equation: $$\begin{aligned} &\Delta u = 0, \ {\boldsymbol{x}} \in (\Omega^c \cup \Omega^e), \label{eq::Laplace} \intertext{with the appropriate boundary conditions:} &\left[\sigma \partial_{{\boldsymbol{n}}} u\right]_\Gamma = 0,\ {\boldsymbol{x}} \in \Gamma, \label{eq::bc1} \\ &C_m \partial_t \left[u\right]_\Gamma + S(t,\left[u\right])\left[u\right] = \sigma \partial_{{\boldsymbol{n}}} u\vert_\Gamma, \ {\boldsymbol{x}} \in \Gamma,\label{eq::bc2} \\ &u(t,{\boldsymbol{x}}) = g(t,{\boldsymbol{x}}),\ {\boldsymbol{x}} \in \partial\Omega, \label{eq::bc3} \intertext{and the homogeneous initial condition:} &u(0,{\boldsymbol{x}}) = 0, \ {\boldsymbol{x}} \in \Omega, \label{eq::IC}\end{aligned}$$ where we used the $\left[ \boldsymbol{\cdot}\right]$ notation for the jump operator across $\Gamma$. Equation \[eq::bc1\] imposes the continuity of the electric flux across the membrane, equation \[eq::bc2\] captures the capacitor and resistor effect of the membrane, and equation \[eq::bc3\] is the external voltage applied on the boundary of the domain. In these equations, $C_m$ and $S$ are the capacitance and conductance of the membrane material respectively. The source term corresponding to the applied voltage is denoted by $g(t,{\boldsymbol{x}})$. The effect of the electroporation current is modeled by the $S(t,\left[u\right])\left[u\right]$ term in equation \[eq::bc2\]. We adopt a nonlinear description of the conducting membrane [@LEGUEBE201483] in the next subsection. 
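For a single membrane patch with a prescribed normal flux, condition \[eq::bc2\] reduces to a scalar (possibly nonlinear) relaxation equation for the jump. A minimal Python sketch, given here only for illustration (the constant flux and the conductance law passed in are assumptions, not the solver's implementation), integrates it with explicit Euler:

```python
def relax_jump(flux, S_of_u, Cm=9.5e-3, dt=1e-7, steps=500000, u0=0.0):
    """Explicit-Euler integration of the membrane condition (eq. bc2)
    for a single patch: C_m d[u]/dt + S([u]) [u] = sigma du/dn.
    `flux` is an assumed constant normal flux and `S_of_u` a
    user-supplied conductance law as a function of the jump."""
    u = u0
    for _ in range(steps):
        u += dt / Cm * (flux - S_of_u(u) * u)
    return u
```

With a constant conductance $S_0$, the jump relaxes to flux$/S_0$ with time constant $C_m/S_0$, which is precisely the capacitor-and-resistor behavior described above.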
Membrane electropermeabilization model {#subsec::Electroperm_model} -------------------------------------- The long-term permeabilization of the membrane is modeled through the surface membrane conductivity. Leguèbe, Poignard [*et al. *]{}[@LEGUEBE201483] modeled the surface conductivity of the membrane as follows: $$S_m(t,s) = S_0 +S_{ep}(t,s) = S_0 +X_1(t,s)\times S_1+X_2(t,s)\times S_2,\ \ \ \forall t>0, s\in\Gamma. \label{eq::conductance}$$ In this equation, $S_0$, $S_1$ and $S_2$ are the surface conductances of the membrane in the resting, porated and permeabilized states, respectively. The levels of poration and permeabilization of the membrane are captured by the functions $X_1$ and $X_2$. These are computed as functions of the transmembrane potential difference and are valued in the range $\left[0,1\right]$ by definition. The ordinary differential equations determining $X_1$ and $X_2$ read: $$\begin{aligned} \frac{\partial X_1(t,s)}{\partial t} = \frac{\beta_0(s) - X_1}{\tau_{ep}}, \ \ \ X_1(0,s) = 0, \label{eq::porosity} \\ \frac{\partial X_2(t,X_1)}{\partial t} = \max\bigg(\frac{\beta_1(X_1) - X_2}{\tau_{perm}}, \frac{\beta_1(X_1) - X_2}{\tau_{res}} \bigg), \ \ \ X_2(0,s) = 0.\label{eq::perm} \end{aligned}$$ The parameters $\tau_{ep}$, $\tau_{perm}$ and $\tau_{res}$ are the time scales for poration, permeabilization and resealing, respectively. Furthermore, in the above equations $\beta_0$ and $\beta_1$ are regularized step functions defined by: $$\begin{aligned} \beta_0(s) = e^{-\frac{V_{ep}^2}{s^2}}, \ \ \ \forall s \in \mathbb{R}, \label{eq::beta_0} \\ \beta_1(X) = e^{-\frac{X_{ep}^2}{X^2}}, \ \ \ \forall X \in \mathbb{R}, \label{eq::beta_1}\end{aligned}$$ where $V_{ep}$ and $X_{ep}$ are the membrane voltage and poration thresholds respectively. Computational strategy {#sec::parallelAdaptiveStrategy} ====================== Level-set representation {#subsec::LevelSet} ------------------------ As presented by Guittet [*et al. 
*]{}[@guittet2017voronoi], we describe the cells in our simulations using the level-set method as first introduced by [@OSHER198812] (see [@Gibou;Fedkiw;Osher:18:A-review-of-level-se] for a recent review) and in particular the technology on Octree Cartesian grids by Min and Gibou [@Min;Gibou:07:A-second-order-accur]. To this end, we construct a spatial signed-distance function $\phi$ relative to the irregular interface $\Gamma$ such that: $$\phi({\boldsymbol{x}}) = \begin{cases} \ \ \ d({\boldsymbol{x}},\Gamma),\ {\boldsymbol{x}} \in \Omega^e\\ \ \ \ 0,\ {\boldsymbol{x}} \in \Gamma\\ -d({\boldsymbol{x}},\Gamma),\ {\boldsymbol{x}} \in \Omega^c \end{cases}, \,{\boldsymbol{x}} \in \mathbb{R}^{3},$$ where $d({\boldsymbol{x}},\Gamma)$ is the Euclidean distance from a given point in the domain to the interface: $$d({\boldsymbol{x}},\Gamma) = \underset{{\boldsymbol{y}} \in \Gamma} \inf\ d({\boldsymbol{x}},{\boldsymbol{y}}).$$ Figures \[subfig::cell\] and \[subfig::phi\] give an example of such an interface representation and a sample level-set function, respectively. Octree data structure and refinement criterion {#subsec::OctreeDataStructure} ---------------------------------------------- Simulating a large number of biological cells in three spatial dimensions requires minimizing the total number of degrees of freedom without loss of accuracy. As the physical variations in the solution occur close to the membrane, one needs more nodes to capture the physics in the vicinity of the biological cells than in farther regions. We utilize adaptive Cartesian grids based on Quad-/Oc-trees [@Finkel1974; @MEAGHER1982129]. A “Quad-/Oc-tree” is a recursive tree data structure where each node is either a leaf node or a parent of 4 (Quadtree) or 8 (Octree) children nodes. The Octree is constructed by setting the root of the Octree to the entire computational domain. 
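For an aggregate of cells, the signed-distance function above is simply the pointwise minimum of the per-cell distances. A small illustrative Python sketch (the spherical-cell geometry is an assumption for illustration, not the geometry used in the solver):

```python
import math

def phi_sphere(x, center, r):
    """Signed distance to a spherical cell: positive outside (Omega^e),
    negative inside (Omega^c), zero on the membrane Gamma."""
    return math.dist(x, center) - r

def phi_aggregate(x, cells):
    """Level set of an aggregate of non-overlapping spherical cells:
    the pointwise minimum of the individual signed distances."""
    return min(phi_sphere(x, c, r) for c, r in cells)
```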
Then higher resolutions are achieved by recursively dividing each cell into 8 subcells (or 4 subcells in the case of Quadtrees). We use the following refinement criterion introduced by [@strain1999tree] and extended by [@min2004local] to orchestrate this partitioning of space: **Refinement/coarsening criterion:** Split a cell ($\mathcal{C}$) if the following inequality applies (otherwise merge it into its parent cell): $$\min_{v\in \textrm{vertices}(\mathcal{C})}|\phi(v)| \le \textrm{Lip}(\phi)\cdot \textrm{diag-size}(\mathcal{C}), \label{eq::refinement}$$ where we choose a Lipschitz constant of Lip$(\phi)\approx 1.2$ for the level-set $\phi$. Furthermore, diag-size($\mathcal{C}$) stands for the length of the diagonal of $\mathcal{C}$ and $v$ refers to its vertices. Intuitively, the use of the signed-distance function in equation \[eq::refinement\] translates into a refinement based on the distance from the interface. This process is depicted in figure \[subfig::Octree\]. An Octree is then characterized by its minimum/maximum levels of refinement. Figure \[subfig::full\_tree\] illustrates an example of a levels-(3,8) tree, meaning the minimum and maximum numbers of cells in each dimension are $2^3=8$ and $2^8=256$ respectively. Note that if a larger macromesh is used, these numbers are multiplied by the macromesh size; [*e*.*g*. ]{}if one sets $n_x=2$ for levels $(3,8)$ then the number of cells in the $x$-direction will be twice as large, [*i*.*e*., ]{}bound between $16$ and $512$ instead. This is the case in all of the simulations in this work. Parallel framework {#subsec:parallel-frame} ------------------ We utilize the parallelism scheme introduced by Mirzadeh [*et al. *]{}[@Mirzadeh2016345]. This scheme is built upon the ${\texttt{p4est }}$software library [@burstedde2011p4est]. ${\texttt{p4est }}$ is a suite of scalable algorithms for parallel adaptive mesh refinement/coarsening (AMR) and partitioning of the computational domain into a forest of connected Quad-/Oc-trees. 
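To fix ideas, applying the refinement criterion of equation \[eq::refinement\] recursively produces the adaptive grids described above. A minimal serial Quadtree sketch in Python (illustrative only — the actual solver relies on the parallel ${\texttt{p4est }}$ machinery):

```python
import math

def should_split(phi, lo, hi, lip=1.2):
    """Criterion of eq. (refinement): split a rectangular cell [lo,hi]
    when min |phi| over its vertices is at most Lip(phi) * diagonal."""
    (x0, y0), (x1, y1) = lo, hi
    verts = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    diag = math.hypot(x1 - x0, y1 - y0)
    return min(abs(phi(x, y)) for x, y in verts) <= lip * diag

def build_quadtree(phi, lo, hi, level, max_level, leaves):
    """Recursively refine toward the interface; collect leaf cells
    as (lo, hi, level) tuples."""
    if level < max_level and should_split(phi, lo, hi):
        (x0, y0), (x1, y1) = lo, hi
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        for clo, chi in (((x0, y0), (xm, ym)), ((xm, y0), (x1, ym)),
                         ((x0, ym), (xm, y1)), ((xm, ym), (x1, y1))):
            build_quadtree(phi, clo, chi, level + 1, max_level, leaves)
    else:
        leaves.append((lo, hi, level))
```

For a circular interface, the resulting leaves tile the domain exactly, with the finest cells concentrated in a band around the zero level set.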
The partitioning strategy used in [`p4est `]{}is illustrated in figure \[fig::p4est\_partitioning\]. This process is as follows [@burstedde2011p4est]: - A uniform macromesh is created; - A forest of Octrees is recursively constructed using all processes; - The produced forest is partitioned among all processes using a $Z$-ordering; [*i*.*e*., ]{}a contiguous traversal of all the leaves covering all the octrees. The $Z$-ordering is then stored in a one-dimensional array and is equally divided between the processes. This contiguous partitioning optimizes the communication overhead compared to the computation costs when solving equations in parallel. To perform the discretizations derived for this problem, we need to construct the local Octrees from the one-dimensional array of leaves. To this end, following the method suggested by Mirzadeh [*et al. *]{}[@Mirzadeh2016345], we construct a local tree on each process such that the levels of its leaves match those of the leaves produced by the ${\texttt{p4est }}$ refinement. This is because ${\texttt{p4est }}$ does not provide the vertical structure, and we need to be able to find a cell containing a given point quickly, in $\mathcal{O}(\log(N))$ operations. Each process stores only its local grid plus a surrounding layer of points from other processes, [*i*.*e*., ]{}a ghost layer. Quasi-random cell distribution {#subsec:quasirandom} ------------------------------ To computationally capture the effects of a large aggregate of cells under the influence of an external electric stimulus, we first need to efficiently mimic the randomness in the distribution of the cells while simultaneously constraining the minimum distance between the cells. In fact, for the purposes of this work we need to simulate tens to hundreds of thousands of cells in a relatively small computational domain if we are to observe the relevant aspects of electropermeabilization at the tissue scale. 
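The $Z$-ordering partition described above can be sketched in a few lines: the Morton index interleaves the bits of the integer cell coordinates, and the resulting one-dimensional array is split into near-equal contiguous chunks. A simplified 2D illustration (not the ${\texttt{p4est }}$ implementation):

```python
def morton2d(i, j, bits=16):
    """Interleave the bits of (i, j) into a Z-order (Morton) index."""
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return code

def partition_z_order(cells, n_procs):
    """Sort integer cell coordinates along the Z-curve, then split the
    1D array into near-equal contiguous chunks, one per process."""
    ordered = sorted(cells, key=lambda ij: morton2d(*ij))
    q, r = divmod(len(ordered), n_procs)
    chunks, start = [], 0
    for p in range(n_procs):
        size = q + (1 if p < r else 0)
        chunks.append(ordered[start:start + size])
        start += size
    return chunks
```

Because the Z-curve visits spatially nearby cells consecutively, each contiguous chunk is spatially compact, which is what keeps the ghost layers, and hence the communication, small.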
To this end, we distribute the cells using the quasi-random numbers generated by the Halton Quasi Monte Carlo (HQMC) sequence [@levy2002introduction; @Halton1960; @BRAATEN1979249; @ArtScientificComputing]. Quasi-random sequences are more uniformly distributed than the well-known pseudo-random sequences, as illustrated in figure \[fig::random\]. As seen in this figure, while uniform pseudo-random numbers suffer from local clustering and voids, the HQMC sequence spans the space more uniformly. Mathematically, the uniformity of a sequence is measured by its “discrepancy”, which is computed by comparing the number of points in a given region of space with the number of points expected from an ideal uniform distribution [@levy2002introduction]. Quasi-random sequences are also called *low* discrepancy sequences as they exhibit a more uniform spatial coverage. Remarkably, the low discrepancy characteristic is inherently built into the HQMC algorithm, as opposed to a pseudo-random number generator that would require further processing. In our approach, we locate each cell at the next element in a three-dimensional HQMC sequence while skipping the elements that violate the minimum distance criterion with respect to the previously placed cells. In contrast with a pseudo-random technique, such rejections are very rare due to the intrinsic low discrepancy of the HQMC sequence, which explains the efficiency of our technique. As the number of cells increases in our simulations, it becomes computationally prohibitive to generate such a non-overlapping pseudo-random distribution of cells at high densities. Our experiments with HQMC demonstrate that a moderately dense non-overlapping cluster of cells can be generated at least hundreds of times faster than with a pseudo-random number based technique. 
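The placement strategy can be sketched as follows; a minimal Python illustration of a Halton sequence with minimum-distance rejection (the unit-cube domain and the bases $(2,3,5)$ are assumptions for illustration):

```python
import math

def halton(index, base):
    """The index-th element of the van der Corput sequence in `base`;
    coordinates of a Halton point use coprime bases per dimension."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def place_cells(n_cells, min_dist, bases=(2, 3, 5)):
    """Place cell centers in the unit cube at successive 3D Halton
    points, skipping points that violate the minimum pairwise distance."""
    centers, index = [], 1
    while len(centers) < n_cells:
        p = tuple(halton(index, b) for b in bases)
        index += 1
        if all(math.dist(p, q) >= min_dist for q in centers):
            centers.append(p)
    return centers
```

Because the Halton points already avoid local clustering, the rejection branch fires rarely, which is the efficiency argument made above.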
Notably, initializing higher cluster volume fractions (a volume fraction of $\rm n = \dfrac{volume\ of\ the\ cells}{volume\ of\ the\ spheroid}\approx\mathcal{O}(10^{-1})$) is impractical using pseudo-random number generators. Discretization of the equations - the Voronoi Interface Method {#subsec:discretize} -------------------------------------------------------------- ![*(left to middle) An Octree is converted into an adaptive Voronoi mesh such that Voronoi faces are fitted to the interface. In our framework the computational domain is partitioned among different processors as demonstrated by different cell colors. (right) In our discretization, $u_p$ corresponds to the normal projection of nodes $i$ and $j$ on the interface ($\Gamma$). This point is equidistant to nodes $i$ and $j$. $s$ is the common length (or area in 3D) of the interface between cells $i$ and $j$. $d$ is the distance between $i$ and $j$.* []{data-label="fig::voronoi"}](discretization_voro "fig:"){width="\textwidth"} \[subfig::voro\] The main difficulty in solving the equations of section \[subsec::Elect\_model\] is related to the non-trivial boundary conditions and discontinuities across the cells’ surfaces. Guittet [*et al. *]{}[@GuittetVoronoi] introduced the Voronoi Interface Method (VIM) to solve elliptic problems with discontinuities on irregular interfaces. Their proposed method exhibits second-order accuracy by solving the problem on a Voronoi mesh instead of the given Cartesian grid. Guittet [*et al. *]{}[@guittet2017voronoi] then extended the VIM to the electropermeabilization problem, including the aforementioned non-trivial boundary condition in the discretization. In this work, we implement their modified approach in parallel. In this section we briefly highlight this technique. The solver presented by Guittet [*et al. *]{}[@GuittetVoronoi] is based on building a Voronoi mesh using the freely available library [`Voro++ `]{}[@voro]. 
The Poisson equation is then solved on a Voronoi mesh that coincides with the irregular interface. This introduces additional degrees of freedom close to the interface, on either side, that are equidistant to the interface by design. Briefly, the procedure for converting an initial adaptive Cartesian mesh to a conforming Voronoi mesh starts by adopting the Cartesian nodes as cell centers, known as Voronoi seeds, for a Voronoi mesh covering the computational domain. Next, if a Voronoi cell crosses the interface, we replace the corresponding degree of freedom with a pair of equidistant points on either side of the interface. This procedure provides a conforming Voronoi tessellation of the domain such that interfaces are tiled with collections of faces from adjacent Voronoi cells. For more details on generating the Voronoi mesh we refer the interested reader to [@GuittetVoronoi]. Here, we present the numerical scheme of Guittet [*et al. *]{}[@guittet2017voronoi] for completeness, using the nomenclature given in Figure \[fig::voronoi\]. 
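The seed-splitting step, which makes the Voronoi faces conform to $\Gamma$, can be illustrated with the level-set function: a crossing seed is projected onto the interface along $\nabla\phi$ and replaced by a pair of points at equal distance on either side. A simplified sketch (the offset $\epsilon$ and the finite-difference gradient are illustrative assumptions, not the library's procedure):

```python
import math

def grad_phi(phi, x, h=1e-6):
    """Central-difference gradient of the level-set function phi at x."""
    g = []
    for d in range(len(x)):
        xp, xm = list(x), list(x)
        xp[d] += h
        xm[d] -= h
        g.append((phi(xp) - phi(xm)) / (2 * h))
    return g

def equidistant_pair(phi, x, eps):
    """Replace a Voronoi seed whose cell crosses Gamma by two seeds at
    distance eps on either side of the interface, so that the shared
    Voronoi face between them lies on Gamma itself."""
    g = grad_phi(phi, x)
    norm = math.hypot(*g)
    n = [gi / norm for gi in g]
    proj = [xi - phi(x) * ni for xi, ni in zip(x, n)]   # point on Gamma
    inside = [pi - eps * ni for pi, ni in zip(proj, n)]
    outside = [pi + eps * ni for pi, ni in zip(proj, n)]
    return inside, outside
```

Since the two new seeds are mirror images through the interface, the Voronoi face they share is, by construction, a piece of $\Gamma$.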
First we discretize the boundary condition using a standard Backward Euler scheme: $$C_m\frac{\left[u\right]^{n+1} - \left[u\right]^{n}}{\Delta t} + S^n\left[u\right]^{n+1} = (\sigma\partial_n u^{n+1})_\Gamma,$$ which can be rearranged to get the membrane voltage jump: $$\left[u\right]^{n+1} = \frac{C_m\left[u\right]^n+\Delta t(\sigma\partial_n u^{n+1})_\Gamma}{C_m+\Delta t S^n}. \label{eq::jump}$$ In the second step, we discretize the electric flux continuity boundary condition \[eq::bc1\]: $$\sigma^e\frac{u^e_p - u^e_i}{d/2}=\sigma^c\frac{u^c_j - u^c_p}{d/2}.$$ Replacing $u^c_p$ by its definition $u^e_p-\left[u\right]^{n+1}$ in the above expression, coupling it with equation \[eq::jump\] and rearranging the terms, the final expression of $u^e_p$ reads: $$u^e_p = \bigg(\sigma^e u^e_i + \sigma^c u^c_j + \frac{\sigma^c C_m \left[u\right]^n}{C_m+\Delta t S^n} + \frac{\sigma^c\sigma^e \Delta t }{(C_m+\Delta t S^n)d/2}u^e_i \bigg)/ \bigg(\sigma^c + \sigma^e + \frac{\sigma^c \sigma^e \Delta t}{(C_m + \Delta t S^n)d/2} \bigg). \label{eq::uep}$$ This equation for $u^e_p$ is then included in the discretization of the Laplace equation on the Voronoi cells. Finally, we get the following expression for the potential around the interface: $$\sum_{k \in \{ \partial \mathcal{C} \backslash \Gamma\}} s_k \sigma^e \frac{u^e_k -u^e_i}{d_k} + s\hat{\sigma} \frac{u_j - u_i}{d/2} = \mathrm{sign}(\phi_i)s \hat{\sigma}\frac{C_m\left[u\right]^n}{(C_m + \Delta t S^n)d/2},$$ where $$\hat{\sigma} = \frac{\sigma^c \sigma^e}{\sigma^e + \sigma^c+\frac{\sigma^e \sigma^c \Delta t}{(C_m+ \Delta t S^n) d/2}},$$ and “$\rm{sign}$” refers to the signum function. This discretization leads to a positive definite linear system, as all coefficients are positive and the jump appears only on the right-hand side of the system. We emphasize that the points far from the interface are discretized according to a standard finite volume discretization on the Voronoi grid. 
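The two scalar building blocks of this scheme, the backward-Euler update of the jump (equation \[eq::jump\]) and the effective face conductivity $\hat{\sigma}$, are simple enough to state directly. A Python sketch with scalar stand-ins for the mesh quantities (illustrative only):

```python
def sigma_hat(sc, se, dt, Cm, S, d):
    """Effective conductivity across a membrane face:
    sigma_hat = sc*se / (se + sc + sc*se*dt / ((Cm + dt*S) d/2))."""
    return sc * se / (se + sc + sc * se * dt / ((Cm + dt * S) * d / 2))

def jump_update(u_jump, flux, dt, Cm, S):
    """Backward-Euler update of the transmembrane jump (eq. jump):
    [u]^{n+1} = (Cm [u]^n + dt * flux) / (Cm + dt * S)."""
    return (Cm * u_jump + dt * flux) / (Cm + dt * S)
```

As $\Delta t \to 0$, $\hat{\sigma}$ reduces to the series conductivity $\sigma^c\sigma^e/(\sigma^c+\sigma^e)$, and iterating the jump update with a constant flux drives $[u]$ to the steady state flux$/S$.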
Integrations are performed with the geometric approach of Min and Gibou [@Min;Gibou:07:Geometric-Integratio]. Note that finite volume discretizations are flexible with respect to spatial variations of the Voronoi mesh topology, as they only utilize values at adjacent Voronoi cell centers, as well as values of the jump on the faces midway between pairs of Voronoi cells around the interface. Unlike finite difference discretizations, this aspect circumvents the challenges that arise when treating the faces between coarser and finer grids. Numerical Results {#sec::NumericalResults} ================= Qualitative results ------------------- First, we present numerical results illustrating the capabilities of our approach in capturing the interaction between the cell membranes and the applied electric field. Electric fields provide a feedback channel for the cell membranes to interact over long distances, and this leads to an environmental dependence of electropermeabilization within the aggregate. Second, to demonstrate this effect on a biologically relevant construct and to showcase the computational capabilities of our approach, we consider the case of a spherical aggregate of cells confined in the center of a computational box of size $1\ mm$ on each side. The volume fraction of cells is set to $n=0.13$, corresponding to $27,440$ well-resolved cells. The minimum distance between each pair of cells is set to $3\times R_0$, where $R_0$ is the average radius of a cell. At present, we only intend to randomly distribute the spheroids with varying eccentricities and orientations. Therefore, this minimum threshold was adopted conservatively to avoid overlap between cells. A denser configuration would require accounting for the orientation of each neighboring cell in order to fill the free space more compactly. The different parameters defining the geometry and properties of the cells are tabulated in table \[tab:properties\]. 
The computational configuration used to run this simulation is tabulated in table \[tab:config\]. The resulting cell aggregate is illustrated in figure \[fig::clusterconfig\], with figure \[subfig::clusterPhi\] depicting the electric potential (the aforementioned $u$ field) across the domain and figure \[subfig::clusterProcess\] showing the partitioning between the $2048$ processors (identified with different colors - for visualization purposes, every adjacent 8 processors are displayed with the same color). Figure \[fig::zoom\] provides a cross section of the domain as well as a zoom that demonstrates that the cells are well-resolved.

  Property                             Symbol                          Value                      Units
  ------------------------------------ ------------------------------- -------------------------- ---------
  Average cell radius                  $R_0$                           $7$                        $\mu m$
  Cell radii                           $r_0$                           $0.57$--$1.43\times R_0$   $\mu m$
  Semi-axes                            $a$, $b$, $c$                   $0.8$--$1.2\times R_0$     $\mu m$
  Capacitance                          $C$                             $9.5\times 10^{-3}$        $F/m^2$
  Extracellular conductivity           $\sigma^e$                      $15$                       $S/m$
  Intracellular conductivity           $\sigma^c$                      $1$                        $S/m$
  Voltage threshold for poration       $V_{ep}$                        $258\times 10^{-3}$        $V$
  Membrane surface conductivity        $S_0$                           $1.9$                      $S/m$
  Porated membrane conductance         $S_1$                           $1.1\times10^6$            $S/m^2$
  Permeabilized membrane conductance   $S_2$                           $10^4$                     $S/m^2$
  Poration timescale                   $\tau_{ep}$                     $10^{-6}$                  $s$
  Permeabilization timescale           $\tau_{perm}$                   $80\times10^{-6}$          $s$
  Resealing timescale                  $\tau_{res}$                    $60$                       $s$
  Threshold for poration               $X_{ep}$                        $0.5$                      --
  Electric field magnitude             $\vert {\boldsymbol{E}}\vert$   $40$                       $kV/m$

  \[tab:properties\]

  Property                                                     Value
  ------------------------------------------------------------ ---------------------
  Macromesh in x, y & z directions $n_x\times n_y\times n_z$   $2\times 2\times 2$
  Minimum/maximum levels of refinement $(l_{min},l_{max})$     $(2,9)$
  Total number of Voronoi cells                                $224,218,754$
  Total number of nodes                                        $194,666,253$
  Number of processors                                         $2048$
  Total time of simulation                                     $\approx 9$ hours
  Number of timesteps                                          $44$
Total physical time of the simulation                        $2.25\ \mu s$

  \[tab:config\]

Convergence test and mesh independence {#sec::convergence} -------------------------------------- ![*The configuration used for convergence tests. (a) A circular cross section of the cell demonstrates how the electric potential field experiences a jump when passing through the interface. (b) The jump is measured on the Octree mesh by first extrapolating solutions on each side to the opposite side and then subtracting the extrapolated values on the nodes around the interface.*[]{data-label="fig::Jump"}](converg "fig:"){width="\textwidth"} \[subfig::JumpSol\] To validate the numerical reliability of our implementation, we investigate the spatio-temporal convergence of the transmembrane potential jump, which is the key variable that couples the electropermeabilization equations. For this purpose, we consider a single spherical cell and track the evolution of the transmembrane potential jump $\left[u\right]$ at a $\pi/4$ radian distance from the cell’s equator over time. Figure \[fig::Jump\] illustrates the setup used for this purpose, as well as the refined mesh used. We use the dynamic linear case with $S=S_L$, for which the transmembrane jump, $[u]$, satisfies: $$C\frac{\partial [u]}{\partial t} + S_L[u]=\sigma_c\frac{\partial u}{\partial {\boldsymbol{n}}}.$$ In this case, the exact solution is available for our validations and reads: $$[u](t,\theta)=\frac{A}{S_L-B}g \bigg(1-e^{-\frac{S_L-B}{C}t}\bigg)\cos(\theta),$$ where $g=ER_2$ and $\theta$ is the polar angle measured from the north pole. Also, $A$ and $B$ are given by: $$\begin{aligned} K^{-1}&=R_1^3(\sigma_e-\sigma_c)+R_2^3(2\sigma_e+\sigma_c),\\ A&=3\sigma_c\sigma_e R_2^2K,\\ B&=-\sigma_c\sigma_e\bigg(R_1^2+\frac{2R_2^3}{R_1}\bigg)K.\end{aligned}$$ In our tests, we use $R_1=50\mu m$ and $R_2=600\mu m$. We perform the spatial and temporal refinements separately. 
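The exact solution can be evaluated directly for the error measurements below. A Python transcription (note that we write $B$ with $K$ multiplying both terms, as dimensional consistency with $A$ requires; the parameter values used in the check are taken from the tables above and are illustrative):

```python
import math

def linear_jump_exact(t, theta, E, R1, R2, sc, se, C, SL):
    """Exact transmembrane jump for a spherical cell in the dynamic
    linear regime (constant membrane conductance S = S_L)."""
    K = 1.0 / (R1**3 * (se - sc) + R2**3 * (2 * se + sc))
    A = 3 * sc * se * R2**2 * K
    B = -sc * se * (R1**2 + 2 * R2**3 / R1) * K
    g = E * R2
    return (A / (SL - B)) * g * (1 - math.exp(-(SL - B) / C * t)) * math.cos(theta)
```

The jump starts at zero, grows monotonically toward its steady-state value $A g/(S_L-B)$, and carries the $\cos\theta$ angular dependence of the dipole response.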
First, we compare the results from simulations with different timesteps at a fixed resolution level of $\rm (l_{min}, l_{max})=(3, 7)$. In figure \[subfig::convergence\_time\] we show how the jump converges as we decrease the time step by a factor of $2$ each time. We performed our simulations with time steps of $\Delta t = 1\times 10^{-8}\ (s), 2\times 10^{-8}\ (s), 4\times10^{-8}\ (s), 8\times10^{-8}\ (s)$ and, only for the linear case, also with $1.6\times10^{-7}\ (s)$. This is because in the nonlinear case the latter time step is too large to capture the width of the peak in the jump profile. Also, in figure \[subfig::convergence\_space\] we increase the maximum refinement level while keeping the minimum refinement level fixed at $\rm l_{min}=3$ and the time step constant at $\Delta t = 2\times 10^{-8}\ (s)$; these are plotted with solid lines. Additionally, we perform identical simulations while simultaneously increasing both the minimum and maximum levels of refinement; these are shown with dashed lines. This is motivated by the observation that the solid lines in figure \[subfig::convergence\_space\], corresponding to a fixed $\rm l_{min}=3$, converge to the exact solution at a slower rate than the dashed lines. Maintaining a low $\rm l_{min}$ while enhancing the resolution at the interface does not improve accuracy, because the errors produced on the coarser grid far from the interface become dominant in the simulation box, making further refinements useless when considering the error in the maximum norm. Even though both cases demonstrate convergence, increasing both the minimum and maximum refinement levels naturally exhibits a better convergence behavior. We also demonstrate that for the full nonlinear dynamic case, the convergence of our numerical results is achieved both in time and space in figures \[subfig::convergence\_time\_nonlinear\] and \[subfig::convergence\_space\_nonlinear\] respectively. 
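From error measurements on such a sequence of halved timesteps (or refined grids), the observed convergence order follows from the standard ratio formula $p_k=\log(e_k/e_{k+1})/\log 2$; a one-function sketch:

```python
import math

def observed_order(errors, refinement=2.0):
    """Observed convergence orders from errors measured on successively
    refined timesteps/grids: p_k = log(e_k / e_{k+1}) / log(ratio)."""
    return [math.log(e0 / e1) / math.log(refinement)
            for e0, e1 in zip(errors, errors[1:])]
```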
In the nonlinear case, we choose a constant electric field intensity of $E=40\ kV/m$ across the domain in the $z$-direction. The size of the domain is $400\mu m$ in each spatial dimension. For the temporal convergence, we performed our simulations at fixed resolution levels of $(3,7)$, and for the spatial convergence we picked a fixed timestep of $\Delta t=2\times 10^{-8}(s)$ while varying the maximum refinement level. In the nonlinear case, convergence in time is more delicate. As noted in [@guittet2017voronoi], this is expected due to the highly nonlinear temporal nature of the equations, while the equations are spatially well-behaved. This implies that smaller timesteps are preferable over finer spatial resolutions for decreasing the numerical errors. Hence, we observe that the system’s response converges in both the linear and nonlinear cases. We also note that in the real-case simulations we perform, the timestep is set after choosing the desired resolution levels of the mesh. In each simulation, the timestep is then computed from $\Delta t = \Delta {\boldsymbol{x}}_{min}/dt_{scaling}$. Performance and scalability of the approach {#sec::results_scaling} ------------------------------------------- We show a simple test of the performance of the parallel approach for real applications of interest. We solve the same cell aggregate problem introduced in section \[sec::NumericalResults\] on different numbers of processors while keeping all other parameters fixed. This test captures the full problem complexity and hence enables a reasonable assessment of the computational efficiency and scalability of the approach. Constructing the Voronoi mesh at each time step and solving the linear system arising from the discretization introduced in section \[subsec:discretize\] constitute the bulk of the computational expense of our approach. 
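Given wall-clock times measured at increasing processor counts, strong-scaling speedup and parallel efficiency are computed relative to the smallest run; a small helper sketch (the timings in the check below are made up for illustration, not measured data):

```python
def strong_scaling(procs, times):
    """Speedup and parallel efficiency relative to the smallest run:
    speedup_p = t0 / t_p,  efficiency_p = (t0 * p0) / (t_p * p)."""
    p0, t0 = procs[0], times[0]
    speedup = [t0 / t for t in times]
    efficiency = [t0 * p0 / (t * p) for p, t in zip(procs, times)]
    return speedup, efficiency
```

Perfect strong scaling corresponds to an efficiency of one; values below one quantify the growing share of communication overhead.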
Figure \[fig::scaling\] demonstrates that our approach handles these tasks efficiently up to $4096$ processors, which is the upper limit of our current allocation on the “Stampede2” supercomputer. In figure \[fig::scaling\], we also show the scaling test for a smaller cell density in order to demonstrate the capabilities of our implementation at smaller problem sizes, where communication overhead can easily exceed computation time. Interestingly, we find that our approach exhibits excellent scalability even for quite small problems. We should emphasize that parallelization is only one avenue to simulating larger problems in our methodology. Another significant aspect is the use of adaptive mesh refinement on Octree grids. This introduces a significant reduction in the size of the grid, from $\approx2^{30}$ nodes to $194,666,253$ nodes in this example. We refer the interested reader to [@Mistani2017] for a quantitative study of this enhancement. This consequently extends the scales that can be simulated with the current state-of-the-art resources. Mesoscale Phenomenology {#sec::emergent} ======================= Cell aggregates are complex systems composed of many cells that each follow a set of principles and collectively reach an equilibrium state with their environment. Cell aggregates exhibit emergent phenomena [@nagel1961structure], *i.e.* “novel and robust behaviors of a system that appear at the limit of some parameter in the system” [@butterfield2011less; @butterfield2011emergence]. In our case, a weak form of emergence appears at some finite limit of system size. These novel features are robust against certain details at the smaller scales of the aggregate, in the sense that, via the process of coarse-graining, the renormalized parameters describing theories at different scales *always* converge to certain fixed values in natural systems (cf. [@kadanoff2013relating]).
This generic feature of complex systems is recognized as a fundamental principle of nature [@butterfield2015renormalization]. Recently, the descriptive framework that arises by relying on this aspect of complex systems has been discussed in [@Mistani2019a; @Mistani2019b]. In the study of complex systems, computational strategies provide a powerful, and in some specific cases the *only*, method to explore so-called “weak emergence” phenomena, first described by Bedau [@bedau2002downward]. Weak emergence is attributed to those physical aspects of complex systems that, in practice, only appear through computer simulations. This is due to the nonlinearity of the micro-level equations and the complex interactions between their constituent parts. For a comprehensive review of this topic we refer the interested reader to Fulmer [*et al. *]{}[@fulmer2016convergence]. As in most large-scale numerical simulations, our main purpose is to study the non-local effects that are not already encoded *locally* in the governing partial differential equations, but are encoded in the spatial domain as a whole and influence the overall behavior via feedback processes among elements. In the case of electroporation, such influences are in part due to the heterogeneous cell topologies, long-range electrostatic interactions, and the overall shape of the aggregate, among other factors. In this section, we aim to show that macro-level features of cell aggregates are recovered in our methodology. We first demonstrate the influence of cell shape on the macro-level properties of the aggregate, and then present first results for a tumor-like aggregate. Effect of biological cell shape {#sec::shape} ------------------------------- Biological cells come in different shapes. We place three simple types of cells in the same experimental setup and compare their bioelectric behavior.
To this end, we place oblate, spherical and prolate cells with identical volume on a $7\times7\times7$ regular lattice. Figure \[fig::topog\] shows the configurations used in our experiments, and the effect of cell shape is compared in figure \[fig::shapes\]. One can observe that cells with prolate topology exhibit higher levels of permeabilization, spheres fall in between, and oblate spheroids are the least electroporated. This is consistent with previous reports of [@guittet2017voronoi], and may be due to the larger effective cross-sectional area exposed to the influx of the electric field. ![*Effect of cell shape on the parameters of the electroporation model.*[]{data-label="fig::shapes"}](shape_effect){width="\textwidth"} Shadowing effect {#sec::shadowing} ---------------- Shadowing refers to the adverse effect of upstream cells on the permeabilization levels exhibited by their downstream counterparts. We performed experiments on a controlled sample of 125 spherical cells in a cubic lattice centered in a bounding box with twice the size of the lattice. We place cells symmetrically in a $5 \times 5 \times 5$ array, as depicted in figure \[subfig::shadows1\]. We compare the surface average of the $X_2$ parameter over the surface of all cells in the top, center, and bottom rows. The results are given in figure \[subfig::shadows2\]. As expected, the middle row is less permeabilized, and cells closer to the electrodes (in this specific configuration) exhibit higher levels of permeabilization. In particular, this observation is in accordance with the experimental data on spheroid electroporation of Rols [*et al. *]{} [@WASUNGU2009278]. Note that owing to the reflection symmetry, the top and bottom slices are in identical environments; this is also reflected in the overlapping permeabilization curves in figure \[subfig::shadows2\].
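The surface averages used above amount to an area-weighted mean of a membrane parameter over the mesh faces of the cells in a row; a minimal sketch, where the face areas and $X_2$ values are hypothetical:

```python
def surface_average(faces):
    """Area-weighted average of a membrane parameter over a set of
    mesh faces, each given as an (area, value) pair."""
    total_area = sum(area for area, _ in faces)
    return sum(area * value for area, value in faces) / total_area

# Hypothetical X_2 samples on three membrane faces of one cell:
faces = [(1.0, 0.2), (2.0, 0.5), (1.0, 0.8)]
print(surface_average(faces))  # area-weighted mean of X_2
```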
So far we have only considered regular lattice configurations; in the remainder of this work, we focus on the tumor-like demonstration case depicted in figure \[fig::clusterconfig\]. To date, the computational study of this biologically relevant structure is only possible with the approach introduced in this manuscript. Electroporation fraction {#sec::fraction} ------------------------ In experiments, one can measure the fraction of cells that are electropermeabilized beyond a detectable threshold. In order to compare our numerical results with experiments, we set the minimum detectable threshold for electropermeabilization to different values: $$S_m\ge k\, S_L, \qquad k \in \{10^2,\ 10^3,\ 10^4,\ 10^5\}.$$ Then, we measure the total electropermeabilized surface area of all cells normalized by the total surface area of the cells. Figure \[fig::clusterPerm1\] depicts the permeabilization pattern throughout a dense suspension (volume fraction of $13\%$), and figure \[fig::area\] quantifies the evolution of the membrane electropermeabilization fraction. Remarkably, we observe that the maximum value of this fraction under a short $40kV/m$ electric pulse reaches $\approx 70\%$, $ 65\%$, $ 50\%$ and $ 5\%$ for the given thresholds respectively. This is in qualitative agreement with the experimental results of Pucihar [*et al. *]{}[@pucihar2007electropermeabilization]. ![*Electropermeabilization fraction over time for a $1\mu s$ square pulse of $40kV/m$. Figures on the right panel are color coded by conductance, with hotter colors encoding higher conductance levels.*[]{data-label="fig::area"}](Sm_evolution){width="\textwidth"} The evolution of the relevant electropermeabilization parameters, including the membrane conductance ($S_m$), the level of membrane poration ($X_1$), the level of membrane permeabilization ($X_2$) and the absolute value of the transmembrane potential (TMP), is shown in figure \[fig::allparams\] for reference.
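The fraction measurement can be sketched as follows; the function and the membrane conductance values below are illustrative placeholders, not our simulation data:

```python
def permeabilized_fraction(faces, s_l, factor):
    """Fraction of total membrane area where S_m >= factor * S_L.
    `faces` is a list of (area, S_m) pairs over all cell membranes."""
    total = sum(area for area, _ in faces)
    above = sum(area for area, s_m in faces if s_m >= factor * s_l)
    return above / total

# Hypothetical membrane conductances (S/m^2), with S_L = 1:
faces = [(1.0, 5e2), (1.0, 5e3), (1.0, 5e4), (1.0, 2e5)]
for k in (1e2, 1e3, 1e4, 1e5):
    print(k, permeabilized_fraction(faces, s_l=1.0, factor=k))
```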
One observation is that the transmembrane voltage does not vanish immediately after the external pulse is turned off; this is due to the capacitive nature of the cell membranes, which maintain a slowly vanishing electric field in the environment. ![*Time evolution of relevant parameters averaged over the membranes of the $27,440$ cells in our simulations. The applied pulse is turned off at $1\mu s$.*[]{data-label="fig::allparams"}](params_cluster "fig:"){width="\textwidth"} \[subfig::params\] The signature of the nonlinear model underlying the evolution of the transmembrane voltage is also evident in these figures. We present three snapshots of the transmembrane potential in the aggregate in figure \[fig::VnEvolution\]. These snapshots capture the initial overshoot in the transmembrane voltage (cf. figure \[fig::allparams\]) and the saturation phase that follows. The snapshots are color coded according to the transmembrane potential. Impedance of the aggregate {#sec::impedance} -------------------------- In these simulations we apply a constant and uniform potential difference between the electrodes. The electric field adapts to the geometrical configuration of the domain as well as of the cells, while the cell membranes also distort the field. The distortions of the electric field close to the boundaries, where the electrodes are located, produce a different profile for the “needle potential” that the cell aggregate experiences. The needle intensity is defined as: $$I(t) = \int_{\mathcal{E}_1} \sigma^e\, \nabla V(t,x)\cdot {\boldsymbol{n}}\, ds, \label{eq::intensity}$$ where $\mathcal{E}_1$ is one of the electrodes where the voltage is imposed. The evolution of the needle intensity for the tumor-like aggregate is shown in figure \[subfig::intensity\]. Furthermore, one can measure the overall permeability within the environment by measuring the impedance of the sample detected at the electrodes.
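In a discrete setting, the integral defining the needle intensity reduces to a sum over the electrode faces; a minimal sketch with hypothetical face data (conductivity, normal derivative of $V$, and face area):

```python
def needle_intensity(faces):
    """Discrete surface integral of sigma_e * (grad V . n) over the
    electrode: sum over faces of conductivity * normal flux * area."""
    return sum(sigma_e * dv_dn * area for sigma_e, dv_dn, area in faces)

# Hypothetical electrode faces: (sigma_e in S/m, dV/dn in V/m, area in m^2)
faces = [(1.5, 2.0e4, 1e-9), (1.5, 1.8e4, 1e-9)]
print(needle_intensity(faces))  # total current through the electrode (A)
```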
We define the impedance of the cell aggregate as: $$Z(t) = \frac{\int_{\mathcal{E}_{1-2}}V(t,x)\, ds/\int_{\mathcal{E}_{1}}ds}{ \int_{\mathcal{E}_1}\sigma^e\, \nabla V(t,x)\cdot {\boldsymbol{n}} \, ds}, \label{eq::impedance}$$ where $\mathcal{E}_{1}$ and $\mathcal{E}_{2}$ are either the top or the bottom electrode, and the integral over $\mathcal{E}_{1-2}$ denotes the difference of the integrals over the $\mathcal{E}_{1}$ and $\mathcal{E}_{2}$ electrodes. Note that the exact choice of labels does not change the result, owing to the continuity of the current through the medium. The time evolution of the impedance of the aggregate is shown in figure \[subfig::impedance\]. Comparison with figure \[fig::allparams\] suggests a strong negative correlation between the impedance and the overall degree of permeability. We find that even though permeabilized cells undergo a large increase in their membrane conductance (from $1$ to $10^4$ $S/m^2$), as illustrated in figure \[fig::allparams\], the relative impedance of the aggregate drops by only $\approx 0.15\%$ after $1\mu s$ of a constant external electric pulse. Conclusion {#sec::Conclusion} ========== We have presented a computational framework for parallel simulations of cell aggregate electropermeabilization at the mesoscale. We used an adaptive Octree/Voronoi mesh along with a numerical treatment that preserves the jump in the electric potential across each cell’s membrane. The core strengths of our methodology are its efficiency and excellent scalability, which make it possible to consider meaningful simulations of tumor-like spheroids, in contrast to previous serial approaches that could not go beyond micro-scale simulations. We have presented preliminary numerical results on cell aggregate electropermeabilization that are in qualitative agreement with experimental observations.
This work thus paves the way for a wide range of comparisons with biological experiments, as it makes possible the multiscale understanding of electroporation from the cell to the tissue. Acknowledgement {#acknowledgement .unnumbered} =============== The research of P. Mistani, A. Guittet and F. Gibou was supported by NSF DMS-1620471 and ARO W911NF-16-1-0136. C. Poignard’s research is supported by Plan Cancer DYNAMO (ref. PC201515) and Plan Cancer NUMEP (ref. PC201615). P. Mistani would like to thank Daniil Bochkov in the CASL group for fruitful discussions that have contributed to this research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC and visualization resources that have contributed to the research results reported within this paper. This research was performed in part within the scope of the Inria associate team NUM4SEP, between the CASL group at UCSB and the Inria team MONC. C.P.’s research is partly performed within the scope of the European Associated Laboratory EBAM on electroporation, granted by CNRS. References {#references .unnumbered} ==========
--- abstract: 'Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons. To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains. However, we show that the performance improvements are not a result of improved visual grounding, but a regularization effect which prevents over-fitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2.' author: - | Robik Shrestha$^1$ Kushal Kafle$^1$ Christopher Kanan$^{1,2,3}$\ Rochester Institute of Technology$^1$ Paige$^2$ Cornell Tech$^3$\ `{rss9369, kk6055, kanan}@rit.edu` bibliography: - 'acl2020.bib' title: A negative case analysis of visual grounding methods for VQA --- Introduction {#sec:introduction} ============ Visual Question Answering (VQA) [@antol2015vqa], the task of answering questions about visual content, was proposed to facilitate the development of models with human-like visual and linguistic understanding. However, existing VQA models often exploit superficial statistical biases to produce responses, instead of producing the right answers for the right reasons [@kafle2019challenges]. The VQA-CP dataset [@agrawal2018don] showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. 
To tackle this issue, recent works have endeavored to enforce proper visual grounding, where the goal is to make models produce answers by looking at relevant visual regions [@gan2017vqs; @selvaraju2019taking; @wu2019self], instead of exploiting linguistic priors. These approaches rely on additional annotations/cues such as human-based attention maps [@das2017human], textual explanations [@huk2018multimodal] and object label predictions [@ren2015faster] to identify relevant regions, and train the model to base its predictions on those regions, showing large improvements (8-10% accuracy) on the VQA-CPv2 dataset. ![We find that existing visual sensitivity enhancement methods improve performance on VQA-CPv2 through regularization as opposed to proper visual grounding.](images/main-image-alt.pdf){width="0.98\linewidth"} Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model *forgets* the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy. Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the predictions are correct or incorrect. 
We find that this approach also achieves near state-of-the-art performance ($48.9\%$ on VQA-CPv2), providing further support for our claims. While we agree that visual grounding is a useful direction to pursue, our experiments show that the community requires better ways to test if systems are actually visually grounded. We make some recommendations in the discussion section. Related Work ============ Biases in VQA ------------- As expected of any real world dataset, VQA datasets also contain dataset biases [@goyal2017making]. The VQA-CP dataset [@agrawal2018don] was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nearly impossible for the models that rely upon linguistic correlations to perform well on the test set [@agrawal2018don; @shrestha2019ramen]. Bias Mitigation for VQA ----------------------- VQA algorithms without explicit bias mitigation mechanisms fail on VQA-CP, so recent works have focused on the following solutions: ### Reducing Reliance on Questions Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization [@grand2019adversarial; @ramakrishnan2018overcoming] or to re-scale the loss based on the difficulty of the question [@cadene2019rubi]. However, when these ideas are applied to the UpDn model [@Anderson2017up-down], which attempts to learn correct visual grounding, these approaches achieve 4-7% lower accuracy compared to the state-of-the-art methods. ### Enhancing Visual Sensitivities Both Human Importance Aware Network Tuning (HINT) [@selvaraju2019taking] and Self Critical Reasoning (SCR) [@wu2019self], train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. 
HINT proposes a ranking loss between human-based importance scores [@das2016human] and the gradient-based sensitivities. In contrast, SCR does not require exact saliency ranks. Instead, it penalizes the model if correct answers are more sensitive towards non-important regions as compared to important regions, and if incorrect answers are more sensitive to important regions than correct answers. Existing VQA Methods ==================== Given a question $\mathcal{Q}$ and an image $\mathcal{I}$, *e.g.,* represented by bottom-up region proposals: $v$  [@Anderson2017up-down], a VQA model is tasked with predicting the answer $a$: $$\begin{aligned} P(a|\mathcal{Q}, \mathcal{I}) = f_{VQA}(v, \mathcal{Q}).\end{aligned}$$ Baseline VQA Methods -------------------- Without additional regularization, existing VQA models such as the baseline model used in this work: UpDn [@Anderson2017up-down], tend to rely on the linguistic priors: $P(a|\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in the test set differ from the train set. Visual Sensitivity Enhancement Methods -------------------------------------- To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions. Following [@wu2019self], we define the sensitivity of an answer $a$ with respect to a visual region $v_i$ as: $$\begin{aligned} \mathcal{S}(a, v_i) := (\nabla_{v_i} P(a|\mathcal{I}, \mathcal{Q}))^T \mathbf{1}.\end{aligned}$$ Existing methods propose the following training objectives to improve grounding using $\mathcal{S}$: - **HINT** uses a ranking loss, which penalizes the model if the pair-wise rankings of the sensitivities of visual regions towards ground truth answers $a_{gt}$ are different from the ranks computed from the human-based attention maps. 
- **SCR** divides the region proposals into influential and non-influential regions and penalizes the model if: 1) $\mathcal{S}(a_{gt})$ of a non-influential region is higher than that of an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers. Both methods improve the baseline accuracy by 8-10%. Is this actually due to better visual grounding? Why Did the Performance Improve? {#sec:improvement-reasons} ================================ We probe the reasons behind the performance improvements of HINT and SCR. We first analyze whether the results improve even when the visual cues are irrelevant (Sec. \[sec:irrelevant\_regions\]) or random (Sec. \[sec:random\_regions\]), and examine if their differences are statistically significant (Sec. \[sec:stats\]). Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2’s train split (Sec. \[sec:drop\_in\_train\]) and the behavior on a dataset whose priors do not change (Sec. \[sec:drop\_in\_vqav2\]). We present a new metric to assess visual grounding in Sec. \[sec:cpig\] and describe our regularization method in Sec. \[sec:our-approach\]. Experimental Setup {#sec:experimental_setup} ------------------ We compare the baseline UpDn model with HINT and SCR variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across $5$ runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Further training details are provided in the Appendix. Training on Irrelevant Visual Cues {#sec:irrelevant_regions} ---------------------------------- In our first experiment, we studied how irrelevant visual cues performed compared to relevant ones. We fine-tune the model with irrelevant cues defined as $\mathcal{S}_{irrelevant} := (1 - \mathcal{S}_h)$, where $\mathcal{S}_h$ represents the human-based importance scores.
As shown in the ‘Grounding using irrelevant cues’ section of Table \[tab:main-results\], both HINT and SCR come within 0.3% of the results obtained from looking at relevant regions, which indicates that the gains for HINT and SCR do not necessarily come from looking at relevant regions. Training on Random Visual Cues {#sec:random_regions} ------------------------------ In our next experiment, we studied how random visual cues performed with HINT and SCR. We assign random importance scores to the visual regions: $\mathcal{S}_{rand} \sim \textit{uniform}(0,1)$. We test two variants of randomness: **Fixed random regions**, where $\mathcal{S}_{rand}$ are fixed once chosen, and **Variable random regions**, where $\mathcal{S}_{rand}$ are regenerated every epoch. As shown in Table \[tab:main-results\], both of these variants obtain results similar to the model trained with human-based importance scores. The performance improves even when the importance scores are changed every epoch, indicating that it is not even necessary to look at the *same* visual regions. Significance of Statistical Differences {#sec:stats} --------------------------------------- To test if the changes in results were statistically significant, we performed Welch’s t-tests [@welch1938significance] on the predictions of the variants trained on relevant, irrelevant and random cues. We pick Welch’s t-test over the Student’s t-test because the latter assumes equal variances for predictions from different variants. To perform the tests, we first randomly sample $5000$ subsets of non-overlapping test instances. We then average the accuracy of each subset across $5$ runs, obtaining $5000$ values. Next, we run the t-tests for HINT and SCR separately on the subset accuracies. As shown in Table \[tab:hint-overlaps\], the $p$-values across the variants of HINT and SCR are greater than or equal to $0.3$.
Using a confidence level of $95\%$ ($\alpha = 0.05$), we fail to reject the null hypothesis that the mean difference between the paired values is $0$, showing that the variants are not statistically significantly different from each other. We also compare the predictions of HINT/SCR against the baseline, and find that the $p$-values are all zero, showing that those differences are statistically significant. **Percentage of Overlaps:** To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2’s test set. The percentage of overlap is defined as: $$\begin{aligned} \%~Overlap = \frac{n_{same}}{n_{total}} \times 100\%,\end{aligned}$$ where $n_{same}$ denotes the number of instances where either both variants were correct or both were incorrect, and $n_{total}$ denotes the total number of test instances. As shown in Table \[tab:hint-overlaps\], we compare the $\%Overlap$ between different variants of HINT/SCR with the baseline and against each other. We find $89.7-91.9\%$ and $89.5-92.0\%$ overlaps for the different variants of HINT and SCR respectively. These high overlaps suggest that the variants are not working in fundamentally different manners. Drops in Training Accuracy {#sec:drop_in_train} -------------------------- We compare the training accuracies to analyze the regularization effects. As shown in Table \[tab:main-results\], the baseline method has the highest training accuracy, while the other methods cause $6.0-14.0\%$ and $3.3-10.5\%$ drops in the training accuracy on VQA-CPv2 and VQAv2, respectively. We hypothesize that degrading performance on the train set helps the model forget linguistic biases, which in turn helps accuracy on VQA-CPv2’s test set but hurts accuracy on VQAv2’s val set.
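The $\%~Overlap$ metric defined above can be sketched directly from per-instance correctness vectors; the vectors below are illustrative, not actual model predictions:

```python
def percent_overlap(correct_a, correct_b):
    """Percentage of test instances on which two model variants agree,
    i.e. both are correct or both are incorrect. Inputs are boolean
    per-instance correctness lists of equal length."""
    n_same = sum(a == b for a, b in zip(correct_a, correct_b))
    return 100.0 * n_same / len(correct_a)

# Hypothetical correctness vectors for two variants:
a = [True, True, False, False, True]
b = [True, False, False, True, True]
print(percent_overlap(a, b))  # 60.0
```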
Drops in VQAv2 Accuracy ----------------------- ![Accuracies for HINT and SCR on VQAv2’s val set, when fine-tuned either on the full train set or on the subset containing visual cues.[]{data-label="fig:drop_in_vqav2"}](images/vqa2-drop.pdf){width="0.9\linewidth"} \[sec:drop\_in\_vqav2\] As observed by @selvaraju2019taking and as shown in Fig. \[fig:drop\_in\_vqav2\], we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we compare against the improvements on VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then the performance on VQAv2 drops continuously during the course of training. This indicates that HINT and SCR help the model forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2. Assessment of Proper Grounding {#sec:cpig} ------------------------------ In order to quantitatively assess visual grounding, we propose a new metric called Correctly Predicted but Improperly Grounded (CPIG): $$\begin{aligned} \% CPIG = \frac{N_{\text{correct answer, improper grounding}}}{N_{\text{correct answer}}} \times 100\%,\end{aligned}$$ which is the number of instances for which the most sensitive visual region used to correctly predict the answer is not within the top-3 most relevant ground truth regions, normalized by the total number of correct predictions. HINT and SCR trained on relevant regions obtain lower CPIG values than the other variants (70.24% and 80.22% respectively), indicating that they are better than the other variants at finding relevant regions. However, these numbers are still high, and show that only 29.76% and 19.78% of the correct predictions for HINT and SCR were properly grounded. Further analysis is presented in the Appendix.
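The CPIG metric can be sketched as follows; the per-instance records are hypothetical illustrations of the required inputs (correctness flag, most sensitive region, top-3 ground-truth regions):

```python
def percent_cpig(predictions):
    """Correctly Predicted but Improperly Grounded: among correct
    predictions, the share whose most sensitive region is not among
    the top-3 ground-truth regions. Each entry is a tuple
    (is_correct, top_region, set_of_top3_ground_truth_regions)."""
    correct = [(top, gt3) for ok, top, gt3 in predictions if ok]
    improper = sum(top not in gt3 for top, gt3 in correct)
    return 100.0 * improper / len(correct)

# Hypothetical per-instance records:
records = [
    (True, 4, {4, 7, 9}),   # correct and properly grounded
    (True, 2, {4, 7, 9}),   # correct but improperly grounded
    (False, 1, {0, 1, 2}),  # incorrect: excluded from the metric
    (True, 7, {4, 7, 9}),   # correct and properly grounded
]
print(percent_cpig(records))  # 1 of 3 correct predictions improperly grounded
```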
Embarrassingly Simple Regularizer {#sec:our-approach} ================================= The usage of visual cues and sensitivities in existing methods is superfluous, because the results indicate that performance improves through degradation of the training accuracy. We hypothesize that a simple regularizer that does not rely on cues or sensitivities can also achieve large performance gains on VQA-CP. To test this hypothesis, we devise a simple loss function which continuously degrades the training accuracy by training the network to always predict a score of zero for all possible answers, i.e., to produce a zero vector ($\mathbf{0}$). The overall loss function can be written as: $$\begin{aligned} L := BCE(P(\mathcal{A}), \mathcal{A}_{gt}) + \lambda BCE(P(\mathcal{A}), \mathbf{0}),\end{aligned}$$ where BCE refers to the binary cross entropy loss and $P(\mathcal{A})$ is a vector consisting of predicted scores for all possible answers. The first term is the binary cross entropy loss between the model predictions and the ground truth answer vector ($\mathcal{A}_{gt}$), and the second term is our regularizer, with a coefficient of $\lambda=1$. Note that this regularizer continually penalizes the model during the course of training, whether its predictions are correct or incorrect. As shown in Table \[tab:main-results\], we present results when this loss is used on: a) a fixed subset covering $1\%$ of the dataset, b) a varying subset covering $1\%$ of the dataset, where a new random subset is sampled every epoch, and c) $100\%$ of the dataset. Confirming our hypothesis, all variants of our model achieve near state-of-the-art results, solidifying our claim that the performance gains of recent methods come from regularization effects. It is also interesting to note that the drop in training accuracy is lower with this regularization scheme than with the state-of-the-art methods.
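The loss above can be sketched in plain Python; note that `bce` here uses a mean reduction over answers and a small clipping constant, which are our assumptions — the paper's exact reduction and numerical safeguards may differ:

```python
import math

def bce(pred, target, eps=1e-7):
    """Binary cross entropy between per-answer scores and targets
    (mean reduction over answers; an assumption for this sketch)."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(pred)

def regularized_loss(pred, target, lam=1.0):
    """BCE against the ground truth plus the zero-vector regularizer:
    the second term penalizes the model whether it is right or wrong."""
    zeros = [0.0] * len(pred)
    return bce(pred, target) + lam * bce(pred, zeros)

# Hypothetical predicted scores and ground-truth answer vector:
loss = regularized_loss([0.9, 0.1], [1.0, 0.0])
print(loss)
```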
Of course, if any of these models were actually visually grounded, we would expect it to improve performance on both the train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons. Discussion on Proper Grounding ============================== While our results indicate that current visual-grounding-based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must verify that performance gains do not stem from spurious sources, using an experimental setup similar to the one presented in this paper. We recommend that both train and test accuracy be reported, because a model truly capable of visual grounding would not cause drastic drops in training accuracy to do well on the test sets. Finally, we advocate for creating a dataset with ground truth grounding available for 100% of the instances using synthetically generated datasets [@kafle2017data; @kafle2017analysis; @kafle2018dvqa; @acharya2019tallyqa; @Hudson2019GQAAN; @johnson2017clevr], enabling the community to evaluate whether their methods are able to focus on relevant information. Another alternative is to use tasks that explicitly test grounding, e.g., visual query detection, in which an agent must output boxes around any regions of a scene that match the natural language query [@acharya2019vqd]. Conclusion ========== Here, we showed that existing visual-grounding-based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than from proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-the-art accuracy. Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation.
**Acknowledgement.** This work was supported in part by AFOSR grant \[FA9550-18-1-0121\], NSF award \#1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper.
--- author: - 'F. Sainsbury-Martinez[^1]' - 'P. Wang' - 'S. Fromang' - 'P. Tremblin' - 'T. Dubos' - 'Y. Meurdesoif' - 'A. Spiga' - 'J. Leconte' - 'I. Baraffe' - 'G. Chabrier' - 'N. Mayne' - 'B. Drummond' - 'F. Debras' bibliography: - 'main.bib' date: 'Received 02 August 2019; accepted 14 Nov 2019' subtitle: 'deep, hot, adiabats as a robust solution to the radius inflation problem ' title: 'Idealised simulations of the deep atmosphere of hot Jupiters:' --- [The anomalously large radii of hot Jupiters have long been a mystery. However, by combining both theoretical arguments and 2D models, a recent study has suggested that the vertical advection of potential temperature leads to an adiabatic temperature profile in the deep atmosphere hotter than the profile obtained with standard 1D models.]{} [In order to confirm the viability of that scenario, we extend this investigation to three-dimensional, time-dependent, models.]{} [We use a 3D General Circulation Model (GCM), DYNAMICO, to perform a series of calculations designed to explore the formation and structure of the driving atmospheric circulations, and to detail how they respond to changes in both the upper and deep atmospheric forcing.]{} [In agreement [with the previous, 2D, study]{}, we find that a hot adiabat is the natural outcome of the long-term evolution of the deep atmosphere. Integration times of order $1500$ years are needed for that adiabat to emerge from an isothermal atmosphere, explaining why it has not been found in previous hot Jupiter studies. Models initialised from a hotter deep atmosphere tend to evolve faster toward the same final state. We also find that the deep adiabat is stable against low levels of deep heating and cooling, as long as the Newtonian cooling time-scale is longer than $\sim 3000$ years at $200$ bar.]{} [We conclude that the steady-state vertical advection of potential temperature by deep atmospheric circulations constitutes a robust mechanism to explain hot Jupiter inflated radii.
We suggest that future studies of hot Jupiters are evolved for a longer time than currently done, and, when possible, include models initialised with a hot deep adiabat. We stress that this mechanism stems from the advection of entropy by irradiation-induced mass flows and does not require (finely tuned) dissipative processes, in contrast with most previously suggested scenarios.]{} Introduction {#sec:introduction} ============ The anomalously large radii of highly irradiated Jupiter-like exoplanets, known as hot Jupiters, remain one of the key unresolved issues in our understanding of extrasolar planetary atmospheres. The observed correlation between the stellar irradiation of a hot Jupiter and its observed inflation suggests that the latter is linked to the amount of energy deposited in the upper atmosphere. Several mechanisms have been suggested as possible explanations (see @Baraffe_2009 [@2014prpl.conf..763B; @2010SSRv..152..423F], for a review). These solutions include tidal heating and physical (i.e. not for stabilisation reasons) dissipation (@refId0999 [@2010ApJ...714....1A; @2019MNRAS.484.5845L]), ohmic dissipation of electrical energy (@Batygin_2010 [@2010ApJ...719.1421P; @2011ApJ...738....1B; @2012ApJ...750...96R]), deep deposition of kinetic energy (), enhanced opacities which inhibit cooling [@Burrows_2007], or ongoing layered convection that reduces the efficiency of heat transport [@Chabrier_2007]. At present, however, there is no consensus across the community on a given scenario because [the majority]{} of these solutions require finely tuned physical environments which either deposit additional energy deep within the atmosphere or affect the efficiency of vertical heat transport.\ Recently, @2017ApJ...841...30T, hereafter , suggested a mechanism that naturally arises from first physical principles.
Their argument goes as follows: consider the equation for the evolution of the potential temperature $\Theta$, which is equivalent to entropy in this case: $$\frac{D\Theta}{Dt} = \frac{\Theta H}{Tc_{p}} \, , \label{eq:2ndlaw}$$ where $D/Dt$ stands for the Lagrangian derivative in spherical coordinates, $H$ is the local heating or cooling rate, $c_{p}$ is the heat capacity at constant pressure, and $\Theta$ is defined as a function of the temperature $T$ and pressure, $P$: $$\Theta = T\left(\frac{P_0}{P}\right)^{\frac{\gamma-1}{\gamma}} \, ,$$ where $P_{0}$ is a reference pressure and $\gamma = c_{p}/c_{v}$ is the adiabatic index. In a steady state, reduces to $$\bm{u}\cdot\nabla\Theta = \frac{\Theta H}{Tc_{p}} \, , \label{eq:Y}$$ where $\bm{u}$ is the velocity. In the deep atmosphere, radiative heating and cooling both tend to zero (i.e. $H\rightarrow 0$) because of large atmospheric opacities. [In this case (with $H\rightarrow 0$), and]{} if the winds do not vanish (i.e. $\left|\bm{u}\right|\neq 0$, see ), the potential temperature $\Theta$ must remain constant for to be valid. In other words, the temperature-pressure profile must be adiabatic and satisfy the scaling: $$P \propto T^{\frac{\gamma}{\gamma-1}} \, .$$ We emphasise that this adiabatic solution is an equilibrium that does not require any physical dissipation.
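As a quick numerical illustration of the scaling above, the following Python sketch computes $\Theta$ and checks that it is conserved along the resulting adiabat. The adiabatic index $\gamma = 1.4$ and the 1 bar reference pressure are illustrative choices of ours, not values quoted in the text, and the function names are our own:

```python
import math

GAMMA = 1.4   # illustrative adiabatic index (not a value quoted in the text)
P0 = 1.0      # reference pressure [bar]

def potential_temperature(T, P, gamma=GAMMA, p0=P0):
    """Theta = T * (p0 / P)**((gamma - 1) / gamma)."""
    return T * (p0 / P) ** ((gamma - 1.0) / gamma)

def adiabat_temperature(theta, P, gamma=GAMMA, p0=P0):
    """Invert Theta = const to recover T(P) along an adiabat."""
    return theta * (P / p0) ** ((gamma - 1.0) / gamma)

# Anchor an adiabat at T = 1800 K, P = 10 bar and walk it to depth:
theta = potential_temperature(1800.0, 10.0)
pressures = (10.0, 50.0, 200.0)
temps = [adiabat_temperature(theta, P) for P in pressures]

# Theta recomputed anywhere along this profile returns the same constant,
# which is exactly the statement that P is proportional to T**(gamma/(gamma-1)).
for T, P in zip(temps, pressures):
    assert abs(potential_temperature(T, P) - theta) < 1e-9 * theta
```

The temperature rises monotonically with pressure along the profile, which is why a deep adiabat anchored at the 10 bar forcing temperature implies a hotter interior than an isothermal deep atmosphere.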
[There is an internal energy transfer to the deep atmosphere, through an enthalpy flux, but there is no dissipation from kinetic, magnetic, or radiative energy reservoirs to the internal energy reservoir.]{} Dissipative processes $D_\mathrm{dis}$ would act as a source term with $\bm{u}\cdot \nabla \Theta \propto D_\mathrm{dis}$ and would drive the profile away from the adiabat.\ Physically, as discussed by , this constant potential temperature profile in the deep atmosphere is driven by the vertical advection of potential temperature from the outer and highly irradiated atmosphere to the deep atmosphere [by large scale dynamical motions]{}, where it is [almost completely]{} homogenised by the residual global circulations (which themselves can be linked to the conservation of mass and momentum, and the large mass/momentum flux the super-rotating jet drives in the outer atmosphere). The key point is that this mechanism causes the temperature-pressure profile to converge to an adiabat at lower pressures than those at which the atmosphere becomes unstable to convection. As a result, the outer atmosphere connects to a hotter internal adiabat than would be obtained through a standard, 'radiative-convective' single-column model. This potentially leads to a larger radius compared with the predictions of these 1D models.\ Whilst was able to confirm this hypothesis through the use of a 2D stationary circulation model, there are still a number of limitations to their work. Perhaps most importantly, the models they used only considered the formation of the deep adiabat within a 2D equatorial slice. The steady-state temperature-pressure profiles at other latitudes remain unknown, as does the nature of the global circulations at these high pressures in the equilibrated state. Strong ansatzes were also made about the nature of the meridional (i.e.
vertical and latitudinal) wind at the equator, with their models prescribing the ratio of latitudinal to vertical mass fluxes, which could potentially affect the proposed scenario. The purpose of this paper is to reduce and constrain these assumptions and limitations and to demonstrate the viability of a deep adiabat at equilibrium. This is done by means of a series of idealised 3D GCM calculations designed to allow us to fully explore the structure of the deep atmospheric circulations in equilibrated hot Jupiter atmospheres, as well as to investigate the time-evolution of the deep adiabat. As we demonstrate in this work, the adiabatic profile predicted by naturally emerges from such calculations and appears to be robust against changes in the deep atmosphere radiative properties. This is the core result of this work.\ The structure of the work is as follows. Our simulation properties are described in , where we introduce the GCM DYNAMICO, used throughout this study. We then demonstrate that, when using DYNAMICO, not only are we able to recover standard features observed in previous short-timescale studies of hot Jupiter atmospheres (), but also that, when the simulations are extended to long-enough time-scales, an adiabatic profile develops within the deep atmosphere (). We then explore the robustness of our results by presenting a series of sensitivity tests, including changes in the outer and deep atmosphere thermal forcing (). Finally, in , we provide concluding remarks, including suggestions for future computational studies of hot Jupiter atmospheres and a discussion about implications for the evolution of highly irradiated gas giants. Method {#sec:method} ====== DYNAMICO is a highly computationally efficient GCM that solves the primitive equations of meteorology ([see @Vallis17 for a review and @2014JAtS...71.4621D for a more detailed discussion of the approach taken in DYNAMICO]{}) on a sphere [@gmd-8-3131-2015].
It is being developed as the next state-of-the-art dynamical core for Earth and planetary climate studies at the Laboratoire de Météorologie Dynamique and is publicly available[^2]. It has recently been used to model the atmosphere of Saturn at high resolution [@2018arXiv181101250S]. Here, we present some specificities of DYNAMICO (section \[sec:dynamico\_NS\]) as well as the required modifications we implemented to model hot Jupiter atmospheres (section \[sec:newtonian\_cooling\]). DYNAMICO's numerical scheme {#sec:dynamico_NS} --------------------------

  Quantity (units)                       Description                 Value
  -------------------------------------- --------------------------- ----------------------
  $dt$ (seconds)                         Time-step                   120
  $N_z$                                  Number of Pressure Levels   33
  $d$                                    Number of Sub-divisions     20
  $N\left(^\circ\right)$                 Angular Resolution          $\sim3.5$
  $P_{top}$ (bar)                        Pressure at Top             $7 \times 10^{-3}$
  $P_{bottom}$ (bar)                     Pressure at Bottom          200
  $g$ (m.s$^{-2}$)                       Gravity                     8.0
  $R_{HJ}$ (m)                           HJ Radius                   $10^8$
  $\Omega$ (s$^{-1}$)                    HJ Angular Rotation Rate    $2.1 \times 10^{-5}$
  $c_p$ (J.kg$^{-1}$.K$^{-1}$)           Specific Heat               13226.5
  $\mathcal{R}$ (J.kg$^{-1}$.K$^{-1}$)   Ideal Gas Constant          3779.0
  $T_{init}$ (K)                         Initial Temperature         1800

  : Parameters for Low Resolution Simulations[]{data-label="tab:lr_params"}

Briefly, DYNAMICO takes an energy-conserving Hamiltonian approach to solving the primitive equations. This has been shown to be suitable for modelling hot Jupiter atmospheres [@Showman_2008; @2012ApJ...750...96R], although this may not be valid in other planetary atmospheres [@2019ApJ...871...56M].
Rather than the traditional latitude-longitude horizontal grid (which presents numerical issues near the poles due to singularities in the coordinate system - see the review of @WILLIAMSON2007 for more details), DYNAMICO uses a staggered horizontal-icosahedral grid (see @gmd-7-909-2014 for a discussion of the relative numerical accuracy of this type of grid) for which the number of horizontal cells $N$ is defined by the number of subdivisions $d$ of each edge of the main spherical icosahedron[^3]: $$N=10 d^2 + 2.$$ As for the vertical grid, DYNAMICO uses a pressure coordinate system whose levels can be defined by the user at runtime. Finally, the boundaries of our simulations are closed and stress-free with zero energy transfer (i.e. the only means of energy injection and removal are the Newtonian cooling profile and the horizontal, numerical, dissipation). [Note that, unlike some other GCM models of hot Jupiters (e.g. @2009JAtS...66..579S [@2013ApJ...770...42L; @2019ApJ...883....4S]), we do not include an additional frictional (i.e. Rayleigh) drag scheme at the bottom of our simulation domain, instead relying on the hyperviscosity and impermeable bottom boundary to stabilise the system. ]{}\ As a consequence of the finite difference scheme used in DYNAMICO, artificial numerical dissipation must be introduced in order to stabilise the system against the accumulation of grid-scale numerical noise. This numerical dissipation takes the form of a horizontal hyper-diffusion filter with a fixed hyperviscosity and a dissipation time-scale at the grid scale, labelled $\tau_{dissip}$, which serves to adjust the strength of the filtering (the longer the dissipation time, the weaker the dissipation). Technically, DYNAMICO includes three dissipation timescales, which diffuse scalars, vorticity, and divergence independently. However, for our models, we set all three timescales to the same value.
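The cell-count relation above can be checked directly for the resolutions used in this work. In the sketch below, the mean angular resolution is our own rough estimate (sphere area in square degrees divided by cell count, square-rooted), not DYNAMICO's definition of the $\sim3.5^\circ$ quoted in the parameter table:

```python
import math

def n_cells(d):
    """Horizontal cell count on the subdivided icosahedral grid: N = 10*d**2 + 2."""
    return 10 * d * d + 2

def mean_resolution_deg(d):
    """Rough mean angular resolution: sqrt of the sphere's area in square
    degrees (~41253) per cell. An illustrative estimate only."""
    sphere_deg2 = 4.0 * math.pi * (180.0 / math.pi) ** 2
    return math.sqrt(sphere_deg2 / n_cells(d))

for d in (20, 30, 40):   # the low-, mid-, and high-resolution subdivisions
    print(d, n_cells(d), round(mean_resolution_deg(d), 2))
```

For $d = 20$ this gives $N = 4002$ cells, consistent with the $\sim3.5^\circ$ order of magnitude listed for the low resolution runs.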
It is important to point out that the hyperviscosity is not a direct equivalent of the physical viscosity of the planetary atmosphere, but can be viewed as a form of increased artificial dissipation that both enhances the stability of the code and accounts for motions, flows, and turbulence which are unresolved at typical grid-scale resolutions. This is known as the large eddy approximation and has long been standard practice in the stellar (e.g. @2005LRSP....2....1M) and planetary (e.g. @doi:10.1098/rsta.2008.0268) atmospheric modelling communities. Because it acts at the grid cell level, the strength of the dissipation is resolution dependent at a fixed $\tau_{dissip}$ (this can be seen in our results in ).\ In a series of benchmark cases, @10.1111/j.1365-2966.2011.18315.x (hereafter ) have shown that both spectral and finite-difference based dynamical cores which implement horizontal hyper-diffusion filters can produce differences of the order of tens of percent in the temperature and velocity fields when varying the dissipation strength. We found a similar sensitivity in our models: for example, the maximum super-rotating jet speed varies between $3000\,\mathrm{ ms^{-1}}$ and $4500\,\mathrm{ms^{-1}}$ as the dissipation strength is varied. The dissipation strength must thus be carefully calibrated. In the absence of significant constraints on hot Jupiter zonal wind velocities, this was done empirically by minimising unwanted small-scale numerical noise as well as by replicating published benchmark results (an alternative, which is especially useful in scenarios where direct or indirect data comparisons are unavailable, is to plot the spectral decomposition of the energy profile and adjust the diffusion such that the energy accumulation on the smallest scales is insignificant).
We found that setting $\tau_{dissip}=2500\,\mathrm{s}$ in our low resolution runs leads to benchmark cases in good agreement with the results of, for example, @Mayne_2014, whilst also exhibiting minimal small-scale numerical noise. This is in reasonable agreement with other studies, with our models including a hyper-diffusion of the same order of magnitude as, for example, . Note that, due to differences between the dynamics of Saturn and those observed in hot Jupiters, and in particular due to the presence of the strong super-rotating jet, we must use a significantly stronger dissipation to counter grid-scale noise than that used in previous atmospheric studies calculated using DYNAMICO [@2018arXiv181101250S]. Newtonian cooling {#sec:newtonian_cooling} ----------------- In our simulations of hot Jupiter atmospheres using DYNAMICO, we do not directly model either the incident thermal radiation on the day-side, or the thermal emission on the night-side, of the exoplanet. This would be prohibitively computationally expensive for the long simulations we perform in the present work. Instead we use a simple thermal relaxation scheme to model those effects, with a spatially varying equilibrium temperature profile $T_{eq}$ and a relaxation time-scale $\tau$ that increases with pressure throughout the outer atmosphere. Specifically, this is done by adding a source term to the temperature evolution equation that takes the form: $$\frac{\partial T\left(P,\theta,\phi\right)}{\partial t} = - \frac{T\left(P,\theta,\phi\right)-T_{eq}\left(P,\theta,\phi\right)}{\tau\left(P\right)} \, .$$ This method, known as Newtonian cooling, has long been applied within the 3D GCM exoplanetary community (i.e. , @Showman_2008, @2010ApJ...714.1334R, @2011ApJ...738...71S, @2014GMD.....7.3059M, @GUERLET2014110 or @Mayne_2014), although it is gradually being replaced by coupling with simplified, but more computationally expensive, radiative transfer schemes (e.g.
[@2009ApJ...699..564S]{}, @2012ApJ...750...96R or ) due to its limitations (e.g. it is difficult to use it to probe individual emission or absorption features, such as non-equilibrium atmospheric chemistry or stellar activity).\ The forcing temperature and cooling time-scale we use within our models have their basis in the profiles calculated via a series of 1D radiative transfer models. These models were then parametrised by , who created simplified day-side and night-side profiles. The parametrisation used here is based upon this work, albeit modified in the deep atmosphere since this is the focus of our analysis. As a result, it somewhat resembles a parametrised version of the cooling profile considered by @2013ApJ...770...42L.\ Specifically, $T_{eq}$ is calculated from the pressure dependent night-side profile ($T_{night}\left(P\right)$) according to the following relation: $$T_{eq}\left(P,\theta,\phi\right) = T_{night}\left(P\right) + \Delta{T}(P) \cos\left(\theta\right) \max \left[ 0, \cos (\phi - \pi) \right] \, ,$$ where $\Delta T$ is the pressure dependent day-side/night-side temperature difference, $$\Delta T (P) =\left\{ \begin{array}{ll} \Delta T_0 & \textrm{if } P<P_{low} \\ \Delta T_0 \, \dfrac{\log \left(P_{high}/P\right)}{\log \left(P_{high}/P_{low}\right)} & \textrm{if } P_{low} < P < P_{high} \\ 0 & \textrm{if } P > P_{high} \end{array} \right. \, ,$$ so that the contrast decreases log-linearly from $\Delta T_0$ at $P_{low}$ to zero at $P_{high}$, in which we used $\Delta T_0=600$ K, $P_{low}=0.01$ bar and $P_{high}=10$ bar.
The night-side temperature profile $T_{night}$ is parametrised as a series of linear interpolations in $\log(P)$ space between the points $$\left(\frac{T}{1 \textrm{K}},\frac{P}{1 \textrm{ bar}}\right)=(800,10^{-6}) \textrm{, } (1100,1) \textrm{ \& } (1800,10) \, .$$ For $P>10$ bar, we set $T_{eq}=T_{night}=T_{day}=1800$K.\ Likewise, at pressures smaller than $10$ bar, $\tau$ is linearly interpolated, in $\log(P)$ space, between the points $$\left(\log\left(\frac{\tau}{1 \textrm{sec}}\right),\frac{P}{1 \textrm{ bar}}\right)=(2.5,10^{-6}) \textrm{, } (5,1) \textrm{, } (7.5,10) \textrm{ \& } (\log(\tau_{220}),220) \, . \label{eq:X}$$ For $P>10$ bar, we consider a series of models that lie between two extremes: at one extreme we set $\log(\tau_{220})$ [(which we define as the decimal logarithm of the cooling time-scale $\tau$ at the bottom of our model atmospheres: i.e. at $P=220$ bar)]{} to infinity, which implies that the deep atmosphere is radiatively inert, with no heating or cooling. As for the other extreme, this involves setting $\log(\tau_{220})=7.5$, which implies that radiative effects do not diminish below $10$ bar. In we explore results at the first extreme, with no deep radiative dynamics. Then, in , we explore the sensitivity of our results to varying this prescription. 
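To make the forcing scheme concrete, here is a minimal Python sketch of the night-side profile, the day-/night-side contrast, and the resulting $T_{eq}$. The helper functions and their names are our own illustrative code, not DYNAMICO's implementation, and the log-linear taper of $\Delta T$ between $P_{low}$ and $P_{high}$ reflects our reading of the parametrisation:

```python
import math

DT0, P_LOW, P_HIGH = 600.0, 0.01, 10.0   # K, bar, bar

def delta_T(P):
    """Day-/night-side contrast: DT0 below P_low, tapering log-linearly
    (our assumed reading of the parametrisation) to zero at P_high."""
    if P < P_LOW:
        return DT0
    if P < P_HIGH:
        return DT0 * math.log(P_HIGH / P) / math.log(P_HIGH / P_LOW)
    return 0.0

def interp_log_p(points, P):
    """Piecewise-linear interpolation in log10(P) between (value, P) points."""
    logp = math.log10(P)
    for (v1, p1), (v2, p2) in zip(points, points[1:]):
        l1, l2 = math.log10(p1), math.log10(p2)
        if l1 <= logp <= l2:
            return v1 + (v2 - v1) * (logp - l1) / (l2 - l1)
    return points[-1][0]

# Night-side (T [K], P [bar]) anchor points from the text
T_NIGHT_PTS = [(800.0, 1e-6), (1100.0, 1.0), (1800.0, 10.0)]

def T_eq(P, lat, lon):
    """Forcing temperature: night-side profile plus day-side heating
    (lat/lon in radians; substellar point at lon = pi)."""
    t_night = interp_log_p(T_NIGHT_PTS, P) if P <= 10.0 else 1800.0
    return t_night + delta_T(P) * math.cos(lat) * max(0.0, math.cos(lon - math.pi))

# Substellar point at 1 bar: 1100 K night side plus a ~200 K day-side contrast
print(T_eq(1.0, 0.0, math.pi))
```

Note how the forcing becomes longitude-independent (and equal to 1800 K) for $P > 10$ bar, which is precisely the radiatively inert deep atmosphere explored in the first set of models.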
Results {#sec:core_results} =======

  Model                   Description
  ----------------------- -----------------------------------------------------------
  [*A*]{}                 The base low resolution model, in which the deep atmosphere is isothermally initialised
  [*B*]{}                 Like model [*A*]{}, but with the deep atmosphere adiabatically initialised
  [*C*]{}                 Mid resolution version of model [*A*]{} ($d = 30$)
  [*D*]{}                 High resolution version of model [*A*]{} ($d = 40$)
  [*E$\rightarrow$I*]{}   Highly evolved versions of model [*A*]{}, which have reached a deep adiabat and then had deep isothermal Newtonian cooling introduced at various strengths: $\log(\tau_{220})=7.5$ for [*E*]{}, $11$ for [*F*]{}, $15$ for [*G*]{}, $20$ for [*H*]{}, and $22.5$ for [*I*]{}
  [*J*]{}, [*K*]{}        Highly evolved versions of model [*A*]{} which have reached a deep adiabat, and then had their outer atmospheric Newtonian cooling modified to reflect a different surface temperature: 1200 K in model [*J*]{} and 2200 K in model [*K*]{}

The default parameters used with our models are outlined in , with the resultant models, as well as the simulation specific parameters, detailed in .\ In , we use the results of models [*A*]{} and [*B*]{} to demonstrate the validity of the work of in the time-dependent, three-dimensional, regime. We next explore the robustness and sensitivity of our results to numerical and external effects in . Note that, throughout this paper, all times are either given in seconds or in Earth years; specifically, one Earth year is exactly 365 days. Validation of the hot Jupiter model {#sec:Dynamico_HJ_confirm} ----------------------------------- [0.47]{} ![image](latP_T_1800K_t_2_liney.png){width="0.9\columnwidth"} [0.47]{} ![image](latP_U_1800K_t_2_liney.png){width="0.9\columnwidth"} We start by exploring the early evolution of model [*A*]{}, testing how well it agrees with the benchmark calculations of .
The model is run for an initial period of $30$ years in order to reach an evolved state before we take averages over the next five $\textrm{years}$ of data. Note that this model was also used to calibrate the horizontal dissipation ($\tau_{dissip}$). In , we show zonally and temporally-averaged plots of the zonal wind and the temperature as a function of both latitude and pressure.\ We find that the temperature (left panel) is qualitatively similar to that reported by both and @Mayne_2014. The temperature range we find ($\sim\!\!750\textrm{K}\rightarrow\sim\!\!2150\textrm{K}$) matches their results ($\sim700\rightarrow2000\si{\kelvin}$) to within a 10% margin of uncertainty. This is satisfactory given the differences between the various set-ups and numerical implementations of the GCMs, as well as the variations that occur when adjusting the length of the temporal averaging window.\ The zonal wind displays a prominent, eastward, super-rotating equatorial jet that extends from the top of the atmosphere down to approximately $\textrm{10 bar}$ ([Note that, as we continue to run this model for more time, the vertical extent of the jet increases, eventually reaching significantly deeper than 100 bar after 1700 years]{}). It exhibits a peak wind velocity of $\approx3500\si{\meter\per\second}$, depending upon the averaging window considered, in good agreement with the work of both and @Mayne_2014, who found peak jet speeds on the order of $3500\rightarrow4000\si{\meter\per\second}$. In the upper atmosphere, it is balanced by counter-rotating (westward) flows at extratropical and polar latitudes. The zonal wind is also directed westwards at all latitudes below $\sim\!\!\textrm{50 bar}$, with this wind also contributing to the flows balancing the large mass and momentum transport of the super-rotating jet.\ The differences we find between our models and the reference models are not unexpected.
As discussed by , the jet speed and temperature profile are indeed highly sensitive not only to the numerical scheme adopted by the GCM (i.e. spectral vs finite difference - see their Figure 12) but also to the form and magnitude of horizontal dissipation and Newtonian cooling used. In our models, unlike , we explicitly set our deep ($P>10\textrm{bar}$) cooling to zero, which may explain the enhanced deep temperatures observed in our models, most likely an early manifestation of the deep adiabat we expect to eventually develop.\ As noted by other works (e.g. @2009ApJ...700..887M [@2010ApJ...714.1334R; @Mayne_2014]), it takes a long time for the deep atmosphere to reach equilibrium, and the above simulation is by no means an exception: the eastward equatorial jet extends deeper and deeper as time increases, with no sign of stopping by the end of the simulated duration. This long time-scale evolution is explored in detail in the following section. The formation of a deep adiabat {#sec:main_results} ------------------------------- [0.47]{} ![image](T_profile_1800K_t_62_unif_new.pdf){width="0.9\columnwidth"} [0.47]{} ![image](T_profile_1800K_t_20_ad_new.pdf){width="0.9\columnwidth"} ![image](T_profile_1800K_t_wide_new.pdf){width="90.00000%"} [0.5]{} ![image](wind_temp_p32_T_1800K_snap.png){width="0.95\columnwidth"} [0.5]{} ![image](wind_temp_p16_T_1800K_snap.png){width="0.95\columnwidth"} [0.5]{} ![image](wind_temp_p7_T_1800K_snap.png){width="0.95\columnwidth"} [0.5]{} ![image](wind_temp_p7_T_1800K_avg.png){width="0.95\columnwidth"} As discussed by , and in , an adiabatic profile in the deep atmosphere (i.e. $P>\sim\!\!1\rightarrow \sim\!\!\textrm{10 bar}$) should be a good representation of the steady state atmosphere. In order to confirm that this is the case, we performed a series of calculations with a radiatively inert deep atmosphere (i.e.
no deep heating or cooling, as required by the theory of ).\ We explore this using two models, [*A*]{} and [*B*]{}, which differ only in their initial condition and their duration. In model [*A*]{}, the atmosphere, including the deep atmosphere, is initially isothermal with $T$=$1800$K and is evolved for more than $1500$ Earth years in order to reach a steady state in its $T$–$P$ profile (as shown in ). As a consequence of the long time-scales required for the model to reach equilibrium, and the computational cost of such an endeavour, model [*A*]{} (and [*B*]{}) is run at a relatively low resolution[^4]. We will investigate the sensitivity of our results to spatial resolution in . As for model [*B*]{}, it is identical to model [*A*]{} except in the deep atmosphere, where it is initialised with an adiabatic $T$–$P$ profile for $P>10$ bar. As a result of this model being initialised close to the expected equilibrium solution, model [*B*]{} was then run for only $100$ years in order to confirm the stability of the steady state. In both cases, we find that the simulation time considered is long enough that the thermodynamic structure of the atmosphere has not changed for multiple advective turnover times $t_{adv} \sim 2 \pi R_{HJ} /u_{\phi}$.\ shows that both models have evolved to the same steady state: an outer atmosphere whose $T-P$ profile is dictated by the Newtonian cooling profile, and a deep adiabat which is slightly hotter $\left(\sim\!\!1900\textrm{K}\right)$ than the cooling profile at $P=\textrm{10 bar}$ $\left(1800\textrm{K}\right)$. [This is reinforced by the latitudinal and longitudinal temperature profile throughout the simulation domain. In we plot the zonal wind and temperature profile at three different heights (pressures).
Here we can see that, in the outer atmosphere (panels [*a*]{} and [*b*]{}), the profile is dominated by the Newtonian cooling, with horizontal advection (and the resulting offset hotspot) starting to become significant as we move towards middle pressures. As for the deep atmosphere (snapshot in panel [*c*]{} and time average in panel [*d*]{}), here we start to see evidence of both the heating and near-homogenisation of the deep atmosphere. Note that we refer to the atmosphere as nearly homogenised because the temperature fluctuations at, for example, $P=10$ bar are less than $1\%$ of the mean temperature.]{}\ Importantly, this convergence to a deep adiabat not only occurs in the absence of vertical convective mixing (an effect which is absent from our models, which contain no convective driving), but also at a significantly lower pressure $\left(P=\textrm{10 bar}\right)$ than the pressure ($\sim$40 bar for HD209458b - @2004ApJ...603L..53C) at which we would expect the atmosphere to become unstable to convection (and so, in the traditional sense, prone to an adiabatic profile).\ Therefore, the characteristic entropy profile of the planet is warmer than the entropy profiles calculated from standard 1D irradiated models. We will discuss the implications of this result for the evolution of highly irradiated gas giants in .\ In model [*A*]{}, the steady state described above is very slow to emerge from an initially isothermal atmosphere. This is illustrated in , which shows the time evolution of the $T$-$P$ profile. It takes more than 500 years of simulation time for the deep atmosphere to stop exhibiting a temperature inversion, let alone the $>$1500 years required to reach the same steady state as model [*B*]{}.
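The separation of time-scales at play here can be made concrete with a back-of-the-envelope estimate, using the planetary radius and peak jet speed quoted above (a rough sketch; the jet speed is a representative value, not a fixed model parameter):

```python
import math

R_HJ = 1.0e8      # planetary radius [m] (parameter table value)
U_JET = 3500.0    # representative peak zonal jet speed [m/s]

# One advective turnover of the equatorial jet:
t_adv = 2.0 * math.pi * R_HJ / U_JET   # seconds
print(t_adv / 86400.0)                 # roughly two Earth days per turnover

# Equilibration of the deep adiabat takes > 1500 years (365-day years),
# i.e. on the order of a few hundred thousand turnovers:
turnovers = 1500.0 * 365.0 * 86400.0 / t_adv
```

The contrast between a turnover time of days and an equilibration time of centuries is why the deep steady state is easily missed by simulations run for only tens of (Earth) years.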
As will be further discussed in , this [slow evolution of the deep adiabat]{} is probably one of the main reasons why this result has not been reported by prior studies of hot Jupiter atmospheres.\ The mechanism advocated by relies on the existence of vertical and latitudinal motions that efficiently redistribute potential temperature. In order to determine their spatial structure, we plot in the zonally and temporally averaged meridional mass-flux stream function and zonal wind velocity for model [*A*]{}.\ [Starting with the zonal wind profile (grey lines), we can see evidence for a super-rotating jet that extends deep into the atmosphere, with balancing counter-flows at the poles and near the bottom of the simulation domain. In the deep atmosphere, this jet has evolved with the deep adiabat, extending towards higher pressures as the developing adiabat (almost) homogenises (and hence barotropises) the atmosphere. This barotropisation on long timescales seems similar to the drag-free simulation started from a barotropic zonal wind in @2013ApJ...770...42L.]{} The meridional mass-flux stream function is defined according to $$\Psi = \frac{2\pi R_{HJ}\cos{\theta}}{g}\int_{P_{top}}^{P}u_{\theta}\,dP,$$ where $u_{\theta}$ is the zonally averaged meridional velocity. We find that the meridional (latitudinal and vertical) circulation profile is dominated by four vertically aligned cells extending from the bottom of our simulation atmospheres to well within the thermally and radiatively active region located in the upper atmosphere. These circulation cells lead to the formation of a strong, deep, down-flow at the equator (which can be linked to the high equatorial temperatures in the upper atmosphere), weaker, upper atmosphere, downflows near the poles, and a mass-conserving pair of upflows at mid latitudes ($\theta=20^\circ \rightarrow30^\circ$).
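Numerically, a mass-flux stream function of this kind is just a cumulative pressure-integral of the zonal-mean meridional wind, evaluated level by level. The sketch below is our own illustration of the conventional Eulerian-mean construction with made-up inputs, not DYNAMICO's diagnostic code; `u_merid` and `p_levels` are hypothetical arrays of zonally averaged meridional wind and pressure levels:

```python
import math

G = 8.0        # gravity [m s^-2] (parameter table value)
R_HJ = 1.0e8   # planetary radius [m] (parameter table value)

def stream_function(u_merid, p_levels, lat):
    """Cumulative trapezoidal integral of the zonal-mean meridional wind
    over pressure, scaled by 2*pi*R_HJ*cos(lat)/g (conventional
    Eulerian-mean mass-flux stream function; illustrative sketch only)."""
    pref = 2.0 * math.pi * R_HJ * math.cos(lat) / G
    psi = [0.0]
    for i in range(1, len(p_levels)):
        dp = p_levels[i] - p_levels[i - 1]
        psi.append(psi[-1] + pref * 0.5 * (u_merid[i] + u_merid[i - 1]) * dp)
    return psi

# Toy equatorial column: uniform 1 m/s poleward flow from 0 to 1e5 Pa
psi = stream_function([1.0, 1.0, 1.0], [0.0, 5.0e4, 1.0e5], lat=0.0)
```

The sign convention follows the figure caption below: positive (clockwise) cells and negative (anticlockwise) cells partition the meridional plane into the circulation cells described in the text.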
The meridional circulation not only leads to the vertical transport of potential temperature (as high potential temperature fluid parcels from the outer atmosphere are mixed with their ‘cooler’ deep atmosphere counterparts), but also to the [almost complete]{} latitudinal homogenisation of the deep atmosphere [(with only small temperature variations remaining)]{}. In a fully radiative model, these circulations would also mix the outer atmosphere, leading to the equilibrium temperature profiles we instead impose via Newtonian cooling (see, for example, for more details about the 3D mixing in radiative atmospheres).\ [Note that the vertical extent of the zonal wind, and the structure of the lowest cells in the mass-flux stream function, appear to be affected by the bottom boundary, suggesting that they extend deeper into the atmosphere. Whilst this is interesting and important, it should not affect the final state our P-T profiles reach, but does suggest that models of hot Jupiters should be run to higher pressures to fully capture the irradiation-driven deep flow dynamics.\ ]{} The primary drivers of the latitudinal homogenisation are fluctuations in the meridional circulation profile, which are visible within individual profile snapshots, but are averaged out when we take a temporal average. This includes contributions from spatially small-scale velocity fluctuations at the interface of the large-scale meridional cells. Evidence for these effects can be seen in snapshots of the zonal and meridional flows, in an RMS analysis of the zonal velocity, [and of course in the deep temperature profile that these advective motions drive]{}. The first reveals complex dynamics, such as zonally-asymmetric and temporally variable flows, that are hidden when looking at the temporal average, but which mask the net flows when looking at a snapshot of the circulation.
The second reveals spatial and temporal fluctuations on the order of $5\rightarrow10\si{\meter\per\second}$ in the deep atmosphere. [Finally, the third (as plotted in panels [*c*]{} and [*d*]{} of , which show a snapshot and the time average of the zonal wind and temperature profile, respectively) reveals small-scale temperature and wind fluctuations, which are likely associated with the deep atmosphere mixing, that are lost when looking at the average, steady, state. ]{}\ However, a more detailed analysis of the dynamics of this homogenisation, [as well as the exact nature of the driving flows and dynamics]{}, is beyond the scope [of this paper. Although interesting in its own right, the mechanism by which the circulation is set up in the deep atmosphere of our isothermally initialised simulations might not be relevant to the actual physical mechanism happening in hot Jupiters with hot, deep, atmospheres.]{} ![Zonally and temporally-averaged (over a period of $\approx30$ years) stream-function for model [*A*]{}. Clockwise circulations on the meridional plane are shown in red and anticlockwise circulations are shown in blue. Additionally, the zonally and temporally averaged zonal wind is plotted in black (solid = eastward, dashed = westward). \[fig:vertical\_streamfuction\_mass\_flow\] ](streamfunction_lat_pressure_T_1800K_alt_neo2.png){width="0.9\columnwidth"} [0.47]{} ![image](T_profile_1800K_t_60_unif_multilon.pdf){width="0.9\columnwidth"} [0.47]{} ![image](T_profile_1800K_t_60_unif_multilat_new.pdf){width="0.9\columnwidth"} As a consequence of both the meridional circulations described above, and the zonal flows that form as a response to the strong day-side/night-side temperature differential, the deep atmosphere $T$–$P$ profile is independent of both longitude () and latitude (). Only in the upper atmosphere ($P<10$ bar) do the temperature profiles start to deviate from one another, reflecting the zonally and latitudinally varying Newtonian forcing.
Taken together, the two panels of confirm that the latitudinal and vertical steady-state circulation, the super-rotating eastward jet, and any zonally-asymmetric flows act to advect potential temperature throughout the deep atmosphere, leading at depth to the formation of a hot adiabat without the need for any convective motions. Robustness of the results {#sec:robustness_investigate} ------------------------- Having confirmed that a deep adiabatic temperature profile connecting with the outer atmospheric temperature profile at $P=\textrm{10 bar}$ is a good representation of the steady state within our hot Jupiter model atmospheres, we now explore the robustness of this result. ### Sensitivity to changes in the horizontal resolution {#sec:res} ![Equatorially averaged $T$–$P$ profile snapshots for three initially isothermal (see grey dashed line in the deep atmosphere) models run with the same dissipation time ($t_{\mathrm{dissip}}=2500\textrm{s}$), vertical resolution, and Newtonian cooling profile (dark grey), but different horizontal resolutions (Models [*A*]{} (yellow), [*C*]{} (light green), and [*D*]{} (orange)).\[fig:resolution\_scaling\] ](T_profile_1800K_t_unif_multi_latest_mew.pdf){width="0.9\columnwidth"} We start our exploration of the robustness of our results by confirming that the eventual convergence of the deep atmosphere onto a deep adiabat appears resolution-independent.\ shows the $T$–$P$ profiles obtained for three models at the same time ($t\approx1800$ years) but with different resolutions (our ‘base’ resolution model, [*A*]{}, a ‘mid-res’ model, [*C*]{}, and a ‘high-res’ model, [*D*]{}). The mid-resolution model ([*C*]{}) has almost reached the exact same equilibrium adiabatic profile as the low-resolution case ([*A*]{}): comparing this with the time-evolution of model [*A*]{} () confirms that they are both on the path to the same equilibrium state, and that a significant amount of computational time would be required to reach it.
This becomes even clearer when we look at a high-resolution model ([*D*]{}). Here we find that, despite the long time-scale of the computation, the deep atmosphere still exhibits a temperature inversion, suggesting, in comparison to , that the model has a long way to go until it reaches the same, deep adiabat, equilibrium.\ In general, we have found that the better the resolution, the more slowly the atmosphere temperature profile evolves towards the adiabatic steady state solution. This most likely stems from the fact that horizontal numerical dissipation, on a fixed dissipation time-scale, decreases with increasing resolution. Note that we kept the horizontal dissipation timescale constant due to both the computational expense of the parameter study required to set the correct dissipation at each resolution, and the numerical dissipation independence of the steady-state in the deep atmosphere.\ Evidence for the impact of the small-scale flows on this slow evolution can be seen in the temporal and spatial RMS profiles of the zonal flows, which reveal that, as we increase the resolution by a factor 2, the magnitude of the small-scale velocity fluctuations decreases by roughly the same factor. These results are in agreement with the effect of changing the numerical dissipation timescale ($\tau_{\textrm{dissip}}$) at a fixed resolution, where longer timescales also slow down the circulation, thereby increasing the time required to reach a steady $T$–$P$ profile in the deep atmosphere (not shown). Despite these numerical limitations, it remains clear that the presence, and strength, of any numerical dissipation does not affect the steady-state solution of the simulation, which remains an adiabatic $P$–$T$ profile in the deep atmosphere. ### Sensitivity to changes in the upper atmosphere forcing function ![Equatorially averaged $T$–$P$ profiles for three models: [*A*]{} (green), [*J*]{} (yellow) and [*K*]{} (orange).
The orange ([*K*]{}) and yellow ([*J*]{}) models have had their outer atmosphere cooling modified such that $T_{\textrm{eq}}=2200$ K or $1200$ K, respectively. The solid lines represent the equilibrium $T$–$P$ profiles whilst the dashed lines represent the $T$–$P$ profiles $200$ years after the outer atmosphere's forcing was adjusted (shown in dark grey for each model). Note that, after 200 years of ‘modified’ evolution, only the $2200$ K model has not reached equilibrium. []{data-label="fig:force_to_adiabat_tests"}](T_profile_2200K_t_multi_duo_new.pdf){width="0.9\columnwidth"} We next explore how the deep adiabat responds to changes in the outer atmosphere irradiation and thermal emission (via the imposed Newtonian cooling). The aim is not only to test the robustness of the deep adiabat, but also to explore the response of the adiabat to changes in the atmospheric state. As part of this study, the two scenarios we consider were initialised using the evolved adiabatic profile obtained in model [*A*]{}, but with a modified outer atmosphere cooling profile such that $T_{night}=1200$ K (model [*J*]{}) or $T_{night}=2200$ K (model [*K*]{}). shows the equilibrium $T$–$P$ profiles (solid lines) as well as snapshots of the $T$–$P$ profiles after only $200$ years of ‘modified’ evolution (dashed lines). It also includes a plot of model [*A*]{} to aid comparison.\ Model [*J*]{} evolves in less than 200 years towards a new steady-state profile that corresponds to the modified cooling profile. The deep adiabat reconnects with the outer atmospheric profile at $P=\textrm{10 bar}$ and $\sim\!\textrm{1250 K}$ (in agreement with the relative offset found in our $1800\textrm{K}$ models, [*A*]{} and [*B*]{}). The meridional mass circulation (not shown) displays evidence for the same qualitative flows driving the vertical advection of potential temperature as models [*A*]{} and [*B*]{}.
However, it also shows signs that it is still evolving, suggesting that the steady-state meridional circulation takes longer to establish than the vertical temperature profile.\ In model [*K*]{}, we find that, $200$ years after modifying the outer atmosphere's cooling profile, the deep atmosphere has not yet reached a steady state. In fact, it takes approximately $1000$ years of evolution for it to reach equilibrium, which we show as a solid line in . This confirms that model [*K*]{}, although slow to evolve relative to the cooling case (model [*J*]{}), does eventually settle onto a deep, equilibrium, adiabat. Additionally, this adjustment occurs significantly faster than the equivalent evolution of a deep adiabat from an isothermal start. Based on the results of this section, we conclude that it is faster for the deep atmosphere to cool than to warm when it evolves toward its adiabatic temperature profile. In order to understand this time-scale ordering, we have to note that the only way for the simulation to inject or extract energy is through the fast Newtonian forcing of the upper atmosphere, and also that the thermal heat content of the deep atmosphere is significantly larger than that of the outer layers. The deep (subscript $d$) and upper (subscript $u$) atmospheres are connected by the advection of potential temperature, which we will rewrite in a conservative form as an enthalpy flux, $\rho c_p T u$, and we simplify the process to two steps between the two reservoirs (assuming they have similar volumes): injection/extraction by enthalpy flux and Newtonian forcing in the upper atmosphere. - In the case of cooling, the deep atmosphere contains too much energy and needs to evacuate it. It will set up a circulation to evacuate this extra energy to the upper layers with an enthalpy flux that would lead to an upper energy content set by $\rho_u c_v T_u \sim \rho_u c_v T_{u,init}+\rho_d c_v (T_{d,init}-T_{d,eq})$ if we at first ignore Newtonian cooling.
$T_u$ would then be very large, essentially because of the density difference between the upper and lower atmosphere. The Newtonian forcing term proportional to $-(T_u - T_{u,eq})/\tau$ is then very large and can efficiently remove the energy from the system. - In the case of heating, the deep atmosphere does not contain enough energy and needs an injection from the upper layers. This injection comes from the Newtonian forcing and can at first only inject $\rho_u c_v (T_{u,eq} - T_{u,init})$ into the system. The enthalpy flux will then lead to an energy content in the deep atmosphere given by $\rho_d c_v T_d \sim \rho_d c_v T_{d,init}+\rho_u c_v (T_{u,eq}-T_{u,init})$ if we assume that all the extra energy is pumped by the deep atmosphere. Because of the density difference and the limited variations in the temperature caused by the forcing, the temperature change in the deep atmosphere is small and will require more injection from the upper layers to reach equilibrium. However, even in the most favourable scenario in which all the extra energy is transferred, the Newtonian forcing cannot exceed $-(T_{u,init} - T_{u,eq})/\tau$, which explains why it will take a much longer time to heat the deep atmosphere than to cool it. ### Sensitivity to the addition of Newtonian cooling to the deep atmosphere {#sec:sensitivity_deep} ![Newtonian cooling relaxation time-scale profiles used in the models shown in . Note that a smaller value of $\tau$ means more rapid forcing towards the imposed cooling profile [(which in all cases is isothermal in the deep atmosphere, where $P>10$ bar)]{}, and that the relaxation profiles are identical for $P<\textrm{10 bar}$ (grey line).
\[fig:deep\_tau\_stability\_profiles\] ](newt_relax_deep_tau_looped+new.pdf){width="0.9\columnwidth"} ![Snapshots of the $T$–$P$ profile for five initially adiabatic simulations (coloured lines, based on model [*B*]{}, and with the same outer atmosphere cooling profile (dark grey)) which are then forced to a deep isothermal profile (grey dashed line) with varying $\log(\tau_{\textrm{220}})$ (). \[fig:deep\_tau\_stability\] ](T_profile_1800K_t_unif_multi_new.pdf){width="0.9\columnwidth"} [0.47]{} ![image](streamfunction_lat_pressure_T_1900K_alt_15_neo.png){width="0.9\columnwidth"} [0.47]{} ![image](streamfunction_lat_pressure_T_1900K_alt_20_neo.png){width="0.9\columnwidth"} It is unlikely that the atmosphere will suddenly turn thermally inert at pressures greater than $10$ bar. Rather, we expect that the thermal time-scale will gradually increase with increasing pressure. In this section, we examine the sensitivity of the deep atmospheric flows, circulations, and thermal structure to varying levels of Newtonian cooling. Additionally, we are motivated to quantify the maximum amount of Newtonian cooling under which the deep atmosphere is still able to maintain a deep adiabat.\ To explore this, we consider five models, each with a different cooling time-scale at the bottom of the atmosphere [(i.e. five different values of $\log(\tau_{220})$)]{}. We then linearly interpolate the relaxation time-scale in $\log(P)$ space between $10$ and $220$ bar. The resultant profiles are plotted in , and can be split into three distinct groups: 1) The relaxation profile with $\log(\tau_{\textrm{220}})=7.5$ (model [*E*]{}) represents a case with rapid Newtonian cooling that does not decrease with increasing pressure; 2) The case $\log(\tau_{\textrm{220}})=11$ (model [*F*]{}) is a simple linear continuation of the relaxation profile we use between $P=\textrm{1 bar and 10 bar}$.
It is the simplest possible extrapolation of the upper atmosphere thermal time-scale profile, and likely represents the strongest realistic forcing in the deep atmosphere; 3) The remaining relaxation profiles, $\log(\tau_{\textrm{220}})=\textrm{15, 20, 22.5}$ (models [*G, H*]{} and [*I*]{}), represent heating and/or cooling processes that get progressively slower in the deep atmosphere, in accordance with expectations borne out by 1D atmospheric models of hot Jupiter atmospheres (see, for example, ).\ The results we obtained are summarised by the $T$–$P$ profiles we plot in . For low levels of heating and cooling in the deep atmosphere (models [*G, H*]{} and [*I*]{}), the results are almost indistinguishable from models [*A*]{} and [*B*]{}, with only a decrease in the outer atmosphere connection temperature of a few Kelvin in model [*G*]{}. We find a more significant reduction in the temperature of the $T$–$P$ profile when we investigate model [*F*]{}, in which we set $\log(\tau_{\textrm{220}})=11$. In particular, there is a deepening of the connection point between the outer atmosphere and the deep adiabat, which only becomes apparent for $P>20$ bar in this model. This result suggests that model [*F*]{} falls near the pivot point between models in which the deep atmosphere is adiabatic and those that relax toward the imposed temperature profile. This is confirmed by model [*E*]{}, in which $\log(\tau_{\textrm{220}})=7.5$, where we find that the deep adiabat has been rapidly destroyed (in $<\textrm{30 years}$), such that the deep $T$–$P$ profile corresponds to the imposed cooling profile throughout the atmosphere.
This occurs because the Newtonian time-scale has become smaller than the advective time-scale, which means that the imposed temperature profile dominates over any dynamical effects.\ Before closing this section, let us briefly comment on the meridional circulation profiles obtained in those models that converge onto a similar deep adiabatic temperature profile (models [*G, H*]{} and [*I*]{}). For all of them, we recover the same qualitative structure we found for model [*A*]{}, characterised by meridional cells of alternating direction that extend from the deep atmosphere to the outer regions. The finer details of the circulations, however, differ from the ones seen in model [*A*]{}. This is illustrated in which displays the meridional circulation and zonal flow profiles for models [*G*]{} () and [*H*]{} (). As the Newtonian cooling becomes faster in the deep atmosphere, the number of meridional cells increases (see also ), to the point that, in model [*E*]{}, no deep meridional circulation cells exist and the deep circulation profile is essentially unstructured. Despite these differences in the shape of the meridional circulation, the steady-state profile obtained in the deep atmosphere in these simulations is again an adiabatic $P$–$T$ profile, provided the Newtonian cooling is not (unphysically) strong. Conclusion and discussion {#sec:conclusion} ========================= ![ Evolution of the sub-stellar point (i.e. day-side) temperature–pressure profile in a simulation (detailed in ) calculated using the Met Office GCM, the Unified Model, [@2014GMD.....7.3059M] and including a robust two-stream radiation scheme. Here we show snapshots of the $T$–$P$ profile at 0.25 (purple), 2.5 (green), and 25 (orange) Earth years, along with two example adiabats (grey dotted and dashed lines) designed to show how the deep atmosphere gets warmer and connects to steadily warmer adiabats as the simulation progresses.
Note that this progression is, at the end of the simulated time, ongoing towards a deep, hot, adiabat, albeit at an increasingly slow rate. \[fig:Metoffice\_GCM\_fig\] ](T_profile_v2_MOGCM_and_adia.pdf){width="0.9\columnwidth"} Conclusions of the simulation results ------------------------------------- By carrying out a series of 3D GCM simulations of irradiated atmospheres, we have shown in the present paper that: - If the deep atmosphere is initialised on an adiabatic $P$–$T$ profile, it remains, as a steady state, on this profile, - If the deep atmosphere is initialised on a too hot state, it rapidly cools down to the same steady-state adiabatic profile, - If the deep atmosphere is initialised on a too cold state, it slowly evolves towards the steady-state adiabatic profile. Furthermore, in all the above cases, the deep adiabat forms at lower pressures than those at which we would expect, from 1D models, the atmosphere to be convectively unstable. We have also shown that this steady-state adiabatic profile is stable to changes in the deep Newtonian cooling and is independent of the details of the flow structures, provided that the velocities are not completely negligible. The hot adiabatic deep atmosphere is the natural final outcome of the simulations, for various resolutions, even though the time-scale to reach steady-state is longer at higher resolution when starting from a too cold initial state. When the simulations are initialised on a too cold profile, the time-scale to reach the steady state is of the order of $t\sim\textrm{1000 years}$, explaining why the formation of a deep adiabat has not been seen in previous GCM studies: this time-scale far outstrips the time taken for the outer atmosphere to reach an equilibrium state ($t\lesssim\textrm{1 year}$ for $P<\textrm{1 bar}$).
As a result, the vast majority of published GCM models only contain a [*partially evolved deep atmosphere*]{}, the structure of which is directly comparable to the early outputs of our isothermally initialised calculation. Examples of this early evolution of the deep atmosphere towards a deep adiabat (as seen in the early outputs plotted in ) include Figure 6 of @2010ApJ...714.1334R (where the deep temperature profile shows signs of heating from its initial isothermal state, albeit only on the irradiated side of the planet), Figure 7 of (where we see a clear shift from their initially isothermal deep atmosphere towards a deep adiabat), and Figure 8 of @2015ApJ...801...86K (where we again see a temperature inversion and a push towards a deep adiabat for WASP-43b). It is tempting to think that if these simulations were run longer, they would evolve to a similar, deep adiabatic structure (with a corresponding increase in the exoplanetary radius). In order to investigate this possibility, we have extended the model of , run with the Unified Model of the Met Office (which includes a robust two-stream radiation scheme that replaces the Newtonian cooling in our models), for a total of $\approx 25$ Earth years. The results are shown in , where we plot the pressure–temperature profile at three different times, along with examples of the approximate deep adiabat that best matches each snapshot. We see that the deep atmosphere rapidly converges towards a deep adiabat, with further vertical advection of potential temperature warming this adiabat as the simulation goes on.
Since this process continues throughout the simulation, the result not only reinforces our conclusions but suggests that our primary Newtonian cooling profile represents a reasonable approximation of the incident irradiation and radiative loss.\ The results obtained in the present simulations suggest that future hot Jupiter atmosphere studies should be initialised with a hot, deep, adiabat starting at the bottom of the surface irradiation zone ($P\sim\!\!\textrm{10 bar}$ for HD209458b). Furthermore, in a situation where the equilibrium profile in the deep atmosphere is uncertain, we suggest that this profile should be initialised with a hotter adiabat than expected rather than a cooler one. The simulation should then be run long enough for the deep atmosphere to reach equilibrium. This is in agreement with the results of , who also suggested that future GCM models should be initialised with hotter profiles than currently considered. For instance, recent 3D simulations of HD209458b have been initialised with a hotter interior $T$–$P$ profile (for example, one of their models is initialised with an isotherm that is $800$ K hotter than typically used in GCM studies, thus bringing the deep atmosphere closer towards its deep adiabat equilibrium temperature), and show important differences, on the time-scales considered, between the internal dynamics obtained with this set-up, and the ones obtained with a cooler, more standard, deep atmospheric profile (see, ). Using such improved initial atmospheric profiles should not only bring these models towards a more physical hot Jupiter parameter regime (and hence a correct inflated radius), but also provide a wealth of information on how the deep adiabat responds to changes in parameter and computational regime.
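The initialisation recommended above is easy to sketch numerically. The snippet below builds a hot-adiabat initial condition joined to the outer atmosphere at 10 bar; the connection temperature of $1650$ K, the isothermal outer placeholder, and $\kappa=R/c_p=2/7$ (an H$_2$-dominated ideal gas) are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def adiabatic_initial_profile(p_pa, t_connect=1650.0, p_connect=1.0e6,
                              kappa=2.0 / 7.0):
    """Sketch of an initial T(P) profile: a placeholder isothermal outer
    atmosphere above the connection pressure (10 bar = 1e6 Pa), joined to
    a dry adiabat T = t_connect * (P / p_connect)**kappa below it.
    All numerical values are illustrative, not the paper's exact setup."""
    p_pa = np.asarray(p_pa, dtype=float)
    return np.where(p_pa <= p_connect,
                    t_connect,                                # outer placeholder
                    t_connect * (p_pa / p_connect) ** kappa)  # deep adiabat

# Log-spaced pressure grid from ~0.01 bar down to 220 bar (in Pa).
pressures = np.logspace(3.0, np.log10(220.0e5), 50)
temperatures = adiabatic_initial_profile(pressures)
```

With these illustrative numbers the profile reaches roughly $4000$ K at 220 bar, consistent with the deep-atmosphere temperatures quoted later in the text.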
Evolution of highly irradiated gas giants ----------------------------------------- The results obtained in the present GCM simulations have strong implications for our understanding of the evolution of highly irradiated gas giants. As just mentioned, we first emphasise that simulations initialised from a too cold state are not relevant for the evolution of inflated hot Jupiters (although they could be of some interest for re-inflation, but this is beyond the scope of this paper). Indeed, inflated hot Jupiters are primarily in a hot initial state and, as far as the evolution is concerned, only the steady state of the atmosphere matters. The shorter timescales needed to reach this steady state are irrelevant for the evolution (with a typical Kelvin–Helmholtz timescale of $\sim 1\,\textrm{Myr}$).\ As shown in the present simulations, provided they are run long enough, hot Jupiter atmospheres converge at depth, i.e. in the optically thick domain, to a hot adiabatic steady-state profile without the need to invoke any dissipation mechanism such as ohmic, or kinetic energy, dissipation. These 3D dynamical calculations thus confirm the 2D steady-state calculations of . Importantly enough, the transition to an adiabatic atmospheric profile occurs at lower pressures than the ones at which the medium is expected to become convectively unstable (thus adiabatic according to the Schwarzschild criterion). This means that the planet lies on a hotter internal entropy profile than suggested by 1D irradiation models, yielding a larger radius. The mechanism of potential temperature advection in the atmosphere of irradiated planets thus provides a robust solution to the radius inflation problem.\ As mentioned previously, [almost]{} all scenarios suggested so far to resolve the anomalously inflated planet problem rely on the (uncomfortable) necessity to introduce finely tuned parameters.
This is true, in particular, for all the different dissipation mechanisms, whether they involve kinetic energy, or ohmic and tidal dissipation. This is in stark contrast with the present mechanism, in which [*entropy*]{} (potential temperature) is advected from the top to the bottom of the atmosphere. High-entropy fluid parcels are moved from the upper to the deep atmosphere and toward high latitude, while low-entropy fluid parcels come from the deep atmosphere and are deposited in the upper atmosphere. This gradually changes the entropy profile until a steady-state situation is obtained. Although an [enthalpy]{} (and mass and momentum) flux is associated with this process, down to the bottom of the atmosphere (characterised by some specific heat reservoir), this does not require a dissipative process [(from kinetic, magnetic or radiative energy reservoirs into the internal energy reservoir)]{}.\ [In order to characterise this deep heating flux, and confirm that our hot, deep, adiabat would not be unstable due to high temperature radiative losses, we also explored the vertical enthalpy flux in our model and compared it to the radiative flux, as calculated for a deep adiabat using ATMO (). This analysis reveals that the vertical enthalpy flux dominates the radiative flux at all $P>1$ bar: for example, averaging over a pressure surface at $P=10$ bar, we find a net vertical enthalpy flux ($\rho c_{p}Tu_{z}$) of $-1.04\times10^8\,\mathrm{erg\,s^{-1}\,cm^{-2}}$ compared to an outgoing radiative flux of $7.68\times10^6\,\mathrm{erg\,s^{-1}\,cm^{-2}}$, suggesting that any deep radiative losses are well compensated by energy (enthalpy) transport from the highly irradiated outer atmosphere. This result is reinforced by the UM calculation we show in , which intrinsically includes this deep radiative loss and shows no evidence of cooling due to deep radiative effects.
]{}\ This ([lack of a requirement for additional dissipative processes]{}) is of prime importance when trying to understand the evolution of irradiated planets. Whereas dissipative processes imply an extra energy source in the evolution ($\int_M \dot{\epsilon}\, dm$, where $\dot{\epsilon}$ is the energy dissipation rate, to be finely tuned) to slow down the planet’s contraction, there is no need for such a term in the present process. Indeed, as an [*isolated*]{} substellar object (i.e. without nuclear energy source) cools down, its gravitational potential energy is converted into radiation at the surface, with a flux $\sigma T_\mathrm{eff}^4$. Let us now suppose that the same object is immersed in an isotropic medium characterised by a pressure $P=220$ bars and a temperature $T\sim 4000$ K, typical conditions in the deep atmosphere of 51 Peg b-like hot Jupiters. [Once the object’s original inner adiabat (after its birth) has cooled down to 4000 K at 220 bars, the thermal gradient between the external and internal media will be zero, which essentially reduces the local convective flux and the local optically thick radiative flux to zero. Thus the cooling flux will be reduced to almost zero. At this point, the core]{} cannot significantly cool any more and is simply in thermal equilibrium with the surrounding medium. Both the contraction and the cooling flux are essentially insignificant: $dR/dt\approx 0, \sigma T_\mathrm{int}^4 \approx 0$, [in which we define $\sigma T_\mathrm{int}^4$ as the radiative and convective cooling flux at the interior-atmosphere boundary]{}. [Indeed, convection will become inefficient in transporting energy and any remaining radiative loss in the optically thick deep core will be compensated by downward energy transport from the hot outer atmosphere.
]{} For a highly irradiated gas giant, the irradiation flux is not isotropic, but the combination of irradiation and atmospheric circulation will lead to a similar situation, with a deep atmosphere adiabatic profile of $\sim$4000 K at 220 bars for all latitudes and longitudes. Therefore, the planet’s interior does not significantly cool any more and we also have $\sigma T_\mathrm{int}^4 \approx 0$. The evolution of the planet is stopped ($dS/dt\approx0$), or rather its cooling time is now prohibitively long, and the planet lies on a constant adiabat determined by the equilibrium between the inner and atmospheric ones at the interior-atmosphere boundary. The situation will last as long as the planet–star characteristics remain the same, illustrating the robustness of the potential temperature advection mechanism to explain the anomalous inflation of these bodies.\ Therefore, the irradiation-induced advection of potential temperature appears to be the most natural and robust process to resolve the radius-inflation puzzle. Note that this does not exclude other processes (e.g. dissipative ones) from operating within hot Jupiter atmospheres, but they are unlikely to be the dominant mechanisms responsible for the radius inflation. FSM and PT would like to acknowledge and thank the ERC for funding this work under the Horizon 2020 program project ATMO (ID: 757858). NJM is part funded by a Leverhulme Trust Research Project Grant and partly supported by a Science and Technology Facilities Council Consolidated Grant (ST/R000395/1). JL acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 679030/WHIPLASH). GC was supported by the Programme National de Planétologie (PNP) of CNRS-INSU cofunded by CNES. IB thanks the European Research Council (ERC) for funding under the H2020 research and innovation programme (grant agreement 787361 COBOM).
FD thanks the European Research Council (ERC) for funding under the H2020 research and innovation programme (grant agreement 740651 NewWorlds). BD acknowledges support from a Science and Technology Facilities Council Consolidated Grant (ST/R000395/1).\ The authors also wish to thank Idris, CNRS, and Mdls for access to the supercomputer Poincare, without which the long time-scale calculations featured in this work would not have been possible. The calculations for Appendix A were performed using Met Office software. Additionally, the calculations in Appendix A used the DiRAC Complexity system, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment is funded by BIS National E-Infrastructure capital grant ST/K000373/1 and STFC DiRAC Operations grant ST/K0003259/1. DiRAC is part of the National E-Infrastructure.\ Finally, the authors would like to thank Adam Showman for their careful and insightful review of the original manuscript. [^1]: [^2]: DYNAMICO is available at http://forge.ipsl.jussieu.fr/dynamico/wiki, and our hot Jupiter patch is available at https://gitlab.erc-atmo.eu/erc-atmo/dynamico\_hj. [^3]: Specifically, to generate the grid we start with a sphere that consists of 20 spherical triangles (sharing 12 vertex, i.e. grid, points); then we subdivide each side of each triangle $d$ times, using the new points to generate a new grid of spherical triangles with $N$ total vertices. These vertices then form the icosahedral grid. [^4]: However, this does not mean that our models have problems conserving angular momentum: they maintain $97.44\%$ of the initial angular momentum after over 1700 years of simulation time (which compares well to other GCMs: @2014Icar..229..355P).
--- abstract: 'Results on the behaviour of the rightmost particle in the $n$th generation in the branching random walk are reviewed and the phenomenon of anomalous spreading speeds, noticed recently in related deterministic models, is considered. The relationship between such results and certain coupled reaction-diffusion equations is indicated.' author: - 'J. D. Biggins' bibliography: - 'biblio-kingman.bib' --- Branching out ============= ###### AMS subject classification (MSC2010) 60J80 Introduction ------------ I arrived at the University of Oxford in the autumn of 1973 for postgraduate study. My intention at that point was to work in Statistics. The first year of study was a mixture of taught courses and designated reading on three areas (Statistics, Probability, and Functional Analysis, in my case) in the ratio 2:1:1 and a dissertation on the main area. As part of the Probability component, I attended a graduate course that was an exposition, by its author, of the material in @MR0370721, which had grown out of his contribution to the discussion of John’s invited paper on subadditive ergodic theory [@MR0356192]. A key point of Hammersley’s contribution was that the postulates used did not cover the time to the first birth in the $n$th generation in a Bellman–Harris process.[^1] @MR0370721 showed, among other things, that these quantities did indeed exhibit the anticipated limit behaviour in probability. I decided not to be examined on this course, which was I believe a wise decision, but I was intrigued by the material. That interest turned out to be critical a few months later. By the end of the academic year I had concluded that I wanted to pursue research in Probability rather than Statistics and asked to have John as supervisor. He agreed. Some time later we met and he asked me whether I had any particular interests already—I mentioned Hammersley’s lectures. 
When I met him, he was in the middle of preparing something (which I could see, but not read upside down). He had what seemed to be a pile of written pages, a part-written page and a pile of blank paper. There was nothing else on the desk. A few days later a photocopy of a handwritten version of @MR0400438, essentially identical to the published version, appeared in my pigeon-hole with the annotation “the multitype version is an obvious problem”—I am sure this document was what he was writing when I saw him. (Like all reminiscences, this is what I recall, but it is not necessarily what happened.) This set me going. For the next two years, it was a privilege to have John as my thesis supervisor. He supplied exactly what I needed at the time: an initial sense of direction, a strong encouragement to independence, an occasional nudge on the tiller about what did or did not seem tractable, the discipline of explaining orally what I had done, and a ready source on what was known, and where to look for it. However, though important, none of these get to the heart of the matter, which is that I am particularly grateful to have had that period of contact with, and opportunity to appreciate first-hand, such a gifted mathematician. @MR0400438 considered the problem Hammersley had raised in its own right, rather than as an example of, and adjunct to, the general theory of subadditive processes. Here, I will say something about some recent significant developments on the first-birth problem. I will also go back to my beginnings, by outlining something new about the multitype version that concerns the phenomenon of ‘anomalous spreading speeds’, which was noted in a related context in @MR2322849. Certain martingales were deployed in @MR0400438. These have been a fruitful topic in their own right, and have probably received more attention since then than the first-birth problem itself (see @MR2471666 for a recent nice contribution on when these martingales are integrable).
However, those developments will be ignored here. The basic model --------------- The branching random walk (BRW) starts with a single particle located at the origin. This particle produces daughter particles, which are scattered in ${\mathbb{R}}$, to give the first generation. These first generation particles produce their own daughter particles similarly to give the second generation, and so on. Formally, each family is described by the collection of points in ${\mathbb{R}}$ giving the positions of the daughters relative to the parent. Multiple points are allowed, so that in a family there may be several daughter particles born in the same place. As usual in branching processes, the $n$th generation particles reproduce independently of each other. The process is assumed supercritical, so that the expected family size exceeds one (but need not be finite—indeed even the family size itself need not be finite). Let ${\mathbf {P}}$ and ${\mathbf {E}}$ be the probability and expectation for this process and let $Z$ be the generic reproduction process of points in ${\mathbb{R}}$. Thus, ${\mathbf {E}}Z$ is the intensity measure of $Z$ and $Z({\mathbb{R}})$ is the family size, which will also be written as $N$. The assumption that the process is supercritical becomes that ${\mathbf {E}}Z({\mathbb{R}})={\mathbf {E}}N>1$. To avoid burdening the description with qualifications about the survival set, let ${\mathbf {P}}(N=0)=0$, so that the process survives almost surely. The model includes several others. One is when each daughter receives an independent displacement, another is when all daughters receive the same displacement, with the distribution of the displacement being independent of family size in both cases. These will be called the BRW with *independent* and *common* displacements respectively. Obviously, in both of these any line of descent follows a trajectory of a random walk. 
(It is possible to consider an intermediate case, where displacements have these properties conditional on family size, but that is not often done.) \[brw\] Since family size and displacements are independent, these two processes can be coupled in a way that shows that results for one will readily yield results for the other. In a common displacement BRW imagine each particle occupying the (common) position of its family. Then the process becomes an independent displacement BRW, with a random origin given by the displacement of the first family, and its $n$th generation occupies the same positions as the $(n+1)$th generation in the original common displacement BRW. Really this just treats each family as a single particle. In a different direction, the points of $Z$ can be confined to $(0,\infty)$ and interpreted as the mother’s age at the birth of that daughter: the framework adopted in @MR0400438. Then the process is the general branching process associated with the names of Ryan, Crump, Mode and Jagers. Finally, when all daughters receive the same positive displacement with a distribution independent of family size the process is the Bellman–Harris branching process: the framework adopted in @MR0370721. There are other ‘traditions’, which consider the BRW but introduce and describe it rather differently and usually with other problems in focus. There is a long tradition phrased in terms of ‘multiplicative cascades’ (see for example @MR1741808 and the references there) and a rather shorter one phrased in terms of ‘weighted branching’ (see for example @MR2199054 and the references there). The model has arisen in one form or another in a variety of areas. The most obvious is as a model for a population spreading through an homogeneous habitat. 
It has also arisen in modelling random fractals [@MR1785625] commonly in the language of multiplicative cascades, in the theoretical study of algorithms [@MR1140708], in a problem in [group theory]{} [@MR2114819] and as an ersatz for both lattice-based models of spin glasses in physics [@MR1601733] and a [number theory]{} problem [@MR1143401]. Spreading out: old results -------------------------- Let $Z^{(n)}$ be the positions occupied by the $n$th generation and ${{B}^{(n)}}$ its rightmost point, so that $${{B}^{(n)}}= \sup\{z: z \mbox{~a point of~} Z^{(n)}\}.$$ One can equally well consider the leftmost particle, and the earliest studies did that. Reflection of the whole process around the origin shows the two are equivalent: all discussion here will be expressed in terms of the rightmost particle. The first result, stated in a moment, concerns ${{B}^{(n)}}/n$ converging to a constant, $\Gamma$, which can reasonably be interpreted as the speed of spread in the positive direction. \[spreading out\] A critical role in the theory is played by the Laplace transform of the intensity measure ${\mathbf {E}}Z$: let $\kappa(\phi)= \log \int e^{\phi z} {\mathbf {E}}Z(dz)$ for $\phi \geq 0$ and $\kappa(\phi)=\infty$ for $\phi<0$. It is easy to see that when this is finite for some $\phi>0$ the intensity measures of $Z$ and $Z^{(n)} $ are finite on bounded sets, and decay exponentially in their right tail. The behaviour of the leftmost particle is governed by the behaviour of the transform for negative values of its argument. The definition of $\kappa$ discards these, which simplifies later formulations by automatically keeping attention on the right tail. In order to give one of the key formulae for $\Gamma$ and for later explanation, let $ {{\kappa}^{\ast}}$ be the Fenchel dual of $\kappa$, which is the convex function given by $$\label{Fenchel dual} {{\kappa}^{\ast}}(a)=\sup_\theta \{\theta a-\kappa(\theta)\}.$$ This is sufficient notation to give the first result. 
\[first theorem\] When there is a $\phi>0$ such that $$\label{m good} \kappa(\phi) < \infty,$$ there is a constant $\Gamma$ such that $$\label{limit} \frac{{{B}^{(n)}}}{n}\rightarrow \Gamma\mbox{~~~a.s}.$$ and $ \Gamma= \sup\{a: {{\kappa}^{\ast}}(a)<0\}=\inf\{\kappa(\theta)/\theta: \theta>0\}$. This result was proved for the common BRW with only negative displacements with convergence in probability in @MR0370721 [Theorem 2]. It was proved in @MR0400438 [Theorem 5] for $Z$ concentrated on $(-\infty,0)$ and with $0<\kappa(\phi)<\infty$ instead of (\[m good\]). The result stated above is contained in @MR0420890 [Theorem 4], which covers the irreducible multitype case also, of which more later. The second of the formulae for $\Gamma$ is certainly well-known but cannot be found in the papers mentioned—I am not sure where it first occurs. It is not hard to establish from the first one using the definition and properties of ${{\kappa}^{\ast}}$. The developments described here draw on features of transform theory, to give properties of $\kappa$, and of convexity theory, to give properties of ${{\kappa}^{\ast}}$ and the speed $\Gamma$. There are many presentations of, and notations for, these, tailored to the particular problem under consideration. In this review, results will simply be asserted. The first of these provides a context for the next theorem and aids interpretation of $\sup\{a: {{\kappa}^{\ast}}(a)<0\}$ in the previous one. It is that when $\kappa$ is finite somewhere on $(0,\infty)$, ${{\kappa}^{\ast}}$ is an increasing, convex function, which is continuous from the left, with minimum value $-\kappa(0)=-\log {\mathbf {E}}N$, which is less than zero. A slight change in focus derives Theorem \[first theorem\] from the asymptotics of the numbers of particles in suitable half-infinite intervals. As part of the derivation of this the asymptotics of the expected numbers are obtained.
Specifically, it is shown that when (\[m good\]) holds $$n^{-1} \log \left({\mathbf {E}}Z^{(n)}[na,\infty) \right) \rightarrow - {{\kappa}^{\ast}}(a)$$ (except, possibly, at one $a$). The trivial observation that when the expectation of integer-valued variables decays geometrically the variables themselves must ultimately be zero implies that $\log Z^{(n)}[na,\infty)$ is ultimately $-\infty$ on $\{a: {{\kappa}^{\ast}}(a)>0\}$. This motivates introducing a notation for sweeping positive values of ${{\kappa}^{\ast}}$, and later other functions, to infinity and so we let $$\label{sweep} {{f}^{{\circ}}}(a)=\left\{\begin{array}{ll}f(a) &\mbox{when~}f(a)\leq 0\\ \infty&\mbox{when~} f(a)> 0 \end{array}\right.$$ and ${{\kappa}^{\ast \!\!\!\circ}}={{({{\kappa}^{\ast}})}^{{\circ}}}$. The next result can be construed as saying that in crude asymptotic terms this is the only way actual numbers differ from their expectation. \[second theorem\] When (\[m good\]) holds, $$\label{describe numbers} \frac{1}{n} \log \left(Z^{(n)}[na,\infty) \right) \rightarrow - {{\kappa}^{\ast \!\!\!\circ}}(a) \mbox{~~~a.s.},$$ for all $a \neq \Gamma$. From this result, which is @MR0464415 [Theorem 2], and the properties of ${{\kappa}^{\ast \!\!\!\circ}}$, Theorem \[first theorem\] follows directly. A closely related continuous-time model arises when the temporal development is a Markov branching process (Bellman–Harris with exponential lifetimes) or even a Yule process (binary splitting too) and movement is Brownian, giving binary branching Brownian motion. The process starts with a single particle at the origin, which then moves with a Brownian motion with variance parameter $V$. This particle splits in two at rate $\lambda$, and the two particles continue, independently, in the same way from the splitting point. (Any discrete skeleton of this process is a branching random walk.) Now, let ${{B}^{(t)}}$ be the position of the rightmost particle at time $t$.
Then $ u(x,t)={\mathbf {P}}({{B}^{(t)}} \leq x)$ satisfies the (Fisher/Kolmogorov–Petrovski–Piscounov) equation $$\label{F-KPP} \frac{\partial u}{\partial t}=V\frac{1}{2}\frac{\partial ^{2}u}{\partial x^{2} } - \lambda u(1-u),$$ which is easy to see informally by conditioning on what happens in $[0, \delta t]$. The deep studies of @MR0494541 [@MR705746] show, among other things, that (with $V=\lambda=1$) ${{B}^{(t)}}$ converges in distribution when centred on its [median]{} and that median is (to $O(1)$) $$\sqrt{2}t-\frac{1}{\sqrt{2}}\left(\frac{3}{2} \log t\right) ,$$ which implies that $\Gamma=\sqrt{2}$ here. For the skeleton at integer times, $\kappa(\theta)=\theta^2/2+1$ for $\theta\geq 0$, and using Theorem \[first theorem\] on this confirms that $\Gamma=\sqrt{2}$. Furthermore, for later reference, note that $\theta \Gamma -\kappa(\theta)=0$ when $\theta=\sqrt{2}$. \[bbm\] Theorem \[first theorem\] is for discrete time, counted by generation. There are corresponding results for continuous time, where the reproduction is now governed by a random collection of points in time and space (${\mathbb{R}}^+ \! \times {\mathbb{R}}$). The first component gives the mother’s age at the birth of this daughter and the second that daughter’s position relative to her mother. Then the development in time of the process is that of a general branching process rather than the Galton–Watson development that underpins Theorem \[first theorem\]. This extension is discussed in @MR1384364 and @MR1601689. In it particles may also move during their lifetime and then branching Brownian motion becomes a (very) special case. Furthermore, there are also natural versions of Theorems \[first theorem\] and \[second theorem\] when particle positions are in ${\mathbb{R}}^d$ rather than ${\mathbb{R}}$—see @MR1384364 [§4.2] and references there. Spreading out: first refinements -------------------------------- Obviously rate-of-convergence questions follow on from (\[limit\]). 
An aside in @MR0433619 [p33] noted that, typically, ${{B}^{(n)}}-n \Gamma$ goes to $-\infty$. The following result on this is from @MR1629030 [Theorem 3], and much of it is contained also in @MR1618888 [Lemma 7.2]. When ${\mathbf {P}}(Z(\Gamma,\infty)>0)>0$, so displacements greater than $\Gamma$ are possible, and (\[m good\]) holds, there is a finite $\vartheta>0$ with $\vartheta \Gamma-\kappa(\vartheta) = 0$. Thus the condition here, which will recur in later theorems, is not restrictive. \[theorem to infinity\] If there is a finite $\vartheta>0$ with $\vartheta \Gamma-\kappa(\vartheta) = 0$, then $$\label{to infinity} {{B}^{(n)}}-n \Gamma \rightarrow -\infty\mbox{~~~a.s.,}$$ and the condition is also necessary when ${\mathbf {P}}(Z(\Gamma,\infty)>0)>0$. The theorem leaves some loose ends when ${\mathbf {P}}(Z(\Gamma,\infty)=0)=1$. Then ${{B}^{(n)}}-n \Gamma$ is a decreasing sequence, and so it does have a limit, but whether (\[to infinity\]) holds or not is really the explosion (i.e. regularity) problem for the general branching process: whether, with a point $z$ from $Z$ corresponding to a birth time of $\Gamma-z$, there can be an infinite number of births in a finite time. This is known to be complex—see @MR0359040 for example. In the simpler cases it is properties of $Z(\{\Gamma\})$, the number of daughters displaced by exactly $\Gamma$, that matter. If $Z(\{\Gamma\})$ is the family size of a surviving branching process (so either ${\mathbf {E}}Z(\{\Gamma\}) >1$ or ${\mathbf {P}}(Z(\{\Gamma\})=1)=1$) it is easy to show that $({{B}^{(n)}}-n \Gamma)$ has a finite limit—so (\[to infinity\]) fails—using embedded surviving processes resulting from focusing on daughters displaced by $\Gamma$: see @my-thesis [Proposition II.5.2] or @MR1133373 [Theorem 1]. In a similar vein, with extra conditions, @Addarioberryreed [Theorem 4] show ${\mathbf {E}}({{B}^{(n)}}-n \Gamma) $ is bounded. Suppose now that (\[m good\]) holds.
When ${\mathbf {P}}(Z(a,\infty)=0)=1$, simple properties of transforms imply that $\theta a-\kappa(\theta) \uparrow - \log {\mathbf {E}}Z(\{a\}) $ as $\theta \uparrow \infty$. Then, when ${\mathbf {E}}Z(\{a\}) <1$ a little [convexity]{} theory shows that $\Gamma<a$ and that there is a finite $\vartheta$ with $\vartheta \Gamma-\kappa(\vartheta) = 0$, so that Theorem \[theorem to infinity\] applies. This leaves the case where (\[m good\]) holds, ${\mathbf {P}}(Z(\Gamma,\infty)=0)=1$ and ${\mathbf {E}}Z(\{\Gamma\}) =1$ but ${\mathbf {P}}( Z(\{\Gamma\})=1)<1$, which is sometimes called, misleadingly in my opinion, the *critical* branching random walk because the process of daughters displaced by exactly $\Gamma$ from their parent forms a critical Galton–Watson process. For this case, @MR510529 [Theorem 1] and @MR1133373 [§9] show that (\[to infinity\]) holds under extra conditions including that displacements lie in a lattice, and that the convergence is at rate $\log \log n$. @MR510529 [Theorem 2] also gives conditions under which (\[to infinity\]) fails. Spreading out: recent refinements --------------------------------- The challenge to derive analogues for the branching random walk of the fine results for branching Brownian motion has been open for a long time. Progress was made in @MR1325045 and, very recently, a nice result has been given in @hu-shi [Theorem 1.2], under reasonably mild conditions. Here is its translation into the current notation. It shows that the numerical identifications noted in the branching Brownian motion case in §\[bbm\] are general. \[h-s\] Suppose that there is a $\vartheta>0$ with $\vartheta \Gamma-\kappa(\vartheta) = 0$, and that, for some $\epsilon>0$, $ {\mathbf {E}}(N^{1+\epsilon})<\infty$, $\kappa(\vartheta+\epsilon)<\infty$ and $\int e^{-\epsilon z}{\mathbf {E}}Z(dz)<\infty$. 
Then $$-\frac{3}{2}=\liminf_n \frac{\vartheta({{B}^{(n)}}-n \Gamma)}{ \log n} <\limsup_n \frac{\vartheta({{B}^{(n)}}-n \Gamma)}{ \log n}=-\frac{1}{2}\mbox{~~~a.s.}$$ and $$\frac{\vartheta({{B}^{(n)}}-n \Gamma)}{ \log n} \rightarrow -\frac{3}{2}\mbox{~~in probability}.$$ Good progress has also been made on the tightness of the distributions of ${{B}^{(n)}}$ when centred suitably. Here is a recent result from @Bramsonzeitouni [Theorem 1.1]. Suppose the BRW has independent or common displacements according to the random variable $X$. Suppose also that for some $\epsilon>0$, $ {\mathbf {E}}(N^{1+\epsilon})<\infty$ and that for some $\psi>0$ and $y_0>0$ $$\label{BZ} {\mathbf {P}}(X>x+y)\leq e^{-\psi y}{\mathbf {P}}(X>x) ~~~~\forall x>0, y>y_0.$$ Then the distributions of $\{{{B}^{(n)}}\}$ are tight when centred on their medians. It is worth noting that (\[BZ\]) ensures that (\[m good\]) holds for all $\phi \in [0,\psi)$. There are other results too—in particular, @MR1325045 [Theorem 1] and @MR1133373 [§3] both give tightness results for the (general) BRW, but with $Z$ concentrated on a half-line. Though rather old for this section, @MR1133373 [Theorem 2] is worth recording here: the authors assume the BRW is concentrated on a lattice, but they do not use that in the proof of this theorem. To state it, let $\widetilde{D}$ be the second largest point in $Z$ when $N\geq 2$ and the only point otherwise. If the points of $Z$ are confined to $(-\infty,0]$ and ${\mathbf {E}}\widetilde{D}$ is finite, then ${\mathbf {E}}{{B}^{(n)}}$ is finite and the distributions of $\{{{B}^{(n)}}\}$ are tight when centred on their expectations. The condition that ${\mathbf {E}}\widetilde{D}$ is finite holds when $\int e^{\phi z}{\mathbf {E}}Z(dz)$ is finite in a neighbourhood of the origin, which is contained within the conditions in Theorem \[h-s\]. In another recent study @Addarioberryreed [Theorem 3] give the following result, which gives tightness and also estimates the centring. 
\[ab-r\] Suppose that there is a $\vartheta>0$ with $\vartheta \Gamma-\kappa(\vartheta) = 0$, and that, for some $\epsilon>0$, $\kappa(\vartheta+\epsilon)<\infty$ and $\int e^{-\epsilon z}{\mathbf {E}}Z(dz)<\infty$. Suppose also that the BRW has a finite maximum family size and independent displacements. Then $${\mathbf {E}}{{B}^{(n)}}=n \Gamma -\frac{3}{2 \vartheta}\log n+O(1),$$ and there are $C>0$ and $\delta>0$ such that $${\mathbf {P}}\left(|{{B}^{(n)}}-{\mathbf {E}}{{B}^{(n)}}|>x\right) \leq Ce^{-\delta x} ~~~\forall x.$$ The conditions in the first sentence here have been stated in a way that keeps them close to those in Theorem \[h-s\] rather than specialising them for independent displacements. Now, moving from tightness to convergence in distribution—which cannot be expected to hold without a non-lattice assumption—the following result, which has quite restrictive conditions, is taken from @MR1765165 [Theorem 1]. \[Bach\] Suppose that the BRW has $ {\mathbf {E}}N<\infty$ and independent displacements according to a random variable with density function $f$ where $- \log f$ is convex. Then the variables ${{B}^{(n)}}$ converge in distribution when centred on medians. It is not hard to use the coupling mentioned in §\[brw\] to see that Theorems \[ab-r\] and \[Bach\] imply that these two results also hold for common displacements. Deterministic theory {#deter} -------------------- There is another, deterministic, stream of work concerned with modelling the spatial spread of populations in a homogeneous habitat, and closely linked to the study of reaction-diffusion equations like (\[F-KPP\]). The main presentation is @MR653463, with a formulation that has much in common with that adopted in @MR0370721. Here the description of the framework is pared-down. This sketch draws heavily on @MR1943224, specialised to the homogeneous (i.e. aperiodic) case and one spatial dimension. The aim is to say enough to make certain connections with the BRW. 
Let $u^{(n)}(x)$ be the density of the population (or the gene frequency, in an alternative interpretation) at time $n$ and position $ x \in {\mathbb{R}}$. This is a discrete-time theory, so there is an updating operator $Q$ satisfying $u^{(n+1)}=Q(u^{(n)})$. More formally, let ${\cal F}$ be the non-negative continuous functions on ${\mathbb{R}}$ bounded by $\beta$. Then $Q$ maps ${\cal F}$ into itself and $u^{(n)}=Q^{(n)}(u^{(0)})$, where $u^{(0)}$ is the initial density and $Q^{(n)}$ is the $n$th iterate of $Q$. The operator is to satisfy the following restrictions. The constant functions at $0$ and at $\beta$ are both fixed points of $Q$. For any function $u \in {\cal F}$ that is not zero everywhere, $Q^{(n)}(u) \rightarrow \beta$, and $Q(\alpha) \geq \alpha$ for non-zero constant functions in ${\cal F}$. (Of course, without the spatial component, this is all reminiscent of the basic properties of the generating function of the family-size.) The operator $Q$ is order-preserving, in that if $u \leq v$ then $Q(u)\leq Q(v)$, so increasing the population anywhere never has deleterious effects in the future; it is also [translation-invariant]{}, because the habitat is homogeneous, and suitably continuous. Finally, every sequence $u_m \in {\cal F}$ contains a subsequence $u_{m(i)}$ such that $Q(u_{m(i)})$ converges uniformly on compacts. Such a $Q$ can be obtained by taking the development of a reaction-diffusion equation for a time $\tau$. Then $Q^{(n)}$ gives the development to time $n \tau$, and the results for this discrete formulation transfer to such equations. Specialising @MR1943224 [Theorem 2.1], there is a spreading speed $\Gamma$ in the following sense. 
If $u^{(0)}(x)=0$ for $x\geq L$ and $u^{(0)}(x)\geq \delta>0$ for all $x\leq K$, then for any $\epsilon >0$ $$\label{spreading} \sup_{x\geq n(\Gamma+\epsilon)} |u^{(n)}(x)| \rightarrow 0 \mbox{~~and~~} \sup_{x\leq n(\Gamma-\epsilon)} |u^{(n)}(x)-\beta|\rightarrow 0.$$ In some cases the spreading speed can be computed through linearisation—see @MR1943224 [Corollary 2.1] and @Lui1989269 [Corollary to Theorem 3.5]—in that the speed is the same as that obtained by replacing $Q$ by a truncation of its linearisation at the zero function. So $Q(u)=Mu$ for small $u$ and $Q(u)$ is replaced by $\min\{\omega, Mu\}$, where $\omega$ is a constant, positive function with $M \omega > \omega$. The linear functional $Mu(y)$ must be represented as an integral with respect to some measure, and so, using the translation invariance of $M$, there is a measure $\mu$ such that $$\label{mu} Mu(y)=\int u(y-z)\,\mu(dz).$$ Let $\tilde{\kappa}(\theta)=\log \int e^{\theta z} \mu(dz)$. Then the results show that the speed $\Gamma$ in (\[spreading\]) is given by $$\label{W-Gamma} \Gamma = \inf_{\theta>0} \frac{\tilde{\kappa}(\theta) }{\theta}.$$ Formally, this is one of the formulae for the speed in Theorem \[first theorem\]. In fact, the two frameworks can be linked, as indicated next. In the BRW, suppose the generic reproduction process $Z$ has points $\{z_i\}$. Define $Q$ by $$Q\left(u(x)\right)=1-{\mathbf {E}}\left[1-\prod_i u (x-z_i)\right].$$ This has the general form described above with $\beta=1$. On taking $u^{(0)}(x)={\mathbf {P}}({{B}^{(0)}}>x)$ (i.e. Heaviside initial data) it is easily established by induction that $ u^{(n)}(x)={\mathbf {P}}({{B}^{(n)}}>x)$. This is in essence the same as the observation that the distribution of the rightmost particle in branching Brownian motion satisfies the differential equation (\[F-KPP\]). 
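The connection can also be tried out numerically. The sketch below is an illustration of mine, not taken from the papers cited: the BRW has exactly two daughters, each with an independent standard normal displacement, so $\kappa(\theta)=\log 2+\theta^{2}/2$ and Theorem \[first theorem\] gives $\Gamma=\sqrt{2\log 2}\approx 1.18$. Iterating the corresponding $Q$ with Heaviside initial data and tracking where $u^{(n)}$ falls through $\frac{1}{2}$ (the median of ${{B}^{(n)}}$) gives a front speed a little below $\Gamma$, as the logarithmic corrections in Theorem \[h-s\] suggest it should.

```python
import numpy as np

# BRW: two daughters, each with an independent N(0,1) displacement, so that
# Q(u)(x) = 1 - (E[1 - u(x - D)])^2  and  u^(n)(x) = P(B^(n) > x).
dx = 0.05
grid = np.arange(-10, 80, dx)

# Gaussian displacement kernel, truncated at +-5, normalised to sum to 1.
z = np.linspace(-5, 5, 201)
kern = np.exp(-z**2 / 2)
kern /= kern.sum()
K = len(z) // 2

def Q(u):
    v = 1.0 - u
    # pad with edge values so that u stays 1 at the far left and 0 at the far right
    vp = np.concatenate([np.full(K, v[0]), v, np.full(K, v[-1])])
    w = np.convolve(vp, kern, mode="valid")   # w(x) = E[1 - u(x - D)]
    return 1.0 - w**2

def front(u):
    # position where u first drops below 1/2, i.e. the median of B^(n)
    return grid[np.argmax(u < 0.5)]

u = (grid < 0).astype(float)       # Heaviside initial data: u^(0)(x) = P(B^(0) > x)
positions = {}
for n in range(1, 61):
    u = Q(u)
    positions[n] = front(u)

speed = (positions[60] - positions[30]) / 30
gamma = np.sqrt(2 * np.log(2))     # = inf over theta > 0 of kappa(theta)/theta
```

Padding the convolution with edge values keeps the boundary behaviour ($u^{(n)}\to 1$ on the left, $\to 0$ on the right) intact; zero padding would let errors propagate in from the right edge at the kernel's width per generation.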
The idea is explored in the spatial spread of the ‘deterministic simple epidemic’ in @Mollison1993147, a continuous-time model which, like branching Brownian motion, has BRW as its discrete skeleton. Now Theorem \[first theorem\] implies that (\[spreading\]) holds, and that, for $Q$ obtained in this way, the speed is indeed given by the (truncated) linear approximation. The other theorems about ${{B}^{(n)}}$ also translate into results about such $Q$. For example, Theorem \[Bach\] gives conditions for $u^{(n)}$ when centred suitably to converge to a fixed ([travelling wave]{}) profile. The multitype case {#multitype} ------------------ Particles now have types drawn from a finite set, ${\cal S}$, and their reproduction is defined by random points in ${\cal S}\times {\mathbb{R}}$. The distribution of these points depends on the parent’s type. The first component gives the daughter’s type and the second component gives the daughter’s position, relative to the parent’s. As previously, $Z$ is the generic reproduction process, but now let $Z_\sigma$ be the points (in ${\mathbb{R}}$) corresponding to those of type $\sigma$; $Z^{(n)}$ and $Z^{(n)}_{\sigma}$ are defined similarly. Let ${\mathbf {P}}_{\!\nu}$ and ${\mathbf {E}}_\nu$ be the probability and expectation associated with reproduction from an initial ancestor with type $\nu \in {\cal S}$. Let ${{B}^{(n)}_{\sigma}}$ be the rightmost particle of type $\sigma$ in the $n$th generation, and let ${{B}^{(n)}}$ be the rightmost of these, which is consistent with the one-type notation. The type space can be classified, using the relationship ‘can have a descendant of this type’, or, equivalently, using the non-negative expected family-size matrix, ${\mathbf {E}}_\nu Z_\sigma ({\mathbb{R}})$. Two types are in the same class when each can have a descendant of the other type in some generation. When there is a single class the family-size matrix is *irreducible* and the process is similarly described.
When the expected family-size matrix is *aperiodic* (i.e. primitive) the process is also called aperiodic, and it is supercritical when this matrix has Perron–Frobenius (i.e. non-negative and of maximum modulus) eigenvalue greater than one. Again, to avoid qualifications about the survival set, assume extinction is impossible from the starting type used. For $\theta \geq 0$, let $\exp(\kappa(\theta))$ be the Perron–Frobenius eigenvalue of the matrix of transforms $\int e^{\theta z} {\mathbf {E}}_\nu Z_\sigma(dz)$, and let $\kappa(\theta)=\infty$ for $\theta<0$. If there is just one type, this definition agrees with that of $\kappa$ at the start of §\[spreading out\]. The following result, which is @MR0420890 [Theorem 4], has been mentioned already. \[multitype first theorem\] Theorem $\ref{first theorem}$ holds for any initial type in a supercritical irreducible BRW. The simplest multitype version of Theorem \[second theorem\] is the following, which is proved in [@JDB-anom]. When $\sigma=\nu$ it is a special case of results indicated in @MR1601689 [§4.1]. \[supercrit\] For a supercritical aperiodic BRW for which (\[m good\]) holds, $$\label{exp growth ub 2} \frac{1}{n}\log \left(Z^{(n)}_\sigma[na, \infty) \right) \rightarrow - {{\kappa}^{\ast \!\!\!\circ}}(a) \mbox{~~~a.s.-}{\mathbf {P}}_{\!\nu}$$for $a \neq \sup\{a:{{\kappa}^{\ast}}(a)<0\}=\Gamma$, and $$\frac{{{B}^{(n)}_{\sigma}}}{n} \rightarrow \Gamma \mbox{~~~a.s.-}{\mathbf {P}}_{\!\nu}.$$ Again there is a deterministic theory, following the pattern described in §\[deter\] and discussed in @Lui1989269 [@Lui1989297], which can be related to Theorem \[multitype first theorem\]. Recent developments in that area raise some interesting questions that are the subject of the next two sections. Anomalous spreading ------------------- In the multitype version of the deterministic context of §\[deter\], recent papers [@MR1930974; @MR2322849; @MR1930975; @Li200582] have considered what happens when the type space is reducible.
Rather than set out the framework in its generality, the simplest possible case, the reducible two-type case, will be considered here, for the principal issue can be illustrated through it. The two types will be $\nu$ and $\eta$. Now, the vector-valued non-negative function $u^{(n)}$ gives the [population density]{} of two species—the two types, $\nu$ and $\eta$—at $x \in {\mathbb{R}}$ at time $n$, and $Q$ models growth, interaction and migration, as the populations develop in discrete time. The programme is the same as that indicated in §\[deter\], that is to investigate the existence of spreading speeds and when these speeds can be obtained from the truncated linear approximation. In this case the approximating linear operator, generalising that given in (\[mu\]), is $$\begin{aligned} (Mu(y))_\eta&=\int u_\eta (y-z) \,\mu_{\eta \eta}(dz),\\ (Mu(y))_\nu&=\int u_\nu (y-z)\,\mu_{\nu \nu} (dz) +\int u_\eta (y-z)\,\mu_{\nu \eta } (dz).\end{aligned}$$ Simplifying even further, assume there is no spatial spread associated with the ‘interaction’ term here, so that $\int u_\eta (y-z)\,\mu_{\nu\eta} (dz)= c u_\eta (y)$ for some $c>0$. The absence of $\mu_{ \eta \nu}$ in the first of these makes the linear approximation reducible. The first equation is really just for the type $\eta$ and so will have the speed that corresponds to $\mu_{\eta \eta}$, given through its transform by (\[W-Gamma\]), and written $\Gamma_\eta$. In the second, on ignoring the interaction term, it is plausible that the speed must be at least that of type $\nu$ alone, which corresponds to $\mu_{\nu \nu}$ and is written $\Gamma_\nu$. However, it can also have the speed of $u_\eta$ from the ‘interaction’ term. It is claimed in @MR1930974 [Lemma 2.3] that when $Q$ is replaced by the approximating operator $\min\{\omega, Mu\}$ this does behave as just outlined, with the corresponding formulae for the speeds: thus that of $\eta$ is $\Gamma_\eta$ and that for $\nu$ is $\max \{\Gamma_\eta,\Gamma_\nu\}$. 
However, in @MR2322849 a flaw in the argument is noted, and an example is given where the speed of $\nu$ in the truncated linear approximation can be faster than this, the anomalous spreading speed of their title, though the actual speed is not identified. The relevance of the phenomenon to a biological example is explored in @MR2322849 [§5]. As in §\[deter\], the BRW provides some particular examples of $Q$ that fall within the general scope of the deterministic theory. Specifically, suppose the generic reproduction process $Z$ has points $\{\sigma_i,z_i\}\in {\cal S}\times {\mathbb{R}}$. Now let $Q$, which operates on vector functions indexed by the type space ${\cal S}$, be defined by $$Q\left(u(x)\right)_{\nu}=1-{\mathbf {E}}_\nu \left[1-\prod_i u_{\sigma_i}(x-z_i)\right].$$ Then, just as in the one-type case, when $u^{(0)}_{\nu}(x)={\mathbf {P}}_{\!\nu} ({{B}^{(0)}}>x)$ induction establishes that $u^{(n)}_{\nu}(x)={\mathbf {P}}_{\!\nu} \left({{B}^{(n)}}>x\right)$. It is perhaps worth noting that in the BRW the index $\nu$ is the starting type, whereas it is the ‘current’ type in @MR2322849. However, this makes no formal difference. Thus, the anomalous spreading phenomenon should be manifest in the BRW, and, given the more restrictive framework, it should be possible to pin down the actual speed there, and hence for the corresponding $Q$ with Heaviside initial data. This is indeed possible. Here the discussion stays with the simplifications already used in looking at the deterministic results. Consider a two-type BRW in which each type $\nu$ particle always produces at least one daughter of type $\nu$, on average produces more than one, and can produce daughters of type $\eta$—but type $\eta$ never produces daughters of type $\nu$. Also for $\theta \geq 0$ let $$\kappa_{\nu}(\theta)=\log \int e^{\theta z} {\mathbf {E}}_\nu Z_\nu(dz) \mbox{~~and~~}\kappa_{\eta}(\theta)=\log \int e^{\theta z} {\mathbf {E}}_\eta Z_\eta(dz)$$ and let these be infinite for $\theta < 0$.
Thus Theorem \[second theorem\] applies to type $\nu$ considered alone to show that $$\frac{1}{n}\log \left(Z^{(n)}_\nu[na, \infty) \right) \rightarrow - {{\kappa}^{\ast \!\!\!\circ}}_\nu(a) \mbox{~~~a.s.-}{\mathbf {P}}_{\!\nu}.$$ It turns out that this estimation of numbers is critical in establishing the speed for type $\eta$. It is possible for the growth in numbers of type $\nu$, through the numbers of type $\eta$ they produce, to increase the speed of type $\eta$ from that of a population without type $\nu$. This is most obvious if type $\eta$ is subcritical, so that any line of descent from a type $\eta$ is finite, for the only way they can then spread is through the ‘forcing’ from type $\nu$. However, if in addition the dispersal distribution at reproduction for $\eta$ has a much heavier tail than that for $\nu$ it is now possible for type $\eta$ to spread faster than type $\nu$. For any two functions $f$ and $g$, let ${{\mathfrak C}[f,g]}$ be the greatest ([lower semi-continuous]{}) convex function beneath both of them. The following result is a very special case of those proved in @JDB-anom. The formula given in the next result for the speed $\Gamma^{\dagger}$ is the same as that given in @MR2322849 [Proposition 4.1] as the upper bound on the speed of the truncated linear approximation. \[prelim main theorem\] Suppose that $\max\{\kappa_{\nu}(\phi_\nu), \kappa_{\eta}(\phi_\eta)\}$ is finite for some $\phi_\eta \geq \phi_\nu >0 $ and that $$\label{off-diag} \int e^{\theta z} {\mathbf {E}}_\nu Z_\eta(dz)<\infty ~~~\forall \theta \geq 0.$$ Let $r={{{{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}}^{{\circ}}}$. Then $$\label{key result} \frac{1}{n} \log \left(Z^{(n)}_{\eta}[na,\infty) \right) \rightarrow -r(a) \mbox{~~~a.s.-}{\mathbf {P}}_{\!\nu},$$ for $a \neq \sup\{a: r(a)<0\}=\Gamma^{\dagger}$, and $$\label{seq speed} \frac{{{B}^{(n)}_{\eta}}}{n} \rightarrow \Gamma^{\dagger}
\mbox{~~~a.s.-}{\mathbf {P}}_{\!\nu}.$$ Furthermore, $$\label{two classes} \Gamma^{\dagger} = \inf_{0<\varphi \leq \theta} \max\left\{\frac{\kappa_\nu(\varphi)} {\varphi},\frac{\kappa_\eta(\theta)} {\theta} \right\}.$$ From this result it is possible to see how $ \Gamma^{\dagger}$ can be anomalous. Suppose that $r( \Gamma^{\dagger})=0$, so that $ \Gamma^{\dagger}$ is the speed, and that $r$ is strictly below both ${{\kappa}^{\ast \!\!\!\circ}}_\nu$ and ${{\kappa}^{\ast}}_\eta$ at $ \Gamma^{\dagger}$. This will occur when the minimum of the two convex functions ${{\kappa}^{\ast}}_\nu$ and ${{\kappa}^{\ast}}_\eta$ is not convex at $\Gamma^{\dagger}$, and then the largest convex function below both will be linear there. In these circumstances, ${{\kappa}^{\ast}}_\nu(\Gamma^{\dagger})>0$, which implies that ${{\kappa}^{\ast \!\!\!\circ}}_\nu(\Gamma^{\dagger})=\infty$, and ${{\kappa}^{\ast}}_\eta(\Gamma^{\dagger})>0$. Thus $\Gamma^{\dagger}$ will be strictly greater than both $\Gamma_{\nu}$ and $\Gamma_{\eta}$, giving a ‘super-speed’—Figure \[ff1\] illustrates a case that will soon be described fully where $\Gamma_{\nu}$ and $\Gamma_{\eta}$ are equal and $\Gamma^{\dagger }$ exceeds them. Otherwise, that is when $\Gamma^{\dagger}$ is not in a linear portion of $r$, $\Gamma^{\dagger}$ is just the maximum of $\Gamma_{\nu}$ and $\Gamma_{\eta}$. The example in @MR2322849 that illustrated anomalous speed was derived from coupled reaction-diffusion equations. When there is a branching interpretation, which it must be said will be the exception not the rule, the actual speed can be identified through Theorem \[prelim main theorem\] and its generalisations. This will now be illustrated with an example. Suppose type $\eta$ particles form a binary branching Brownian motion, with variance parameter and splitting rate both one. 
Suppose type $\nu$ particles form a branching Brownian motion, but with variance parameter $V$, splitting rate $\lambda$ and, on splitting, type $\nu$ particles produce a (random) family of particles of both types. There are $1+N_\nu$ of type $\nu$ and $N_\eta$ of type $\eta$, so that the family always contains at least one daughter of type $\nu$; the corresponding bivariate probability generating function is ${\mathbf {E}}a^{1+N_\nu}b^{N_\eta}=af(a,b)$. Let $v(x,t)={\mathbf {P}}_\eta({{B}^{(t)}} \leq x)$ and $w(x,t)={\mathbf {P}}_\nu({{B}^{(t)}} \leq x)$. These satisfy $$\begin{aligned} \frac{\partial v}{\partial t}&=\frac{1}{2}\frac{\partial ^{2}v}{\partial x^{2} }-v(1-v),\\ \frac{\partial w}{\partial t}&=V\frac{1}{2}\frac{\partial ^{2}w}{\partial x^{2} }-\lambda w(1-f(w,v)).\end{aligned}$$ Here, when the initial ancestor is of type $\nu$ and at the origin the initial data are $w(x,0)=1$ for $x \geq 0$ and 0 otherwise and $v(x,0) \equiv 0$. Note that, by a simple change of variable, these can be rewritten as equations in ${\mathbf {P}}_\eta({{B}^{(t)}} > x)$ and ${\mathbf {P}}_\nu({{B}^{(t)}} > x)$ where the differential parts are unchanged, but the other terms look rather different. Now suppose that $af(a,b)=a^2 (1-p+p b)$, so that a type $\nu$ particle always splits into two type $\nu$ and with probability $p$ also produces one type $\eta$. Looking at the discrete skeleton at integer times, $ \kappa_\nu(\theta)=V\theta^2/2+\lambda $ for $\theta \geq 0$, giving $${{\kappa}^{\ast}}_\nu(a)=\left\{\begin{array}{ll} -\lambda &a < 0 \\ \displaystyle -\lambda+ \frac{1}{2} \frac{a^2}{V}& a \geq 0 \end{array}\right.$$ and speed $(2V\lambda)^{1/2}$, obtained by solving ${{\kappa}^{\ast}}_\nu(a)=0$. The formulae for ${{\kappa}^{\ast}}_\eta$ are just the special case with $V=\lambda=1$. Now, for convenience, take $V=\lambda^{-1}$, so that both types, considered alone, have the same speed. 
Then, sweeping positive values to infinity, $${{\kappa}^{\ast \!\!\!\circ}}_\nu(a)= \left\{\begin{array}{ll}-\lambda &a< 0, \\ \displaystyle -\lambda\left(1- \frac{a^2}{2}\right)& a \in [0, 2^{1/2}],\\ \infty& a > 2^{1/2}.\end{array}\right.$$ Now ${{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}$ is the largest convex function below this and ${{\kappa}^{\ast}}_\eta$. When $\lambda=3$ these three functions are drawn in Figure \[ff1\]. ![Illustration of how anomalous speed arises. []{data-label="ff1"}](fullplot) The point where each of them meets the horizontal axis gives the value of speed for that function. Thus, $\Gamma^{\dagger}$ exceeds the other two, which are both $\sqrt{2}$. Here $\Gamma^{\dagger}=4/\sqrt{6}$. In general, for $\lambda>1$, it is $(1+\lambda)/\sqrt{2 \lambda}$, which can be made arbitrarily large by increasing $\lambda$ sufficiently. Discussion of anomalous spreading --------------------------------- The critical function in Theorem \[prelim main theorem\] is $r= {{{{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}}^{{\circ}}}$. Here is how it arises. The function ${{\kappa}^{\ast \!\!\!\circ}}_\nu$ describes the growth in numbers and spread of the type $\nu$. Conditional on these, ${{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}$ describes the growth and spread in expectation of those of type $\eta$. To see why this might be so, take a $b$ with ${{\kappa}^{\ast \!\!\!\circ}}_\nu(b)<0$ so that (\[describe numbers\]) describes the exponential growth of $Z^{(m)}_\nu[mb, \infty)$: there are roughly $\exp( -m{{\kappa}^{\ast \!\!\!\circ}}_\nu(b))$ such particles in generation $m$. Suppose now, for simplicity, that each of these produces a single particle of type $\eta$ at the parent’s position. 
As noted just before Theorem \[second theorem\], the expected numbers of type $\eta$ particles in generation $r$ and in $[rc,\infty)$ descended from a single type $\eta$ at the origin are roughly $\exp(-r {{\kappa}^{\ast}}_\eta(c))$. Take $\lambda \in (0,1)$ with $m=\lambda n$ and $r=(1-\lambda)n$. Then, conditional on the development of the first $m$ generations, the expectation of the numbers of type $\eta$ in generation $n$ and to the right of $mb+rc=n (\lambda b+(1-\lambda) c)$ will be (roughly) at least $\exp(-n(\lambda {{\kappa}^{\ast \!\!\!\circ}}_\nu(b)+(1-\lambda) {{\kappa}^{\ast}}_\eta(c))) $. As $b$, $c$ and $\lambda$ vary with $\lambda b+(1-\lambda) c=a$, the least value for $\lambda {{\kappa}^{\ast \!\!\!\circ}}_\nu(b)+(1-\lambda) {{\kappa}^{\ast}}_\eta(c)$ is given by ${{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}(a)$. There is some more work to do to show that this lower bound on the conditional expected numbers is also an upper bound—it is here that (\[off-diag\]) comes into play. Finally, as indicated just before Theorem \[second theorem\], this corresponds to actual numbers only when negative, so the positive values of this convex minorant are swept to infinity. When the speed is anomalous, this indicative description of how $r= {{{{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}}^{{\circ}}}$ arises makes plausible the following description of lines of descent with speed near $\Gamma^{\dagger}$. They will arise as a ‘dog-leg’, with the first portion of the trajectory, which is a fixed proportion of the whole, being a line of descent of type $\nu$ with a speed less than $\Gamma_{\nu}$. The remainder is a line of descent of type $\eta$, with a speed faster than $\Gamma_{\eta}$ (and also than $\Gamma^{\dagger}$).
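The two-stage bound can be checked numerically in the example with splitting rate $3$ (a sketch with assumed grid sizes; $w$ is written for the proportion of generations spent as type $\nu$, to avoid a clash with the splitting rate): minimising $w\,\kappa^{\ast\circ}_\nu(b)+(1-w)\,\kappa^{\ast}_\eta(c)$ subject to $wb+(1-w)c=\Gamma^{\dagger}=4/\sqrt{6}$ gives a minimum of (numerically) zero, attained with a slow type-$\nu$ leg and a fast type-$\eta$ leg:

```python
import math

lam = 3.0                                  # splitting rate of type nu (V = 1/lam)
GAMMA = (1.0 + lam) / math.sqrt(2.0 * lam) # = 4/sqrt(6), the anomalous speed

def kstar_nu_swept(b):                     # kappa*°_nu: positives swept to infinity
    if b < 0.0:
        return -lam
    return -lam * (1.0 - 0.5 * b * b) if b <= math.sqrt(2.0) else float('inf')

def kstar_eta(c):                          # kappa*_eta for V = lam = 1
    return -1.0 if c < 0.0 else -1.0 + 0.5 * c * c

best, arg = float('inf'), None
G = 400
for i in range(1, G):                      # w: proportion spent as type nu
    w = i / G
    for j in range(G + 1):                 # b: speed of the type-nu leg
        b = math.sqrt(2.0) * j / G
        c = (GAMMA - w * b) / (1.0 - w)    # forced speed of the type-eta leg
        val = w * kstar_nu_swept(b) + (1.0 - w) * kstar_eta(c)
        if val < best:
            best, arg = val, (w, b, c)
print(best, arg)   # minimum near 0, attained near w = 1/2, b = sqrt(2/3), c = sqrt(6)
```

The optimal leg speeds found this way are the two tangency points of the linear segment of the convex minorant, with $b<\Gamma_{\nu}=\sqrt{2}$ and $c>\Gamma^{\dagger}$.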
Without the truncation, the linear operator approximating (near $u\equiv1$) a $Q$ associated with a BRW describes the development of its expected numbers, and so it is tempting to define the speed using this, by looking at when expected numbers start to decay. In the irreducible case, Theorem \[supercrit\] has an analogue for expected numbers, that $$\frac{1}{n}\log \left({\mathbf {E}}_{\nu} Z^{(n)}_\sigma[na, \infty) \right) \rightarrow - {{\kappa}^{\ast}}(a),$$ and so here the speed can indeed be found by looking at when expected numbers start to decay. In contrast, in the set up in Theorem \[prelim main theorem\] $$\frac{1}{n} \log \left({\mathbf {E}}_{\nu} Z^{(n)}_{\eta}[na,\infty) \right) \rightarrow -{{\mathfrak C}[{{\kappa}^{\ast}}_\nu,{{\kappa}^{\ast}}_\eta]}(a),$$ and the limit here can be lower than ${{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}$—the distinction between the functions is whether or not positive values are swept to infinity in the first argument. Hence the speed computed by simply asking when expectations start to decay can be too large. In Figure \[ff1\], ${{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\nu,{{\kappa}^{\ast}}_\eta]}$ is the same as ${{\mathfrak C}[{{\kappa}^{\ast}}_\nu,{{\kappa}^{\ast}}_\eta]}$, but it is easy to see, reversing the roles of ${{{\kappa}^{\ast}}_\nu}$ and ${{{\kappa}^{\ast}}_\eta}$, that ${{\mathfrak C}[{{\kappa}^{\ast \!\!\!\circ}}_\eta,{{\kappa}^{\ast}}_\nu]}$ is the same as ${{\kappa}^{\ast}}_\nu$. Thus if $\eta$ could produce $\nu$, rather than the other way round, expectations would still give the speed $\Gamma^{\dagger}$ but the true speed would be $\Gamma_\nu (=\Gamma_\eta)$. The general case, with many classes, introduces a number of additional challenges (mathematical as well as notational). It is discussed in @JDB-anom. 
The matrix of transforms now has irreducible blocks on its diagonal, corresponding to the classes, and their Perron–Frobenius eigenvalues supply the $\kappa$ for each class, as would be anticipated from §\[multitype\]. Here is a flavour of some of the other complications. The rather strong condition (\[off-diag\]) means that the spatial distribution of type $\eta$ daughters to a type $\nu$ mother is irrelevant to the form of the result. If convergence is assumed only for some $\theta>0$ rather than for all, this need not remain true. One part of the challenge is to describe when these ‘off-diagonal’ terms remain irrelevant; another is to say what happens when they are not. If there are various routes through the classes from the initial type to the one of interest these possibilities must be combined: in these circumstances, the function $r$ in (\[key result\]) need not be convex (though it will be increasing). It turns out that the formula for $\Gamma^{\dagger}$, which seems as if it might be particular to the case of two classes, extends fully—not only in the sense that there is a version that involves more classes, but also in the sense that the speed can usually be obtained as the maximum of that obtained using (\[two classes\]) for all pairs of classes where the first can have descendants in the second (though the line of descent may have to go through other classes on the way). [^1]: Subsequently, @MR806224 established the theorem under weaker postulates.
---
abstract: 'Most multi-sided transfinite surfaces require cross-derivatives at the boundaries. Here we show a general $n$-sided patch that interpolates all boundaries based on only positional information. The surface is a weighted sum of $n$ Coons patches, using a parameterization based on Wachspress coordinates.'
author:
- |
  Péter Salvi\
  Budapest University of Technology and Economics
bibliography:
- 'sajat.bib'
- 'cikkek.bib'
title: |
  A multi-sided generalization\
  of the $C^{0}$ Coons patch
---

Introduction
============

Filling an $n$-sided hole with a multi-sided surface is an important problem in CAGD. Usually the patch should connect to the adjacent surfaces with at least $G^{1}$ continuity, but in some applications only positional ($C^{0}$) continuity is needed, and normal vectors or cross-derivatives at the boundary curves are not available. For $n=4$, the $C^{0}$ Coons patch [@Coons:1967] solves this problem; in this paper we show how to generalize it to any number of sides.

Previous work
=============

Most transfinite surface representations in the literature assume $G^{1}$ constraints, and the patch equations make use of the fixed cross-derivatives at the boundary. This can be circumvented by generating a *normal fence* automatically, e.g. with a rotation minimizing frame [@Wang:2008]; however, in a $C^{0}$ setting this is overkill, and simpler methods exist. One well-known solution is the harmonic surface, which creates a “soap film” filling the boundary loop by solving the harmonic equation on a mesh with fixed boundaries. This, however, minimizes the total area of the surface, which often has unintuitive results; see an example in Section \[sec:Examples\]. The basic idea of the proposed method, i.e., to define the surface as the weighted sum of $n$ Coons patches, each interpolating three consecutive sides, is the same as in the CR patch [@Salvi:2014].
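For reference, here is a minimal sketch of the classical four-sided $C^{0}$ Coons patch that this paper generalizes (the boundary curves are made-up examples chosen to share corner points): the patch is the sum of two lofted surfaces minus the bilinear interpolant of the corners, and it reproduces all four boundaries exactly:

```python
# Classical C^0 Coons patch for n = 4 (a sketch; the boundary curves are
# made-up examples, chosen so that adjacent curves share corner points).
def cb(u): return (u, 0.0, u * (1.0 - u))   # bottom boundary
def ct(u): return (u, 1.0, 0.0)             # top boundary
def cl(v): return (0.0, v, 0.0)             # left boundary
def cr(v): return (1.0, v, 0.0)             # right boundary

def coons(u, v):
    P00, P10, P01, P11 = cb(0.0), cb(1.0), ct(0.0), ct(1.0)  # shared corners
    return tuple(
        (1.0 - v) * cb(u)[k] + v * ct(u)[k]        # lofted surface in v
        + (1.0 - u) * cl(v)[k] + u * cr(v)[k]      # lofted surface in u
        - ((1.0 - u) * (1.0 - v) * P00[k] + u * (1.0 - v) * P10[k]
           + (1.0 - u) * v * P01[k] + u * v * P11[k])  # bilinear correction
        for k in range(3))

# The patch interpolates all four boundary curves (C^0 interpolation):
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert all(abs(a - b) < 1e-12 for a, b in zip(coons(t, 0.0), cb(t)))
    assert all(abs(a - b) < 1e-12 for a, b in zip(coons(t, 1.0), ct(t)))
    assert all(abs(a - b) < 1e-12 for a, b in zip(coons(0.0, t), cl(t)))
    assert all(abs(a - b) < 1e-12 for a, b in zip(coons(1.0, t), cr(t)))
print(coons(0.5, 0.5))
```

The ribbons of the multi-sided construction below are patches of exactly this form, one per side, reparameterized and blended together.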
The multi-sided $C^{0}$ Coons patch =================================== Let $C_{i}(t):[0,1]\to\mathbb{R}^{3}$ denote the $i$-th boundary curve. Let us also assume $C_{i}(1)=C_{i+1}(0)$ for all $i$ (with circular indexing). Then the *ribbon* $R_{i}$ is defined as a $C^{0}$ Coons patch interpolating $C_{i-1}$, $C_{i}$, $C_{i+1}$, and $C_{i}^{\mathrm{opp}}$ [–]{} a cubic curve fitted onto the initial and (negated) end derivatives of sides $i+2$ and $i-2$, respectively (see Figure \[fig:Construction-ribbon\]). ![\[fig:Construction-ribbon\]Construction of a four-sided Coons ribbon.](ribbon){width="0.4\columnwidth"} Formally, $$\begin{aligned} R_{i}(s_{i},d_{i}) & =(1-d_{i})C_{i}(s_{i})+d_{i}C_{i}^{\mathrm{opp}}(1-s_{i})\nonumber \\ & +(1-s_{i})C_{i-1}(1-d_{i})+s_{i}C_{i+1}(d_{i})\nonumber \\ & -\left[\begin{array}{c} 1-s_{i}\\ s_{i} \end{array}\right]^{\intercal}\left[\begin{array}{cc} C_{i}(0) & C_{i-1}(0)\\ C_{i}(1) & C_{i+1}(1) \end{array}\right]\left[\begin{array}{c} 1-d_{i}\\ d_{i} \end{array}\right],\end{aligned}$$ where $C^{\mathrm{opp}}$ is defined as the Bézier curve[^1] determined by the control points $$\begin{aligned} P_{0} & =C_{i+1}(1), & P_{1} & =P_{0}+\frac{1}{3}C_{i+2}'(0),\\ P_{2} & =P_{3}-\frac{1}{3}C_{i-2}'(1), & P_{3} & =C_{i-1}(0).\end{aligned}$$ The surface is defined over a regular $n$-sided polygon. The Wachspress coordinates of a domain point $p$ are defined as $$\lambda_{i}=\lambda_{i}(p)=\frac{\prod_{j\neq i-1,i}D_{j}(p)}{\sum_{k=1}^{n}\prod_{j\neq k-1,k}D_{j}(p)},$$ where $D_{i}(p)$ is the perpendicular distance of $p$ from the edge of the domain polygon. Ribbon parameterization is based on these generalized barycentric coordinates: $$\begin{aligned} d_{i} & =d_{i}(u,v)=1-\lambda_{i-1}-\lambda_{i}, & s_{i} & =s_{i}(u,v)=\frac{\lambda_{i}}{\lambda_{i-1}+\lambda_{i}}.\label{eq:sd}\end{aligned}$$ It is easy to see that $s_{i},d_{i}\in[0,1]$, and that $d_{i}$ has the following properties: 1. $d_{i}=0$ on the $i$-th side. 2. 
$d_{i}=1$ on the “far” sides (all sides except $i-1$, $i$ and $i+1$). 3. $d_{i-1}+d_{i+1}=1$ on the $i$-th side. Finally, we define the patch as $$S(p)=\sum_{i=1}^{n}R_{i}(s_{i},d_{i})B_{i}(d_{i}),$$ where $B_{i}$ is the blending function $$B_{i}(d_{i})=\frac{1-d_{i}}{2}.$$ The interpolation property is satisfied due to the properties of $d_{i}$ mentioned above. (Note: $s_{i}$ in Eq. (\[eq:sd\]) cannot be evaluated when $d_{i}=1$, but at these locations the weight $B_{i}(d_{i})$ also vanishes.) \[sec:Examples\]Examples ======================== Figure \[fig:Comparison\] shows a comparison with the harmonic surface, which [–]{} due to its area minimizing property [–]{} results in an unnaturally flat patch. Figure \[fig:pocket\] shows a model with 5 patches: two 3-sided, one 4-sided, one 5-sided, and one 6-sided. The mean curvature map and contouring both show good surface quality. Conclusion {#conclusion .unnumbered} ========== We have defined a natural generalization of the $C^{0}$ Coons patch [–]{} a lightweight and efficient multi-sided surface representation, applicable when only positional data is available. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the Hungarian Scientific Research Fund (OTKA, No. 124727). [^1]: Except for $n=3$, where $C^{\mathrm{opp}}$ degenerates to the point $C_{i+1}(1)$.
---
title: Supplementary Information
---

Bunching or Anti-Bunching
=========================

The quantity $R$ depends on the ratio of the nonlocal number fluctuations, $\delta n_i \delta n_j={\left\langle n_i n_j \right\rangle}-{\left\langle n_i \right\rangle}{\left\langle n_j \right\rangle}$ between sites $i$ and $j$, to the local number fluctuations $\delta n_i^2=\delta n_i \delta n_i$. Specifically, $$R=\frac{\kappa_i}{\delta n_i^2}=\beta\left(1+\frac{\sum_{j\neq i}\delta n_i \delta n_j}{V\delta n_i^2}\right). \label{Req}$$ At high temperatures, the nonlocal number fluctuations are vanishingly small and $R\approx\beta$. Any finite nonlocal number fluctuations cause $R$ to deviate from $\beta$. Boson statistics favor bunching, or positively correlated nonlocal number fluctuations, and this bunching behavior is observed in $R$ in the Bose-Hubbard model at very low densities. Bunching implies the second term in equation (\[Req\]) is positive and $R>\beta$, as shown in Fig. \[bunching\]. ![Boson Bunching due to Statistics at Low Density. \[bunching\] The main plot shows $R$ (circles) becomes larger than $\beta$ (dashed line) at low temperatures. The inset plots the density (circles) over the same temperature range. The parameters are $t/U=0.15$, $\mu/U=-0.2$ and the error in $\beta^*$ is indicated by the horizontal bar. Note that this system does not become superfluid at the temperatures probed. ](Bunching){width="40.00000%"} Anti-bunching arises from the repulsive interactions in the BHM. Interactions energetically favor negatively correlated nonlocal number fluctuations and overwhelm the statistical tendency to bunch at finite density ($\gtrsim0.1$) and at $t/U$ near the critical $(t/U)_c$. This implies the second term in equation (\[Req\]) is negative and $R<\beta$ at low temperatures. This is observed to be the case in any quantum phase near $(t/U)_c$ and is exemplified in Fig. \[antibunching\]. ![Interactions cause Bosons to Anti-Bunch at Moderate Densities.
\[antibunching\] The main plot shows $R$ (circles) becomes smaller than $\beta$ (dashed line) at low temperatures when the system enters a quantum phase (superfluid in this case). The large error in $\beta_{max}$ (horizontal bar) is characteristic of the superfluid in the presence of large interactions, where the peak in $R$ is difficult to resolve. The inset shows density (circles) as a function of $\beta$. The parameters are $t/U=0.15$, $\mu/U=0.05$. ](AntiBunching){width="40.00000%"}

The Particle-Hole Gap
=====================

The particle-hole gap $\Delta_{ph}$ is the minimal energy required to either insert or remove a particle from a Mott state. This energy scale can be extracted from $R$ by fitting its decay from the peak to zero with the simple exponential form $e^{-\beta\Delta_{ph}}$. This form also describes the decay of the compressibility $\kappa_i$ as a function of $\beta$ within the Mott state. We compare the extracted $\Delta_{ph}$ with $T=0$ QMC results from Capogrosso-Sansone, [*et al.*]{}, in Fig. \[phgap\]. ![Extracting the Particle-Hole Gap from $R$. \[phgap\] The right-hand column shows $R$ (circles) at various $\mu/U$ values and the exponential fit to extract $\Delta_{ph}$ (solid line). Plotted in the left-hand figure is the extracted $\Delta_{ph}$ (circles) and the results from $T=0$ QMC calculations by Capogrosso-Sansone, [*et al.*]{} ](MIphgap){width="40.00000%"}

The Fluctuation-Dissipation Theorem
===================================

The fluctuation-dissipation theorem (FDT) relates the imaginary part of the generalized susceptibility to the correlation function. For completeness, we discuss (a) the salient points in the derivation of the quantum FDT; (b) the static structure factor; (c) the high temperature limit of the quantum FDT and its reduction to the classical FDT; and (d) we show that for a conserved quantity, the classical FDT expression is valid even in the quantum regime, which may seem somewhat surprising.
And lastly, as a test, we provide the compressibility and the number fluctuations of the ideal gas in the high temperature limit.

The Quantum Fluctuation-Dissipation Theorem
-------------------------------------------

Consider a quantum system of volume $\Omega$ defined by a time-independent Hamiltonian $\hat{H}$ with many body states and energies $\hat{H}{\left|\Psi_n\right>}=\epsilon_n{\left|\Psi_n\right>}$, perturbed by a probe $\hat{H}^\prime(t)$. We assume that the external perturbation $F_A(t)$ couples to an operator $\hat A$ of the system via $\hat{H}^\prime(t)=-\hat{A}F_A(t)$. For a spatially varying probe, $\hat{H}^\prime(t)=-\int d{\bf r}\,\hat{A}({\bf r}) F_A({\bf r}, t )=-{1\over \Omega} \sum_{\bf q} \hat{A}_{-\bf q}F_{\bf q} (t)$. For example, in our case $\hat{A}({\bf r})$ is the density operator ${\hat n}({\bf r})={\hat \Psi}^\dagger({\bf r}) {\hat \Psi}({\bf r})={1\over \Omega}\sum_{\bf q} e^{i {\bf q}\cdot{\bf r}} {\hat n}({\bf q})$ where ${\hat n}({\bf q})=\sum_{\bf k}{\hat a}^\dagger_{{\bf k}-{\bf q}/2} {\hat a}_{{\bf k}+{\bf q}/2}$. The response $\langle {\hat B}\rangle$ to linear order in the perturbation is $\langle {\hat B}\rangle (t) = \int_{-\infty}^{\infty} dt^\prime \chi_{BA}(t-t^\prime)F_A(t^\prime)$ where $\chi_{BA}(t-t')=i\theta(t-t'){\left\langle \left[\hat{B}(t),\hat{A}(t^\prime)\right] \right\rangle}$. By using a spectral representation in terms of exact eigenstates of $\hat{H}$ and the Heisenberg representation of the time dependent operators, we obtain $$\chi_{BA}({\bf q},\omega)= \frac{1}{\Omega}\sum_{m,n}\frac{e^{-\beta\epsilon_m}}{\mathcal{Z}}\left[ { {(A_{\bf -q})_{mn}(B_{\bf q})_{nm}} \over {\omega+i\eta + \epsilon_{nm}} } - { {(B_{\bf q})_{mn}(A_{\bf -q})_{nm}} \over {\omega+i\eta - \epsilon_{nm}} } \right] \label{fullchi}$$ where $\epsilon_{nm}=\epsilon_n-\epsilon_m$ and $\eta=0^+$ is a small positive number to ensure convergence as $t\rightarrow \infty$.
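As an aside, the spectral sums of this subsection can be sanity-checked on a toy model (made-up numbers, with the volume factor $\Omega$ set to $1$): working directly in the energy eigenbasis with a Hermitian $\hat{A}=\hat{B}$, the delta-function weight of $\chi''$ at each transition frequency $\epsilon_{nm}$ is $\pi(p_m-p_n)|A_{mn}|^2$ (the two terms of the spectral sum combined), while the correlation function carries weight $2\pi p_m|A_{mn}|^2$ there; their ratio is $\frac{1}{2}(1-e^{-\beta\omega})$, which is the quantum fluctuation-dissipation relation this section derives.

```python
import math

beta = 1.3
eps = [0.0, 0.7, 1.9]            # energy eigenvalues (made-up example)
A = [[0.2, 0.5, 0.1],            # matrix elements A_mn in the eigenbasis
     [0.5, -0.3, 0.4],           # (real symmetric, so A is Hermitian)
     [0.1, 0.4, 0.6]]
Z = sum(math.exp(-beta * e) for e in eps)
p = [math.exp(-beta * e) / Z for e in eps]   # Boltzmann weights e^{-beta*eps}/Z

for m in range(3):
    for n in range(3):
        if m == n:
            continue
        omega = eps[n] - eps[m]
        # S carries weight 2*pi*p_m*|A_mn|^2 at omega = eps_n - eps_m;
        # chi'' carries pi*(p_m - p_n)*|A_mn|^2 there.
        S_w = 2.0 * math.pi * p[m] * A[m][n] ** 2
        chi_w = math.pi * (p[m] - p[n]) * A[m][n] ** 2
        assert abs(chi_w - 0.5 * (1.0 - math.exp(-beta * omega)) * S_w) < 1e-12
print("FDT weight identity holds at every transition frequency")
```

The identity holds transition by transition because $p_n = p_m e^{-\beta\epsilon_{nm}}$, which is all that detailed balance requires.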
The well-known identity $\displaystyle\lim_{\eta\rightarrow 0^+} {{1}\over {x\pm i\eta}} = P({1\over x}) \pm i\pi \delta (x)$ yields the imaginary part of the response function, $$\chi''_{BA}({\bf q},\omega)=\frac{\pi}{\Omega}\sum_{m,n}\frac{e^{-\beta\epsilon_m}}{\mathcal{Z}}\left[(A_{-{\bf q}})_{mn}(B_{{\bf q}})_{nm}\delta(\omega-\epsilon_{nm})-(B_{{\bf q}})_{mn}(A_{-{\bf q}})_{nm}\delta(\omega+\epsilon_{nm})\right]. \label{imchi}$$ Next we consider the corresponding correlation function typically measured in a scattering experiment, defined by $$\begin{aligned} S_{BA}({\bf r},t;{\bf r}^\prime, t^\prime) & ={\left\langle \hat{B}({\bf r}, t)\hat{A}({\bf r}^\prime, t^\prime) \right\rangle}\\ S_{BA}({\bf q},t-t^\prime) & =\frac{1}{\Omega}{\left\langle \hat{B}_{\bf q}(t)\hat{A}_{-{\bf q}}(t^\prime) \right\rangle} \label{correlation}\end{aligned}$$ for a translationally invariant system. Using the spectral representation, we obtain $$S_{BA}({\bf q},\omega)=\frac{2\pi}{\Omega}\sum_{m,n}\frac{e^{-\beta\epsilon_m}}{\mathcal{Z}}(B_{\bf q})_{mn}(A_{-{\bf q}})_{nm}\delta(\omega-\epsilon_{nm}). \label{corr2}$$ By exchanging the indices in the second term in Eq.\[imchi\], we obtain the quantum fluctuation-dissipation theorem (QFDT) $$\begin{aligned} \chi''_{BA}({\bf q},\omega)&=(1-e^{-\beta\omega})\,\frac{\pi}{\Omega}\sum_{m,n}\frac{e^{-\beta\epsilon_m}}{\mathcal{Z}}(B_{\bf q})_{mn}(A_{-{\bf q}})_{nm}\delta(\omega-\epsilon_{nm})\\ &=\frac{1}{2}\left(1-e^{-\beta\omega}\right)S_{BA}({\bf q},\omega). \label{QFDT}\end{aligned}$$

### Static Structure Factor

The static structure factor is defined by $$\begin{aligned} S_{BA}({\bf q})\equiv S_{BA}({\bf q},t=0)&=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\, S_{BA}({\bf q},\omega)\\ &=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\frac{2\,\chi''_{BA}({\bf q},\omega)}{1-e^{-\beta\omega}}.\end{aligned}$$
Using the oddness property $\chi''_{BA}(-\omega)=-\chi''_{BA}(\omega)$ yields $$S_{BA}({\bf q})={1\over \Omega}{\left\langle \hat{B}_{\bf q}\hat{A}_{-{\bf q}} \right\rangle} =\int_{0}^{\infty}\frac{d\omega}{\pi}\coth\left(\frac{\beta\omega}{2}\right)\chi''_{BA}({\bf q},\omega).$$

### The High Temperature Limit of the QFDT

At temperatures $k_BT\gg \hbar \omega$ larger than any characteristic frequencies of the system, $\coth\left(\frac{\beta\omega}{2}\right)\rightarrow 2/\beta\omega$ and the static structure factor reduces to $$\begin{aligned} S_{BA}({\bf q})&=\int_{0}^{\infty}\frac{d\omega}{\pi}\,\frac{2}{\beta\omega}\,\chi''_{BA}({\bf q},\omega)\\ &=k_B T\, \chi^\prime_{BA}({\bf q},\omega=0) \label{highTlimit}\end{aligned}$$ where we have used the Kramers-Krönig relation $\chi^\prime_{BA}({\bf q},\omega)=P\int_{-\infty}^{\infty}\frac{d\omega^\prime}{\pi}\frac{\chi''_{BA}(q,\omega^\prime)}{\omega^\prime -\omega}$ to relate the real and imaginary parts of the response function. Since at $\omega=0$ the imaginary part is zero, we can replace $\chi^\prime$ by simply $\chi$.

The QFDT for Conserved Quantities
---------------------------------

We next use (i) the definition of the correlation function for a conserved quantity, (ii) the quantum FDT, and (iii) the Kramers-Krönig relation to finally derive $\chi_{AA}(q\rightarrow0,\omega=0)=\beta S_{AA}(q\rightarrow0,t=0)$. The derivation is detailed below. A conserved quantity ${\hat A}({\bf q=0})\equiv A_0$ such as ${\hat n}({\bf q=0})=\sum_{\bf k}{\hat a}^\dagger_{\bf k} {\hat a}_{\bf k}=N$, the total number of particles, commutes with the Hamiltonian, $\left[\hat{A}_0,\hat{H}\right]=0$. This implies that the matrix element $\langle \Psi_m \mid \left[\hat{A}_0,\hat{H}\right] \mid \Psi_n\rangle =0$ or equivalently $(\epsilon_n-\epsilon_m) (A_0)_{m,n}=0$.
If $m\neq n$, we must have $(A_0)_{m,n}=0$ which results in $$\lim_{\omega\rightarrow 0} \lim_{{\bf q}\rightarrow 0} \chi_{AA}({\bf q},\omega)= \lim_{\omega\rightarrow 0} \frac{2}{\Omega}\sum_{m,n}\frac{e^{-\beta\epsilon_m}}{\mathcal{Z}}\left[ \frac{\epsilon_{nm} \mid (A_0)_{mn}\mid^2}{(\omega+i\eta)^2 - \epsilon^2_{nm}}\right] =0$$ using Eq. \[fullchi\]. Thus $ \chi_{AA}({\bf q}=0,\omega\rightarrow 0)=0$ if $A({\bf q}=0)$ is a conserved quantity. However, the situation is totally different if we change the order of limits. $$\lim_{{\bf q}\rightarrow 0} \lim_{\omega\rightarrow 0} \chi_{AA}({\bf q},\omega)= \lim_{{\bf q}\rightarrow 0} \frac{2}{\Omega}\sum_{m,n}\frac{e^{-\beta\epsilon_m}}{\mathcal{Z}}\left[ { { \epsilon_{nm} \mid (A_{\bf q})_{mn}\mid^2} \over {(\omega+i\eta)^2 - \epsilon^2_{nm} } } \right] = n^2 \kappa \label{above}$$ For density operators, the last equality in Eq. \[above\] follows from the perturbation ${\hat H}^\prime=-\int d{\bf r} \delta {\hat n}({\bf r}) \delta {\hat \mu}({\bf r},t) =-{1\over \Omega} \sum_{\bf q} \delta {\hat n}_{-{\bf q}} \delta {\hat \mu}_{\bf q}(t)$ which produces a response in the system $$\begin{aligned} \chi_{nn}({\bf q}\rightarrow 0,\omega=0) & = \frac{\partial {\left\langle \delta{\hat n}_{-{\bf q}\rightarrow 0} \right\rangle}}{\partial\, \delta\mu_{{\bf q}\rightarrow 0}}\,(\omega=0)\\ & = \left(\frac{\partial n}{\partial \mu}\right)_{T,\Omega} = n^2 \kappa\end{aligned}$$ where $\kappa$ is the isothermal compressibility. Another way to understand the behavior of QFDT for a conserved quantity ${\left\langle \hat{A}({\bf q=0},t) \right\rangle}$ is to note that it is independent of $t$. In addition the correlator ${\left\langle \hat{A}({\bf q=0},t)\hat{A} ({-{\bf q}=0},t^\prime) \right\rangle}$ is independent of $t-t^\prime$ and hence its Fourier transform must be a delta function in frequency. Thus from Eq. \[correlation\] $$S_{AA}({\bf q}=0,\omega)= 2 \pi \delta(\omega) S_{AA}({\bf q}=0) \label{S-cons}$$ From the quantum FDT Eq. \[QFDT\] we obtain for a conserved quantity whose correlation function is given by Eq.
\[S-cons\], $$\chi''_{AA}({\bf q}=0,\omega)=(1-e^{-\beta\omega}) \pi S_{AA}({\bf q}=0) \delta(\omega)$$ By using the Kramers-Krönig relation we get $$\begin{aligned} \chi_{AA}({\bf q}=0,\omega\rightarrow 0) & = S_{AA}({\bf q}=0)\, P\!\!\int_{-\infty}^{\infty} d\omega^\prime\, \delta(\omega^\prime)\left(\frac{1-e^{-\beta\omega^\prime}}{\omega^\prime}\right)\\ & = \beta\, S_{AA}({\bf q}=0) ={\beta \over \Omega}{\left\langle \hat{A}^2({\bf q}=0) \right\rangle}\end{aligned}$$ For density operators we thus get $$\chi_{nn}({\bf q}=0) = {\beta \over \Omega}{\left\langle \hat{n}^2({\bf q}=0) \right\rangle} = n^2\kappa$$ We would like to stress that while this result may look similar to the high temperature limit of equation (\[highTlimit\]), it is valid in the quantum regime for a conserved quantity.

The Ideal Gas
-------------

We derive the high temperature behavior of the compressibility, $\partial n/\partial\mu$, and number fluctuations $\delta N^2\equiv\langle N^2\rangle - \langle N\rangle^2$ of the ideal gas. From the equation of state $PV=Nk_BT$, we get $$\kappa_T\equiv-\frac{1}{V}{ \left(\frac{\partial V}{\partial P}\right)_{T,N} } \equiv {1\over n^2} \frac{\partial n}{\partial\mu}= \frac{\beta}{n}$$ where $n=N/V$. For an ideal gas, the chemical potential is related to the density by $\beta \mu= -\;{\rm log}\left(\frac{1}{n\lambda_T^d}\right)$, where the thermal deBroglie wavelength $\lambda_T=h/\sqrt{2\pi mk_BT}$. For a fixed $\mu$, the high temperature expansion of $n(\mu,T)$ is $$n=\frac{e^{\beta\mu}}{\lambda_T^d}\sim T^{d/2}\left(1+\frac{\mu}{k_BT}+\frac{1}{2}\left(\frac{\mu}{k_BT}\right)^2\right).$$ which implies that the temperature dependence of the local compressibility is $$n^2\kappa=\frac{\partial n}{\partial\mu}\sim T^{d/2-1}\left(1+\mu\;T^{-1}\right).$$ Using the local fluctuation-dissipation theorem at high temperatures, we find $${\left\langle \delta n^2 \right\rangle}\approx\frac{\partial n}{\partial\mu} T\sim T^{d/2}\left(1+\mu T^{-1}\right),$$ as we might have guessed from a simple Brownian motion or diffusion model of the number fluctuations.
Thus, we find that in 2D the local compressibility $n^2\kappa$ is independent of temperature (note that in the Letter we have absorbed the $n^2$ factor in the definition of $\kappa$), while the number fluctuations ${\left\langle \delta n^2 \right\rangle}$ scale linearly with $T$. In the presence of interactions, both number fluctuations and compressibility deviate from their classical values; however, their ratio $R=n^2\kappa/{\left\langle \delta n^2 \right\rangle}$ continues to be linear in $T^{-1}$ as determined by the fluctuation-dissipation theorem (see Fig. 2 of Letter). Only when quantum effects become important does $R$ exhibit non-trivial behavior.
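The high-temperature ideal-gas relations above are easy to check numerically. The sketch below (assumed units $k_B=h=m=1$, with $d=2$) differentiates $n(\mu)=e^{\beta\mu}/\lambda_T^d$ by central finite differences and confirms $\partial n/\partial\mu=\beta n$, so that $k_BT\,\partial n/\partial\mu=n$, i.e. Poissonian number fluctuations:

```python
import math

def density(mu, T, d=2):
    """Classical ideal-gas density n = e^{beta*mu} / lambda_T^d (k_B = h = m = 1)."""
    lam_T = 1.0 / math.sqrt(2.0 * math.pi * T)   # thermal de Broglie wavelength
    return math.exp(mu / T) / lam_T ** d

T, mu, h = 5.0, -0.4, 1e-6
n = density(mu, T)
dn_dmu = (density(mu + h, T) - density(mu - h, T)) / (2.0 * h)
assert abs(dn_dmu - n / T) < 1e-6    # dn/dmu = beta * n
assert abs(T * dn_dmu - n) < 1e-4    # <delta n^2> = k_B T dn/dmu = n (Poisson)
print(n, T * dn_dmu)
```

In 2D the prefactor $\lambda_T^{-2}\propto T$ makes $n^2\kappa=\partial n/\partial\mu$ temperature-independent at fixed $\mu$ up to the $e^{\mu/k_BT}$ correction, matching the scaling quoted above.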
---
abstract: 'We consider Sturm-Liouville operators $-y''''+v(x)y$ on $[0,1]$ with Dirichlet boundary conditions $y(0)=y(1)=0$. For any $1\le p<\infty$, we give a short proof of the characterization theorem for the spectral data corresponding to $v\in L^p(0,1)$.'
author:
- Dmitry Chelkak
title: |
  An application of the fixed point theorem\
  to the inverse Sturm-Liouville problem
---

\[section\] \[theorem\][**Lemma**]{} \[theorem\][**Corollary**]{} \[theorem\][**Proposition**]{} \[theorem\][*Remark*]{} [^1]

Introduction
============

In this paper we consider the inverse spectral problem for self-adjoint Sturm-Liouville operators $$\label{SLOp} \cL y=-y''+v(x)y,\qquad y(0)=y(1)=0,$$ acting in the Hilbert space $L^2(0,1)$, with $v\in L^p(0,1)$ for some (fixed) $1\!\le\! p\!<\!\infty$ (we denote by $\|v\|_p<\infty$ the standard $L^p$ norm of $v$). The spectrum of $\cL$ is denoted by $$\l_1(v)<\l_2(v)<\l_3(v)<\dots$$ It is purely discrete, simple and satisfies the asymptotics $$\l_n(v)=\pi^2 n^2 + {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)} + \mu_n(v),$$ where $$\textstyle {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)}:= \int_0^1v(t)dt\qquad \mathrm{and} \qquad \mu_n(v)=o(1)\ \ \mathrm{as}\ \ n\to\infty.$$ Note that ${{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)}$ can be immediately reconstructed from the Dirichlet spectrum as the leading term in the asymptotics of $\l_n(v)-\pi^2n^2$. Starting with the famous uniqueness theorem of Borg [@Bo], the inverse spectral theory of scalar 1D differential operators was developed in detail, and currently there are several classical monographs devoted to different approaches to these problems (see, e.g. [@MaBook], [@LeBook], [@PT]). Traditionally, the principal attention is paid to explicit reconstruction procedures that allow one to find the unknown potential starting from given spectral data.
The careful analysis of these procedures has significant practical interest; in particular, it allows one to find the necessary and sufficient conditions for spectral data to correspond to some potential from a given class. The latter results are usually called [*characterization theorems*]{}. In other words, they say that the mapping $$\cM\ :\ \{\mathrm{potentials}~v(x)\}\to\{\mathrm{spectral\ data}\}$$ is a bijection between some fixed space of potentials $\cE$ and a class of spectral data $\cS$ which is described explicitly. Our main result is a [*short proof*]{} of characterization theorems for spectral data of Sturm-Liouville operators (\[SLOp\]) corresponding to $L^p$ potentials (see Theorem \[CharEven\] and Theorems \[CharMarchenko\], \[CharTrubowitz\] below). At least for $p=1$ and $p=2$ these results are well known in the literature, but, unfortunately, we don’t know any reference that covers all $p$’s simultaneously. It is worthwhile to emphasize that the main goal of our paper is to present the [**method**]{} (more precisely, the [*simplification*]{} of Trubowitz’s scheme, see below) rather than new results. We hope that this method is applicable to other inverse spectral problems too. For simplicity, we first focus on symmetric (or [*even*]{}) potentials $$v(x)\equiv v(1-x),\qquad x\in [0,1].$$ Then, it is well known that the spectrum itself determines a potential uniquely (see, e.g. [@PT], pp. 55–57 and p. 62 for a very short proof). Let $$\label{MapEven} \cM\ :\ v\mapsto (\,{{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)}\,,\, \{\,\mu_n(v)\,\}_{n=1}^{\infty}\,)$$ and $$\textstyle \cF_\mathrm{cos}: v\mapsto \left({{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)}; \{-{{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(cn)}\}_{n=1}^{\infty}\right),\quad\mathrm{where}\quad {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(cn)}= \int_0^1 v(t)\cos (2\pi n t) dt,$$ denote (up to a sign) the cosine-Fourier transform.
\[CharEven\] Let $1\le p<\infty$. The mapping $\cM$ given by (\[MapEven\]) is a bijection between the space of all symmetric $L^p$-potentials $\cE=L^p_\mathrm{even}(0,1)$ and the subset $\cS$ of the Fourier image $\cF_\mathrm{cos}L^p_\mathrm{even}(0,1)$ consisting of all sequences $ \mu^*=(\mu^*_0,\mu^*_1,\mu^*_2,\dots)\in \cF_\mathrm{cos}L^p_\mathrm{even}(0,1) $ such that $$\label{MuLe} \pi^2n^2+\mu^*_0+\mu^*_n<\pi^2(n\!+\!1)^2+\mu^*_0+\mu^*_{n+1}\quad\mathit{for~all}~n\ge 1.$$ In general, in order to prove the characterization theorem, one needs \(i) to solve the direct problem, i.e., to show that $\cM$ maps $\cE$ into $\cS$; \(ii) to prove the uniqueness theorem, i.e., the fact that the mapping $\cM$ is $1$-to-$1$; \(iii) to prove that $\cM$ is a surjection. Usually, the first part is rather straightforward, and the second part can be done simply, in a nonconstructive way, without any references to explicit reconstruction procedures. Thus, the hardest part of such theorems is the third one. It was suggested by Trubowitz and co-authors (see [@PT]) to use the following abstract scheme in order to prove (iii). To omit inessential technical details concerning the particular structure of the infinite-dimensional manifold $\cS$ (the restriction (\[MuLe\]) in our case), in the next paragraph we think of $\cS$ as a Banach space equipped with the usual addition operation. Following Trubowitz’s scheme, it is sufficient \(a) to show that $\cM(\cE)$ contains some open set $\cO\ss\cS$ (say, some neighborhood of $0$); \(b) to show that for some dense subset $\cL\ss\cS$ the following is fulfilled: for any $s\in\cM(\cE)$ and $l\in\cL$, one has $s+l\in\cM(\cE)$. Since, for any $s\in\cS$, the set $s-\cL$ is dense in $\cS$, one has $s=o+l$ for some $o\in\cO$ and $l\in\cL$. Thus, (b) implies $s\in\cM(\cE)$ because, due to (a), $o\in\cO\ss\cM(\cE)$.
Loosely speaking, to prove (b) one needs to apply the reconstruction procedure only for [*“nice” perturbations*]{} $l\in\cL$ of spectral data (but starting with an arbitrary $s\in\cO$). Following [@PT], (a) can be deduced from the implicit function theorem applied to the mapping $\cM$ near $v=0$. In order to do this, it is necessary to prove that $\cM$ is continuously differentiable (in appropriate spaces) [*everywhere near $v=0$*]{}. Actually, proving the differentiability of $\cM$ near $0$ is not much simpler than proving the differentiability of $\cM$ [*everywhere in $\cE$*]{}, since the information about the norm $\|v\|$ doesn’t help to prove the existence of the Fréchet derivative $d_v\cM$ at $v$. The main purpose of our paper is to point out that, in fact, one can notably simplify this part of the proof, using some abstract fixed point theorem and (a1) the differentiability of $\cM$ (in the Fréchet sense) at [*only one point $v=0$*]{}; (a2) the continuity of $\cM$ in the weak-$*$ topology (if $1<p<\infty$). The paper is organized as follows. We start with some preliminaries in Sect. \[SectPre\]. The very simple but crucial application of the Leray-Schauder-Tychonoff fixed point theorem which allows us to (almost immediately) derive (a) from (a1) and (a2) is given in Sect. \[SectLocSurjCore\]. The properties (a1), (a2), and (a) for the mapping (\[MapEven\]) are proved in Sect. \[SectLocalSurjLp\], if $1<p<\infty$. The necessary modifications for $p=1$ are given in Sect. \[SectL1\]. The proof of Theorem \[CharEven\] is finished in Sect. \[SectGlob\]. For the sake of completeness, in Sect. \[SectGen\] we also consider nonsymmetric potentials. For both usual choices of additional spectral data (Marchenko’s normalizing constants as well as Trubowitz’s norming constants), we prove the characterization Theorems \[CharTrubowitz\] and \[CharMarchenko\], similar to Theorem \[CharEven\].
The scheme described above is quite general and can be used to prove similar characterization theorems for other “reasonable” spaces of potentials instead of $L^p(0,1)$. Another approach to these results (for $W^\theta_2$ potentials with $\theta\ge -1$) based on the interpolation technique was suggested in [@SS08]. [**Acknowledgements.**]{} It is my pleasure to dedicate this paper to Nina , whose lectures on PDE I had a chance to attend, as did many other generations of students. The author is also grateful to Evgeny Korotyaev, Boris M. Makarov and Sasha Pushnitski for helpful discussions. Symmetric case, proof of Theorem \[CharEven\] ============================================= Preliminaries {#SectPre} ------------- Let $\vp(x,\l,v)$ denote the solution to the differential equation $$\label{DiffEq} -y''+v(x)y=\l y$$ satisfying the initial conditions $\vp(0,\l,v)=0$, $\vp'(0,\l,v)=1$. It can be constructed by iterations as $$\label{Vp=Series} \vp(x,\l,v)=\sum_{k=0}^{\infty}\vp_k(x,\l,v),\qquad \mathrm{where}\qquad \vp_0(x,\l)={\sin\sqrt{\l}x}\big/{\sqrt{\l}}$$ and $$\begin{aligned} \label{VpK=} \vp_k(x,\l,v) & =\int_0^x\vp_0(x\!-\!t,\l)\vp_{k-1}(t,\l,v)v(t)dt \\ &=\iintt_{0=t_0\le t_1\le\dots \le t_k\le t_{k+1}=x} {\textstyle \prod_{m=0}^k \vp_0(t_{m+1}\!-\!t_m,\l)}\cdot v(t_1)\dots v(t_k) dt_1\dots dt_k\,. \notag\end{aligned}$$ Since $|\vp_0(t,\l)|\le e^{|\Im\sqrt{\l}|t}/|\sqrt{\l}|$, one immediately obtains the estimate $$\label{VpkEstimate} |\vp_k(x,\l,v)| \le \frac{\|v\|_1^k} {k!}\cdot \frac{e^{|\Im\sqrt{\l}|x}}{|\l|^{(k+1)/2}}\,.$$ In particular, the series (\[Vp=Series\]) converges uniformly in $\l$ and $v$ on bounded subsets of $\C$ and $L^1(0,1)$, respectively. Since $\l_n(v)$ are the zeros of the entire function $$w(\l,v):=\vp(1,\l,v)=\sum_{k=0}^{\infty}\vp_k(1,\l,v),\qquad \l\in\C,$$ and the zeros of $\vp_0(1,\l)$ are $\pi^2n^2$, (\[VpkEstimate\]) easily gives $\l_n(v)=\pi^2n^2+O(\|v\|_1 e^{\|v\|_1})$.
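The rapid convergence of the iteration series (\[Vp=Series\])–(\[VpK=\]) is easy to observe numerically. The sketch below (ours, not from the paper) sums the first few terms on a uniform grid with trapezoidal quadrature; for a constant potential $v\equiv c$ the result can be compared with the closed form $\vp(1,\l,c)=\sin\sqrt{\l-c}\,\big/\sqrt{\l-c}$.

```python
import math

def phi_series(lam, v, K=12, N=400):
    """Sum the first K+1 terms of the iteration series for phi(1, lam, v),
    evaluating each convolution integral with the trapezoidal rule."""
    h = 1.0 / N
    xs = [j * h for j in range(N + 1)]
    sq = math.sqrt(lam)
    phi0 = [math.sin(sq * x) / sq for x in xs]   # phi_0 on the grid
    vv = [v(x) for x in xs]
    total = phi0[N]                               # the k = 0 term
    prev = phi0[:]                                # phi_{k-1} on the grid
    for _ in range(K):
        cur = [0.0] * (N + 1)
        for j in range(1, N + 1):
            # phi_k(x_j) = int_0^{x_j} phi_0(x_j - t) phi_{k-1}(t) v(t) dt
            acc = 0.0
            for i in range(j + 1):
                w = 0.5 if i in (0, j) else 1.0
                acc += w * phi0[j - i] * prev[i] * vv[i]
            cur[j] = acc * h
        total += cur[N]
        prev = cur
    return total
```

The factorial decay in (\[VpkEstimate\]) means a dozen terms already far exceed the quadrature accuracy for moderate $\|v\|_1$.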
Taking into account the second term $\vp_1(1,\l,v)$, one obtains $$\label{Lasympt} \l_n(v)=\pi^2n^2+ {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)} - {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(cn)} + O\lt(\frac{\|v\|^2_1\, e^{\|v\|_1}}{n}\rt),$$ with some absolute constant in the $O$-bound. We also need the following simple Lemma \[WeakCont\]. Let $1<p<\infty$ and $v_s,v\in L^p(0,1)$ be such that $v_s\to v$ weakly in $L^p(0,1)$ as $s\to\infty$. Then $\l_n(v_s)\to\l_n(v)$ for any $n\ge 1$. Cf. [@PT] p. 18. Since $\l_n(v_s)$ are the zeros of the entire function $w(\cdot,v_s)$, it is sufficient to prove that $w(\l,v_s)\to w(\l,v)$ uniformly in $\l$ on bounded subsets of $\C$. Let $q=p/(p-1)$. For any $k\ge 1$, the functions $$f_{\l}(t_1,t_2,...,t_k):=\textstyle \chi_{\{0=t_0\le t_1\le \dots \le t_k\le t_{k+1}=1\}}\cdot\prod_{m=0}^k\vp_0(t_{m+1}-t_m,\l),\qquad |\l|\le M,$$ form a compact set in $L^q([0,1]^k)$. Thus, since $\prod_{m=1}^k v_s(t_m)\to \prod_{m=1}^k v(t_m)$ weakly in $L^p([0,1]^k)$, one has $\vp_k(1,\l,v_s)\to \vp_k(1,\l,v)$ uniformly in $\l:|\l|\le M$. As the norms $\|v_s\|_p$ are uniformly bounded and the series (\[Vp=Series\]) converges uniformly in $\l$ and $v$ on bounded subsets, it implies $w(\l,v_s)\to w(\l,v)$ uniformly in $\l$ on bounded subsets. Local surjection near $\bm{v=0}$. Core argument. {#SectLocSurjCore} ------------------------------------------------ \[LocSurjLemma\] Let $E$ be a reflexive Banach space and a mapping $\Phi:B_E(0,r)\to E$ be defined in some neighborhood $B_E(0,r)=\{v\in E:\|v\|_E<r\}$ of $0$. If $\Phi$ is (a1) differentiable in the Fréchet sense at $0$ and $d_0\Phi=I$, i.e., $$\|\Phi(v)-v\|_E=o(\|v\|_E)~\mathit{as}~\|v\|_E\to 0;$$ (a2) continuous in the weak topology, i.e., $$v_n\to v~\mathit{weakly}~~~\Rightarrow~~~\Phi(v_n)\to\Phi(v)~\mathit{weakly},$$ then $\Phi$ is a local surjection at $0$, i.e., $\Phi(B_E(0,r))\supset B_E(0,\delta)$ for some $\delta>0$. Let $\wt\Phi(v):=\Phi(v)-v$.
Then, if $\delta$ is sufficiently small, one has $$\wt\Phi\ :\ \ol{B}_E(0,2\delta)\to \ol{B}_E(0,\delta),$$ where $\ol{B}_E$ denotes the closed ball in $E$. Let $f\in B_E(0,\delta)$. Then, the mapping $$\wt{\Phi}_f\ :\ v\ \mapsto\ f - \wt\Phi(v)$$ maps the ball $\ol{B}_E(0,2\delta)$ into itself. Also, $\wt{\Phi}_f$ is continuous in the weak topology (which is also the weak-$*$ topology, since $E$ is reflexive). Due to the Banach-Alaoglu theorem (see, e.g. [@RS] p. 115), $\ol{B}_E(0,2\delta)$ is a compact set in this topology. Moreover, $\ol{B}_E(0,2\delta)$ is convex and $E$ equipped with the weak topology is a locally convex space (see, e.g. [@RS] Chapter V). Therefore, by the Leray-Schauder-Tychonoff fixed point theorem (see, e.g. [@RS] p. 151), there exists $v\in \ol{B}_E(0,2\delta)$ such that $\wt\Phi_f (v)=v$, i.e., $\Phi (v)=f$. Local surjection. $\bm{L^p}$ potentials, $\bm{p}\bm{>}\bm{1}$. {#SectLocalSurjLp} -------------------------------------------------------------- Recall that $$\cF_\mathrm{cos}: v\mapsto ({{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(0)}; \{-{{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(cn)}\}_{n=1}^{\infty})$$ is (up to a sign) the cosine-Fourier transform. Let $\cM = \cF_\mathrm{cos} +\wt{\cM}$, where $$\wt \cM: v\mapsto (0\,;\{\wt\mu_n(v)\}_{n=1}^{\infty}):=(0\,;\{\mu_n(v)+{{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(cn)}\}_{n=1}^{\infty})\,.$$ Let $$\cF_\mathrm{cos}^{-1}:(a_0,a_1,\dots)\mapsto a_0 - 2\sum_{n=1}^{\infty} a_n\cos (2\pi nx)$$ denote the (formal) inverse mapping to $\cF_\mathrm{cos}$. \[LocalSurjP>1\] Let $1<p<\infty$. Then, \(i) the (nonlinear) mapping $\cF_\mathrm{cos}^{-1}\cM$ maps the space $L^p_\mathrm{even}(0,1)$ into itself; \(ii) the image $(\cF_\mathrm{cos}^{-1}\cM)(L^p_\mathrm{even}(0,1))$ contains some neighborhood of $0$.
\(i) It follows from (\[Lasympt\]) and $\|v\|_1\le\|v\|_p$ that $$\wt \cM \left(L^p_\mathrm{even}(0,1)\right)\ss \wt \cM \left(L^1_\mathrm{even}(0,1)\right)\ss\ell^{\min\{2,q\}},\qquad q=p/(p\!-\!1),$$ since $\wt{\mu}_n(v)=O(n^{-1}\|v\|_1^2\cdot e^{\|v\|_1})$. Thus, the Hausdorff-Young inequality gives $$(\cF^{-1}_\mathrm{cos}\wt \cM)(L^p_\mathrm{even}(0,1))\ss L^{\max\{2,p\}}_\mathrm{even}(0,1)\ss L^p_\mathrm{even}(0,1)\,.$$ Moreover, for some constant $C(p)$, one has $$\label{MapBound} \|(\cF^{-1}_\mathrm{cos}\wt{\cM})(v)\|_p\le C(p)\cdot \|v\|_1^2\cdot e^{\|v\|_1}.$$ \(ii) We are going to apply Lemma \[LocSurjLemma\] to the mapping $\cF_\mathrm{cos}^{-1}\cM$. Note that, for $1<p<\infty$, $L^p_\mathrm{even}(0,1)$ is a reflexive Banach space. Due to (\[MapBound\]), $\cF_\mathrm{cos}^{-1}\cM$ is differentiable (in the Fréchet sense) at $0$ and $d_0(\cF_\mathrm{cos}^{-1}\cM)=I$. This gives the assumption (a1) of Lemma \[LocSurjLemma\]. Thus, it is sufficient to check the assumption (a2), i.e., the continuity of $\cF_\mathrm{cos}^{-1}\cM$ (or, equivalently, $\cF_\mathrm{cos}^{-1}\wt{\cM}$) in the weak topology. Let $v_s\to v$ weakly in $L^p_\mathrm{even}(0,1)$. Let $u\in L^q_\mathrm{even}(0,1)$ and $h=(h_0,h_1,\dots):=\cF_\mathrm{cos} u$. Then, in order to prove that $(\cF_\mathrm{cos}^{-1}\wt{\cM})(v_s)\to(\cF_\mathrm{cos}^{-1}\wt{\cM})(v)$ weakly in $L^p_\mathrm{even}(0,1)$, one needs to show that $$\int_0^1 \left((\cF_\mathrm{cos}^{-1}\wt{\cM})(v_s)-(\cF_\mathrm{cos}^{-1}\wt{\cM})(v)\right)\!\!(t)u(t) dt = 2\sum_{n=1}^{\infty} (\wt{\mu}_n(v_s)\!-\!\wt{\mu}_n(v))h_n\ \to\ 0.$$ Note that $h\in \ell^{\max\{2,p\}}$ by the Hausdorff-Young inequality, and the norms of the sequences $\{\wt{\mu}_n(v_s)-\wt{\mu}_n(v)\}_{n=1}^{\infty}$ in $\ell^{\min\{2,q\}}$ are uniformly bounded due to (\[Lasympt\]). Thus, Lemma \[WeakCont\] and the dominated convergence theorem imply the result. Local surjection.
$\bm{L^1}$ potentials {#SectL1} --------------------------------------- The core argument given in Lemma \[LocSurjLemma\] doesn’t work for the space $L^1$ since this space is not reflexive (and is not equipped with any weak-$*$ topology). Nevertheless, the main result still holds true, and most of the proof carries over. We start with some modification of Lemma \[LocSurjLemma\]. \[LocSurjLemma1\] Let $E$ be a Banach space and $\Phi:B_E(0,r)\to E$. Let $F\subset E$ be a reflexive Banach space and $\|v\|_E\le c\cdot \|v\|_F$ for any $v\in F$ and some constant $c>0$. If (a1) $\Phi$ is such that $\Phi(v)\!-\!v\in F$ for any $v\in E$ and, moreover, $$\|\Phi(v)-v\|_F=o(\|v\|_E)~\mathit{as}~\|v\|_E\to 0;$$ (a2) $\Phi$ is continuous in the weak $F$-topology, i.e., for any $v\in E$ and $v_s\!-\!v\in F$, $$v_s\!-\!v\to 0~\mathit{weakly~in~}F~~~\Rightarrow~~~\Phi(v_s)\!-\!\Phi(v)\to 0~\mathit{weakly~in~}F,$$ then $\Phi$ is a local surjection, i.e., $\Phi(B_E(0,r))\supset B_E(0,\delta)$ for some $\delta>0$. Let $\wt\Phi(v):=\Phi(v)-v$. It follows from (a1) that, if $\delta$ is sufficiently small, $$\wt\Phi\ :\ \ol{B}_E(0,(c\!+\!1)\delta)\to \ol{B}_F(0,\delta).$$ Let $f\in E$, $\|f\|_E<\delta$, the mapping $\wt\Phi_f:v\mapsto f\!-\!\wt\Phi(v)$ be defined as in Lemma \[LocSurjLemma\], and $$\ol{B}_F(f,\delta)\ :=\ \{v\in E: v\!-\!f\in \ol{B}_F(0,\delta)\}$$ (note that $f$ and $v$, in general, don’t belong to $F$). Since $\ol{B}_F(0,\delta)\ss\ol{B}_E(0,c\delta)$, one has $\ol{B}_F(f,\delta)\ss \ol{B}_E(0,(c\!+\!1)\delta)$, and so $$\wt\Phi_f\ :\ \ol{B}_F(f,\delta)\to \ol{B}_F(f,\delta).$$ Moreover, due to (a2), the mapping $\wt\Phi_f$ is continuous on this “ball” equipped with the weak $F$-topology (which is a locally convex topology on this convex compact set). Exactly as in Lemma \[LocSurjLemma\], the Leray-Schauder-Tychonoff theorem implies that $\wt\Phi_f(v)=v$ for some $v\in \ol{B}_F(f,\delta)$, i.e., $\Phi(v)=f$.
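The mechanism of Lemmas \[LocSurjLemma\] and \[LocSurjLemma1\] can be illustrated by a finite-dimensional toy model (ours, not from the paper): take $\Phi=I+\wt\Phi$ with a quadratically small perturbation $\wt\Phi$, so that $d_0\Phi=I$, and solve $\Phi(v)=f$ by iterating the self-map $v\mapsto f-\wt\Phi(v)$ of a small ball. In $\R^d$ this map is in fact a contraction near $0$, so plain iteration converges; the lemmas themselves only need the fixed-point existence statement.

```python
def phi_tilde(v):
    """Toy nonlinearity with ||phi_tilde(v)|| = O(||v||^2), so that
    Phi = I + phi_tilde satisfies assumption (a1): d_0 Phi = I."""
    d = len(v)
    return [0.3 * v[i] * v[(i + 1) % d] for i in range(d)]

def solve_phi(f, tol=1e-12, itmax=500):
    """Solve Phi(v) = v + phi_tilde(v) = f for small f by iterating
    v <- f - phi_tilde(v), which maps a small ball into itself."""
    v = [0.0] * len(f)
    for _ in range(itmax):
        pt = phi_tilde(v)
        w = [fi - pi for fi, pi in zip(f, pt)]
        if max(abs(a - b) for a, b in zip(w, v)) < tol:
            return w
        v = w
    return v
```

Starting from $v=0$, each iterate stays in the small ball because $\wt\Phi$ is quadratically small there, exactly as in the self-map estimate of the proofs above.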
Now we need some modification of Lemma \[WeakCont\], which, together with (\[Lasympt\]), implies the assumption (a2) of Lemma \[LocSurjLemma1\]. \[WeakCont1\] Let $v_s,v\in L^1(0,1)$ be such that $v_s-v\in L^p(0,1)$ for some $1<p<\infty$ and $v_s-v\to 0$ weakly in $L^p(0,1)$. Then $\l_n(v_s)\to\l_n(v)$ for any $n\ge 1$. Let $u_s:=v_s-v$. Plugging the trivial decomposition $v_s=v+u_s$ into the formula (\[VpK=\]) for $\vp_k(1,\l,v_s)$, one arrives at $$\vp_k(x,\l,v_s) = \sum_{\{i_1,\dots,i_r\}\ss\{1,\dots,k\}} \iintt_{0\le t_{i_1}\le\dots \le t_{i_r}\le x} \Phi_\l(t_{i_1},\dots,t_{i_r}) \cdot u_s(t_{i_1})\dots u_s(t_{i_r}) dt_{i_1}\dots dt_{i_r}\,,$$ where $$\Phi_\l(t_{i_1},\dots,t_{i_r}) = \!\!\!\!\!\!\!\iintt_{0\le t_{j_1}\le\dots \le t_{j_{k-r}}\le x}\!\! {\textstyle \prod_{m=0}^k \vp_0(t_{m+1}\!-\!t_m,\l)}\cdot v(t_{j_1})\dots v(t_{j_{k-r}})dt_{j_1}\dots dt_{j_{k-r}},$$ the sum is taken over all subsets $\{i_1,\dots,i_r\}\ss\{1,\dots,k\}$ of indices and $\{j_1,\dots,j_{k-r}\}$ denotes the complementary subset. Again, for any fixed $\{i_1,\dots,i_r\}$ and $M>0$, the functions $$\textstyle f_\l(t_{i_1},\dots,t_{i_r})\ :=\ \chi_{\{0\le t_{i_1}\le \dots \le t_{i_r}\le x\}}\cdot \Phi_\l(t_{i_1},\dots,t_{i_r}),\qquad |\l|\le M,$$ form a compact set in $L^q([0,1]^r)$, which gives the result exactly as in Lemma \[WeakCont\]. \[LocalSurjP=1\] (i) The mapping $\cF_\mathrm{cos}^{-1}\cM$ maps the space $L^1_\mathrm{even}(0,1)$ into itself. \(ii) The image $(\cF_\mathrm{cos}^{-1}\cM) (L^1_\mathrm{even}(0,1))$ contains some neighborhood of $0$. \(i) Recall that (see the proof of Proposition \[LocalSurjP>1\](i)) one has $\cF_\mathrm{cos}^{-1}\cM=I+\cF_\mathrm{cos}^{-1}\wt{\cM}$, and the nonlinear part of our mapping actually maps $L^1$ potentials into $L^p$ functions (say, for $p=2$, see (\[MapBound\])). In particular, $\cF_\mathrm{cos}^{-1}\cM$ maps the space $L^1_\mathrm{even}(0,1)$ into itself.
\(ii) Moreover, one has $(\cF_\mathrm{cos}^{-1}\cM)(v)-v\in L^2_\mathrm{even}(0,1)$ for any $v\in L^1_\mathrm{even}(0,1)$ and $$\|(\cF_\mathrm{cos}^{-1}\cM)(v)-v\|_2 =o(\|v\|_1)\quad\mathrm{as}\quad\|v\|_1\to 0.$$ Thus, the assumption (a1) of Lemma \[LocSurjLemma1\] holds with $E=L^1_\mathrm{even}(0,1)$ and $F=L^2_\mathrm{even}(0,1)$. Further, exactly as in Proposition \[LocalSurjP>1\](ii), Lemma \[WeakCont1\] and the dominated convergence theorem give the continuity of the mapping $\cF_\mathrm{cos}^{-1}\cM$ in the weak $L^2$-topology, i.e., the assumption (a2). So, the result follows from Lemma \[LocSurjLemma1\]. Global surjection {#SectGlob} ----------------- To complete the proof of Theorem \[CharEven\], we follow Trubowitz’s approach (cf. [@PT], pp. 115–116) word for word, if $p>1$, and slightly modify the main argument, if $p=1$ (cf. the paper [@CKK] devoted to an inverse problem for the perturbed 1D harmonic oscillator, where the same modification was used). The proof of the global surjection is based on (a) local surjection near $v=0$ and (b) explicit solution of the inverse problem for the perturbation of [*finitely many*]{} eigenvalues. The latter is given by \[DarbouxL\] Let $v\in L^1_\mathrm{even}(0,1)$ be a symmetric potential, $n\ge 1$ and $t$ be such that $\l_{n-1}(v)<\l_n(v)+t<\l_{n+1}(v)$. Then there exists a symmetric potential $v_{n,t}\in L^1_\mathrm{even}(0,1)$ such that $$\l_m(v_{n,t})=\l_m(v)\ \mathit{for\ all}\ m\ne n\quad \mathit{and}\quad \l_n(v_{n,t})=\l_n(v)+t.$$ Moreover, if $v\in L^p_\mathrm{even}(0,1)$ for some $1\le p<\infty$, then $v_{n,t}\in L^p_\mathrm{even}(0,1)$ too. See [@PT], pp. 107–113, where the modified potential $v_{n,t}$ is constructed explicitly using the Darboux transform.
Namely, $$\label{V_n_t=} v_{n,t}=v-2\frac{d^2}{dx^2}\log\{\xi_n(\cdot,\l_n(v)\!+\!t,v);\vp(\cdot,\l_n(v),v)\},$$ where $\{f;g\}:=fg'-f'g$ and $\xi_n(\cdot)=\xi_n(\cdot,\l,v)$ denotes the solution of (\[DiffEq\]) satisfying the boundary conditions $\xi_n(0)=1$, $\xi_n(1)=(\vp'(1,\l_n(v),v))^{-1}$ (in particular, the Wronskian is strictly positive on $[0,1]$). If $v$ is symmetric, then $\xi_n(1)=(-1)^n$ and $\{\xi_n(\cdot,\l_n(v)+t,v);\vp(\cdot,\l_n(v),v)\}$ is symmetric too. Since $\{\xi_n;\vp\}'=t\xi_n\vp$, the Wronskian is twice continuously differentiable. In particular, $v_{n,t}-v\in L^p(0,1)$. The mapping $\cM$ maps $L^p_\mathrm{even}$ into $\cF_\mathrm{cos}L^p_\mathrm{even}$ (see Propositions \[LocalSurjP>1\](i), \[LocalSurjP=1\](i)) and is injective due to the well known uniqueness theorems. Thus, the main problem is to prove that it is surjective. Let $$\mu^*=(\mu_0^*,\mu_1^*,\mu_2^*,\dots)\in \cF_\mathrm{cos}L^p_\mathrm{even}(0,1)$$ be such that $\pi^2+\mu_1^*<4\pi^2+\mu_2^*<\dots$. Since trigonometric polynomials are dense in $L^p_\mathrm{even}(0,1)$, for any $\delta>0$ there exist some (large) $N$ and a sequence $$\mu^\delta=(\mu_0^\delta,\mu_1^\delta,\dots, \mu_N^\delta,\mu_{N+1}^*,\mu_{N+2}^*,\dots)$$ such that $\pi^2+\mu_1^\delta<4\pi^2+\mu_2^\delta<\dots$ and $\|\cF_\mathrm{cos}^{-1} \mu^\delta\|_p<\delta$. Indeed, if $p>1$, then the Fourier series of a function $\cF_\mathrm{cos}^{-1} \mu^*\in L^p_\mathrm{even}(0,1)$ converges to this function in the $L^p$-topology (see, e.g. [@Edw] Section 12.10), i.e., $$\|\cF_\mathrm{cos}^{-1} \mu^* - \cF_\mathrm{cos}^{-1} (\mu_0^*,\mu_1^*,...,\mu_N^*,0,0,...)\|_p= \|\cF_\mathrm{cos}^{-1}(0,0,...,0,\mu_{N+1}^*,\mu_{N+2}^*,...) \|_p\to 0$$ as $N\to\infty$, and one can simply take $\mu_0^\delta=\dots=\mu_N^\delta=0$.
If $p=1$, one can still find a finite sequence $(\mu_0^{(N)},\mu_1^{(N)},...,\mu_N^{(N)})$ (or, equivalently, a trigonometric polynomial $2\sum_{n=0}^N\mu_n^{(N)}\!\cos(2\pi n x)$) such that $$\|\cF_\mathrm{cos}^{-1} \mu^* - \cF_\mathrm{cos}^{-1} (\mu_0^{(N)},\mu_1^{(N)},...\,,\mu_N^{(N)},0\,,0\,,...)\|_1\le\delta,$$ and take $\mu_n^\delta:=\mu_n^*-\mu_n^{(N)}$ for $n=0,\dots,N$. Note that $|\mu_n^\delta|\le\|\cF_\mathrm{cos}^{-1} \mu^\delta\|_1\le\|\cF_\mathrm{cos}^{-1} \mu^\delta\|_p\le\delta$ for all $n\ge 1$, so the restriction (\[MuLe\]) holds true. Due to Proposition \[LocalSurjP>1\](ii) (or Proposition \[LocalSurjP=1\](ii), if $p=1$), there exists a potential $v^\delta\in L^p_\mathrm{even}(0,1)$ such that $\l_n(v^\delta)=\pi^2n^2+ \mu_0^\delta + \mu_n^\delta$ for all $n\ge 1$, and so $$\l_n(v^\delta)=\pi^2n^2+ \mu_0^\delta + \mu_n^*\qquad \mathrm{for\ all}\quad n\ge N\!+\!1.$$ Adding to $v^\delta$ the constant $\mu_0^*-\mu_0^\delta$ and changing the first $N$ eigenvalues using the procedure given in Lemma \[DarbouxL\], one obtains the potential $v^*\in L^p_\mathrm{even}(0,1)$ such that $$\l_n(v^*)=\pi^2n^2+\mu_0^*+\mu_n^*\qquad \mathrm{for\ all}\quad n\ge 1$$ (to avoid the possible crossing of eigenvalues, i.e., violation of (\[MuLe\]), during this procedure, one can always move $\l_1,\dots,\l_N$ to the far left beginning with $\l_1$ and then move them to the desired positions beginning with $\l_N$). Nonsymmetric case ================= \[SectGen\] Preliminaries. Normalizing and norming constants ------------------------------------------------ If $v$ is not symmetric, then one needs some additional spectral data to determine the potential uniquely. The possible choices are (cf.
[@CK09] Appendix B and references therein): - the [*normalizing constants*]{} (first appeared in Marchenko’s paper [@Mar]) $$\a_n(v)= \|\vp(\cdot,\l_n(v),v)\|_2^2=\int_0^1\vp^2(t,\l_n(v),v)dt =(\dot\vp\vp')(1,\l_n(v),v)\,,$$ where $\dot\vp$ denotes the derivative with respect to $\l$; - the [*norming constants*]{} introduced by Trubowitz and co-authors (see [@PT]) $$\n_n(v)=\log[(-1)^n\vp'(1,\l_n(v),v)]\,.$$ Note that $$\label{a=nu} \a_n(v)=|\dot{w}(\l_n(v),v)| \cdot e^{\n_n(v)},$$ where $$w(\l,v)\equiv\prod_{m=1}^{\infty}\frac{\l_m(v)-\l}{\pi^2m^2}\,,\qquad \l\in\C,$$ due to the Hadamard factorization theorem, and so the first factor $$\label{DotWln=} |\dot{w}(\l_n(v),v)| = \lt|\frac{1}{\pi^2n^2}\prod_{m\ne n}\frac{\l_m(v)-\l_n(v)}{\pi^2m^2}\rt| = \frac{1}{2\pi^2n^2}\prod_{m\ne n}\frac{\l_m(v)-\l_n(v)}{\pi^2(m^2-n^2)}$$ is uniquely determined by the spectrum. Let $$\cF_\mathrm{sin}: v\mapsto \{{{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(sn)}\}_{n=1}^{\infty},\qquad {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(sn)}=\int_0^1 v(t)\sin (2\pi nt) dt,$$ be the sine-Fourier transform, and $$\cF_\mathrm{sin}^{-1}:(b_1,b_2,\dots)\mapsto 2\sum_{n=1}^{\infty} b_n\sin (2\pi nx)$$ denote its (formal) inverse. We also use the notation $L^p_\mathrm{odd}(0,1)$ for the space of all anti-symmetric (or [*odd*]{}) potentials $v(x)\equiv -v(1\!-\!x)$, $x\in [0,1]$, from $L^p(0,1)$. Characterization theorem for norming constants ---------------------------------------------- \[CharTrubowitz\] Let $1\le p<\infty$. The mapping $$\label{MNmap} \textstyle v\mapsto \left(\cM(v);\cN(v)\right),\qquad \cN(v):=\{2\pi n\cdot \nu_n(v)\}_{n=1}^{\infty}\,,$$ is a bijection between the space of potentials $L^p(0,1)$ and the set of spectral data $\cM(L^p_\mathrm{even}(0,1))\ts \cF_\mathrm{sin}L^p_\mathrm{odd}(0,1)$. 
In other words, the norming constants $\n_n(v)$ multiplied by $2\pi n$ can form an arbitrary sequence in $\cF_\mathrm{sin}L^p_\mathrm{odd}(0,1)$, while the characterization of the possible spectra is the same as in Theorem \[CharEven\]. The uniqueness theorem (i.e., the fact that (\[MNmap\]) is a $1$-to-$1$ map) is well known (see, e.g. [@PT] p. 62). Further, it directly follows from (\[Vp=Series\]), (\[VpkEstimate\]) that $$\n_n(v)=\frac{1}{2\pi n}\cdot {{\mathop{v}\limits^{{}_{\,\bf{\wedge}}}}\vphantom{v}}^{(sn)} + O\lt(\frac{\|v\|_1^2e^{\|v\|_1}}{n^2}\rt)\,.$$ In particular, (\[MNmap\]) maps $L^p(0,1)$ into $\cM(L^p_\mathrm{even}(0,1))\ts \cF_\mathrm{sin}L^p_\mathrm{odd}(0,1)$. Moreover, each $\nu_n(v)$ is a continuous function of the potential in the same sense as in Lemma \[WeakCont\]. Repeating the proof of Proposition \[LocalSurjP>1\] (or Proposition \[LocalSurjP=1\], if $p=1$) word for word, one obtains that (\[MNmap\]) is a local surjection near $v=0$. Finally (exactly as in Theorem \[CharEven\]), the proof of the global surjection can be finished changing a finite number of spectral data, which is given by the application of the next (explicit) lemma step by step. \(i) Let $v\in L^1(0,1)$, $n\ge 1$ and $\l_{n-1}(v)<\l_n(v)+t<\l_{n+1}(v)$. Then there exists a potential $v_{n,t}\in L^1(0,1)$ such that $$\l_m(v_{n,t})=\l_m(v)+t\delta_{nm}\qquad \mathit{and}\qquad \n_m(v_{n,t})=\n_m(v)\ \mathit{for\ all}\ m\ge 1.$$ \(ii) Let $v\in L^1(0,1)$, $n\ge 1$ and $t\in\R$. Then there exists $v_n^t\in L^1(0,1)$ such that $$\l_m(v_n^t)=\l_m(v)\qquad \mathit{and}\qquad \n_m(v_n^t)=\n_m(v)+t\delta_{nm}\ \ \mathit{for\ all}\ \ m\ge 1.$$ Moreover, if $v\in L^p(0,1)$ for some $1\le p<\infty$, then $v_{n,t},v_n^t\in L^p(0,1)$ too. See [@PT], pp. 91–94, 107–113. The explicit formula for $v_{n,t}$ is given by (\[V\_n\_t=\]) and $$v_n^t(x)=v(x)-2\frac{d^2}{dx^2}\log\lt(1-(e^t\!-\!1)\int_x^1\psi_n^2(s,v)\,ds\rt),$$ where $\psi_n(\cdot,v)$ is the $n$-th normalized eigenfunction.
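The passage from the first to the second expression in (\[DotWln=\]) rests on the classical regularized product $\prod_{m\ne n}(m^2-n^2)/m^2=(-1)^{n-1}/2$ (a consequence of the Euler product for $\sin \pi z$), which is easy to check numerically with a truncated product. The sketch below is our own sanity check, not part of the proof.

```python
def reg_product(n, M=200000):
    """Truncated product  prod_{m=1..M, m!=n} (m^2 - n^2) / m^2,
    which should tend to (-1)^(n-1) / 2 as M grows."""
    p = 1.0
    for m in range(1, M + 1):
        if m != n:
            p *= (m * m - n * n) / (m * m)
    return p
```

The sign $(-1)^{n-1}$ simply counts the negative factors with $m<n$; the modulus $1/2$ is exactly the extra factor appearing in the second expression of (\[DotWln=\]).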
Characterization theorem for normalizing constants -------------------------------------------------- \[CharMarchenko\] Let $1\le p<\infty$. The mapping $$\textstyle v\mapsto \left(\cM(v)\,;\,\cA(v)\right),\qquad \cA(v):=\{\pi n\cdot \log[2\pi^2n^2\a_n(v)]\}_{n=1}^{\infty}\,,$$ is a bijection between the space of potentials $L^p(0,1)$ and $\cM(L^p_\mathrm{even}(0,1))\ts \cF_\mathrm{sin}L^p_\mathrm{odd}(0,1)$. Due to (\[a=nu\]), (\[DotWln=\]) and Theorem \[CharTrubowitz\], it is sufficient to check that $$\lt\{ \pi n\cdot \log \prod_{m\ne n}\frac{\l_m(v)\!-\!\l_n(v)}{\pi^2(m^2-n^2)} \rt\}_{n=1}^{\infty}\in \cF_\mathrm{sin}L^p_\mathrm{odd}(0,1).$$ Since $\m_m(v)$ are bounded, $$\log \frac{\l_m(v)-\l_n(v)}{\pi^2(m^2\!-\!n^2)} = \log \lt( 1+ \frac{\mu_m(v)-\mu_n(v)}{\pi^2(m^2\!-\!n^2)}\rt) = \frac{\mu_m(v)-\mu_n(v)}{\pi^2(m^2-n^2)} + O\lt(\frac{1}{(m^2\!-\!n^2)^2}\rt).$$ Summing up over $m\ne n$ (and taking into account that $\mu_n(v)=O(1)$), one obtains $$\pi n\cdot \log \prod_{m\ne n}\frac{\l_m(v)\!-\!\l_n(v)}{\pi^2(m^2-n^2)} =\frac{1}{2\pi}\lt(\sum_{m\ne n}\lt(\frac{1}{m\!-\!n}-\frac{1}{m\!+\!n}\rt)\mu_m(v) - \frac{1}{2n}\,\mu_n(v) \rt)+ O\lt(\frac{1}{n}\rt)\,.$$ The error terms belong to $\cF_\mathrm{sin}L^p_\mathrm{odd}(0,1)$ by the Hausdorff-Young inequality. Denote $$\textstyle f:=\cF_\mathrm{cos}^{-1}\left(0,\{\mu_m(v)\}_{m=1}^{\infty}\right)= -2\sum_{m=1}^{\infty}\mu_m(v)\cos (2\pi mx).$$ Then, simple straightforward calculations give $$\lt\{\frac{1}{2\pi}\lt(\sum_{m\ne n}\lt(\frac{1}{m\!-\!n}-\frac{1}{m\!+\!n}\rt)\mu_m(v) - \frac{1}{2n}\,\mu_n(v)\rt) \rt\}_{n=1}^{\infty} = \cF_\mathrm{sin}[(\tfrac{1}{2}\!-\!x)f] \,.$$ Since $(0,\{\mu_m(v)\}_{m=1}^{\infty})\in \cF_\mathrm{cos} L^p_\mathrm{even}(0,1)$, one has $\cF_\mathrm{sin}[(\tfrac{1}{2}\!-\!x)f]\in \cF_\mathrm{sin}L^p_\mathrm{odd}(0,1)$. [CKK04]{} Borg, G.: Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe. Bestimmung der Differentialgleichung durch die Eigenwerte. (German) Acta Math. 78, (1946). 
1–96. Chelkak, D.; Kargaev, P.; Korotyaev, E.: Inverse problem for harmonic oscillator perturbed by potential, characterization. Comm. Math. Phys. 249 (2004), no. 1, 133–196. Chelkak, D.; Korotyaev, E.: Weyl-Titchmarsh functions of vector-valued Sturm-Liouville operators on the unit interval. Journal of Functional Analysis, 257 (2009), 1546–1588. Edwards, R. E.: Fourier series. A modern introduction, Vol. 2. Springer-Verlag, New York, 1982. Levitan, B. M.: Inverse Sturm-Liouville problems. Translated from the Russian by O. Efimov. VSP, Zeist, 1987. x+240 pp. Marčenko, V. A.: Concerning the theory of a differential operator of the second order. (Russian) Doklady Akad. Nauk SSSR (N.S.), 72 (1950), 457–460. Marchenko, V. A.: Sturm-Liouville operators and applications. Operator Theory: Advances and Applications, 22. Birkhäuser Verlag, Basel, 1986. xii+367 pp. Pöschel, J.; Trubowitz, E.: Inverse spectral theory. Pure and Applied Mathematics, 130. Academic Press, Inc., Boston, MA, 1987. x+192 pp. Reed, M.; Simon, B.: Methods of Modern Mathematical Physics, Vol. I: Functional Analysis. Academic Press, New York, 1980. Savchuk, A. M.; Shkalikov, A. A.: On the properties of maps connected with inverse Sturm-Liouville problems. Proceedings of the Steklov Institute of Mathematics, 260 (2008), 218–237. [^1]: <span style="font-variant:small-caps;">Dept. of Math. Analysis, St. Petersburg State University. 28 Universitetskij pr., Staryj Petergof, 198504 St. Petersburg, Russia.</span> Partly funded by the RF President grants MK-4306.2008.1, NSh-2409.2008.1, and P. Deligne’s 2004 Balzan prize in Mathematics.
--- abstract: 'We consider the effect of binary stars on the main-sequence luminosity functions observed in the core of globular clusters, with specific reference to NGC 6752. We find that mass segregation results in an increased binary fraction at fainter magnitudes along the main-sequence. If this effect is not taken into account when analyzing luminosity functions, erroneous conclusions can be drawn regarding the distribution of single stars, and the dynamical state of the cluster. In the core of NGC 6752, our HST data reveal a flat luminosity function, in agreement with previous results. However, when we correct for the increasing binary fraction at faint magnitudes, the LF begins to fall immediately below the turn-off. This effect appears to be confined to the inner core radius of the cluster.' author: - 'Eric P. Rubenstein & Charles D. Bailyn' title: 'HST Observations of the Central-Cusp Globular Cluster NGC 6752. The Effect of Binary Stars on the Luminosity Function in the Core' --- Subject headings: binaries: general — globular clusters: general — globular clusters: individual (NGC 6752) — stars: luminosity function — stars: population II Introduction ============ The study of a globular cluster’s luminosity function (LF) provides insight into its present dynamical state and the stellar populations of which it is comprised. However, the presence of binary stars can alter the appearance of the luminosity function. A LF constructed from a population containing a significant fraction of binary stars is not a single star LF at all, but rather an amalgam of single stars and binaries. Because of mass segregation, the binary fraction in a cluster is likely to vary with magnitude and radial distance to the cluster center. Thus we expect in general that the presence of binaries may make the single star LF different from the observed main-sequence LF, which includes stars on the binary sequence. 
Here we attempt to quantify this effect using our HST data of NGC 6752, in which we have previously discovered a large, centrally concentrated population of main-sequence binary stars in the core (Rubenstein & Bailyn 1997, hereafter Paper II). Data reduction and calibration procedures are discussed in Paper II. Here we discuss the implication of the binary sequence we discovered for the cluster LF. Determining The Luminosity Function and the Effects of Mass Segregation {#LF} ======================================================================= To disentangle the true LF from an uncorrected LF it is necessary to perform artificial star tests. In Paper II we describe the procedure we employed to digitally add nearly $10^7$ artificial stars to the images. We demonstrated that the artificial stars had photometric errors very similar to those of the real stars, and should therefore have the same recovery probabilities as real stars. The artificial stars were added with a flat LF which was similar to the observed LF. We calculated the recovery rate of artificial stars in a fashion similar to Bolte (1994), although we used magnitude bins of 0.5 mag. Briefly, the fraction of artificial stars recovered in the $i^{th}$ magnitude bin, $f_i$, is the number of stars recovered within that magnitude bin, divided by the number of stars added to the data in that magnitude bin. We calculated the incompleteness correction factor by inverting the recovery fraction, $1/f_i$. Since we have only split the data into “inner” and “outer” regions, we did not construct a two-dimensional completeness look-up table as did Bolte. Rather, we separately calculated the incompleteness corrections for each region. We then smoothed the results by performing a least-squares fit to the recovery fraction values as a function of magnitude. We did not fit the data beyond V=23 since at this point the completeness drops suddenly by more than a factor of 2 to below 50%.
Due to the large saturated regions near a few extremely bright stars, where no objects are recovered at all, even relatively bright stars are only about 75% complete. However, only the relative completeness from magnitude bin to magnitude bin is relevant to a discussion of the LF of the cluster stars. To construct the LFs, we bin the stars into 0.5 mag bins. Each star receives a weight equal to the inverse of its recovery probability. The results, and our interpolation, are shown in Figure \[LFpic\]. We have scaled the LF of the inner regions by an arbitrary factor so that the inner and outer regions have the same values along the sub-giant branch and at the main-sequence turn-off (MSTO). In the inner region ($\sim 1 r_{core} \simeq 12\arcsec = 0.2$pc; Djorgovski 1993), the V-band LF is flat for 5 magnitudes below the MSTO, which is located at V$\sim16.5$. Beyond 5 magnitudes below the MSTO, the LF falls very rapidly. Both the plateau and the sudden drop can be attributed to the advanced dynamical state of this stellar population. In particular, mass segregation will eject the low mass stars from the cluster center and force the more massive objects from the outer parts of the cluster in towards the center (see review by Heggie & Meylan 1997). The “outer” region retains more of its low mass objects, in that it has a flat LF 2 magnitudes further down from the MSTO than the inner region. Converting from luminosity to mass using the Yale Isochrones (Chaboyer et al. 1995), it is clear that low and moderate mass objects are strongly depleted relative to a Salpeter IMF. Indeed, the inner region shows an inverted mass function beyond 5 magnitudes below the turnoff. This result is statistically significant, but caution is warranted because the completeness level has dropped by about a factor of three.
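The completeness bookkeeping described above can be sketched in a few lines (a schematic reimplementation of ours, not the authors' code; the bin edges are illustrative, and we use the raw per-bin fractions in place of the least-squares smoothing):

```python
def recovery_fractions(added, recovered, lo=16.0, hi=23.0, width=0.5):
    """Fraction f_i of artificial stars recovered per magnitude bin
    (bins of `width` mag between `lo` and `hi`)."""
    nbins = int(round((hi - lo) / width))
    n_add = [0] * nbins
    n_rec = [0] * nbins
    for v in added:
        i = int((v - lo) / width)
        if 0 <= i < nbins:
            n_add[i] += 1
    for v in recovered:
        i = int((v - lo) / width)
        if 0 <= i < nbins:
            n_rec[i] += 1
    return [r / a if a else 0.0 for r, a in zip(n_rec, n_add)]

def corrected_lf(real_mags, fractions, lo=16.0, width=0.5):
    """Luminosity function with each real star weighted by 1/f_i,
    the incompleteness correction of its magnitude bin."""
    lf = [0.0] * len(fractions)
    for v in real_mags:
        i = int((v - lo) / width)
        if 0 <= i < len(fractions) and fractions[i] > 0:
            lf[i] += 1.0 / fractions[i]
    return lf
```

Only the bin-to-bin ratios of $f_i$ matter for the shape of the LF, which is why a uniform 75% ceiling from saturated regions is harmless.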
Binary Fraction as a Function of Luminosity {#binseg} =========================================== In general, one would expect that mass segregation in a GC core would result in a larger binary fraction on the lower main sequence than near the turnoff, because low mass main sequence stars would be preferentially ejected from the core. To test this hypothesis, we checked to see if the color distribution redward of the main-sequence ridge-line (MSRL, as defined in Paper II) was a function of magnitude. We split the magnitude interval $16.5\leq$V$\leq 19.0$ into two equal portions, $16.5\leq$V$\leq 17.75$ (hereafter the “bright” stars) and $17.75 \leq$ V $\leq 19.0$ (hereafter the “dim” stars). Then we performed an analysis identical to the one described in § 3.2 of Paper II. Briefly, we performed Monte Carlo experiments using the photometric results of both the real stars and the artificial stars. We calculated the difference in color, $\Delta$C, between the MSRL and each star (both real and artificial). We then determined a parameter $Y$ for each real star, equal to the fraction of artificial stars of similar magnitude and crowding which have $\Delta$C smaller than that of the real star. If the real stars and artificial stars are drawn from the same input distribution, the values of $Y$ should be evenly distributed from zero to unity. The fact that they were not so distributed demonstrated the need for an underlying population of binary stars (see Paper II for more details). We found that the distribution of $Y$ values was significantly further from uniform among the dim stars than among the bright stars (Figure 2), indicating a greater fraction of binary stars in the dim group, as expected. The formal probability that the two groups of stars were drawn from a population having the same input distribution was $10^{-7}$.
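The $Y$ statistic lends itself to a compact sketch. Here the matching in magnitude and crowding is ignored and the distributions are toy Gaussians; all names and numbers are illustrative, not the Paper II values:

```python
import numpy as np

def y_statistic(delta_c_real, delta_c_art):
    """Y for each real star: the fraction of (matched) artificial stars whose
    color offset DeltaC from the ridge line is smaller than the real star's."""
    art = np.sort(np.asarray(delta_c_art))
    return np.searchsorted(art, delta_c_real, side='left') / art.size

rng = np.random.default_rng(1)
art = rng.normal(0.0, 0.02, 50_000)        # photometric scatter only
singles = rng.normal(0.0, 0.02, 900)       # real single stars: same distribution
binaries = rng.normal(0.05, 0.02, 100)     # binaries sit redward of the MSRL
y = y_statistic(np.concatenate([singles, binaries]), art)
# Singles give Y roughly uniform on [0, 1]; the binaries pile up near Y = 1,
# pulling the combined distribution away from uniformity.
```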
To quantify the variation of binary fraction with magnitude, one would ideally determine the absolute binary fraction among both the bright and dim star populations. Unfortunately, splitting the stars into two groups degrades the statistical significance of the results. Specifically, the 3$\sigma$ limits of the binary fraction determined in the manner of Paper II are poorly constrained: 4%–50% for the bright stars and 18%–42% for the dim stars. Fortunately, it is possible to obtain statistically significant results by calculating the [*difference*]{} in binary fraction between the bright and faint stars. To perform this differential analysis, we modified the Monte Carlo procedure discussed in Paper II for determining the binary fraction. In this case, we added light to some of the stars in the bright group to simulate the effect of additional binaries. We then compared the $Y$ distribution of this altered bright star population with that of the dim stars; the fraction of bright stars which had to have light added for the two distributions to be comparable is a measure of the difference in the binary star population of the two groups. In carrying out this procedure, we had to specify the ratio of brightness of the two stars in our fake binary systems. Since the distribution of binary mass ratios and luminosity ratios is unknown, we used the equation $V_2=\frac{V_1}{R^\xi},$ where $R$ is a random number between 0 and 1. This relation, while convenient to work with, is not meant to accurately model what is, after all, an unknown distribution. The free parameter $\xi$ determines the luminosity ratio of the binary distribution: $\xi=0$ corresponds to the case where all binaries have components with equal luminosity, while larger values of $\xi$ result in distributions increasingly weighted toward smaller luminosity ratios (and thus smaller mass ratios). Figure \[bey\] shows the results of these calculations.
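The fake-binary prescription $V_2=V_1/R^{\xi}$ combines the two components in flux, not in magnitude; a sketch of one realization (function names are our own):

```python
import numpy as np

def add_companions(v1, xi, rng):
    """Simulate binaries: draw R uniform in (0, 1), assign a companion
    magnitude V2 = V1 / R**xi, and co-add the two components in flux."""
    r = rng.uniform(0.0, 1.0, size=np.shape(v1))
    v2 = v1 / r**xi                      # xi = 0: equal-luminosity companions
    flux = 10.0**(-0.4 * np.asarray(v1)) + 10.0**(-0.4 * v2)
    return -2.5 * np.log10(flux)

rng = np.random.default_rng(2)
v1 = np.full(1000, 17.0)
v_equal = add_companions(v1, xi=0.0, rng=rng)   # every star brightens by 0.753 mag
v_skewed = add_companions(v1, xi=2.0, rng=rng)  # mostly very faint companions
```

For $\xi=0$ every system brightens by exactly $2.5\log_{10}2 \approx 0.753$ mag; for larger $\xi$ most companions contribute negligible light and the shift is small.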
We found that for $\xi=0$, $\sim 5$% of the bright stars must have binary companions added to match the dim stars’ distribution. For the physically more realistic cases $\xi=$1 or 2, $\sim 10$% of the bright stars must “become binaries” to match the distribution of the dim stars. Correcting the Luminosity Function for Binaries {#binlf} =============================================== The relative LF of single stars would be minimally altered if the binary fraction were constant at all magnitudes. If that were the case, binaries would make up the same fraction of every LF bin, and the underlying single star LF would not be masked. However, in NGC 6752 we now know that the binary fraction (BF) does change with magnitude. Therefore, the single star LF [*is*]{} altered by the binary population and cannot be observed unless we first account for the binaries. In this section we will give an example of how to perform this correction. This calculation is not intended to be definitive, but rather to demonstrate the potential size of the effect on the LF. Since the binary frequency changes with magnitude, but is only defined in two magnitude ranges, we must make assumptions about the way the binary population changes. To minimize the number of free parameters, we will assume that the BF varies linearly with magnitude. To determine the BF in each magnitude bin, we combine the absolute binary fraction (from the results of Paper II) with a simple, linear extension of the magnitude dependence of the BF (found above). The average magnitude of the stars analyzed in Paper II, in the interval from V=16.5 to V=19.0, is 17.75. We assume that at this representative magnitude, the binary fraction is the mean of the $3\sigma$ limits derived in Paper II, i.e. $(0.15+0.38)/2=0.265$. We determined above that the binary fraction increases 10% from the bright stars to the dim stars.
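Putting these numbers into the linear model gives the following sketch (anchor point and slope taken from the text; this is the crude extrapolation, not a fit):

```python
def binary_fraction(v):
    """Linear BF(V): anchored at BF(17.75) = 0.265, the mean of the Paper II
    3-sigma limits, rising 10% from the bright-group midpoint (V = 17.125)
    to the dim-group midpoint (V = 18.375), i.e. 0.08 per magnitude."""
    slope = 0.10 / (18.375 - 17.125)     # = 0.08 per magnitude
    return 0.265 + slope * (v - 17.75)

def corrected_count(n_c, v):
    """Binary-corrected star count N_b = N_c * (1 - BF) for a bin centered at V."""
    return n_c * (1.0 - binary_fraction(v))
```

With these numbers BF(17.125) = 0.215, each 0.5 mag fainter adds 4%, and BF(21) is just over 50%, reproducing the values quoted in the text.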
Using the midpoints of those groups’ magnitudes we determine that at V=17.125, the binary fraction is $0.215$ and that every 0.5 mag fainter on the CMD corresponds to an increase of 4% in the binary fraction. We then construct a new LF in which the number of stars in each magnitude bin is reduced by an amount equal to the binary fraction. In other words, for a given magnitude bin the “binary fraction corrected star count” is ${\rm N}_b = {\rm N}_c\,(1-{\rm BF})$, where N$_c$ is the completeness corrected star count and BF is the binary fraction in that magnitude bin. In Figure \[lfnobinaries\] we plot the resulting V-band LF from the inner region, along with the version not corrected by the removal of binaries. Not surprisingly, the LF is even more depressed at faint magnitudes when the effect of binaries is considered. The inversion noticed at fainter magnitudes in the LF of the core, plotted in Figure \[LFpic\], is now present virtually all the way to the turn-off region. This calculation shows that neglecting binaries will lead to qualitative errors in the derived LF. However, the assumptions required about changes in binary fraction with magnitude mean that these particular results may not be quantitatively reliable. In Paper II we speculated that at faint magnitudes the MSRL might be shifted significantly to the red due to a preponderance of binaries at the faint end dominating over low luminosity single stars. The results obtained here support an interpretation that the observed ridge-line is not the [*main-sequence*]{} ridge-line below about 3.5 magnitudes below the MSTO, but rather, at fainter and fainter magnitudes, it is increasingly a [*binary*]{} ridge-line. Under our crude assumptions, the BF at V=21 is over 50%. Ground-based studies by Da Costa (1982) and Richer et al. (1991) found that the LF in NGC 6752 away from the core rises all the way to the faint object cutoffs at $m_v=22.5$ and $m_v=23.5$, respectively. Recently, Shara et al.
(1995) and Ferraro et al. (1997) have shown from HST data that the LF flattens closer to the core. Both our inner LF uncorrected for binaries and our outer LF are in general agreement with the flat LF found in the core by Shara et al. (1995). However, the inverted single star LF suggested for the core by the binary correction described here implies a greater degree of dynamical evolution than does the previous work. Conclusions =========== Our analysis of the luminosity function in the core of this cluster indicates that the LF is flat for about 5 magnitudes down from the main-sequence turn-off and falls beyond that point; below this point the mass function is inverted. However, this LF does not represent the LF of single main-sequence stars because there are more binaries at fainter magnitudes, presumably due to mass segregation. We find that the population of binaries increases by about 8% per magnitude over the small interval we were able to test. When we extrapolate this trend to fainter magnitudes, which may not be justified, we find that for single stars there is evidence of an inverted mass function nearly all the way up to the MSTO. Another implication of this extrapolation is that below about 3.5 magnitudes below the MSTO the observed ridge-line is dominated by binaries to the extent that it is significantly different from the single star ridge-line, and should therefore not be used for isochrone fitting. Future studies of the stellar populations of GC cores must account for the effect of binaries. Failure to correct for the binaries on the lower main sequence will have the effect of overestimating the number of low mass stars. Since the binaries are located preferentially in the core it is unlikely that they will alter the results in the outer regions of clusters. Acknowledgments =============== EPR would like to thank Peter Stetson, Ken Janes and Jim Heasley for making newer versions of DAOFIND and SPS available.
CDB is grateful for a National Young Investigator award from the NSF. We thank Adrienne Cool, Pierre Demarque, Richard Larson, Mario Mateo, Jerry Orosz & Alison Sills for comments and suggestions. Sukyung Yi provided detailed instructions on how to transform Yale Isochrones to HST WFPC2 filters (detailed in Yi, Demarque & Oemler 1995), and Alison Sills carried out this transformation. Mary-Katherine McGovern assisted with the artificial star tests. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. This work has been supported by NASA through LTSA grants NAGW-2469 & NAG5-6404 and grant number HST-GO-5318 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Bolte, M. 1994, , 431, 223 Chaboyer, B., Demarque, P., Guenther, D.B., Pinsonneault, M.H. & Pinsonneault, L.L. 1995, in The Formation of the Milky Way, eds. E.J. Alfaro and G. Tenorio-Tagle (Cambridge: Cambridge U.P.), p. 289 Da Costa, G. 1982, , 87, 990 Djorgovski, S. 1993, in [*Structure and Dynamics of Globular Clusters*]{}, (ASP Conf. Ser. Vol. 50), eds. S. Djorgovski & G. Meylan, p. 373 Ferraro, F.R., Carretta, E., Bragaglia, A., Renzini, A. & Ortolani, S. 1997, , 286, 1012 Heggie, D.C. & Meylan, G. 1997, , 8, 1 Richer, H.B., Fahlman, G., Buonanno, R., Fusi Pecci, F., Searle, L. & Thompson, I.B. 1991, , 381, 147 Rubenstein, E.P. & Bailyn, C.D. 1997, , 474, 701 (Paper II) Shara, M.M., Drissen, L., Bergeron, L.E. & Paresce, F. 1995, , 441, 617 Yi, S., Demarque, P. & Oemler Jr., A. 1995, , 107, 273
--- abstract: 'We extend the recent proposal of hidden conformal symmetry to the self-dual warped AdS$_3$ black holes in topological massive gravity. It is shown that the wave equation of a massive scalar field with sufficiently small angular momentum can be reproduced by the SL(2, R) quadratic Casimir operator. Due to the periodic identification in the $\phi$ direction, it is found that only the left section of the hidden conformal symmetry is broken to U(1), while the right section is unbroken, which gives only the left temperature of the dual CFT. As a check of the dual CFT conjecture for the self-dual warped AdS$_3$ black hole, we further compute the Bekenstein-Hawking entropy, the absorption cross section, and the quasinormal modes of scalar field perturbation, and show that these take just the forms predicted by the dual CFT.' author: - Ran Li - 'Ming-Fan Li' - 'Ji-Rong Ren' title: ' Hidden Conformal Symmetry of Self-Dual Warped AdS$_3$ Black Holes in Topological Massive Gravity ' --- [^1] [^2] [^3] Introduction ============ Topological massive gravity (TMG) is three-dimensional Einstein gravity supplemented by a gravitational Chern-Simons correction and a negative cosmological constant [@TMG1; @TMG2]. The well-known spacelike warped AdS$_3$ black hole [@warpedads] (previously obtained in [@clement]), which is a vacuum solution of topological massive gravity, is conjectured to be dual to a two dimensional conformal field theory (CFT) with non-zero left and right central charges [@TMGcentralcharge]. The spacelike warped AdS$_3$ black hole is a quotient of warped AdS$_3$ spacetime, just as the BTZ black hole is a quotient of AdS$_3$ spacetime. This leads to the breaking of the SL(2, R)$\times$SL(2, R) isometry of AdS$_3$ to the SL(2, R)$\times$U(1) isometry of the warped AdS$_3$ black hole.
It is shown in [@hiddenwarped] that, in a certain low energy limit, the wave equation of the massive scalar field in the background of the spacelike warped AdS$_3$ black hole can be written as the Casimir operator of an SL(2, R)$_L\times$SL(2, R)$_R$ Lie algebra, which uncovers the hidden SL(2, R)$\times$SL(2, R) symmetry of the wave equation of the scalar field. Recently, a new class of solutions of three dimensional topological massive gravity, named self-dual warped AdS$_3$ black holes, was proposed by Chen et al. in [@chenselfdual]. It is conjectured that the self-dual warped AdS$_3$ black hole is dual to a chiral CFT with only a nonvanishing left central charge, which is very different from the spacelike warped AdS$_3$ black hole. The self-dual warped AdS$_3$ black hole is locally equivalent to spacelike warped AdS$_3$ spacetime via a coordinate transformation. The isometry group is just U(1)$_L\times$SL(2,R)$_R$, similar to the warped AdS$_3$ black hole. Under a consistent boundary condition, the U(1)$_L$ isometry is enhanced to a Virasoro algebra with a nonvanishing left central charge, while the SL(2,R)$_R$ isometry becomes trivial with a vanishing right central charge, which is similar to the case of the extremal Kerr/CFT correspondence [@GTSS; @HMNS]. This suggests a novel example of warped AdS/CFT duality. In this paper, motivated by the recently proposed hidden conformal symmetry of the wave equation of a scalar field propagating in the background of a general rotating black hole [@Castro], we consider the case of self-dual warped AdS$_3$ black holes in TMG. It is shown that the wave equation of a massive scalar field propagating in the background of the self-dual warped AdS$_3$ black hole can be rewritten in the form of the Casimir operator of the SL(2, R)$_L\times$SL(2, R)$_R$ Lie algebra.
Unlike the higher dimensional black holes, where the near-horizon limit must be taken to match the wave equation with the Casimir operator, in the present case only the condition of small angular momentum of the scalar field is imposed, which suggests that the hidden conformal symmetry is valid for scalar fields with arbitrary energy. So we have uncovered the hidden SL(2, R)$_L\times$SL(2, R)$_R$ symmetry of the wave equation of a massive scalar field in the self-dual warped AdS$_3$ black hole. Then, we show that, due to the periodic identification in the $\phi$ direction, only one copy of the hidden SL(2, R) symmetry is broken to U(1), while the other copy is unbroken. This gives only the left temperature $T_L$ of the dual CFT, while the right temperature $T_R$ cannot be read off from this approach. This point is also different from the higher dimensional black holes [@Castro; @Krishnan; @Chensun; @Wang; @chenlong; @ranli; @chendeyou; @becker; @chenlong1; @chendeyou1; @chenhuang; @chenhuang1; @addref]. Although the right temperature cannot be directly read off from the periodic identification in the $\phi$ direction, one can still conjecture that the self-dual warped AdS$_3$ black hole is holographically dual to a two dimensional CFT with the left temperature $T_L=\frac{\alpha}{2\pi}$ and the right temperature $T_R=\frac{x_+-x_-}{4\pi}$, which exactly matches the warped AdS/CFT correspondence suggested in [@chenselfdual]. As a check of this conjecture, we also show that the entropy of the dual conformal field theory given by the Cardy formula matches exactly the Bekenstein-Hawking entropy of the self-dual warped AdS$_3$ black hole. Furthermore, the absorption cross section of scalar field perturbation calculated from the gravity side is in perfect agreement with that of a finite temperature 2D CFT. At last, we present an algebraic calculation of quasinormal modes for scalar field perturbation, first proposed by Sachs et al. in [@BTZTMG].
It is shown that the quasinormal modes coincide with the poles in the retarded Green’s function obtained in [@chenselfdual], which is a prediction of the AdS/CFT duality. This paper is organized as follows. In section II, we give a brief review of the self-dual warped AdS$_3$ black hole in topological massive gravity. In section III, we study the hidden conformal symmetry of this black hole by analysing the wave equation of a massive scalar field. In section IV, we give some interpretations of the dual conformal field theory description of the self-dual warped AdS$_3$ black hole by computing the entropy, absorption cross section and quasinormal modes of the scalar field and comparing the results from both the gravity and CFT sides. The last section is devoted to discussion and conclusion. Self-dual warped AdS$_3$ black hole =================================== In this section, we will give a brief review of the self-dual warped AdS$_3$ black hole in topological massive gravity. The action of topological massive gravity with a negative cosmological constant is given by $$\begin{aligned} I_{TMG}&=&\frac{1}{16\pi G}\int_\mathcal{M} d^3 x\sqrt{-g}\left(R+\frac{2}{l^2}\right) \nonumber\\&& +\frac{l}{96\pi G\nu}\int_\mathcal{M} d^3 x\sqrt{-g}\epsilon^{\lambda\mu\nu} \Gamma^{\alpha}_{\lambda\sigma}\left(\partial_{\mu}\Gamma^{\sigma}_{\alpha\nu} +\frac{2}{3}\Gamma^{\sigma}_{\mu\tau}\Gamma^{\tau}_{\nu\alpha}\right)\;. \end{aligned}$$ Varying the above action with respect to the metric yields the equation of motion, $$\begin{aligned} G_{\mu\nu}-\frac{1}{l^2}g_{\mu\nu}+\frac{l}{3\nu}C_{\mu\nu}=0\;, \end{aligned}$$ where $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}$ is the Einstein tensor and $C_{\mu\nu}$ is the Cotton tensor $$\begin{aligned} C_{\mu\nu}=\epsilon_{\mu}^{\;\;\alpha\beta}\nabla_{\alpha} \left(R_{\beta\nu}-\frac{1}{4}R g_{\beta\nu}\right)\;.
\end{aligned}$$ Recently, a new class of solutions of topological massive gravity, named the self-dual warped AdS$_3$ black hole, was investigated by Chen et al. in [@chenselfdual]. The metric is given by $$\begin{aligned} ds^2&=&\frac{1}{\nu^2+3}\left(-\left(x-x_+\right)\left(x-x_-\right)d\tau^2 +\frac{1}{\left(x-x_+\right)\left(x-x_-\right)}dx^2\right.\nonumber\\ &&+\left.\frac{4\nu^2}{\nu^2+3}\left(\alpha d\phi+ \frac{1}{2}\left(2x-x_+-x_-\right)d\tau\right)^2 \right)\;, \end{aligned}$$ where $x_+$ and $x_-$ are the locations of the outer and inner horizons, respectively, and we have set $l=1$ for simplicity. The mass $M$ and angular momentum $J$ of this black hole are given by $$\begin{aligned} M=0\;,\;\;\;J=\frac{(\alpha^2-1)\nu}{6G(\nu^2+3)}\;. \end{aligned}$$ The Hawking temperature $T_H$, angular velocity of the event horizon $\Omega_H$ and the Bekenstein-Hawking entropy $S_{BH}$ of this solution are respectively given by $$\begin{aligned} T_H&=&\frac{x_+-x_-}{4\pi}\;,\nonumber\\ \Omega_H&=&-\frac{x_+-x_-}{2\alpha}\;,\nonumber\\ S_{BH}&=&\frac{2\pi\alpha\nu}{3G(\nu^2+3)}\;. \end{aligned}$$ This solution is asymptotic to the spacelike warped AdS$_3$ spacetime. It is shown in [@chenselfdual] that the self-dual warped AdS$_3$ black hole is locally equivalent to spacelike warped AdS$_3$ spacetime.
Under the coordinate transformation $$\begin{aligned} \label{eq7} v&=&\tan^{-1}\left[\frac{2\sqrt{(x-x_+)(x-x_-)}}{2x-x_+-x_-} \sinh\left(\frac{x_+-x_-}{2}\tau\right)\right]\;,\nonumber\\ \sigma&=&\sinh^{-1}\left[\frac{2\sqrt{(x-x_+)(x-x_-)}}{2x-x_+-x_-} \cosh\left(\frac{x_+-x_-}{2}\tau\right)\right]\;,\nonumber\\ u&=&\alpha\phi+\tan^{-1}\left[\frac{2x-x_+-x_-}{x_+-x_-} \coth\left(\frac{x_+-x_-}{2}\tau\right)\right]\;, \end{aligned}$$ the metric of the self-dual warped AdS$_3$ black hole solution can be transformed to the metric of spacelike warped AdS$_3$ spacetime $$\begin{aligned} ds^2=\frac{1}{\nu^2+3}\left(-\cosh^2\sigma dv^2+d\sigma^2 +\frac{4\nu^2}{\nu^2+3}(du+\sinh\sigma dv)^2\right)\;. \end{aligned}$$ As will be shown, this coordinate transformation is precisely the one needed to uncover the hidden conformal symmetry. The isometry group of this solution is U(1)$_L\times$SL(2,R)$_R$, which is generated by the Killing vectors $$\begin{aligned} J_2=2\partial_u\;, \end{aligned}$$ and $$\begin{aligned} \tilde{J}_1&=&2\sin v\tanh\sigma\partial_v -2\cos v\partial_\sigma+\frac{2\sin v}{\cosh\sigma}\partial_u\;,\nonumber\\ \tilde{J}_2&=&-2\cos v\tanh\sigma\partial_v-2\sin v\partial_\sigma -\frac{2\cos v}{\cosh\sigma}\partial_u\;,\nonumber\\ \tilde{J}_0&=&2\partial_v\;. \end{aligned}$$ It is also shown in [@chenselfdual] that, under a consistent boundary condition, the U(1)$_L$ isometry is enhanced to a Virasoro algebra with the central charge $$\begin{aligned} \label{eq11} c_L=\frac{4\nu}{G(\nu^2+3)}\;, \end{aligned}$$ while the SL(2, R)$_R$ isometry becomes trivial with the vanishing central charge $c_R=0$, which is similar to the case of the extremal Kerr/CFT duality [@GTSS; @HMNS]. The entropy of the self-dual warped AdS$_3$ black hole can be reproduced by the Cardy formula. So it is conjectured that the self-dual warped AdS$_3$ black hole is holographically dual to a two dimensional chiral conformal field theory with nonvanishing left central charge.
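Both the Frolov-Thorne temperatures quoted in the introduction, $T_L=\alpha/2\pi$ and $T_R=(x_+-x_-)/4\pi$, and the Cardy-formula match reduce to short algebraic identities that can be checked symbolically. In the sketch below we write the central charge with its factor of $1/G$ restored (with $l=1$), since that is what the entropy match requires:

```python
import sympy as sp

nu, alpha, G, xp, xm, w, k = sp.symbols('nu alpha G x_p x_m omega k', positive=True)

# Thermodynamic data of the solution (l = 1); c_L carries an explicit 1/G
T_H = (xp - xm)/(4*sp.pi)
Omega_H = -(xp - xm)/(2*alpha)
T_L, T_R = alpha/(2*sp.pi), (xp - xm)/(4*sp.pi)
c_L, c_R = 4*nu/(G*(nu**2 + 3)), sp.Integer(0)
S_BH = 2*sp.pi*alpha*nu/(3*G*(nu**2 + 3))

# Frolov-Thorne weighting: (omega - k*Omega_H)/T_H = omega_L/T_L + omega_R/T_R
# with the left/right charges omega_L = k, omega_R = omega
assert sp.simplify((w - k*Omega_H)/T_H - (k/T_L + w/T_R)) == 0

# Cardy formula reproduces the Bekenstein-Hawking entropy
S_CFT = sp.pi**2/3*(c_L*T_L + c_R*T_R)
assert sp.simplify(S_CFT - S_BH) == 0
```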
Hidden conformal symmetry ========================= In this section, we study the hidden conformal symmetry by analyzing a massive scalar field propagating in the background of the self-dual warped AdS$_3$ black hole. First, it is found that the scalar field equation can be exactly solved by the hypergeometric function. Then, by introducing the SL(2, R)$_L\times$SL(2, R)$_R$ generators and using the coordinate transformation (\[eq7\]), we show the wave equation can be reproduced by the SL(2, R) Casimir operator. Scalar field perturbation ------------------------- Let us consider a scalar field $\Phi$ with mass $m$ in the background of the self-dual warped AdS$_3$ black hole; its wave equation is the Klein-Gordon equation $$\begin{aligned} \left(\frac{1}{\sqrt{-g}}\partial_\mu\left (\sqrt{-g}g^{\mu\nu}\partial_\nu\right)-m^2\right)\Phi=0\;. \end{aligned}$$ The scalar field wave function $\Phi(\tau, x, \phi)$ can be expanded in eigenmodes as $$\begin{aligned} \label{eq13} \Phi=e^{-i\omega\tau+ik\phi}R(x)\;, \end{aligned}$$ where $\omega$ and $k$ are the quantum numbers. Then the radial wave equation can be written as $$\begin{aligned} \label{eq14} \left[\partial_x\left((x-x_+)(x-x_-)\partial_x\right) +\frac{\left(\omega+\frac{x_+-x_-}{2\alpha}k\right)^2}{(x-x_+)(x_+-x_-)} -\frac{\left(\omega-\frac{x_+-x_-}{2\alpha}k\right)^2}{(x-x_-)(x_+-x_-)}\right]R(x)&& \nonumber\\ =\left(-\frac{3(\nu^2-1)}{4\nu^2}\frac{k^2}{\alpha^2}+\frac{1}{\nu^2+3}m^2\right) R(x)&&. \end{aligned}$$ This radial wave equation can be exactly solved by the hypergeometric function. In order to solve the radial equation, it is convenient to introduce the variable $$\begin{aligned} z=\frac{x-x_+}{x-x_-}\;.
\end{aligned}$$ Then, the radial equation can be rewritten in the form of the hypergeometric equation $$\begin{aligned} z(1-z)\frac{d^2 R(z)}{dz^2}+ (1-z)\frac{dR(z)}{dz}+\left( \frac{A}{z}+B+\frac{C}{1-z}\right)R(z)=0\;, \end{aligned}$$ with the parameters $$\begin{aligned} A&=&\left(\frac{k}{2\alpha}+\frac{\omega}{x_+-x_-}\right)^2 \;,\nonumber\\ B&=&-\left(\frac{k}{2\alpha}-\frac{\omega}{x_+-x_-}\right)^2 \;,\nonumber\\ C&=&\frac{3(\nu^2-1)}{4\nu^2}\frac{k^2}{\alpha^2}-\frac{1}{\nu^2+3}m^2\;. \end{aligned}$$ For later convenience, we consider the solution with the ingoing boundary condition at the horizon, which is given by the hypergeometric function $$\begin{aligned} R(z)=z^{\hat{\alpha}}(1-z)^{\hat{\beta}}F(a,b,c,z)\;, \end{aligned}$$ where (hats distinguish these exponents from the black hole parameter $\alpha$) $$\begin{aligned} \hat{\alpha}=-i\sqrt{A}\;,\;\;\; \hat{\beta}=\frac{1}{2}-\sqrt{\frac{1}{4}-C}\;, \end{aligned}$$ and $$\begin{aligned} c&=&2\hat{\alpha}+1\;,\nonumber\\ a&=&\hat{\alpha}+\hat{\beta}+i\sqrt{-B}\;,\nonumber\\ b&=&\hat{\alpha}+\hat{\beta}-i\sqrt{-B}\;. \end{aligned}$$ It should be noted that, generally, the wave equation cannot be analytically solved and the solution must be obtained by matching solutions in an overlap region between the near-horizon and asymptotic regions. But, in the present case, we have shown that the radial equation can be exactly solved by hypergeometric functions. As hypergeometric functions transform in the representations of SL(2,R), this suggests the existence of a hidden conformal symmetry. In the next subsection, we will try to explore this hidden conformal symmetry. SL(2, R)$_L\times$SL(2, R)$_R$ ------------------------------ Now, we will uncover the hidden conformal symmetry by showing that the radial equation can also be obtained by using the SL(2, R) Casimir operator.
Let us define vector fields $$\begin{aligned} H_0&=&-\frac{i}{2}\tilde{J}_2\;,\nonumber\\ H_1&=&\frac{i}{2}(\tilde{J}_0+\tilde{J}_1)\;,\nonumber\\ H_{-1}&=&\frac{i}{2}(\tilde{J}_0-\tilde{J}_1)\;, \end{aligned}$$ and $$\begin{aligned} \tilde{H}_0&=&\frac{i}{2}J_2\;,\nonumber\\ \tilde{H}_1&=&\frac{1}{2}(J_1+J_0)\;,\nonumber\\ \tilde{H}_{-1}&=&\frac{1}{2}(J_1-J_0)\;, \end{aligned}$$ with $$\begin{aligned} J_1&=&-\frac{2\sinh u}{\cosh\sigma}\partial_v -2\cosh u\partial_\sigma+2\tanh\sigma\sinh u\partial_u\;,\nonumber\\ J_0&=&\frac{2\cosh u}{\cosh\sigma}\partial_v +2\sinh u\partial_\sigma-2\tanh\sigma\cosh u\partial_u\;. \end{aligned}$$ Note that $(J_1, J_2, J_0)$ and $(\tilde{J}_1, \tilde{J}_2, \tilde{J}_0)$ are the SL(2, R)$_L\times$SL(2, R)$_R$ Killing vectors of AdS$_3$ spacetime. The vector fields $(H_1, H_0, H_{-1})$ obey the SL(2, R) Lie algebra $$\begin{aligned} [H_0,H_{\pm 1}]=\pm iH_{\pm 1}\;,\;\;[H_{-1},H_1]=2iH_0\;, \end{aligned}$$ and similarly for $(\tilde{H}_1, \tilde{H}_0, \tilde{H}_{-1})$. According to the coordinates transformation (\[eq7\]), the SL(2, R) generators can be expressed in terms of the black hole coordinates $(\tau, x, \phi)$ as $$\begin{aligned} \label{eq25} H_0&=&\frac{i}{2\pi T_R}\partial_\tau\;,\nonumber\\ H_{-1}&=&ie^{-2\pi T_R\tau}\left[ \sqrt{(x-x_+)(x-x_-)}\;\partial_x -\frac{1}{\sqrt{(x-x_+)(x-x_-)}}\cdot\frac{T_R}{T_L}\partial_\phi\right. \nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \left.+\frac{\left(x-\frac{x_++x_-}{2}\right)}{\sqrt{(x-x_+)(x-x_-)}} \cdot\frac{1}{2\pi T_R}\partial_\tau \right]\;,\nonumber\\ H_{1}&=&ie^{2\pi T_R\tau}\left[ -\sqrt{(x-x_+)(x-x_-)}\;\partial_x -\frac{1}{\sqrt{(x-x_+)(x-x_-)}}\cdot\frac{T_R}{T_L}\partial_\phi\right. 
\nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\; \left.+\frac{\left(x-\frac{x_++x_-}{2}\right)}{\sqrt{(x-x_+)(x-x_-)}} \cdot\frac{1}{2\pi T_R}\partial_\tau \right]\;, \end{aligned}$$ and $$\begin{aligned} \label{eq26} \tilde{H}_0&=&\frac{i}{2\pi T_L}\partial_\phi\;,\nonumber\\ \tilde{H}_{-1}&=&ie^{-2\pi T_L\phi}\left[ \sqrt{(x-x_+)(x-x_-)}\;\partial_x +\frac{\left(x-\frac{x_++x_-}{2}\right)}{\sqrt{(x-x_+)(x-x_-)}} \cdot\frac{1}{2\pi T_L}\partial_\phi\right.\nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\; \left.-\frac{1}{\sqrt{(x-x_+)(x-x_-)}}\partial_\tau\right]\;, \nonumber\\ \tilde{H}_{1}&=&ie^{2\pi T_L\phi}\left[ -\sqrt{(x-x_+)(x-x_-)}\;\partial_x +\frac{\left(x-\frac{x_++x_-}{2}\right)}{\sqrt{(x-x_+)(x-x_-)}} \cdot\frac{1}{2\pi T_L}\partial_\phi\right.\nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\; \left.-\frac{1}{\sqrt{(x-x_+)(x-x_-)}}\partial_\tau\right]\;, \end{aligned}$$ where $T_L$ and $T_R$ are defined by $$\begin{aligned} \label{eq27} T_L=\frac{\alpha}{2\pi}\;,\;\;\; T_R=\frac{x_+-x_-}{4\pi}\;. \end{aligned}$$ The SL(2, R) quadratic Casimir operator is defined by $$\begin{aligned} \mathcal{H}^2=\mathcal{\tilde{H}}^2&=&-H_0^2+\frac{1}{2}(H_1H_{-1}+H_{-1}H_1)\;. \end{aligned}$$ In terms of the $(\tau, x, \phi)$ coordinates, the SL(2, R) quadratic Casimir operator becomes $$\begin{aligned} \mathcal{H}^2&=&\partial_x\left((x-x_+)(x-x_-)\partial_x\right) -\frac{x_+-x_-}{x-x_+}\left[\frac{1}{4\pi T_R}\partial_\tau -\frac{1}{4\pi T_L}\partial_\phi\right]^2\nonumber\\ &&+\frac{x_+-x_-}{x-x_-}\left[\frac{1}{4\pi T_R}\partial_\tau +\frac{1}{4\pi T_L}\partial_\phi\right]^2\;. \end{aligned}$$ For the case of $\nu=1$, the first term of right hand side of Eq.(\[eq14\]) is vanishing. 
It should be noted that, unlike the case of higher dimensional black holes [@Castro; @Krishnan; @Chensun; @Wang; @chenlong; @ranli; @chendeyou; @becker; @chenlong1; @chendeyou1; @chenhuang; @chenhuang1; @addref], where the near-region limit of the radial wave equation must be taken into account, in the present case no extra approximation is needed to match the wave equation of the scalar field with the Casimir operator. So for the case of $\nu=1$, the self-dual warped black holes exhibit the local SL(2, R)$_L\times$SL(2, R)$_R$ symmetry just like the BTZ black hole. We now consider the nontrivial case of $\nu^2>1$, for which these solutions are free of naked CTCs. An additional condition must then be imposed to match the wave equation with the Casimir operator. In order to neglect the first term on the right hand side of Eq.(\[eq14\]), we impose the condition that the angular momentum $k$ of the scalar field is sufficiently small, $$\begin{aligned} \label{eq29} \frac{3(\nu^2-1)}{4\nu^2}\frac{k^2}{\alpha^2} \ll 1\;. \end{aligned}$$ Then, we find that the wave equation of the massive scalar field with sufficiently small angular momentum $k$ can be rewritten in terms of the Casimir operator as $$\begin{aligned} \mathcal{H}^2\Phi=\mathcal{\tilde{H}}^2\Phi =\frac{1}{\nu^2+3}m^2 \Phi\;, \end{aligned}$$ and the conformal weights of the operator dual to the massive scalar field $\Phi$ are given by $$\begin{aligned} \label{eq32} (h_L,h_R)=\left( \frac{1}{2}+\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}}, \frac{1}{2}+\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}}\right)\;. \end{aligned}$$ So we have found that, similar to the case of higher dimensional black holes, the hidden SL(2, R)$_L\times$SL(2,R)$_R$ symmetry of the self-dual warped AdS$_3$ black hole is uncovered by investigating the wave equation of a scalar field in its background. Note that the hidden conformal symmetry is not derived from the conformal symmetry of the spacetime geometry itself.
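The SL(2, R) algebra claimed for $(H_1, H_0, H_{-1})$ can be verified directly from the $(v, \sigma, u)$ components of the generators; a symbolic sketch with sympy, storing each vector field as a component triple:

```python
import sympy as sp

v, s, u = sp.symbols('v sigma u', real=True)
coords = (v, s, u)

def bracket(X, Y):
    """Lie bracket [X, Y]^i = X^j d_j Y^i - Y^j d_j X^i of two vector fields
    given as component triples (a_v, a_sigma, a_u)."""
    return tuple(sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
                     for j in range(3)) for i in range(3))

def same(X, Y):
    return all(sp.simplify((a - b).rewrite(sp.exp)) == 0 for a, b in zip(X, Y))

# SL(2, R)_R Killing vectors in the (v, sigma, u) coordinates
J1t = ( 2*sp.sin(v)*sp.tanh(s), -2*sp.cos(v),  2*sp.sin(v)/sp.cosh(s))
J2t = (-2*sp.cos(v)*sp.tanh(s), -2*sp.sin(v), -2*sp.cos(v)/sp.cosh(s))
J0t = (sp.Integer(2), sp.Integer(0), sp.Integer(0))

H0  = tuple(-sp.I/2*c for c in J2t)
H1  = tuple( sp.I/2*(a + b) for a, b in zip(J0t, J1t))
Hm1 = tuple( sp.I/2*(a - b) for a, b in zip(J0t, J1t))

assert same(bracket(H0, H1),  tuple( sp.I*c for c in H1))   # [H0, H1]  =  i H1
assert same(bracket(H0, Hm1), tuple(-sp.I*c for c in Hm1))  # [H0, H-1] = -i H-1
assert same(bracket(Hm1, H1), tuple(2*sp.I*c for c in H0))  # [H-1, H1] = 2i H0
```

An identical check applied to $(J_1, J_2, J_0)$ verifies the other copy of the algebra.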
It is also interesting that the hidden conformal symmetry of the self-dual warped AdS$_3$ black hole is locally the isometry of AdS$_3$ spacetime, which means that scalar fields with sufficiently small angular momentum $k$ satisfying the condition (\[eq29\]) do not feel the warped property of the spacetime. For the spacelike warped AdS$_3$ black hole investigated in [@hiddenwarped], by contrast, this observation holds for scalar fields with sufficiently low energy. CFT interpretation ================== Temperature and entropy ----------------------- As pointed out in [@Castro], for the case of a higher dimensional black hole, the vector fields of the SL(2, R) generators are not globally defined. Because of the periodic identification in the $\phi$ direction, the hidden SL(2, R)$_L\times$SL(2,R)$_R$ symmetry is spontaneously broken to the U(1)$_L\times$U(1)$_R$ subgroup, which gives rise to the left and right temperatures of the dual CFT. The story is somewhat different in the present case. The generators of SL(2, R) presented in Eq.(\[eq26\]) are affected by the periodic identification in the $\phi$ direction, while those in Eq.(\[eq25\]) are not, because they are just the Killing vectors associated with the SL(2, R) isometry of this solution. This means that only one copy of the hidden conformal symmetry is broken to U(1), while the other copy is unbroken, which gives only the left temperature $T_L$ of the dual CFT. The right temperature $T_R$ cannot be read off from this approach; the periodic identification makes no contribution to it. This point can also be reached from the coordinate transformation (\[eq7\]), which indicates that the self-dual warped AdS$_3$ black hole cannot be obtained as a quotient of the warped AdS$_3$ vacuum. As discussed in [@chenselfdual], the left and right temperatures of the dual CFT can be defined with respect to the Frolov-Thorne vacuum [@frolov]. Consider the quantum field with eigenmodes of the asymptotic energy $\omega$ and angular momentum $k$.
After tracing over the region inside the horizon, the vacuum is a diagonal density matrix in the energy-angular momentum eigenbasis with a Boltzmann weighting factor $e^{-\frac{\omega-k\Omega}{T_H}}$. The left and right charges $\omega_L$, $\omega_R$ associated with $\partial_\phi$ and $\partial_t$ are $k$ and $\omega$, respectively. In terms of these variables, the Boltzmann factor is $$\begin{aligned} e^{-\frac{\omega-k\Omega}{T_H}}= e^{-\frac{\omega_L}{T_L}-\frac{\omega_R}{T_R}}\;, \end{aligned}$$ which gives the definition of the left and right temperatures in Eq.(\[eq27\]). So one can conjecture that the self-dual warped AdS$_3$ black hole is holographically dual to a two-dimensional CFT with the left temperature $T_L=\frac{\alpha}{2\pi}$ and the right temperature $T_R=\frac{x_+-x_-}{4\pi}$. As a check of this conjecture, we now want to calculate the microscopic entropy of the dual CFT and compare it with the Bekenstein-Hawking entropy of the self-dual warped AdS$_3$ black hole. By imposing consistent boundary conditions, Chen et al. [@chenselfdual] have calculated the central charges of the asymptotic symmetry group; the result is presented in Eq.(\[eq11\]). So the microscopic entropy of the dual conformal field theory can be calculated by using the Cardy formula $$\begin{aligned} S_{CFT}=\frac{\pi^2}{3}(c_LT_L+c_RT_R) =\frac{2\pi\alpha\nu}{3G(\nu^2+3)}=S_{BH}\;, \end{aligned}$$ which matches the Bekenstein-Hawking entropy of the self-dual warped AdS$_3$ black hole.

Absorption cross section
------------------------

In this subsection, we calculate the absorption probability for the scalar field perturbation and compare it with the result from the CFT side. For the spacelike warped AdS$_3$ black hole, this aspect has been investigated in [@kim] and [@wwyu].
Under the condition (\[eq29\]), the solution to the radial equation of scalar field perturbation with the ingoing boundary condition is explicitly given by $$\begin{aligned} R(x)&=&\left(\frac{x-x_+}{x-x_-}\right) ^{-i\left(\frac{k}{2\alpha}+\frac{\omega}{x_+-x_-}\right)} \left(\frac{x_+-x_-}{x-x_-}\right)^{\frac{1}{2}-\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}}}\nonumber\\ &&\times F\left(\frac{1}{2}-\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}} -i\frac{2\omega}{x_+-x_-},\frac{1}{2}-\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}} -i\frac{k}{\alpha},\right.\nonumber\\ &&\left.1-i\left(\frac{k}{\alpha}+\frac{2\omega}{x_+-x_-}\right),\;\; \frac{x-x_+}{x-x_-}\right)\;. \end{aligned}$$ At asymptotic infinity, the solution behaves as $$\begin{aligned} R(x\rightarrow\infty)\sim Ax^{-\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}}}\;, \end{aligned}$$ with $$\begin{aligned} A=\frac{\Gamma\left(1-i\left(\frac{k}{\alpha}+ \frac{2\omega}{x_+-x_-}\right)\right) \Gamma\left(2\sqrt{\frac{1}{4}+\frac{m^2}{\nu^2+3}}\right)} {\Gamma\left(\frac{1}{2}+\sqrt{\frac{1}{4} +\frac{m^2}{\nu^2+3}}-i\frac{2\omega}{x_+-x_-}\right) \Gamma\left(\frac{1}{2}+\sqrt{\frac{1}{4} +\frac{m^2}{\nu^2+3}}-i\frac{k}{\alpha}\right)}\;. \end{aligned}$$ The absorption cross section is then proportional to $$\begin{aligned} P_{\textrm{abs}}&\sim&\left|A\right|^{-2}\nonumber\\ &\sim&\sinh\left(\frac{\pi k}{\alpha}+ \frac{2\pi\omega}{x_+-x_-}\right) \left|\Gamma\left(\frac{1}{2}+\sqrt{\frac{1}{4} +\frac{m^2}{\nu^2+3}}-i\frac{2\omega}{x_+-x_-}\right) \right|^2\nonumber\\&&\times\left| \Gamma\left(\frac{1}{2}+\sqrt{\frac{1}{4} +\frac{m^2}{\nu^2+3}}-i\frac{k}{\alpha}\right) \right|^2\;. \end{aligned}$$ To compare it with the result from the CFT side, we need to find out the related parameters. 
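The passage from $|A|^{-2}$ to the $\sinh$ form above uses the standard identity $|\Gamma(1-iy)|^2=\pi y/\sinh(\pi y)$ with $y=\frac{k}{\alpha}+\frac{2\omega}{x_+-x_-}$. A short numerical check of that identity, using an inline Lanczos approximation for the complex Gamma function (the coefficients below are the standard $g=7$ set):

```python
import cmath, math

# Lanczos approximation (g = 7) for the complex Gamma function.
_c = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:  # reflection formula for the left half plane
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _c[0] + sum(_c[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t**(z + 0.5) * cmath.exp(-t) * x

# |Gamma(1 - i*y)|^2 = pi*y / sinh(pi*y): the identity behind the sinh factor
for y in (0.3, 1.0, 2.5):
    lhs = abs(cgamma(1 - 1j * y))**2
    rhs = math.pi * y / math.sinh(math.pi * y)
    assert abs(lhs - rhs) < 1e-8 * rhs
```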
From the first law of black hole thermodynamics $$\begin{aligned} \delta S_{BH}=\frac{\delta M-\Omega_{H}\delta J}{T_{H}}\;, \end{aligned}$$ one can calculate the conjugate charges via $$\begin{aligned} \delta S_{BH}=\frac{\delta E_L}{T_L} +\frac{\delta E_R}{T_R}\;. \end{aligned}$$ The solution is given by $$\begin{aligned} \delta E_L=\delta J\;,\;\;\delta E_R=\delta M\;. \end{aligned}$$ Identifying $\delta M=\omega$ and $\delta J=k$, one finds the left and right conjugate charges $$\begin{aligned} \omega_L\equiv\delta E_L=k\;,\;\;\omega_R\equiv\delta E_R=\omega\;, \end{aligned}$$ which coincide with the left and right charges given in the last subsection. Finally, the absorption cross section can be expressed as $$\begin{aligned} P_{\textrm{abs}}\sim T_L^{2h_L-1}T_R^{2h_R-1} \sinh\left(\frac{\omega_L}{2T_L}+\frac{\omega_R}{2T_R}\right) \left|\Gamma\left(h_L+i\frac{\omega_L}{2\pi T_L}\right)\right|^2 \left|\Gamma\left(h_R+i\frac{\omega_R}{2\pi T_R}\right)\right|^2\;, \end{aligned}$$ which precisely coincides with the absorption cross section of a finite temperature 2D CFT.

Quasinormal modes
-----------------

In this subsection, we compute the quasinormal modes of the scalar field perturbation by using the algebraic method first proposed by Sachs and Solodukhin in [@BTZTMG] and compare the results with those presented in [@ranlimode]. This method relies strongly on the observation of the hidden conformal symmetry in the last section. It has also been employed to investigate the quasinormal modes of vector and tensor perturbations by Chen et al. in [@chenbinmode]. Here, we consider the general case without imposing the small angular momentum condition (\[eq29\]). The radial equation of the scalar field perturbation can be written in terms of the SL(2, R) generators as $$\begin{aligned} \label{eq44} \left[\frac{1}{2}\left(H_1H_{-1}+H_{-1}H_1\right)-H_0^2 +\frac{3(\nu^2-1)}{4\nu^2}\tilde{H}_0^2 \right]\Phi=\frac{m^2}{\nu^2+3}\Phi\;.
\end{aligned}$$ First, we consider the chiral highest weight modes satisfying the condition $$\begin{aligned} H_1\Phi=0\;. \end{aligned}$$ Under the ansatz (\[eq13\]) for the scalar field, this condition implies the equation $$\begin{aligned} \frac{d}{dx}\ln R(x)=-\frac{i}{(x-x_+)(x-x_-)} \left(\frac{T_R}{T_L}k+\frac{[(x-x_+)+(x-x_-)]}{4\pi T_R}\omega \right)\;, \end{aligned}$$ which gives the solution $$\begin{aligned} R(x)=(x-x_+)^{-i\left(\frac{\omega}{4\pi T_R}+\frac{k}{4\pi T_L}\right)} (x-x_-)^{-i\left(\frac{\omega}{4\pi T_R}-\frac{k}{4\pi T_L}\right)}\;. \end{aligned}$$ Then the operator equation (\[eq44\]) can be transformed into an algebraic equation $$\begin{aligned} \left(\frac{\omega}{2\pi T_R}\right)^2 +i\left(\frac{\omega}{2\pi T_R}\right)-C=0\;. \end{aligned}$$ The solution of this equation gives the lowest quasinormal mode in the right-moving sector $$\begin{aligned} \frac{\omega}{2\pi T_R}=-i h'_R\;,\;\; h'_R=\frac{1}{2}+\sqrt{\frac{1}{4}-C}\;. \end{aligned}$$ It should be noted that this conformal weight is slightly different from that given by Eq.(\[eq32\]); in the small angular momentum limit, it recovers the conformal weight for the scalar field given by Eq.(\[eq32\]). Then the descendants of the chiral highest weight mode $\Phi$, $$\begin{aligned} \Phi^{(n)}=(H_{-1}\tilde{H}_{-1})^{n}\Phi\;, \end{aligned}$$ give the infinite tower of quasinormal modes $$\begin{aligned} \frac{\omega}{2\pi T_R}=-i(n+h'_R)\;. \end{aligned}$$ This is just the result obtained in [@ranlimode; @chenbinmode], and it coincides with the poles of the retarded Green’s function obtained in [@chenselfdual]. The left sector of quasinormal scalar modes cannot be obtained analogously: the solution constructed from the chiral highest weight condition $H_1\Phi=0$ falls off in time as well as at infinity, while for the other highest weight condition $\tilde{H}_1\Phi=0$ one cannot obtain a solution with the same property.
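The claimed root can be verified directly: writing $y=\omega/(2\pi T_R)$, one checks that $y=-ih'_R$ with $h'_R=\frac{1}{2}+\sqrt{\frac{1}{4}-C}$ solves $y^2+iy-C=0$. A short numerical sketch (the sample values of the Casimir eigenvalue $C$ are arbitrary):

```python
import cmath

# Check that omega/(2*pi*T_R) = -i*h' with h' = 1/2 + sqrt(1/4 - C)
# solves the algebraic equation y^2 + i*y - C = 0.
for C in (-2.0, 0.1, 0.2):
    h = 0.5 + cmath.sqrt(0.25 - C)
    y = -1j * h
    assert abs(y**2 + 1j * y - C) < 1e-12
```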
Conclusion
==========

We have investigated the hidden conformal symmetry of self-dual warped AdS$_3$ black holes in topological massive gravity. The wave equation of a massive scalar field propagating in this background with sufficiently small angular momentum can be rewritten in the form of the SL(2, R) Casimir operator. Interestingly, unlike the higher dimensional black holes, where the near-region limit has to be taken into account to match the wave equation with the Casimir operator, in the present case only the condition of small angular momentum of the scalar field is imposed, which suggests that the hidden conformal symmetry is valid for scalar fields of arbitrary energy. Although the right temperature cannot be directly read off from the periodic identification in the $\phi$ direction, one can still conjecture that the self-dual warped AdS$_3$ black hole is dual to a 2D CFT with nonzero left and right temperatures. As a check of this conjecture, we have also shown that the entropy of the dual conformal field theory given by the Cardy formula matches exactly the Bekenstein-Hawking entropy of the self-dual warped AdS$_3$ black hole. Furthermore, the absorption cross section of the scalar field perturbation calculated from the gravity side is in perfect agreement with that of a finite temperature 2D CFT. Finally, an algebraic calculation of the quasinormal modes of the scalar field perturbation is presented, and the correspondence between the quasinormal modes and the poles of the retarded Green’s function is found.

Acknowledgement {#acknowledgement .unnumbered}
===============

RL would like to thank Shi-Xiong Song and Lin-Yu Jia for helpful discussions. The work of JRR was supported by the Cuiying Programme of Lanzhou University (225000-582404) and the Fundamental Research Fund for Physics and Mathematics of Lanzhou University (LZULL200911).

[99]{} S. Deser, R. Jackiw and S. Templeton, *Topologically massive gauge theories*, Ann. Phys. **140**(1982)372. S. Deser, R. Jackiw and S.
Templeton, *Three-Dimensional Massive Gauge Theories*, Phys. Rev. Lett. **48**(1982)975. D. Anninos, W. Li, M. Padi, W. Song, and A. Strominger, *Warped $AdS_3$ Black Holes*, JHEP, **0903**(2009)130. K. A. Moussa, G. Clement and C. Leygnac, Class. Quantum Grav. **20**(2003)L277; A. Bouchareb and G. Clement, Class. Quantum Grav. **24**(2007)5581; K. A. Moussa, G. Clement, H. Guennoune and C. Leygnac, Phys. Rev. D **78**(2008)064065. G. Compere and S. Detournay, *Semi-classical Central Charge in Topologically Massive Gravity*, Class. Quant. Grav. **26**(2009)012001; G. Compere and S. Detournay, *Boundary Conditions for Spacelike and Timelike Warped $AdS_3$ Spaces in Topologically Massive Gravity*, JHEP, **0908**(2009)092; M. Blagojevic and B. Cvetkovic, *Asymptotic Structure of Topologically Massive Gravity in Spacelike Stretched AdS Sector*, JHEP, **0909**(2009)006. R. Fareghbal, *Hidden Conformal Symmetry of the Warped $AdS_3$ Black Hole*, arXiv:1006.4034 \[hep-th\]. B. Chen, G. Moutsopoulos and B. Ning, *Self-Dual Warped $AdS_3$ Black Holes*, arXiv:1005.4175 \[hep-th\]. M. Guica, T. Hartman, W. Song and A. Strominger, *The Kerr/CFT Correspondence*, Phys. Rev. D **80**(2009)124008. T. Hartman, K. Murata, T. Nishioka and A. Strominger, *CFT Duals for Extreme Black Holes*, JHEP, **0904**(2009)019. A. Castro, A. Maloney and A. Strominger, *Hidden Conformal Symmetry of the Kerr Black Hole*, arXiv:1004.0996 \[hep-th\]. C. Krishnan, *Hidden Conformal Symmetries of Five-Dimensional Black Holes*, arXiv:1004.3537 \[hep-th\]. C. M. Chen and J. R. Sun, *Hidden Conformal Symmetry of the Reissner-Nordstrom Black Holes*, arXiv:1004.3963 \[hep-th\]. Y. Q. Wang and Y. X. Liu, *Hidden Conformal Symmetry of the Kerr-Newman Black Hole*, arXiv:1004.4661 \[hep-th\]. B. Chen and J. Long, *Real-time Correlators and Hidden Conformal Symmetry in Kerr/CFT Correspondence*, arXiv:1004.5039 \[hep-th\]. R. Li, M.-F. Li and J.-R.
Ren, *Entropy of Kaluza-Klein Black Hole from Kerr/CFT Correspondence*, arXiv:1004.5335 \[hep-th\]. D. Chen, P. Wang and H. Wu, *Hidden Conformal Symmetry of Rotating Charged Black Holes*, arXiv:1005.1404 \[gr-qc\]. M. Becker, S. Cremonini and W. Schulgin, *Correlation Functions and Hidden Conformal Symmetry of Kerr Black Holes*, arXiv:1005.3571 \[hep-th\]. B. Chen and J. Long, *On Holographic Description of the Kerr-Newman-AdS-dS Black Holes*, arXiv:1006.0157 \[hep-th\]. H. Wang, D. Chen, B. Mu and H. Wu, *Hidden Conformal Symmetry of the Einstein-Maxwell-Dilaton-Axion Black Hole*, arXiv:1006.0439 \[gr-qc\]. C.-M. Chen, Y.-M. Huang, J.-R. Sun, M.-F. Wu and S.-J. Zou, *On Holographic Dual of the Dyonic Reissner-Nordstrom Black Hole*, arXiv:1006.4092 \[hep-th\]. C.-M. Chen, Y.-M. Huang, J.-R. Sun, M.-F. Wu and S.-J. Zou, *Twofold Hidden Conformal Symmetries of the Kerr-Newman Black Hole*, arXiv:1006.4097 \[hep-th\]. C. Krishnan, *Black Hole Vacua and Rotation*, arXiv:1005.1629 \[hep-th\]. I. Sachs and S. N. Solodukhin, *Quasi-normal modes in topologically massive gravity*, JHEP, **0808**(2008)003. V. P. Frolov and K. S. Thorne, *Renormalized Stress-Energy Tensor Near the Horizon of a Slowly Evolving, Rotating Black Hole*, Phys. Rev. D **39**(1989)2125. J. J. Oh and W. Kim, *Absorption Cross Section in Warped AdS$_3$ Black Hole*, JHEP, **0901**(2009)067. H.-C. Kao and W.-Y. Wen, *Absorption cross section in warped AdS$_3$ black hole revisited*, JHEP, **0909**(2009)102. R. Li and J.-R. Ren, *Quasinormal Modes of Self-Dual Warped AdS$_{3}$ Black Hole in Topological Massive Gravity*, arXiv:1008.3239 \[hep-th\]. B. Chen and J. Long, *Hidden Conformal Symmetry and Quasi-normal Modes*, arXiv:1009.1010 \[hep-th\].

[^1]: Electronic mail: [email protected]

[^2]: Electronic mail: [email protected]

[^3]: Electronic mail: [email protected]
---
abstract: 'Gravitational waves can provide an accurate measurement of the luminosity distance to the source, but cannot provide the source redshift unless the degeneracy between mass and redshift can be broken. This makes it essential to infer the redshift of the source independently to measure the expansion history of the Universe. We show that by exploiting the clustering scale of the gravitational wave sources with galaxies of known redshift, we can infer the expansion history from redshift unknown gravitational wave sources. Using gravitational wave sources with unknown redshift that are detectable by the network of gravitational wave detectors with the Advanced LIGO design sensitivity, we will be able to obtain accurate and precise measurements of the local Hubble constant, the expansion history of the universe, and the gravitational wave bias parameter, which captures the distribution of gravitational wave sources with respect to the redshift tracer distribution. This technique is not limited to low redshift gravitational wave sources, but will also be applicable to the high redshift gravitational wave sources detectable by the Laser Interferometer Space Antenna (LISA), the Cosmic Explorer (CE), and the Einstein Telescope (ET). Moreover, this method will also be applicable to samples of supernovae and fast radio bursts with unknown or photometric redshifts.'
author:
- Suvodip Mukherjee
- 'Benjamin D. Wandelt'
- 'Samaya M. Nissanke'
- Alessandra Silvestri
bibliography:
- 'main.bib'
title: Accurate and precision Cosmology with redshift unknown gravitational wave sources
---

Introduction
============

Measurement of the current expansion rate of the Universe, known as the Hubble constant (denoted by $H_0$), as well as its value at different cosmological redshifts, is one of the key science goals in the field of Cosmology.
This endeavour, which started with the first measurement of $H_0$ by Edwin Hubble [@1929PNAS...15..168H], has typically been performed via electromagnetic probes, which can be classified as standardized candles (e.g., supernovae (SNe)) [@Perlmutter:1998np; @2009ApJ...695..287R; @Riess:2019cxk], standard rulers (e.g., the cosmic microwave background (CMB), baryon acoustic oscillations (BAO)) [@Ade:2013zuv; @Anderson:2013zyy; @Aubourg:2014yra; @Ade:2015xua; @Macaulay:2018fxi], and standard clocks [@Jimenez:2001gg; @Simon:2004tf; @Stern:2009ep; @Jimenez:2019onw]. All these different probes have become increasingly successful in making precision measurements of $H_0$, but have failed to converge to a single value of the Hubble constant. In fact, low redshift probes of the Universe such as SNe [@Riess:2019cxk] indicate a value of $H_0= 74 \pm 1.4$ km/s/Mpc, whereas the probes which depend on the high redshift Universe, such as big bang nucleosynthesis (BBN), the CMB, and BAO, indicate a value of the Hubble constant $H_0= 67.4 \pm 0.5$ km/s/Mpc [@Abbott:2017smn; @Aghanim:2018eyx]. An independent measurement of $H_0= 73.8^{+1.7}_{-1.8}$ km/s/Mpc from the time delay of strongly lensed low redshift events by the H0LiCOW collaboration [@Wong:2019kwg] also supports the mismatch. This discrepancy in the value of $H_0$ between the probes of the early-time Universe and the probes of the late-time Universe amounts to more than $4\sigma$ [@Verde:2019ivm]. However, independent measurements using the Tip of the Red Giant Branch (TRGB) as a calibrator have indicated a reduction of the discrepancy, with a value of $H_0= 69.8\, \pm \,0.8\, \text{(stat)} \pm 1.7\, \text{(sys)}$ km/s/Mpc [@Freedman:2019jwv]. A few studies have also proposed possible sources of systematics in the late-time $H_0$ measurements [@Rigault:2018ffm; @Kochanek:2019ruu].
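The "more than $4\sigma$" statement can be reproduced from the quoted numbers alone; a minimal arithmetic sketch, naively combining the two symmetric uncertainties in quadrature and ignoring any correlations (an oversimplification of the published analyses):

```python
import math

# Quoted values from the text (km/s/Mpc):
H0_late, err_late = 74.0, 1.4     # late-time (SNe) value
H0_early, err_early = 67.4, 0.5   # early-time (CMB/BAO/BBN) value

# Naive Gaussian tension between the two measurements
tension = abs(H0_late - H0_early) / math.hypot(err_late, err_early)
assert tension > 4.0   # consistent with the "more than 4 sigma" statement
```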
However, there is as of yet no conclusive evidence that settles this mismatch, either through a systematic or by invoking new physics, and hence independent probes are required to settle this discrepancy. The direct detection of gravitational waves has recently offered a new independent probe of cosmic expansion. From the gravitational wave chirp generated by compact object binary mergers, one can infer luminosity distances [@1986Natur.323..310S; @Holz:2005df; @Dalal:2006qt; @PhysRevD.77.043512; @2010ApJ...725..496N; @2011CQGra..28l5023S; @Nissanke:2013fka], leading to such sources being named standard sirens. The intrinsic luminosity of the gravitational wave source depends on the chirp mass, and its evolution with the frequency of the gravitational wave is solely dictated by the general theory of relativity [@1986Natur.323..310S; @Holz:2005df]. As a result, there is no need for additional calibration of the luminosity of the gravitational wave strain, apart from any systematic uncertainty arising from the gravitational wave detector calibration [@Sun:2020wke] and statistical uncertainty arising from the inclination angle [@2010ApJ...725..496N]. Though standard sirens are promising, using them for the measurement of the expansion history requires an independent measurement of their redshift. The gravitational wave signal alone does not provide this information in the absence of a known scale arising from either the tidal deformation [@Messenger:2011gi] or a mass gap in the binary black hole (BBH) population due to pair instability supernovae [@Farr:2019twy]. Another possibility to determine the redshift is by identifying the host galaxy using a coincident detection of an electromagnetic (EM) counterpart of the gravitational wave source.
This joint electromagnetic and gravitational wave measurement was done for the first time by the Laser Interferometer Gravitational-wave Observatory (LIGO) Scientific and Virgo Collaborations (LVC) from the binary neutron star merger GW170817 and led to an independent measurement of the Hubble constant $H_0= 70_{-8}^{+12}$ km/s/Mpc [@Abbott:2017smn]. As shown in [@Howlett:2019mdh; @Mukherjee:2019qmm; @Nicolaou:2019cip], the joint estimation of the electromagnetic signal and gravitational wave signal requires a peculiar-velocity correction for the gravitational wave sources. In general, the error bar on $H_0$ from this measurement is more than $15\%$ and is currently not competitive with the measurements from the CMB ($<1\%$) and SNe ($\sim 1.5\%$). However, in the future, with the measurement of a large number of sirens ($\sim 50$) with EM counterparts, one can achieve a $2\%$ measurement of $H_0$ [@Chen:2017rfc; @Feeney:2018mkj]. Another avenue to reduce the error bar on the value of $H_0$ is the measurement of the inclination angle [^1], either by measuring the two polarization states of the gravitational wave signal using an expanded network of three or more gravitational wave detectors [@2010ApJ...725..496N] or by using the higher order multipole moments of the gravitational wave signal [@LIGOScientific:2020stg]. Measurement of the inclination angle is also possible by accurately modelling the EM emission from the jet of the gravitational wave source (e.g., [@Ghirlanda:2018uyx; @Mooley:2018qfh; @Hotokezaka:2018dfi]), though note this method may introduce astrophysical modelling uncertainty.
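For orientation, the scale of such a standard-siren measurement follows from the low-redshift Hubble law $H_0 \simeq v_H/d_L$. A minimal sketch with illustrative numbers of the GW170817 scale (the values below are round-number assumptions, not the LVC inputs, which come from a full Bayesian analysis over distance, inclination, and peculiar velocity):

```python
# Low-redshift Hubble law: H0 ~ v_H / d_L.
# Illustrative, assumed values (not the published LVC numbers):
v_H = 3000.0   # km/s, Hubble-flow velocity of the host after peculiar-velocity correction
d_L = 43.0     # Mpc, luminosity distance inferred from the gravitational wave amplitude

H0 = v_H / d_L   # km/s/Mpc
assert 65.0 < H0 < 75.0   # lands in the ballpark of the quoted measurement
```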
Consistent with the single binary neutron star (BNS) detection with an EM counterpart so far [@TheLIGOScientific:2017qsa; @Abbott:2017xzu], the expected number of gravitational wave sources with an EM counterpart in the cosmic volume that can be explored by the Advanced LIGO/Virgo and KAGRA detectors will be small, since only BNS and neutron star-black hole (NS-BH) systems are expected to have a detectable EM counterpart [@Foucart:2018rjc]. Moreover, not every BNS and NS-BH event will have a detectable EM counterpart, e.g., [@Coughlin:2019xfb; @Andreoni:2019qgh; @Abbott:2020uma; @Abbott2020][^2]. Successful detection of the EM counterpart requires the flux of the EM counterpart to be higher than the detection threshold of follow-up telescopes, and the sky localization area also needs to be small enough to allow a fast search for the EM counterpart before it fades away [@Kasliwal:2020wmy]. As a result, BNS and NS-BH systems which are farther away and have a poor sky localization may not have a detectable EM counterpart, similar to the BNS event [@Abbott:2020uma]. These issues, such as the rate of events, fainter EM counterparts, and poor sky localization, can therefore be a serious bottleneck for measuring $H_0$ and the expansion history of the Universe using gravitational wave sources on a time scale of ten years with an accuracy of $\sim 2\%$ [@Chen:2017rfc; @Feeney:2018mkj; @PhysRevD.100.103523]. Gravitational wave sources such as BBHs, which have a higher intrinsic luminosity than BNS systems, can be detected from farther away, granting us access to a larger detectable cosmic volume and, therefore, a higher probability of detecting such systems. However, the majority of BBHs which are detectable in the frequency band of the Advanced LIGO/Virgo detectors are not expected to have EM counterparts, unless there are baryons surrounding the BBH; a candidate for such an event was recently announced [@PhysRevLett.124.251102].
We refer to astrophysical systems without any EM counterpart as *dark standard sirens*. Due to the absence of an EM counterpart, identification of the host galaxy is not possible, and hence the redshift cannot be identified in the standard way. This implies that a large number of detected BBHs without EM counterparts cannot be used to measure the Hubble constant in the usual way, by fitting the measured luminosity distance and redshift. An alternative approach is required to make these a useful probe of the expansion history of the Universe. One possibility for using the dark standard sirens is to statistically infer the host galaxy from galaxy catalogues [@DelPozzo:2012zz; @Chen:2017rfc; @Nair:2018ign; @PhysRevD.101.122001]. An application of this approach to the existing gravitational wave data was performed in previous studies [@Fishbach:2018gjp; @Abbott:2019yzh; @Soares-Santos:2019irc; @Abbott:2020khf]. These methods can be a promising way to obtain $H_0$ but are not optimal, as we will discuss in the following section. A forecast study of this method reports the possibility of an $H_0$ measurement at the level of $5.8\%$ in the future with $50$ objects [@Chen:2017rfc; @Nair:2018ign; @Soares-Santos:2019irc] [^3], using only low redshift sources and keeping the value of the matter density of the Universe $\Omega_m$ fixed. These methods associate a probability to each galaxy as a possible host of the dark sirens [@Soares-Santos:2019irc], and are only effective up to low redshift, where the number of candidate galaxies is limited. However, if the method is applied to high redshift sources, then the number of possible hosts along a particular direction of the sky becomes large, and as a result the method is not informative enough to select the correct galaxy as the host.
As a result, this restricts the use of dark sirens to low redshift, even though accurate distance measurements are possible for sources at high redshift from the LIGO/Virgo design sensitivity [@Acernese_2014; @Martynov:2016fzi] and from the upcoming gravitational wave detectors such as the Kamioka Gravitational Wave Detector (KAGRA) [@Akutsu:2018axf], LIGO-India [@Unnikrishnan:2013qwa], the Laser Interferometer Space Antenna (LISA) [@2017arXiv170200786A], the Einstein Telescope (ET) [@Punturo:2010zz], and the Cosmic Explorer (CE) [@Reitze:2019iox]. An alternative way to find the redshift of the source is to exploit a mass scale associated with the compact objects, originating from the neutron star mass distribution [@PhysRevLett.73.1878; @PhysRevD.85.023535], tidal deformation [@Messenger:2011gi], or the mass gap in the gravitational wave source population due to pair instability supernovae [@Farr:2019twy]. In this work, we explore a method which can be applied up to high redshift (up to which galaxy samples are going to be available) and can measure the value of $H_0$ along with the density of dark energy, the equation of state of dark energy, and also the spatial distribution of black holes with respect to the dark matter distribution. We exploit the fact that both the gravitational wave sources and the galaxies are tracers of the matter density, and are therefore spatially correlated through the underlying matter field, to infer the redshift of the dark standard sirens [@1986Natur.323..310S; @PhysRevD.93.083511; @Mukherjee:2018ebj]. We build on previous work, in which clustering with galaxies was applied to SNe with unknown (or photometric) redshifts [@Mukherjee:2018ebj]. Our method does not identify *the host galaxy* of the BBH source, but finds *its host redshift shell* by exploring the three-dimensional spatial cross-correlation of the gravitational wave sources with redshift-known galaxies.
Host galaxy identification is therefore a limiting case of our approach, corresponding to exploiting only very small, galaxy-scale correlations [@Mukherjee:2018ebj]. We detail the formalism of this method and the likelihood setup in Sec. \[formalism\] and Sec. \[likelihood\], respectively. Our method does not require any additional assumption about the redshift dependence of the merger rate of gravitational wave sources, but only requires that the BBH mergers trace the galaxies (incorporating the possibility of natal birth kicks), so that there is a spatial correlation, as discussed in Sec. \[sims\]. We show a forecast for the accuracy and precision of the measurements of $H_0$ achievable with our method in Fig. \[allh0\], after marginalizing over the matter density $\Omega_m$ and the redshift dependent gravitational wave bias parameter $b_{GW} (z)= b_{GW}(1+z)^\alpha$. Details about this result are given in Sec. \[results\]. Moreover, since dark sirens can be detected up to high redshift, this method makes it possible to also explore the expansion history of the Universe, and it can provide an independent measurement of the cosmological parameters related to the matter density $\Omega_m$, the dark energy equation of state $w_0$, and its redshift dependence $w(z)= w_0 + w_a\,z/(1+z)$. This method can also probe the bias parameter of the gravitational wave sources at different redshifts, $b_{GW}(z)$, which captures their spatial distribution with respect to the dark matter. It will also be applicable to the multi-messenger test of gravity proposed previously [@Mukherjee:2019wcg; @Mukherjee:2019wfw]. The breadth of the scientific returns possible from this avenue surpasses that of the statistical host identification methods [@Chen:2017rfc; @Nair:2018ign; @Soares-Santos:2019irc]. For comparison, we apply our method to only low redshift sources with a fixed value of $\Omega_m$, assuming a known value of the gravitational wave bias parameter $b_{GW}$.
We find that the error bar on $H_0$ from these methods [@Chen:2017rfc; @Nair:2018ign; @Soares-Santos:2019irc] is larger than that from our method by only about $30\%$. This implies that, in the limit of low redshift sources, these methods [@Chen:2017rfc; @Nair:2018ign; @Soares-Santos:2019irc] approach the optimal solution proposed in this work. We conclude in Sec. \[conc\].

Formalism: Exploring the clustering of the gravitational wave sources with galaxies {#formalism}
===================================================================================

The matter distribution is clustered and can be statistically described by the correlation function $\xi(r)$[^4] [@1975ApJ...196....1P; @1977ApJS...34..425D; @1983ApJ...267..465D; @1993ApJ...417...19H; @1993ApJ...412...64L]. Astrophysical gravitational wave events are expected to occur in galaxies, and hence are going to follow the spatial distribution of the galaxies, with a bias parameter $b_{GW}$ that is different from the bias parameter for galaxies $b_{g}$[^5]. According to the standard model of cosmology, the spatial distribution of galaxies should trace the underlying distribution of matter in the Universe, and can be expressed as a biased tracer of the matter density field $\delta_{m}({\bm{k}})$ by the relation $$\label{deltag} \delta_g ({\bm{k}})= b_g(k) \delta_{m} ({\bm{k}}),$$ where $b_g(k)$ is the galaxy bias and $\delta_g ({\bm{k}})$ is the Fourier transform of the real space galaxy density field, which is defined in terms of the number density of galaxies $n_g({\bm{r}})$ at a position ${\bm{r}}$ and the mean number density of galaxies $\bar n_g$[^6] as $\delta_g({\bm{r}})= n_g({\bm{r}})/\bar{n_g} -1$. A spectroscopic (or photometric) survey results in observations of galaxies in redshift space, denoted by the superscript $s$, which leads to a redshift space distortion (RSD) in the density field [@1987MNRAS.227....1K].
The large scale effect due to RSD (known as the Kaiser term) [@1987MNRAS.227....1K; @Hamilton:1997zq] is $$\label{deltagrsd} \centering \delta^s_g ({\bm{k}}, z)= b_g(k,z)(1+ \beta_g \mu_{\hat k}^2)\delta^r_{m} ({\bm{k}}, z),$$ where $\beta_g\equiv f/b_g(k,z)$ is defined in terms of $f\equiv \frac{d\ln D}{d\ln a}$, the logarithmic derivative of the growth function $D$ with respect to the scale factor $a$; $\mu_{\hat k}= \hat n\cdot\hat k$ is the cosine of the angle between the line of sight $\hat n$ and the Fourier mode $\hat k$; and the superscript $r$ denotes real space. Following the definition in Eq. (\[deltag\]), we can define the density field for the gravitational wave sources in real space, $\delta^r_{GW}$, as $$\begin{aligned} \delta^r_{GW} ({\bm{k}}, z)= b_{GW}(k,z)\delta^r_{m}({\bm{k}}, z),\end{aligned}$$ where $b_{GW} (k, z)$ is the gravitational wave bias parameter [@Mukherjee:2018ebj; @Mukherjee:2019qmm; @Mukherjee:2019oma; @Calore:2020bpd; @Vijaykumar:2020pzn]. The gravitational wave bias parameter captures how gravitational wave sources trace the large scale structure of the Universe [@Mukherjee:2019oma]. Since the gravitational wave sources are tracers of the luminosity distance and not the redshift, they are not affected by RSD.
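As a side check of the Kaiser factor above, its angle average reproduces the well-known linear-RSD monopole boost $1+\tfrac{2}{3}\beta_g+\tfrac{1}{5}\beta_g^2$ of the galaxy auto power spectrum (standard linear-theory material, not an equation taken from the text). A short symbolic sketch:

```python
import sympy as sp

beta, mu = sp.symbols('beta_g mu', positive=True)

# Kaiser boost of the redshift-space galaxy auto power spectrum
kaiser = (1 + beta * mu**2)**2

# Angle-averaging over mu in [-1, 1] gives the standard monopole factor.
mono = sp.integrate(kaiser, (mu, -1, 1)) / 2
assert sp.expand(mono) == sp.expand(1 + sp.Rational(2, 3)*beta + sp.Rational(1, 5)*beta**2)
```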
The corresponding covariance of the two fields can then be written as $$\begin{aligned} \begin{split} \left\langle\begin{pmatrix} \delta^s_g ({\bm{k}}, z)\\ \delta^r_{GW}({\bm{k}}, z) \end{pmatrix}\begin{pmatrix} \delta^s_g ({\bm{k}}', z)& \delta^r_{GW}({\bm{k}}', z) \end{pmatrix}\right\rangle= \begin{pmatrix} P^{ss}_{gg} ({\bm{k}}, z)\delta_D( {\bm{k}}- {\bm{k}}') + \bar n_g(z)^{-1}& P^{sr}_{g\,GW} ({\bm{k}}, z)\delta_D( {\bm{k}}- {\bm{k}}')\\ P^{sr}_{g\,GW} ({\bm{k}}, z)\delta_D( {\bm{k}}- {\bm{k}}') & P^{rr}_{GW\,GW} ({\bm{k}}, z)\delta_D( {\bm{k}}- {\bm{k}}') + \bar n_{GW}(z)^{-1} \end{pmatrix}. \end{split}\end{aligned}$$ In compact form, the power spectrum between two tracers (say $x$ and $y$) of the dark matter density field can be written as $$\label{power-spec} \langle \delta^i_x({\bm{k}}, z)\delta^{j*}_y({\bm{k}}', z) \rangle = (P^{ij}_{xy} ({\bm{k}}, z) + \frac{1}{\bar n_x}\delta^K_{xy})\delta_D( {\bm{k}}- {\bm{k}}'),$$ where $P^{ij}_{xy} ({\bm{k}}, z)$ is the three dimensional power spectrum at redshift $z$ associated with the clustering between two tracers ($\{x\,y\} \in \{g, GW\}$) in redshift space or real space ($\{i\,j\} \in \{s, r\}$), $\delta_D( {\bm{k}}- {\bm{k}}')$ denotes the Dirac delta function, and $\bar n_{x}(z)^{-1}$ is the shot noise contribution, which is non-zero only when $x$ and $y$ are the same tracer. The redshift tomographic estimates of the auto power spectra ($x=y$) and the cross power spectrum ($x\neq y$) between galaxies and gravitational wave sources can be written in terms of the matter power spectrum $P_m(k,z)$ as $$\label{power-spec-dm} \begin{split} P^{ss}_{gg} ({\bm{k}},z)&= b^2_g(k,z)(1 + \beta_g \mu_{\hat k}^2)^2P_{m}(k,z),\\ P^{sr}_{g\,GW} ({\bm{k}},z)&= b_g(k,z)b_{GW}(k,z)(1 + \beta_g \mu_{\hat k}^2)P_{m}(k,z),\\ P^{rr}_{GW\,GW} ({\bm{k}},z)&= b^2_{GW}(k,z)P_{m}(k,z).
\end{split}$$ Due to the presence of redshift space distortion (RSD) [@1987MNRAS.227....1K], the observed auto and cross power spectra involving galaxies are anisotropic. The bias parameters for galaxies $b_g(k,z)$ and gravitational wave sources $b_{GW}(k,z)$ are modelled as redshift- and scale-dependent. On large scales ($k<0.1$) the galaxy bias is scale-independent and is well approximated by a constant value $b_g= 1.6$ [@2012MNRAS.427.3435A; @Desjacques:2016bnm; @2017MNRAS.470.2617A]. For the sources of gravitational waves, we can expect similar scale-independent behaviour of the bias parameter $b_{GW}$ on large scales. However, at small scales, the bias parameter is likely to be scale-dependent. The redshift dependence of the gravitational wave bias parameter is also unknown, and we will discuss [its implication]{} in detail in the next section. One of the key aspects of Eq.  is that the underlying cross power spectrum between galaxies and gravitational wave sources $P^{sr}_{g\,GW}({\bm{k}}, z)$ is related to the matter power spectrum $P_{m}(k, z)$, which is also measurable from the auto power spectrum of galaxies $P^{ss}_{gg}({\bm{k}}, z)$. As a result, $P^{sr}_{g\,GW}({\bm{k}}, z)$ should have statistical properties similar to those of $P^{ss}_{gg}({\bm{k}}, z)$. We exploit this very simple model, using the spatial cross-correlation of galaxies with gravitational wave sources to infer the luminosity-distance–redshift relation and hence the cosmological parameters. Likelihood for inferring the expansion history using dark standard sirens {#likelihood} ========================================================================= Let us consider a sample of $N_{GW}$ gravitational wave sources (indexed by $i$) for which we have inferred the luminosity distances $\{d^i_l\}$ over a sky volume denoted by $V_s$.
For each of these sources there is also a measurement of the sky localization $\{\theta^i_{GW}, \, \phi^i_{GW}\}$ with a $68\%$ sky localization error $\Delta \Omega^i_{GW}$ for each source. The sky localization error will smooth out the density fluctuations for values of $k> k_{eff}(z)\equiv \sqrt{8\ln2}/(\Delta \Omega_{GW}^{1/2}d_c(z))$, where $d_c (z)$ is the comoving distance to the source.[^7] Critically, assuming a Gaussian distribution of the sky localization error, we can write its effect on the density field as $\delta_{GW}({\bm{k}}, \Delta \Omega_{GW}, z)= \delta_{GW}({\bm{k}}, z)e^{-k^2/k^2_{eff}(z)}$. Along with the gravitational wave sources, we consider a sample of $N_g= \bar n_gV_s$ galaxies in the overlapping sky volume $V_s$, with known redshifts $z_g$ (with error $\sigma_z$) and sky positions $\{\theta_{g}, \, \phi_{g}\}$ (with error $\Delta \Omega_{g}$).[^8] Using these galaxy samples, we can make tomographic bins of the galaxies with $N_z$ galaxies in each redshift bin.
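The smoothing scale $k_{eff}$ defined above can be computed directly; a short sketch follows, in which the conversion of $\Delta\Omega_{GW}$ from square degrees to steradians and the Mpc units for $d_c$ are our conventions, not stated in the text.

```python
import numpy as np

def k_eff(delta_omega_sqdeg, d_c):
    """Effective smoothing scale k_eff = sqrt(8 ln 2) / (dOmega^{1/2} d_c).
    delta_omega_sqdeg is the 68% sky-localization error in sq. deg.
    (converted here to steradians -- our unit convention); d_c is the
    comoving distance in Mpc, so k_eff comes out in 1/Mpc."""
    delta_omega_sr = delta_omega_sqdeg * (np.pi / 180.0)**2
    return np.sqrt(8.0 * np.log(2.0)) / (np.sqrt(delta_omega_sr) * d_c)

def localization_damping(k, keff):
    """Gaussian suppression of the GW density field, exp(-k^2/k_eff^2)."""
    return np.exp(-(k / keff)**2)
```

For $\Delta\Omega_{GW}=10$ sq. deg. and $d_c\sim 1$ Gpc this gives $k_{eff}\sim 0.04\,$Mpc$^{-1}$, illustrating why only large-scale modes survive the localization smoothing.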
The expansion history of the Universe, $H(z)= H_0\left[\Omega_m(1+z)^3 + \Omega_{de}\exp\left(3\int_0^z d\ln(1+z')\,(1+w(z'))\right)\right]^{1/2}$, and the corresponding cosmological parameters ($\Theta_c \in$ $\{H_0$, $\Omega_m$, $w(z)= w_0 + w_a(z/(1+z))\}$) can be explored from dark standard sirens using Bayes' theorem [@bayes] $$\label{posterior-1} \begin{split} \mathcal{P}(\Theta_c|{\bm{\vartheta}}_{GW}, {\bm{d}}_{g})\propto & \iint d\Theta_n\, dz \, \bigg[\prod_{i=1}^{N_{GW}}\, \, \mathcal{L}({\bm{\vartheta}}_{GW}| P^{ss}_{gg}({\bm{k}},z), \Theta_n, {\bm{d}}_g(z)) \mathcal{P}({\bm{d}}_g| P^{ss}_{gg}({\bm{k}},z)) \mathcal{P}({\{d^i_l\}}_{GW}|z, \Theta_c, \{\theta^i,\, \phi^i\}_{GW})\Pi(z) \bigg] \\& \times\, \Pi(\Theta_n)\Pi(\Theta_c), \end{split}$$ where the gravitational wave data vector is ${\bm{\vartheta}}_{GW} \equiv \{ d^i_l,\, \theta^i_{GW},\, \phi^i_{GW}\}$ and the galaxy data vector is ${\bm{d}}_{g} \equiv \{\delta_g( z_g^i,\,\theta^i_{g},\, \phi^i_{g})\}$. $\Pi(\Theta_c)$ and $\Pi(\Theta_n)$ denote respectively the priors on the cosmological parameters $\Theta_c$ and on the nuisance parameters $\Theta_n \in \{b_{GW}(k,z)\}$. $\Pi(z)$ denotes the prior on the redshift range of the gravitational wave sources, [which can be taken uniform over a wide range. If redshift information about the gravitational wave sources is available, an informative prior on the redshift can be considered instead. In this analysis we consider a uniform prior $\mathcal{U}(0,1)$[^9] on the redshift of the gravitational wave sources; this is sufficiently wide for the near-term and medium-term gravitational wave surveys we are considering.
]{} $\mathcal{P}({\{d^i_l\}}_{GW}|z, \Theta_c)$ is the posterior on the luminosity distance $d_l$ from the gravitational wave data ${\bm{\vartheta}}_{GW}$, which, for convenience, we model as a Gaussian distribution,[^10] $$\begin{aligned} \label{pos-2} \begin{split} \mathcal{P}({\{d^i_l\}}_{GW}|z, & \Theta_c, \{\theta^i,\, \phi^i\}_{GW}) \\ &\propto \exp\bigg(-{\frac{(d^i_l(\{\theta^i,\, \phi^i\}_{GW})- d_l(z, \Theta_c))^2}{2\sigma^2_{d_l}}}\bigg), \end{split}\end{aligned}$$ where $\sigma_{d_l}$ is the error on the luminosity distance and $d_l(z, \Theta_c)= (1+z) \int_0^z \frac{c\, dz'}{H(z')}$ [is the model for the luminosity distance]{}. The posterior of the galaxy density field $\mathcal{P}({\bm{d}}_g| P_{gg}({\bm{k}}, z))$ given the galaxy power spectrum $P_{gg}({\bm{k}}, z)$ (defined in Eq. ) can be written as $$\label{pos-3} \mathcal{P}({\bm{d}}_g| P^{ss}_{gg}({\bm{k}},z)) \propto \exp\bigg(-{\frac{ \delta^{s}_g({\bm{k}}, z)\delta^{s*}_g({\bm{k}}, z)}{2(P^{ss}_{gg}({\bm{k}},z) + n_g(z)^{-1})}}\bigg),$$ where $\delta^s_g({\bm{k}}, z)= \int d^3{\bm{r}}\, \delta_g ({\bm{r}}) e^{i{\bm{k}} .{\bm{r}}}$ is the Fourier transform of the galaxy distribution, the first term in the denominator $P^{ss}_{gg}({\bm{k}},z)$ is the galaxy three dimensional power spectrum defined in Eq. , and $n_g(z)= N_g(z)/V_s$ is the number density of galaxies in the redshift bin $z$. The likelihood term $\mathcal{L}({\bm{\vartheta}}_{GW}| P_{gg}({\bm{k}},z), \Theta_n, {\bm{d}}_g(z))$ in Eq. 
is $$\begin{aligned} \label{likeli-1} \begin{split} \mathcal{L}({\bm{\vartheta}}_{GW}| P^{ss}_{gg}({\bm{k}},z), \Theta_n,& {\bm{d}}_g(z)) \propto \\ & \exp\bigg(-\frac{V_s}{4\pi^2}\int k^2 dk \int d\mu_k{\frac{ \bigg(\hat P ({\bm{k}}, \Delta \Omega_{GW}) - b_g(k,z)b_{GW}(k, z)(1 + \beta_g \mu_{\hat k}^2)P_{m}(k,z)e^{-\frac{k^2}{k^2_{eff}}}\bigg)^2}{2(P^{ss}_{gg}({\bm{k}},z) + n_g(z)^{-1})(P^{rr}_{GW\,GW}({\bm{k}},z) + n_{GW}(z)^{-1})}}\bigg), \end{split}\end{aligned}$$ where $\hat P({\bm{k}}, z)= \delta_{g}({\bm{k}}, z)\delta_{GW}^*({\bm{k}},\Delta \Omega_{GW})$ is the measured cross power spectrum, $n_{GW}(z)= N_{GW}(d^i_l(z))/V_s$ is the number density of gravitational wave sources, expressed in terms of the number of objects $N_{GW}(d^i_l(z))$ in the luminosity distance bin, and $V_s$ denotes the total sky volume. The likelihood given in Eq.  is maximized for the set of cosmological parameters that transforms the galaxy density field from redshift space so that it matches, or correlates maximally with, the spatial distribution of gravitational wave sources. The integration in Eq.  takes into account the [anisotropic]{} shape of the power spectrum by combining the contributions from $\mu_k= \cos\hat n.\hat k$ arising due to RSD. The total number of Fourier modes which contribute to the signal depends on the volume of the sky survey, $N_m= k^2dkV_s/4\pi^2$. In the limit $n_x(z)P_{x}(k, z)> 1$, the likelihood is in the cosmic variance limited regime; in the other extreme, $n_x(z)P_{x}(k, z)< 1$, it is in the shot noise dominated regime. For the gravitational wave sources expected within $5$ years (with an event rate $R(z)= \,100$ Gpc$^{-3}$ yr$^{-1}$ [@LIGOScientific:2018mvr; @LIGOScientific:2018jsj]), we explore the cross-correlation between the galaxies and gravitational wave sources only for small values of $k<k_{eff}$ in the shot noise regime $n_{GW}P^{rr}_{GW\,GW}(k,z)<1$.
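The integrand of the exponent in Eq. (likeli-1) can be sketched per Fourier mode as below; this is a minimal illustration with our own function and variable names, omitting the $V_s/4\pi^2$ prefactor and the $(k,\mu_k)$ integration, and it assumes the linear-bias, Kaiser-RSD model spectra of Eq. (power-spec-dm).

```python
import numpy as np

def per_mode_exponent(p_hat, k, mu_k, p_m, b_g, b_gw, f, keff, n_g, n_gw):
    """Per-(k, mu_k) contribution to the exponent of Eq. (likeli-1):
    squared residual between the measured cross spectrum p_hat and the
    model b_g*b_gw*(1 + beta*mu^2)*P_m*exp(-k^2/k_eff^2), normalized by
    the shot-noise-corrected auto spectra of the two tracers."""
    kaiser = 1.0 + (f / b_g) * mu_k**2
    model = b_g * b_gw * kaiser * p_m * np.exp(-k**2 / keff**2)
    p_gg = (b_g * kaiser)**2 * p_m + 1.0 / n_g          # galaxy auto + shot noise
    p_gwgw = b_gw**2 * p_m + 1.0 / n_gw                 # GW auto + shot noise
    return (p_hat - model)**2 / (2.0 * p_gg * p_gwgw)
```

The contribution vanishes when the measured cross spectrum equals the model, which is the sense in which the likelihood is maximized when the transformed galaxy field correlates maximally with the gravitational wave sources.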
Galaxy samples are going to have $\mathcal{O}(10^9)$ galaxies [@2009arXiv0912.0201L; @2010arXiv1001.0061R; @2012arXiv1208.4012G; @2013arXiv1305.5425S; @Aghamousa:2016zmz; @Dore:2018smn; @Dore:2018kgp] and as a result, we are going to be in the cosmic variance limited regime for values of $k<k_{eff}$. So the denominator of the exponent in Eq.  is going to scale as $ \frac{4\pi^2P^{ss}_{gg}({\bm{k}},z)}{n_{GW}(z)}$. With the availability of a large number of gravitational wave samples, the measurement is going to be in the cosmic variance limited regime $n_{GW}P^{rr}_{GW\,GW}(k,z)>1$, and in that case the denominator of the exponent can be approximated as $4\pi^2P^{ss}_{gg} ({\bm{k}},z) P^{rr}_{GW\,GW}({\bm{k}},z)$. In this analysis, we have considered an analytical covariance matrix. This can also be calculated from simulations for a specific large scale structure and gravitational wave experiment. Generation of mock sample {#sims} ========================= We implement our method on mock samples of large scale structure and gravitational wave sources which are produced [for a log-normal distribution of the density field using the publicly available package]{} `nbodykit` [@Hand:2017pqn]. [The realizations of the galaxies and gravitational wave sources are obtained from the same random realization, using a fixed matter power spectrum $P_m({\bm{k}}, z)$ with different bias parameters $b_g$ and $b_{GW}$ for galaxies and gravitational wave sources respectively.]{} In this analysis, we use mock samples with box lengths (in units of Mpc/h) $[l_x=1350,\, l_y=1350,\, l_z= 300]$ over the redshift range $z=0$ to $z=1.0$ with Planck-2015 cosmology [@Ade:2015xua]. These mocks do not take into account the contribution from weak lensing. Weak lensing would produce a marginal ($\leq 1\%$) increase in the variance of the [inferred cosmological parameters for the low redshift gravitational wave sources]{} considered in this analysis.
*Galaxy samples:* The galaxy samples are produced for a scale-independent bias parameter $b_g=1.6$, including the effect of RSD [@Hand:2017pqn]. The galaxy mocks are obtained for a number of galaxies $N_g= 1.5\times 10^4$. The redshifts of these sources are assumed to be known spectroscopically, which implies the [corresponding]{} error in the redshift measurement is $\sigma_z \approx 0$. *Gravitational wave samples:* For the same set of cosmological parameters, and using the same realization of the large scale structure density field from which the galaxy samples are produced, we obtain the gravitational wave samples $N_{GW}$ [^11] with the gravitational wave bias parameter $b_{GW}(z)= b_{GW}(1+z)^{\alpha}$ with $b_{GW}=2$ and $\alpha= 0$. For these samples we consider three different cases of sky localization error, ${\Delta \Omega_{GW}}= 10$ sq. deg., ${\Delta \Omega_{GW}}= 25$ sq. deg., and ${\Delta \Omega_{GW}}= 100$ sq. deg. [@Fairhurst:2010is; @Chan:2018fpv], which should be achievable with the network of five gravitational wave detectors (LIGO-Hanford, LIGO-Livingston, Virgo, KAGRA, LIGO-India [@Unnikrishnan:2013qwa; @Acernese_2014; @Martynov:2016fzi; @Akutsu:2018axf]). For each gravitational wave source, the fractional error on the luminosity distance depends inversely on the matched filtering signal-to-noise ratio ($\rho$) [@Sathyaprakash:1991mt; @Cutler:1994ys; @Balasubramanian:1995bm; @2010ApJ...725..496N; @Ghosh:2015jra] $$\label{snr} \rho^2\equiv 4\int_0^{f_{max}} df \frac{ |h(f)|^2}{S_n(f)},$$ where the value of $f_{max}$ is taken to be $f_{merg}= c^3(a_1\eta^2 + a_2\eta +a_3)/\pi G M$ [@Ajith:2007kx] [^12], and $S_n(f)$ is the detector noise power spectrum, which we take as the Advanced LIGO design sensitivity [@Martynov:2016fzi] [^13].
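A short sketch of these two quantities follows, using the $a_1$, $a_2$, $a_3$ coefficients quoted in the footnote; the physical constants, the trapezoidal discretization of the SNR integral, and the function names are our assumptions.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def f_merg(m1, m2):
    """Merger frequency f_merg = c^3 (a1 eta^2 + a2 eta + a3) / (pi G M),
    with the phenomenological coefficients quoted in the footnote;
    m1, m2 are component masses in solar units."""
    a1, a2, a3 = 0.29740, 0.044810, 0.095560
    m_tot = (m1 + m2) * M_SUN
    eta = m1 * m2 / (m1 + m2)**2
    return C**3 * (a1 * eta**2 + a2 * eta + a3) / (np.pi * G * m_tot)

def matched_filter_snr(f, h_f, s_n):
    """rho = sqrt(4 int df |h(f)|^2 / S_n(f)), Eq. (snr), evaluated
    with a simple trapezoidal rule on a frequency grid f."""
    integrand = np.abs(h_f)**2 / s_n
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))
    return np.sqrt(4.0 * integral)
```

For an equal-mass binary with $30\,M_\odot$ components this places $f_{merg}$ near the $\sim 10^2$ Hz band where ground-based detectors are most sensitive.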
The template of the gravitational wave strain $h(f)$ for $f\leq f_{merg}$ can be written in terms of the redshifted chirp mass $\mathcal{M}_z= (1+z)\mathcal{M}_c$, the inclination angle with respect to the orbital angular momentum $\hat L.\hat n$ (encoded in the function $\mathcal{I}_{\pm} (\hat L.\hat n)$), and the luminosity distance to the source $d_L$ by the relation [@1987thyg.book.....H; @Cutler:1994ys; @Poisson:1995ef; @maggiore2008gravitational; @Ajith:2007kx] $$\label{strain} h_{\pm}(f)= \sqrt{\frac{5}{96}}\frac{G^{5/6}\mathcal{M}_z^2 (f_z\mathcal{M}_z)^{-7/6}}{c^{3/2}\pi^{2/3}d_L}\mathcal{I}_{\pm} (\hat L.\hat n).$$ In this analysis, we take the posterior distribution of the luminosity distance to be Gaussian, with a minimum matched filtering detection threshold $\rho_{th}=10$ for equal mass binaries with component masses $30\, M_{\odot}$.[^14] The fractional error in the luminosity distance $\sigma_{d_l}/d_l$ can be about $10\%$ for bright sources with high detection SNR ($\rho>60$) and as large as $70\%$ for objects at the detection threshold $\rho=10$. The mean values of the luminosity distance are computed for the flat LCDM cosmological model with the parameter values $[H_0=\, 70\,\text{km/s/Mpc},\,\Omega_m= 0.315,\,\Omega_\Lambda= 1-\Omega_m, w_0=-1, w_a=0]$. The value of the Hubble parameter is deliberately chosen to be different from the value of $H_0$ used in the large scale structure mock sample ($H_0= 67.3$ km/s/Mpc) to show that the inferred cosmological parameters are determined only by the luminosity distances and not by the parameters assumed in the mock catalog. For the gravitational wave sources, we do not assume any redshift information. The current estimate of the event rate of BBHs is $R(z)= 10^2$ Gpc$^{-3}$ yr$^{-1}$ [@LIGOScientific:2018mvr]. With this event rate, we expect a few thousand events to be detected per year at the Advanced LIGO design sensitivity [@Martynov:2016fzi].
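The mock luminosity distances described above follow from the $w_0$–$w_a$ expansion history; a minimal sketch is given below, using the closed form of the dark-energy integral for the CPL parametrization and simple trapezoidal integration (discretization and function names are ours; the quoted parameter values are used as defaults).

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def hubble(z, h0=70.0, om=0.315, w0=-1.0, wa=0.0):
    """H(z) in km/s/Mpc for a flat w0-wa (CPL) cosmology. For
    w(z) = w0 + wa*z/(1+z) the dark-energy exponential integrates to
    (1+z)^{3(1+w0+wa)} exp(-3 wa z/(1+z))."""
    de = (1.0 - om) * (1 + z)**(3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return h0 * np.sqrt(om * (1 + z)**3 + de)

def luminosity_distance(z, n=4096, **cosmo):
    """d_l(z) = (1+z) int_0^z c dz'/H(z') in Mpc, trapezoidal rule."""
    zp = np.linspace(0.0, z, n)
    inv_h = C_KMS / hubble(zp, **cosmo)
    return (1 + z) * np.sum(0.5 * (inv_h[1:] + inv_h[:-1]) * np.diff(zp))
```

With the quoted flat LCDM values, a source at $z=0.5$ sits at $d_l\approx 2.8$ Gpc, well within the advanced-detector horizon for heavy BBHs.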
In this analysis, we show the measurability of the expansion history considering a few different cases of the number of gravitational wave sources $N_{GW}$[^15] and for the sky localization which is expected to be achievable with a network of four or five gravitational wave detectors. Results ======= Using the mock samples of galaxies and gravitational wave sources discussed in Sec. \[sims\], we explore the cosmological parameters which affect the expansion history of the Universe[^16] (Hubble constant $H_0$, matter density $\Omega_m$, dark energy equation of state $w (z)$) using the formalism described in Sec. \[likelihood\]. The precise and accurate inference of the cosmological parameters using this method relies on successfully mitigating the uncertainties associated with the unknown bias parameter of the gravitational wave sources and its redshift dependence. So, along with the cosmological parameters, we also take the gravitational wave bias parameter to be unknown, $b_{GW}(z)= b_{GW}(1+z)^\alpha$, and jointly infer the values of $b_{GW}$ and $\alpha$ (these are our nuisance parameters, $\Theta_n \in\{b_{GW}, \alpha\}$) in the analysis along with the cosmological parameters. We consider three cases in this analysis: (i) $H_0$ and $\Omega_m$, with fixed $w_0=-1$ and $w_a=0$; (ii) $\Omega_m$ and $\Omega_\Lambda$, with fixed $H_0=70$ km/s/Mpc, $w_0=-1$, and $w_a=0$; (iii) $w_0$ and $w_a$, with fixed $H_0=70$ km/s/Mpc and $\Omega_m=0.315$. Uniform priors on the cosmological and nuisance parameters are considered in the following ranges: $\Pi\bigg(\frac{H_0}{\text{km/s/Mpc}}\bigg) = \mathcal{U}(20, 150)$, $\Pi(\Omega_m) = \mathcal{U}(0.1,1)$, $\Pi(\Omega_\Lambda) = \mathcal{U}(0,1)$, $\Pi(w_0) = \mathcal{U}(-2,0)$, $\Pi(w_a) = \mathcal{U}(-8,8)$, $\Pi(b_{GW}) = \mathcal{U}(0,6)$, $\Pi(\alpha) = \mathcal{U}(-4,4)$ and $\Pi(z)= \mathcal{U}(0,1)$. We show the results only for $\Delta \Omega_{GW}=10$ sq. deg.
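The uniform priors listed above translate into a simple flat log-prior of the kind used by samplers such as emcee; the dictionary keys and function below are our own naming, given purely as a sketch.

```python
import numpy as np

PRIORS = {  # uniform ranges quoted in the text
    "H0": (20.0, 150.0), "Om": (0.1, 1.0), "OL": (0.0, 1.0),
    "w0": (-2.0, 0.0), "wa": (-8.0, 8.0),
    "b_GW": (0.0, 6.0), "alpha": (-4.0, 4.0), "z": (0.0, 1.0),
}

def log_prior(params):
    """Flat log-prior: 0 if every sampled parameter lies inside its
    quoted range, -inf otherwise."""
    for name, value in params.items():
        lo, hi = PRIORS[name]
        if not (lo <= value <= hi):
            return -np.inf
    return 0.0
```

The sampler's log-posterior is then the sum of this log-prior and the log-likelihood of Eq. (likeli-1), with out-of-range proposals rejected automatically.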
However, the results for $\Delta \Omega_{GW}=25$ sq. deg. deteriorate only marginally. For a sky-localization error of $\Delta \Omega_{GW}=100$ sq. deg., the impact on the error-bars for the bias parameters is about a factor of two, and is smaller for the other parameters. Measurement of $H_0$, $\Omega_m$ and $b_{GW}(z)$ ------------------------------------------------ The joint estimation of the cosmological parameters $H_0$ and $\Omega_m$ along with the nuisance parameters is shown in Fig. \[hom\] for fixed values of $w_0=-1$ and $w_a=0$. These results are obtained for the case with $N_g= 1.5\times 10^4$, $N_{GW}=200$ [^17], and $\Delta \Omega_{GW}= 10$ sq. deg [^18]. The results show that we can measure [$H_0= 70$ km/s/Mpc]{} with an accuracy of $1.9\%$ with only $N_{GW}(z)=40$ BBHs in each redshift bin of width $\Delta z=0.1$ up to redshift $z=0.5$, detectable at the Advanced LIGO design sensitivity [@Martynov:2016fzi]. The result shown in Fig. \[hom\] also indicates that the gravitational wave bias parameters $b_{GW}$ and $\alpha$ are uncorrelated with the cosmological parameters $H_0$ and $\Omega_m$. As a result, the uncertainty associated with the gravitational wave bias parameter does not affect the inference of the cosmological parameters ([for the parametric form of the bias considered in this analysis]{}). This makes our method both precise and accurate for inferring the cosmological parameters. Using this method we can measure the gravitational wave bias parameter with a fractional error $\sigma_{b_{GW}}/b_{GW}\sim 27\%$ with only $200$ BBHs at the Advanced LIGO design sensitivity [@Martynov:2016fzi]. The cross-correlation technique thus makes it possible to measure the bias parameter with the currently operating detector network, well before the next-generation gravitational wave detectors [@Punturo:2010zz; @Reitze:2019iox] could do so using the auto-correlation between the gravitational wave sources.
This is an additional gain which is not possible with the other proposed methods [@Chen:2017rfc; @Nair:2018ign; @Soares-Santos:2019irc]. The forecast posteriors on $H_0$ (after marginalizing over $\Omega_m, \,b_{GW},\, \alpha$) for $N_{GW}= 50,$ $100, $ $200$ gravitational wave sources are shown in Fig. \[allh0\], along with the measurements of the Hubble constant $H_0=67.4\pm{0.5}$ km/s/Mpc and $H_0=74\pm{1.4}$ km/s/Mpc from Planck [@Aghanim:2018eyx] and SH0ES [@Riess:2019cxk] respectively. The [uncertainty]{} in the measurement of $H_0$ decreases as the number of sources increases ($\sim N_{GW}^{-1/2}$) and as the uncertainty in the luminosity distances decreases ($\sim\sigma_{d_l}/d_l$). Fig. \[allh0\] shows that a measurement of $H_0$ from 200 dark sirens ($\sigma_{H_0}/H_0=1.9\%$) compares favourably with that which would be obtained from 50 sources *with EM counterparts* (such as BNS and NS-BH, assuming $\sigma_{H_0}/H_0=2\%$, [@Chen:2017rfc; @Feeney:2018mkj]). Moreover, as the number of detected dark sirens is expected to outnumber the sources with EM counterparts (such as BNSs and NS-BHs), one can expect the constraints on $H_0$ from dark sirens to dominate over those from BNSs and NS-BHs, even under very conservative assumptions about the availability of galaxy redshift surveys covering a substantial fraction of the sky. *In summary, our method is going to provide both accurate and precise measurements of $H_0$ from dark sirens, along with $\Omega_m$ and the redshift dependent gravitational wave bias parameter $b_{GW}(z)$, from the network of advanced gravitational wave detectors with or without optical squeezing.
Combining these two independent constraints would achieve $\sigma_{H_0}/H_0\,\sim 1.4\%$, which is competitive with current constraints from standard candles [@Riess:2019cxk].* Measurement of $\Omega_\Lambda$, $\Omega_m$ and $b_{GW}(z)$ ----------------------------------------------------------- As our method can be applied out to high redshift (up to which galaxy surveys will be available), we can also measure the dark energy budget $\Omega_\Lambda$ from dark sirens. We make a joint estimation of the cosmological parameters $\Omega_\Lambda$–$\Omega_m$ along with the two bias parameters $b_{GW}$ and $\alpha$ of the parametric form $b_{GW} (z)= b_{GW}(1+z)^\alpha$, for fixed values of $H_0=70\,\text{km/s/Mpc}, w_0=-1$ and $w_a=0$, with $N_{GW} (z)= 40$ per redshift bin up to redshift $z=0.7$. The corresponding plot is shown in Fig. \[olamdaom\]. We show for the first time that the energy budget of dark energy can be measured using dark sirens detectable on a modest timescale at the Advanced LIGO design sensitivity [@Martynov:2016fzi], with only $N_{GW}= 280$ BBHs. $\Omega_m$ and $\Omega_\Lambda$ are also uncorrelated with the bias parameters ($b_{GW}$ and $\alpha$), so the bias uncertainty will not affect the measurement of these cosmological parameters. The measurements of $\Omega_\Lambda$ and $\Omega_m$ become less constraining for a limited number of gravitational wave sources if the value of $H_0$ is not kept fixed; however, joint estimation with $H_0$ is possible with more gravitational wave sources. This method will also be useful for future gravitational wave detectors such as LISA [@2017arXiv170200786A], ET [@Punturo:2010zz], and CE [@Reitze:2019iox] to measure $\Omega_\Lambda$, $\Omega_m$, and the gravitational wave bias parameter $b_{GW}(z)$.
Measurement of $w_0$, $w_a$ and $b_{GW}(z)$ ------------------------------------------- The two-parameter phenomenological model of the dark energy equation of state, $w_{de}= w_0 + w_a\,z/(1+z)$, is usually considered to explore the redshift dependence of dark energy. Using our method, we show the joint estimation of $w_0$ and $w_a$ along with the two bias parameters $b_{GW}$ and $\alpha$ (for the parametric form $b_{GW} (z)= b_{GW}(1+z)^\alpha$) in Fig. \[w0wa\] for $N_{GW} (z)= 40$ per redshift bin extending up to $z=0.7$. We have kept the values of $H_0=70$ km/s/Mpc and $\Omega_m=0.315$ fixed for the flat LCDM model. This plot shows that the technique is capable of inferring the dark energy equation of state with $N_g=1.5\times 10^4$, $N_{GW}=280$ (up to redshift $z=0.7$) for $\Delta \Omega_{GW}=10$ sq. deg. A constraint on the value $w_0=-1$ is possible at $3.4\sigma$; however, the constraints on $w_a$ are going to be weak with this modest number of gravitational wave sources. With the larger number of gravitational wave sources expected from five years of observation at the Advanced LIGO design sensitivity [@Martynov:2016fzi], we will be able to infer the dark energy equation of state with higher accuracy (the error on the parameters reduces as $N_{GW}^{-1/2}$) for sources up to redshift $z\sim 1$. [This independent avenue to measure $w_0$ and $w_a$ will also be]{} accessible from the next generation gravitational wave detectors such as LISA [@2017arXiv170200786A], ET [@Punturo:2010zz], and CE [@Reitze:2019iox] for sources beyond redshift $z=1$. The gravitational wave bias parameters $b_{GW}$ and $\alpha$ are also uncorrelated with the parameters describing the dark energy equation of state and can be measured with high statistical significance, as shown in Fig. \[w0wa\].
Conclusions and discussions {#conc} =========================== Gravitational-wave sources are accurate luminosity distance tracers that require no external calibration, provided instrument calibration can be achieved [@Sun:2020wke]. This makes gravitational wave sources an exquisite probe with which to measure the expansion history of the Universe by exploiting the luminosity distance and its relation to redshift. However, inferring the redshift of a gravitational wave source requires either an EM counterpart or a known mass scale (such as the mass scale associated with tidal deformation [@Messenger:2011gi] or with the pair instability of supernovae [@Farr:2019twy]). For most gravitational wave sources, a measurement of such a mass scale is not going to be possible. An alternative is to infer the redshift of the gravitational wave sources by exploiting the scale associated with the three dimensional clustering of cosmic structures [@Mukherjee:2018ebj]. In this paper, we show the applicability of this avenue to inferring the expansion history of the Universe from gravitational wave sources. Using the detector sensitivity expected from the current generation of gravitational wave detectors [@Unnikrishnan:2013qwa; @Acernese_2014; @Martynov:2016fzi; @Akutsu:2018axf], we show that with a modest number of gravitational wave sources ($\sim 100$) we will be able to infer the Hubble constant $H_0$ with an accuracy of $\sim2.5\%$, as shown in Fig. \[allh0\], for gravitational wave sources distributed up to redshift $z=0.5$. Exploiting the clustering of the gravitational wave sources with the galaxies makes this a robust method to infer the Hubble constant using dark sirens.
Going beyond the Hubble constant, our method makes it possible to measure the fraction of dark energy in the Universe and its fundamental nature using gravitational wave sources with the network of current generation gravitational wave detectors, as shown in Fig. \[olamdaom\] and Fig. \[w0wa\]. This is not currently possible with the gravitational wave sources having EM counterparts (such as BNS and NS-BH), because their observable horizon is limited to lower redshift ($z<0.5$). As a result, only dark sirens can be used to explore the expansion history of the Universe with the currently operating network of gravitational wave detectors. Along with the measurement of the expansion history, this method makes it possible to infer the gravitational wave bias parameter and its redshift dependence $b_{GW}(z)$. The gravitational wave bias parameter determines the spatial distribution of the gravitational wave sources with respect to the dark matter distribution, and this method provides an avenue to measure it. Using our method, we can measure the bias parameter with more than $3\sigma$ significance with only $200$ BBHs distributed up to a redshift of $z=0.5$, as shown in Fig. \[hom\]. With the availability of more gravitational wave sources, the bias parameter can be measured with higher precision and accuracy. Cross-correlation with the galaxies makes it possible to detect the bias parameters of gravitational wave sources sooner, and with higher statistical significance, than is possible from the auto-correlation [@Vijaykumar:2020pzn]. The redshift dependent bias parameter is not degenerate with the cosmological parameters, as shown in Fig. \[hom\], Fig. \[olamdaom\], and Fig. \[w0wa\], which makes it possible to reliably infer the cosmological parameters even though the gravitational wave bias parameter is currently unknown.
On longer timescales, with the operation of next-generation gravitational wave detectors such as LISA [@2017arXiv170200786A], ET [@Punturo:2010zz], and CE [@Reitze:2019iox], we will be able to probe the expansion history of the Universe up to much higher redshift using the method proposed in this paper, without identifying EM counterparts to the gravitational wave sources. The method proposed here will therefore help in building the observation strategy of future gravitational wave detectors. Finally, this method is not limited to gravitational wave sources, but is applicable to any other distance tracer for inferring the expansion history of the Universe through the luminosity-distance–redshift relation. Our method is readily applicable to SNe samples which will be detected with photometric redshift measurements by the Rubin Observatory [@2009arXiv0912.0201L], as already pointed out in a previous analysis [@Mukherjee:2018ebj]. In the future, this method can play a crucial role in cosmology with type-Ia SNe [@Scolnic:2019apa]. This method will also be useful in exploring the synergies between upcoming missions such as DES [@10.1093/mnras/stw641], the Dark Energy Spectroscopic Instrument (DESI) [@Aghamousa:2016zmz], Euclid [@2010arXiv1001.0061R], the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) [@Dore:2018kgp], and the Nancy Grace Roman Telescope [^19] [@2012arXiv1208.4012G; @2013arXiv1305.5425S; @Dore:2018smn]. This method is also applicable to Fast Radio Bursts (FRBs) [@Petroff:2019tty], to infer the redshift of sources for which host identification will be difficult. The authors would like to thank Archisman Ghosh for carefully reviewing the manuscript and providing useful comments. S. M. also acknowledges useful discussion with Rahul Biswas, Neal Dalal, Will Farr, Archisman Ghosh, Salman Habib, Eiichiro Komatsu, Daniel M. Scolnic and Joseph Silk.
This analysis was carried out on the Horizon cluster hosted by the Institut d’Astrophysique de Paris. We thank Stephane Rouberol for smoothly running the Horizon cluster. SM and SMN are also supported by the research program Innovational Research Incentives Scheme (Vernieuwingsimpuls), which is financed by the Netherlands Organization for Scientific Research through the NWO VIDI Grant No. 639.042.612-Nissanke. The work of BDW is supported by the Labex ILP (reference ANR-10-LABX-63), part of the Idex SUPER, which received financial state aid managed by the Agence Nationale de la Recherche as part of the programme Investissements d’avenir under the reference ANR-11-IDEX-0004-02. The Center for Computational Astrophysics is supported by the Simons Foundation. AS acknowledges support from the NWO and the Dutch Ministry of Education, Culture and Science (OCW) (through NWO VIDI Grant No. 2019/ENW/00678104 and from the D-ITP consortium). In this analysis, we have used the following packages: Corner [@corner], emcee: The MCMC Hammer [@2013PASP..125..306F], IPython [@PER-GRA:2007], Matplotlib [@Hunter:2007], nbodykit [@Hand:2017pqn], NumPy [@2011CSE....13b..22V], and SciPy [@scipy]. The authors would like to thank the LIGO/Virgo scientific collaboration for providing the noise curves. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. [^1]: The angle between the line of sight and the system’s orbital angular momentum. [^2]: The nature of the lighter companion of the binary system GW190814 [@Abbott2020] is most likely a black hole. However, at present, the data and models for the neutron star equation of state cannot conclusively exclude the possibility that it is a neutron star.
[^3]: Scaling the previous bounds from [@Chen:2017rfc] and [@Soares-Santos:2019irc] as $1/\sqrt{N_{GW}}$ indicates a similar error-bar. [^4]: The correlation function $\xi(r)$ is related to the power spectrum $P(k)$ by a Fourier transform. [^5]: If primordial black holes (PBHs) are dark matter, then the distribution of PBHs is also going to be a biased tracer of the galaxy distribution. [^6]: $\bar n_g \equiv N_g/V_s= \sum_i\, n_g({\bm{r}}_i)$ [^7]: The comoving distance $d_c (z)$ is related to the luminosity distance $d_l (z)$ by $d_l(z)= (1+z) d_c(z)$. [^8]: For all practical purposes, the sky localization error for galaxies can be considered to be zero. [^9]: $\mathcal{U}(a,b)$ denotes the uniform distribution over the range $(a,b)$. [^10]: While this posterior is likely to be non-Gaussian in practice, we make this assumption purely to construct a forecast that can be compared to other studies making similar assumptions. [^11]: Different cases of $N_{GW}$ are considered in this analysis, and are discussed in the respective sections. [^12]: $M= m_1+m_2$ is the total mass of the coalescing binary, $\eta= m_1m_2/M^2$ is the symmetric mass ratio, $c$ is the speed of light and $G$ denotes the gravitational constant. The values of the parameters are $a_1= 0.29740$, $a_2=0.044810$, $a_3=0.095560$ [@Ajith:2007kx]. [^13]: The noise curves are publicly available at <https://dcc-lho.ligo.org/LIGO-T2000012/public> [^14]: $M_\odot= 2\times 10^{30}$ kg denotes the mass of the Sun. [^15]: We consider four cases of $N_{GW}= 50, 100, 200, 280$ for this analysis at the LIGO design sensitivity, which is expected to be easily achievable with the network of gravitational wave detectors. [^16]: Considering only cosmological models with curvature $\Omega_K=0$. [^17]: The total number of gravitational wave sources is $N_{GW}= \int N(z) dz$. [^18]: Results with $\Delta \Omega_{GW}= 25$ sq. deg change only marginally.
[^19]: Previously known as the Wide-Field InfraRed Survey Telescope [(WFIRST)](https://wfirst.gsfc.nasa.gov)
--- abstract: 'We present a flexible method that can calculate Bloch modes, complex band structures, and impedances of two-dimensional photonic crystals from scattering data produced by widely available numerical tools. The method generalizes previous work which relied on specialized multipole and FEM techniques underpinning transfer matrix methods. We describe the numerical technique for mode extraction, and apply it to calculate a complex band structure and to design two photonic crystal antireflection coatings. We do this for frequencies at which other methods fail, but which nevertheless are of significant practical interest.' author: - 'Felix J. Lawrence' - 'Lindsay C. Botten' - 'Kokou B. Dossou' - 'R. C. McPhedran' - 'C. Martijn de Sterke' title: 'A flexible Bloch mode method for computing complex band structures and impedances of two-dimensional photonic crystals' --- Introduction {#sec:introduction} ============ When modeling photonic crystals (PCs), it is important to consider all the relevant Bloch modes. Light at a fixed frequency, polarization, and incident angle exists in a PC as a superposition of a set of propagating and evanescent Bloch modes, the PC’s eigenstates. At low frequencies, only one mode generally needs to be considered. For light at frequencies above the first Wood anomaly [@wood], each row of holes in the PC diffracts light into several propagating orders, so the PC may support multiple propagating Bloch modes. At the PC’s front and back interfaces, some of its modes couple via reflection, affecting the overall reflection and transmission through the PC, so it is important to model all relevant modes. It is often important to include evanescent modes [@Smaali2003443]. If the PC is not long—for example, if it is a layer in a thin antireflection coating—then evanescent modes can play a role in energy transport [@Stefanou:92]. 
Evanescent modes can also play a role in field matching across an interface between PCs [@Lawrence:2008p79] or PC waveguides [@deSterke:09]. The propagative qualities of an evanescent mode are well-represented by its complex band structure [@heine1964], which augments the traditional band structure, conveying information about the rate at which the mode accumulates phase together with information about the mode’s decay rate. There have been a number of studies seeking to derive impedance-like quantities to characterize reflection at PC interfaces by a scalar [@Biswas:2004p465; @Smigaj:2011p1695]. Furthermore, a number of studies have adapted metamaterial parameter extraction techniques [@Simovski:2007p1704] to photonic crystals, and used them to design antireflection coatings [@Miri:2010p791; @Kim:2009p1688]. However, since these techniques characterize reflection and transmission by a single complex number each, they cannot handle problems involving multiple modes, where every mode reflects into every other mode. Scalar-based methods generally give manifestly incorrect results for light at frequencies above the first Wood anomaly, which ranges from $a_x/\lambda =1/n$ for normally incident light to $a_x/\lambda = 1/2n$ for light at the Brillouin-zone edge, where $a_x$ is the length of the lattice vector parallel to the interface, $\lambda$ is the free space wavelength and $n$ is the PC’s background index. Above this frequency, generally several Bloch modes must be simultaneously considered in each PC, regardless of whether these modes are propagating or evanescent. Reflection at a PC/PC interface is well-described by a matrix that maps incident modes to reflected modes, as we have shown previously [@Lawrence:2008p79; @Lawrence:2009p11]. In our experience, the minimum acceptable dimension of this reflection matrix, as argued in Sec. 
\[sub:background\_theory\], is usually $$\label{eq:numprop} M_{\text{min}} = \left\lfloor\frac{a_x n}{\lambda}(1+\sin\theta_i)\right\rfloor + \left\lfloor\frac{a_x n}{\lambda}(1-\sin\theta_i)\right\rfloor + 1,$$ where $\theta_i$ is the incident angle from a uniform dielectric with the PC’s background index, and $\lfloor x \rfloor$ denotes the *floor* of $x$. We have previously achieved accurate results modeling PC stacks using impedance matrices of this dimension and higher [@Lawrence:2008p79; @Lawrence:2009p11; @Lawrence:2010p1345]. A number of methods for finding multiple Bloch modes and complex band structures have been demonstrated. Transfer-matrix [@Gralak:00] and scattering-matrix [@Botten:2001p9] based methods were developed to derive a PC’s Bloch modes from the properties of a single grating layer. The plane wave expansion method has also been extended to include evanescent modes [@Hsue:2004]. Finally, Ha *et al.* presented a method for extracting Bloch modes from the output of an EM solver [@Ha:2009p1388], or even near-field measurements [@Sukhorukov:09; @Ha:2011p2082]. We improve the accuracy, stability and efficiency of Ha *et al.*’s method and extend it to calculate PC impedances for two-dimensional (2D) PCs, which can be used to calculate reflection and transmission at interfaces [@Lawrence:2008p79; @Lawrence:2009p11]. These PC impedances and the reflection and transmission operators are represented by matrices; our method supports the presence and interaction of multiple Bloch modes and so it can work well both above and below the first Wood anomaly. We have made software available that uses the method described in this paper to calculate PCs’ Bloch modes, complex band structures, and impedances. The software, called BlochCode, can then use these complex band structures and impedances to calculate reflection and transmission matrices and coefficients for arbitrary stacks of PCs. BlochCode is open-source and is available on the internet [@blochcodeurl].
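This mode count is simply the number of propagating grating orders, as derived in Sec. \[sub:background\_theory\]. As a minimal numerical sketch (the function name and the $a_x/\lambda$ frequency parametrization are ours, not BlochCode's; the angle is measured inside the background dielectric, as above):

```python
import math

def m_min(freq, n, sin_theta_i):
    """Minimum Bloch-mode count M_min for frequency freq = a_x / lambda,
    background index n, and sine of the incident angle measured inside the
    background dielectric. It counts the propagating grating orders."""
    u = n * freq  # n * a_x / lambda
    return (math.floor(u * (1 + sin_theta_i))
            + math.floor(u * (1 - sin_theta_i)) + 1)

# Example from the antireflection-coating section: a/lambda = 0.38, n = 2.86,
# light from air at 30 degrees, so sin(theta_i) = 0.5 / 2.86 in the dielectric.
print(m_min(0.38, 2.86, 0.5 / 2.86))  # -> 2, as quoted in that section
```

At normal incidence with $n = 3$ and $a/\lambda = 0.4$ (above the Wood anomaly), `m_min(0.4, 3.0, 0.0)` gives 3, matching the three Bloch modes tracked in the band-structure example later in the paper.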
In Sec. \[sec:theory\], we present our method for finding Bloch modes from the electric field $E$ and the magnetic field $H$ in a PC structure. Sec. \[sub:background\_theory\] recaps some useful results from our previous work [@Lawrence:2009p11] and provides some background theory. Sec. \[sub:finding\_modes\] details our improvements to Ha *et al.*’s method [@Ha:2009p1388] of finding Bloch factors and modal fields, and Sec. \[sec:numerical\_procedure\] outlines our procedure for successfully applying this method to minimize the residual derived in Sec. \[sub:finding\_modes\]. Sec. \[sub:calculating\_impedance\] explains how we calculate PC impedance matrices from the modal fields. In Sec. \[sec:application\] we apply our method to demonstrate its utility. In Sec. \[sub:complex\_band\_structure\] we calculate the complex band structure for light normally incident on a triangular lattice PC. In Sec. \[sub:oldcoating\] we reproduce the design process of a known antireflection coating for a PC, at a frequency and incident angle for which it is critical to include at least two Bloch modes in the calculations. Finally, in Sec. \[sub:park\_coating\] we use our method to design an all-polarization antireflection coating for a square lattice self-collimating PC, at a high frequency where a scalar method cannot find a coating for the PC [@Park:2010p651]. Theory {#sec:theory} ====== Our method uses a two-step process to extract a PC’s modes and impedance from the field in a finite length of the PC. The PC is assumed to be two-dimensional, lossless, and to have relative permeability $\mu_r = 1$. As in Ha *et al.*’s method [@Ha:2009p1388], the input data can be generated by FEM or FDTD simulations, or even measured experimentally by a near-field probe such as a SNOM [@Ha:2011p2082], although the impedance part of our method is not valid for SNOM data, which is derived from a 3D object. First, the Bloch factors and the Bloch modal fields are found (Sec.
\[sub:finding\_modes\]), then these modes are analyzed to calculate the PC’s impedance (Sec. \[sub:calculating\_impedance\]). Background Theory {#sub:background_theory} ----------------- Two-dimensional PCs in the $x-y$ plane may be described as a stack of gratings parallel to the $x$ axis [@Botten:2000p505], each of which diffracts incident light into an infinite set of grating orders. At the edge of each unit cell, the PC’s Bloch modes may be written as a superposition of the underlying grating orders [@Botten:2001p9]. Their directions are given by the grating equation $$\label{eq:grating} k_x^{(p)} = k_x + \frac{2 \pi p}{a_x} = k \sin\theta_i + \frac{2 \pi p}{a_x},$$ where $k_x$ is the $x$ component of the incident plane wave’s wavevector, $k_x^{(p)}$ is that of the $p$th diffraction order, and $a_x$ is the length of the lattice vector parallel to the $x$-axis. The wavevector component in the direction perpendicular to the grating is $k_y^{(p)} = \sqrt{k^2 - {k_x^{(p)}}^2}$ where $k$ is the wavenumber in the medium. Evanescent grating orders have imaginary $k_y^{(p)}$, so for a given $k$ and $k_x^{(p)}$, the number of propagating grating orders is the number of solutions to Eq.  with real $k_y^{(p)}$, or $M_\text{min}$ in Eq. . In our experience, $M_\text{min}$ also provides an upper bound on the number of propagating Bloch modes, and at non-normal incidence is a lower bound on the number of Bloch modes required to model a PC accurately. At normal incidence, symmetry allows odd modes to be ignored, so in this case good results may be obtained with fewer than $M_{\text{min}}$ modes—see Sec. \[sub:park\_coating\]. Using Bloch modes found from accurate multipole and FEM transfer matrix methods [@McPhedran:2000p784; @Botten:2004p5], we have consistently had success modeling PCs with no more than $M_\text{min} + 2$ Bloch modes. Bloch’s theorem relates the electric and magnetic fields associated with each mode at equivalent points in different unit cells of a PC. 
The ratio of each mode’s field at points separated by the lattice vector $\mathbf{e}_1 = (a_x, 0)$ is $e^{i k_x a_x}$. For the PC’s other lattice vector $\mathbf{e}_2$, this ratio is different for each mode and is the mode’s Bloch factor, denoted by $\mu$. Calculating $\mu$ for each mode is the goal of Sec. \[sub:finding\_modes\]. For square and rectangular lattices, $\mathbf{e}_2 = (0, a_y)$ and $\mu = e^{ik_y a_y}$, where $k_y$ is the $y$ component of the mode’s wavevector. For triangular lattices, the lattice vector $\mathbf{e}_2$ is $(a_x/2, a_y)$ and so the Bloch factor may be written $\mu = e^{i(k_x a_x/2+k_y a_y)}$. Bloch modes come in forward/backward pairs. Popov *et al.* provide a useful discussion of symmetry properties [@Popov:1986p1233]. We assume mirror symmetry in each unit cell, which means that each backward mode’s field profile in a unit cell is the reflection on the $x$-axis of its forward partner’s. The Bloch factors of a pair are related because of this: for square and rectangular lattices, $\mu_b = 1/\mu_f$, where $\mu_f$ and $\mu_b$ are respectively the Bloch factors of the forward and backward modes. For triangular-like lattices, the symmetry is more complicated since the reflection of $\mathbf{e}_2$ is not $-\mathbf{e}_2$, the translation corresponding to the field ratio $1/\mu_f$, but $(a_x/2, -a_y)$; these vectors differ by $-\mathbf{e}_1$. Accounting for this discrepancy, we find $\mu_b = e^{-i k_x a_x} / \mu_f$ for triangular lattices. A PC’s impedance is defined in terms of two matrices, $\mathbf{E}$ and $\mathbf{H}$ [@Lawrence:2009p11]. For $E = E_z$ polarized light, each matrix maps a vector of forward Bloch mode amplitudes $\mathbf{c}_+$ to a vector of the $E_z$ or $H_x$ fields associated with each grating diffraction order. 
Specifically, $E_{p,m}$, the $(p,m)$th element of $\mathbf{E}$, is the $E_z$ field of normalized mode $m$ due to forward and backward plane waves in grating order $p$, at the centre ($x=0$) of a unit cell’s edge. Thus, for a set of forward propagating/decaying Bloch modes $\mathbf{c}_+$, the field components along the edge of the unit cell, i.e., the quantities that are continuous across an interface between PCs or dielectrics, are $$\label{eq:eh_demonstration} E_z(x) = \sum_p \mathbf{E}_p~\!\mathbf{c}_+ e^{i k_x^{(p)}x},~H_x(x) = \sum_p \mathbf{H}_p~\!\mathbf{c}_+ e^{i k_x^{(p)}x},$$ where $\mathbf{E}_p$ and $\mathbf{H}_p$ are the rows of $\mathbf{E}$ and $\mathbf{H}$ corresponding to grating order $p$. In the $H = H_z$ polarization, $\mathbf{E}$ and $\mathbf{H}$ map to $E_x$ and $H_z$ fields, and these quantities replace $E_z$ and $H_x$ in Eq. . Previously [@Lawrence:2009p11], we defined PC impedances in terms of these matrices. For $E_z$ polarized light, the impedance of a PC is $$\label{eq:Z_Ez} {\cal Z} = {\mathbf{H}_0}^T (\mathbf{I} + \mathbf{Q}) \mathbf{E} + {\mathbf{E}_0}^T (\mathbf{I} - \mathbf{Q}) \mathbf{H},$$ and for $H_z$ polarized light it is $$\label{eq:Z_Hz} {\cal Z} = -\left({\mathbf{H}_0}^T (\mathbf{I} - \mathbf{Q}) \mathbf{E} + {\mathbf{E}_0}^T (\mathbf{I} + \mathbf{Q}) \mathbf{H}\right),$$ where $\mathbf{E}$ and $\mathbf{H}$ are calculated for the PC, and $\mathbf{E}_0$ and $\mathbf{H}_0$ are calculated for a reference material, usually free space. $\mathbf{Q}$ is a diagonal matrix that takes into account the half-period shift of gratings in triangular lattice PCs: for square lattices $\mathbf{Q} = \mathbf{I}$, and for triangular lattices $\mathbf{Q} =\text{diag}((-1)^p)$, where $p$ is the grating order. 
Given impedances $\mathcal{Z}_1$ and $\mathcal{Z}_2$ for two PCs, it is simple to calculate the reflection and transmission matrices across their interface [@Lawrence:2009p11]: \[eq:r\_t\_of\_Z\] $$\begin{aligned} \mathbf{T}_{12} &=& (\mathbf{A}_{12}^T \mathbf{A}_{12} + \mathbf{I})^{-1} 2 \mathbf{A}_{12}^T, \label{eq:t12}\\ \mathbf{R}_{12} &=& (\mathbf{A}_{12} \mathbf{A}_{12}^T + \mathbf{I})^{-1} (\mathbf{A}_{12} \mathbf{A}_{12}^T - \mathbf{I}), \label{eq:r12} \end{aligned}$$ where $\mathbf{A}_{12} = \mathcal{Z}_1^{-1} \mathcal{Z}_2$. Finding modes {#sub:finding_modes} ------------- Our method of finding the Bloch modes and Bloch factors is based on the method presented by Ha *et al.* [@Ha:2009p1388], although our method offers some significant improvements in accuracy and efficiency. We take field data for several unit cells of a PC, and try to write it as a superposition of Bloch modes, thus finding the modal fields and Bloch factors. The final steps of our mode-finding method impose symmetry relationships between forward and backward modal fields, increasing accuracy by almost halving the number of unknowns in the problem. We now outline our method. ![Schematic of $L=5$ PC structures for a square and a triangular PC lattice. The squares with solid edges are the unit cells used by our method. For the triangular lattice PC, the field in the solid-edge unit cells is calculated from the unit cells of the simulated structure (dashed edges) using Bloch’s theorem, with the ratio $e^{i k_x a_x}$ between adjacent cells’ fields.[]{data-label="fig:simmoschem"}](simmoschem.pdf) In an EM solver, we simulate a section of 2D PC with Bloch-Floquet periodic boundary conditions on two boundaries, and uniform dielectric on the others (Fig. \[fig:simmoschem\]). We sample the $E_z$ or $E_x$ (depending on polarization) field component at many ($N_p$) points in unit cell $\ell = 0$, and then at the equivalent points in each of the other unit cells.
If desired, $E_y$, $H_x$, $H_y$, or $H_z$ may be used in place of or in addition to $E_z$ and $E_x$. For triangular lattice PCs, we use the field in the simulated unit cells (dashed edges in Fig. \[fig:simmoschem\]) to calculate the field in the unit cells separated by a lattice vector (solid edges); we apply Bloch’s theorem with integer multiples of the lattice vector $(a_x, 0)$. We seek to write these electric field components as a superposition of forward and backward Bloch modes. So we want to express every $U_\ell(\mathbf{r})$, i.e., the $E_z$ or $E_x$ field component for sampled point $\mathbf{r}$ in unit cell $\ell$, as $$U_\ell(\mathbf{r}) = \sum_{m} \mu_m^\ell A_m(\mathbf{r}) + \sum_{m^\prime} (1/{\mu_{m^\prime}}^{L-1-\ell}) A_{m^\prime}(\mathbf{r}) + w(\ell, \mathbf{r}), \label{eq:EAMu}$$ where $A_m(\mathbf{r})$ and $\mu_m$ are respectively the modal field and the Bloch factor of forward mode $m$; $m^\prime$ denotes backward modes, and $w(\ell,\mathbf{r})$ is the residual error. More specifically, for forward modes, $A_m(\mathbf{r})$ is the field component of mode $m$ at point $\mathbf{r}$ of the first unit cell, $\ell = 0$. The Bloch factor $\mu_m$ is the ratio of the field in cells $\ell + 1$ and $\ell$, so $\mu_m^\ell A_m(\mathbf{r})$ is the field component of forward mode $m$ at point $\mathbf{r}$ of unit cell $\ell$. To avoid ill-conditioning, the field $A_{m^\prime} (\mathbf{r})$ at point $\mathbf{r}$ of each backward mode $m^\prime$ is defined in the last unit cell, $\ell = L-1$. This means that the coefficients of $A_m(\mathbf{r})$ and $A_{m^\prime}(\mathbf{r})$ in Eq.  have moduli no greater than 1. As noted in Sec. \[sub:background\_theory\], the Bloch factor $\mu_{m^\prime}$ of each backward mode is related to that of its forward partner; we enforce this relationship in practice, thereby halving the number of Bloch factors that must be found. 
Equation for all $\ell$ and all sampled $\mathbf{r}$ may be written in matrix form as: $$\mathbf{U} = \mathbf{C} \mathbf{A} + \mathbf{W}, \label{eq:CA_eq_U}$$ where $\mathbf{U}$ contains the $E_z$ or $E_x$ field components from the EM solver, $\mathbf{A}$ is a matrix of modal fields, $\mathbf{C}$ is a matrix constructed from Bloch factors, and $\mathbf{W}$ is a matrix of residuals $w(\ell,\mathbf{r})$ that must be minimized. $\mathbf{U}$ is an $L \times N_p$ matrix: the field in its $\ell$th row and $r$th column is $U_{\ell, r}= U_\ell(\mathbf{r})$, the field component at point $\mathbf{r}$ in unit cell $\ell$. Similarly, $\mathbf{A}$ is an $M \times N_p$ matrix; the field in its $m$th row and $r$th column is $A_{m,r} = A_m(\mathbf{r})$, the field of mode $m$ at point $\mathbf{r}$ in cell $\ell = 0$ for forward modes, or cell $\ell = L-1$ for backward modes. $\mathbf{C}$ is an $L \times M$ matrix. For a forward mode $m$, the $(\ell,m)$th element of $\mathbf{C}$ is ${\mu_m}^\ell$, and for a backward mode $m^\prime$, the $(\ell,m^\prime)$th element is ${1/\mu_{m^\prime}}^{L-1-\ell}$. If multiple field components (e.g. $E_z$, $H_x$ and $H_y$) are to be used to find the modes, then the additional data can be added as extra columns in $\mathbf{U}$. We start the optimization process knowing $\mathbf{U}$, with information about the structure of $\mathbf{C}$, and with no direct information about $\mathbf{A}$. In our method, we first find the Bloch factors that determine $\mathbf{C}$, a relatively difficult problem. Once $\mathbf{C}$ is known, solving Eq.  for the modal fields $\mathbf{A}$ becomes a pure least-squares problem that can be solved accurately and efficiently using standard techniques. To find the modes, we seek to minimize the difference between the observed field $\mathbf{U}$ and the superposition of Bloch mode fields $\mathbf{CA}$. That is, we seek to minimize $||\mathbf{W}||_F^2$ in Eq. , the sum of squared moduli of the elements of $\mathbf{W}$.
Normalizing by the squared Frobenius norm $||\mathbf{U}||_F^2$ of $\mathbf{U}$, the quantity we minimize is $$\label{eq:CA_residual} w^2 = \frac{||\mathbf{U} - \mathbf{CA}||_F^2}{||\mathbf{U}||_F^2},$$ where $w^2 = ||\mathbf{W}||_F^2/||\mathbf{U}||_F^2$. First, we eliminate $\mathbf{A}$ from Eq.  in order to find $\mathbf{C}$ with a numerical minimizer. We use an alternative representation of the Frobenius norm, $||\mathbf{U}||_F = \sqrt{\text{tr}(\mathbf{U}^H \mathbf{U})}$, to write $$\label{eq:CA_residual_factorised} w^2 = \frac{{\text{tr}((\mathbf{U}^H - \mathbf{A}^H \mathbf{C}^H)(\mathbf{U} - \mathbf{CA}))}} {||\mathbf{U}||_F^2}.$$ Finding $\mathbf{A}$ for arbitrary $\mathbf{C}$ is a standard least-squares problem; the optimal $\mathbf{A}$ satisfies $\mathbf{C}^H \mathbf{CA}=\mathbf{C}^H \mathbf{U}$. We expand Eq. , twice apply this relation, and rearrange to get $$\label{eq:C_residual} w^2 = 1 - \frac{\text{tr}(\mathbf{U}^H \mathbf{CC}^+\mathbf{U})}{||\mathbf{U}||_F^2},$$ where $\mathbf{C}^+ = (\mathbf{C}^H \mathbf{C})^{-1} \mathbf{C}^H$ is the Moore-Penrose pseudoinverse of $\mathbf{C}$. Using Eq.  and a numerical minimizer, the Bloch factors that determine $\mathbf{C}$ may often be found to a useful level of accuracy (see Sec. \[sec:numerical\_procedure\] for implementation details). In order to improve the accuracy and reliability of the results, we impose further physical constraints. The PC impedance method [@Lawrence:2008p79; @Lawrence:2009p11] assumes the unit cell to be up-down symmetric, which causes the forward and backward modes to be related. So far, we have only imposed a relationship between the forward and backward Bloch factors, not the modal fields within each unit cell. We can halve the number of unknowns in $\mathbf{A}$ and strongly improve the quality of our results by enforcing this relationship in the minimization process.
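Before the symmetry constraint is imposed, the elimination of $\mathbf{A}$ is easy to check numerically: for synthetic data $\mathbf{U} = \mathbf{CA}$ built from known Bloch factors, the residual with $\mathbf{A}$ eliminated vanishes at the true factors and is positive for perturbed ones. A minimal sketch with forward modes only (all shapes, values, and names here are hypothetical, not from BlochCode):

```python
import numpy as np

def w_squared(C, U):
    """Residual w^2 = 1 - tr(U^H C C^+ U) / ||U||_F^2, i.e. the fraction of
    the field U not representable by the trial Bloch-factor matrix C, with
    the modal fields A eliminated by least squares."""
    P = C @ np.linalg.pinv(C)  # orthogonal projector onto the columns of C
    return 1.0 - np.trace(U.conj().T @ P @ U).real / np.linalg.norm(U, "fro") ** 2

rng = np.random.default_rng(0)
L, M, Np = 8, 2, 40
mus = np.array([0.9 * np.exp(0.4j), 0.35 + 0.10j])  # hypothetical Bloch factors
C = np.power.outer(mus, np.arange(L)).T             # C[l, m] = mu_m ** l
A = rng.standard_normal((M, Np)) + 1j * rng.standard_normal((M, Np))
U = C @ A                                           # synthetic, noise-free field

print(w_squared(C, U))                              # ~ 0 at the true factors
wrong = np.power.outer(1.1 * mus, np.arange(L)).T
print(w_squared(wrong, U))                          # clearly positive elsewhere
```

Any further residual in real data comes from numerical error in the simulated fields and from modes left out of the expansion.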
We commence by partitioning the forward ($f$) and backward ($b$) modes, and the points in the left ($L$; $y \leq a_y/2$) and right ($R$; $y \geq a_y/2$) halves of the unit cell: $$\mathbf{U} = \left(\mathbf{U}_L, \mathbf{U}_R\right),~\mathbf{C} = \left(\mathbf{C}_f, \mathbf{C}_b\right),$$ $$\mathbf{A} = \begin{pmatrix} \mathbf{A}_{L,f} & \mathbf{A}_{R, f}\\ \mathbf{A}_{L,b} & \mathbf{A}_{R, b} \end{pmatrix}.$$ After normalization, the field of a backward mode is the field of its forward partner reflected about the $x$-axis, thus $$\label{eq:A_constraint} \left(\mathbf{A}_{L,b}, \mathbf{A}_{R, b}\right) = \left(\gamma \mathbf{A}_{R,f}\mathbf{P}, \gamma \mathbf{A}_{L,f} \mathbf{P}^{-1}\right),$$ where $\mathbf{P}$ is the permutation matrix that maps points $(x,a_y - y)$ to $(x, y)$, and $\gamma$ is a normalizing diagonal matrix whose elements are the ratio of backward and forward mode amplitudes. The columns of $\mathbf{A}_{R,f}$ and $\mathbf{A}_{R,b}$, corresponding to points in the right half of the unit cell, can easily be ordered so that $\mathbf{P} = \mathbf{I}$; from now on we assume this ordering. Eq.  can now be written with roughly half as many unknowns, $$\label{eq:CpCm_multiplied} \left(\mathbf{U}_L, \mathbf{U}_R \right) = \left(\mathbf{C}_f, \mathbf{C}_b \gamma \right) \begin{pmatrix} \mathbf{A}_{L,f} & \mathbf{A}_{R, f}\\ \mathbf{A}_{R,f} & \mathbf{A}_{L, f} \end{pmatrix} + \mathbf{W}.$$ $\mathbf{C}_b \gamma$ represents each backward mode’s amplitude in each cell, relative to that of the corresponding forward mode in cell 0. The constraints on $\mathbf{A}$ (Eq. ) mean that Eq.  does not have a least-squares form, so may not be immediately simplified in the way that Eq.  led to Eq. . To transform Eq.  
into a more useful form, we block-diagonalize $\mathbf{A}$ and right-multiply by the matrix $ \left(\begin{smallmatrix} \mathbf{I} & \mathbf{I} \\ \mathbf{I} & -\mathbf{I} \end{smallmatrix}\right)$, to show $$\label{eq:UpUm} (\mathbf{U}_+,~\mathbf{U}_-) = (\mathbf{C}_+ \mathbf{A}_+,~\mathbf{C}_- \mathbf{A}_-) + \mathbf{W^\prime}.$$ Here we have introduced the symmetric and antisymmetric forms $\mathbf{U}_\pm = \mathbf{U}_L \pm \mathbf{U}_R$, $\mathbf{C}_\pm = \mathbf{C}_f \pm \mathbf{C}_b \gamma$, and $\mathbf{A}_\pm = \mathbf{A}_{L,f} \pm \mathbf{A}_{R,f}$. Eq.  takes the form of two independent least-squares equations, each with half the dimension of Eq. . The two equations must be satisfied simultaneously, so to find the Bloch factors we can minimize $$w^2 = \frac{||\mathbf{U}_+ - \mathbf{C}_+ \mathbf{A}_+||_F^2 + ||\mathbf{U}_- - \mathbf{C}_- \mathbf{A}_-||_F^2} {||\mathbf{U}_+||^2_F + ||\mathbf{U}_-||^2_F},$$ or equivalently $$\label{eq:final_minimise} w^2 = 1 - \frac{\text{tr}(\mathbf{U}_+^H \mathbf{C}_+ \mathbf{C}_+^+ \mathbf{U}_+) + \text{tr}(\mathbf{U}_-^H \mathbf{C}_- \mathbf{C}_-^+ \mathbf{U}_-)} {||\mathbf{U}_+||_F^2 + ||\mathbf{U}_-||_F^2}.$$ Again, this quantity may be minimized by a numerical optimizer. The residual $w^2$ for any solution to Eq.  is equal to the residual obtained by inserting the solution into Eq. : the two equations differ only in the symmetry constraint on backward modal fields (Eq. ). Compared to Eq. , we have removed $N_p M$ unknowns from $\mathbf{A}$ (where $N_p \gg M$ is the number of sampled points in each unit cell), halving its dimension at the cost of adding $M$ unknowns to $\mathbf{C}_\pm$ as $\gamma$. These new unknowns must be found simultaneously with the Bloch factors using a numerical minimizer, so it is important to supply a good starting estimate; our method for doing so is detailed in Sec. \[sec:numerical\_procedure\]. 
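The search described in this subsection can be illustrated on a toy problem: a Nelder-Mead minimization of $w^2$, with the modal fields eliminated by least squares at each step, recovers a known forward Bloch factor from synthetic field data. This sketch omits the backward partners and $\gamma$ that the full method fits simultaneously; all names and values are ours:

```python
import numpy as np
from scipy.optimize import fmin

def fit_bloch_factors(U, M, mu_start):
    """Fit M forward Bloch factors to field data U (L cells x N_p points) by
    Nelder-Mead minimization of the residual w^2. Toy version: the full
    method also fits the backward partners and gamma."""
    L = U.shape[0]
    norm2 = np.linalg.norm(U, "fro") ** 2

    def w2(x):
        mus = x[:M] + 1j * x[M:]                   # unpack reals -> complex
        C = np.power.outer(mus, np.arange(L)).T    # C[l, m] = mu_m ** l
        P = C @ np.linalg.pinv(C)                  # projector onto mode space
        return 1.0 - np.trace(U.conj().T @ P @ U).real / norm2

    x0 = np.concatenate([mu_start.real, mu_start.imag])
    x = fmin(w2, x0, xtol=1e-10, ftol=1e-12, maxiter=2000, maxfun=2000,
             disp=False)
    return x[:M] + 1j * x[M:]

rng = np.random.default_rng(1)
true_mu = 0.8 + 0.15j                               # hypothetical Bloch factor
C = np.power.outer(np.array([true_mu]), np.arange(8)).T
U = C @ (rng.standard_normal((1, 30)) + 1j * rng.standard_normal((1, 30)))
mu = fit_bloch_factors(U, 1, np.array([0.7 + 0.25j]))  # neighboring estimate
print(abs(mu[0] - true_mu))                         # small: factor recovered
```

As in the full method, the quality of the starting estimate matters: here it plays the role of the neighboring-simulation or background-dielectric estimate described in Sec. \[sec:numerical\_procedure\].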
Calculating impedance {#sub:calculating_impedance} --------------------- Once the Bloch factors and $\gamma$ are known, the modal fields can be reconstructed and analyzed to determine the PC’s impedance. The essential quantities for this calculation are the $E$ and $H$ field components in the plane of the PC interface (i.e., $E_z$ and $H_x$, or $E_x$ and $H_z$, depending on polarization) of each Bloch mode $m$ along the left edge ($y=0$) of a unit cell (see Fig. \[fig:simmoschem\]). These quantities, $E_m(x)$ and $H_m(x)$, may be found from Eq.  using the known values for $\mathbf{C}_+$ and $\mathbf{C}_-$ and inserting the appropriate $E$ or $H$ fields into $\mathbf{U}_+$ and $\mathbf{U}_-$. To calculate the impedance, we find the $\mathbf{E}$ and $\mathbf{H}$ matrices for the PC, as defined in Sec. \[sub:background\_theory\]. Inserting multiples of unit vectors $\mathbf{c}_+$ into Eq. , we can show that $$E_m(x) = {\cal A}_m \sum_{p} E_{p,m}~e^{i k_x^{(p)} x},$$ $$H_m(x) = {\cal A}_m \sum_{p} H_{p,m}~e^{i k_x^{(p)} x},$$ where ${\cal A}_m$ is the amplitude of the normalized mode $m$, and $E_{p,m}$ and $H_{p,m}$ are the elements of $\mathbf{E}$ and $\mathbf{H}$. It is straightforward to exploit the orthogonality of the plane wave grating diffraction orders to show that \[eq:EH\_elements\] $${\cal A}_m E_{p,m} = \frac{1}{a_x} \int_{-a_x/2}^{a_x/2} \! E_m(x) e^{-i k_x^{(p)} x} \, dx,$$ $${\cal A}_m H_{p,m} = \frac{1}{a_x} \int_{-a_x/2}^{a_x/2} \! H_m(x) e^{-i k_x^{(p)} x} \, dx.$$ Eqs.  let us calculate each element of the $\mathbf{E}$ and $\mathbf{H}$ matrices, up to a normalization constant ${\cal A}_m$ per column. We remove the constants by calculating the PC’s impedance (Eq. or ) with the PC itself as the reference material: by reciprocity-derived Bloch mode orthogonality relations [@Lawrence:2009p11], this quantity should be the identity matrix.
The diagonal entries of this matrix are the ${{\cal A}_m}^2$; the off-diagonal terms, which should be zero, provide an error estimate. After normalizing the $\mathbf{E}$ and $\mathbf{H}$ matrices for the PC, we calculate its impedance matrix $\cal Z$ from Eq. or using a reference medium such as free space. Numerical Procedure {#sec:numerical_procedure} =================== Having outlined the theoretical basis of our method for finding the Bloch factors and impedance of a PC at a given frequency, incident angle, and polarization, we now provide some practical detail about our implementation of the method. We outline the procedure for $M = 3$ pairs of Bloch modes. In COMSOL Multiphysics 4.2, we simulate a $1 \times 8$ unit cell sample of PC, embedded in its background dielectric, with Bloch-Floquet periodic boundary conditions along the two long boundaries (Fig. \[fig:simmoschem\] shows a $1 \times 5$ structure). Eq. is a set of $LN_p$ equations, with $2M$ and $M N_p$ unknowns in $\mathbf{C}_\pm$ and $\mathbf{A}_\pm$ respectively. For the system to be overdetermined, the method requires $LN_p > M N_p + 2M$; thus $L=8$ periods and a large $N_p$ are sufficient to find $M=3$ modes. A deeper structure with more unit cells does not necessarily provide useful information about additional evanescent modes, as their amplitude deep inside the structure may be negligible. From COMSOL we export the relevant $E$ and $H$ field components in the $L=8$ unit cells, sampled over a $101 \times (50L+1)$ grid. In order to compute a mode, it must be present in the structure with sufficient amplitude to be detected. Light at normal incidence often fails to excite odd Bloch modes; these *uncoupled modes* [@Sakoda:1995p1955] consequently cannot be found by an optimization, which loses accuracy in searching for modes that are not present.
At frequencies above the first Wood anomaly, the frequencies at which the higher order modes are most important, this problem may be avoided by exciting the PC slab not with a normally incident plane wave, but with the first grating diffraction order. This technique is used in Sec. \[sub:complex\_band\_structure\] and Sec. \[sub:park\_coating\]. If the uncoupled mode is not relevant to a particular problem, it may instead be ignored. If we seek to find $M=3$ Bloch modes, then finding a global minimum of Eq.  involves searching for $2M = 6$ complex numbers. This is a hard problem if attacked directly, but we use an algorithm that gives more consistent success by providing a good starting estimate. We start by minimizing the residual $w^2$ in Eq. , which forces a relationship between forward and backward Bloch factors but not the modal fields. This involves finding only $M$ complex numbers. As a starting estimate for the forward Bloch factors, we either take the result of a neighboring simulation, or the analytically calculated Bloch factors for the dielectric background of the PC. At every step of the minimization, evanescent modes are sorted into forward and backward decaying modes, based on the moduli of their Bloch factors. The minimization can be done by any standard numerical minimizer, such as SciPy’s [@scipy] `fmin`, which is a modified Nelder-Mead optimization [@Wright:1996:DSM]. At this point, the results are equivalent to those from the method of Ha *et al.* [@Ha:2009p1388], except that we have lessened the likelihood of $\mathbf{C}$ being ill-conditioned by renormalizing the backward Bloch factors $\mu_{m^\prime}$ in Eq. and setting their phase origin to the end of the PC. Occasionally, we encounter an instability in which a pair of modes have very large equal and opposite field amplitudes and very small Bloch factors. 
When this occurs, we follow a Gram-Schmidt-like process: we subtract the field of non-problematic modes (i.e., modes with $|\mu| > 10^{-3}$) from $\mathbf U$ and repeatedly minimize Eq.  to find each of the remaining modes individually. Using the solution to Eq.  as our estimate for the Bloch factors, the modal fields may be found with a least-squares optimization. The average field ratio of each pair of backward and forward modes gives us an estimate for $\gamma$. We now have a plausible estimate for $\gamma$ and the Bloch factors, which we can use as a starting estimate to minimize Eq. . To further refine the estimates, we repeatedly iterate through the modes, fixing all but one $\mu$ and the corresponding element of $\gamma$, minimizing Eq.  to find the two variables. After this process, we finally minimize Eq.  across all 6 complex dimensions simultaneously to obtain the correct Bloch factors and modal fields from which we calculate impedances. Forward and backward propagating modes are sorted based on their flux [@Botten:2001p9], before impedances are calculated as outlined in Sec. \[sub:calculating\_impedance\]. Applications {#sec:application} ============ We now apply our method to a range of typical problems. Each of these problems involves frequencies above the first or second Wood anomaly—frequencies at which scalar methods fail and multiple modes are required to describe the system. BlochCode, software that implements our method in Python, using SciPy [@scipy] and Sage [@sage], is freely available on the internet [@blochcodeurl]; we use it here. Complex band structure {#sub:complex_band_structure} ---------------------- The first application of our method is to calculate the complex band structure of a PC. The PC is a triangular lattice of circular air holes with radius $r = 0.3~a$ and lattice constant $a_x = a$ in a dielectric background with $n=3$. 
We calculate the band structure for light polarized with the $\mathbf{H}$ field out of the PC plane ($H_z$ polarization) at frequencies $a/\lambda \in (0,0.5)$ in the $\Gamma-M$ direction, i.e., at normal incidence. Using COMSOL, we calculate the field in an 8 period slab of the PC, and we apply our method to find the largest three Bloch factors. $w^2$ varies: it is less than $10^{-8}$ at low frequencies and less than $10^{-4}$ at high frequencies. ![(Color online) Complex band structure for the PC. The Wood anomaly ($a/\lambda = 0.333$) is marked. The modes are sorted into colors by $|\mu|$; where two modes are propagating (i.e., have $|\mu| = 1$), they are sorted by $|\text{arg} (\mu)|$. (a) Magnitude of Bloch factors $|\mu|$, with three Bloch modes found at all frequencies. (b) $|\mu|$ with two Bloch modes found below the Wood anomaly, three above. (c) Argument of Bloch factors. (d) Complex band structure in 3D.[]{data-label="fig:bandstructure"}](compositefig2.pdf) Fig. \[fig:bandstructure\] summarizes the propagation properties of the two/three most dominant modes. The moduli of the Bloch factors $|\mu|$, which quantify how the modes’ amplitudes vary with propagation, are shown in Figs. 2(a) and 2(b). Below the Wood anomaly, an inspection of $\mathbf{A}$ and $\gamma$ shows that the third mode is barely excited by the normally incident plane wave, and this reduces the accuracy of the results (Fig. 2(a)). Ignoring the uncoupled mode at low frequencies (where the $p=1$ grating order is evanescent and so may not be used to excite the structure, as mentioned in Sec. \[sec:numerical\_procedure\]) increases the accuracy of the other two modes (Fig. 2(b)). The complex arguments of the Bloch factors, which quantify how phase is acquired through propagation, are shown in Fig. 2(c), and the information about amplitude and phase is summarized in a single plot in Fig. 2(d). Aside from slight errors in the phase of strongly evanescent modes in Fig. 
2(c), there is good agreement between Fig. \[fig:bandstructure\] and Bloch factors calculated by highly accurate multipole techniques. Figure \[fig:bandstructure\] shows that at frequencies below the Wood anomaly there is at most one propagating Bloch mode, which becomes evanescent in the first bandgap with a decay factor $|\mu|$ of no less than 0.5; it still decays far more slowly than the other evanescent Bloch modes at that frequency. Fig. 2(c) shows that for the evanescent modes, either 0 or $\pi$ phase is acquired across each unit cell. Antireflection coating {#sub:oldcoating} ---------------------- Our next application is to reproduce the design of an antireflection coating we presented previously [@Lawrence:2009p11], found using PC impedances calculated with a specialized transfer-matrix method [@Botten:2004p5]. As in this previous paper, our design strategy is to try out a very large number of potential coatings, and choose the coating that gives the lowest reflectance off the coated structure. The use of PC impedances makes this a feasible problem, as the evaluation of each coating is quick, involving a few operations on $M \times M$ (here $3\times 3$) matrices. The target PC is a triangular lattice with lattice constant $a_x = a$, consisting of air holes in a dielectric background with $n=2.86$. The holes are cylinders with radius $r = 0.25~a$. We seek to coat the PC to minimize reflection for light with frequency $a/\lambda = 0.38$, incident from air at an angle of $30^\circ$ in the $E_z$ polarization. At this frequency and incident angle, $M_\text{min} = 2$; we consider a total of 3 modes to ensure accuracy. As in our previous work [@Lawrence:2009p11], we seek a two-layer coating, where the degree of freedom is $a_y$, the lattice vector component perpendicular to the air/PC interface. For a regular triangular lattice, $a_y = \frac{\sqrt{3}}{2} a$. 
We choose 121 candidate PCs with $a_y \in [0.6,1.8]~\frac{\sqrt{3}}{2} a$ and simulate 8 periods of each in COMSOL. We apply our method to the resulting data, using the Bloch factors of the previous PC as the starting estimate for the next. BlochCode processes the 121 PCs in approximately 13 minutes on a 3.06 GHz Intel Core 2 Duo desktop computer. An equivalent approach that only requires one PC to be evaluated is detailed in Sec. \[sub:park\_coating\]; we do not use it here since the purpose of this section is to demonstrate the reliability and consistency of the optimization procedure. We then calculate the reflectances off the $121^2 = 14641$ coated stacks (Fig. \[fig:pracoat\]), which takes 34 seconds on a single core of the desktop computer. The optimal coating is found to have thicknesses $a_{y1} = 1.53~\frac{\sqrt{3}}{2} a$ and $a_{y2} = 0.65~\frac{\sqrt{3}}{2} a$, and reduces the reflectance of the structure from $R = 0.945$ to $R = 1.96\times 10^{-4}$. The results in Fig. \[fig:pracoat\] agree well with data calculated by a highly accurate multipole scattering matrix method: the RMS difference is $3.4 \times 10^{-3}$, and the only noticeable differences occur on the two sharp resonant features near the lower edge of the figure. Specifically, the multipole-based calculations show that the coating reduces the PC’s reflectance from $R=0.943$ to $R=4.29 \times 10^{-4}$. ![(Color online) Reflectance of the coated PC as a function of $a_{y1}$ and $a_{y2}$, the relative thicknesses of the two coating layers, calculated using PC impedances from BlochCode. The minimum reflectance is marked.[]{data-label="fig:pracoat"}](pra-redux-comsol.pdf) All-polarization antireflection coating {#sub:park_coating} --------------------------------------- Finally, we apply our methods to find an all-polarization antireflection coating for a silicon-based self-collimating square-lattice photonic crystal presented by Park *et al.* [@Park:2010p651]. 
They investigated this class of structures using a scalar treatment of reflections, and were able to design an all-polarization coating at $a/\lambda=0.28$, below the first Wood anomaly. Since their scalar treatment does not support multiple propagating or evanescent Bloch modes, it generally does not work above the Wood anomaly. Our method does not have this limitation and we demonstrate this by designing an antireflection coating for both polarizations at a frequency well above the Wood anomaly, using more than one Bloch mode. Park *et al.* [@Park:2010p651] showed that at $a/\lambda = 0.368$, a 2D silicon ($n=3.518$) PC with $r = 0.45~a$ is self-collimating for both polarizations at normal incidence. The large radius is an extreme case that is challenging to simulate accurately. At this frequency $M_{\text{min}} = 3$, so for $E_z$ polarized light we include $M = 3$ modes in our calculations, with light incident from the $p=1$ grating order so that the otherwise uncoupled mode is excited. For $H_z$ light, this procedure does not yield accurate results—Bloch factors are calculated accurately, but the calculated reflection coefficients differ from those calculated directly in COMSOL. The calculated impedances prove sufficiently accurate to design an effective antireflection coating, but the inaccuracies mean that the coating is not optimal. To avoid these inaccuracies in $H_z$ polarization, we exploit the symmetry that causes the uncoupled mode. The physical structure and normally incident field are both symmetric about the $y$-axis, and so modes without even symmetry are not coupled to. Therefore we formally ignore the uncoupled odd mode, in each PC and in the reference medium, setting $M = 2$. In our $H_z$ COMSOL simulations for this structure, light is normally incident. In Fig. 2 of Park *et al.*’s paper [@Park:2010p651], they state that $R\simeq 0.28$ for $E_z$ polarized light, and $R\simeq 0.35$ for $H_z$ light. 
We calculate with BlochCode that a semi-infinite slab of the PC has $R= 0.284$ for $E_z$, and $R= 0.354$ for $H_z$ polarized light at this frequency, when incident from silicon. Specialized FEM-based transfer-matrix calculations agree, showing $R = 0.284$ for $E_z$ polarization, and $R = 0.357$ for $H_z$ polarization. At $a/\lambda = 0.368$, normally incident light is reflected by the PC into three propagating diffraction orders. Due to the symmetries of the problem, the $\pm1$ orders are only excited in an even superposition, so light is reflected into two modes. A successful coating needs to suppress reflection into both these modes simultaneously, and so must balance two modes’ amplitudes and two modes’ phases simultaneously for each polarization. Thus the design of a perfect all-polarization coating requires 8 continuous degrees of freedom. Rather than trying to search an 8-dimensional parameter space, which is computationally expensive even when the evaluation of each point is efficient, we consider coatings with four degrees of freedom and accept that we are unlikely to find an all-polarization coating with zero reflectance. Nevertheless, this is a particularly difficult problem: not only do we need many degrees of freedom to find a satisfactory coating, but if either of the Bloch factors in a PC is incorrect or any element of the PC’s impedance matrix is wrong, then the calculated net reflection off the structure is incorrect as well. To limit the coating’s thickness, we embed the four degrees of freedom into two rows of holes by varying both the hole radii, $r_1$ and $r_2$, and the space after the layers, $d_1$ and $d_2$ (Fig. \[fig:dualpolschem\]). Increasing $d_1$ and $d_2$ is similar to increasing $a_y$, as in Sec. \[sub:oldcoating\], but because the candidate PCs are independent of $d$, only one PC per radius needs to be simulated in COMSOL. Furthermore, the properties of the layers of silicon with thickness $d_i$ may be calculated analytically. 
We consider 36 possible hole radii in the range $r_i \in [0.10,0.45]~a$ and 99 values of $d_i \in (0,1)~a$. To allow a thin coating, we set $a_y = 2r + 0.1a$ for each PC. If necessary, additional degrees of freedom could be added to find a coating with even lower reflectances. ![Schematic of the all-polarization antireflection coating. $r_1$ and $r_2$ are the radii of the holes in the first two layers, and $d_1$ and $d_2$ are the thicknesses of the extra silicon background layers between the first few rows of holes. For this coating, $r_1 = 0.13~a$, $d_1 = 0.89~a$, $r_2 = 0.17~a$, and $d_2 = 0.9~a$.[]{data-label="fig:dualpolschem"}](dualpolschem.pdf) On a single core of a $16 \times 2.4$ GHz Intel Xeon-Quad workstation, it took a total of 15 minutes to find the modes of the 36 PCs in the two polarizations. For $E_z$ polarization, $w^2 \simeq 10^{-5}$ for most radii, and for $H_z$ polarization $w^2$ ranged roughly from $3 \times 10^{-3}$ for thin unit cells to $10^{-7}$ for the thicker cells with larger radius. Due to the large number of candidate coatings ($\sim 1.3 \times 10^7$), the embarrassingly parallel problem was split over 16 cores of the workstation, taking approximately 80 minutes per polarization. The best $E_z$ coating reduces $R$ from 0.284 to $9.56 \times 10^{-5}$, and the best $H_z$ coating reduces $R$ from 0.354 to $3.33 \times 10^{-4}$. The best all-round coating is taken to be the one with the lowest total reflection in the two polarizations. This coating has $r_1 = 0.13~a$, $d_1 = 0.89~a$, $r_2 = 0.17~a$, and $d_2 = 0.90~a$ (Fig. \[fig:dualpolschem\]). In $E_z$ it reduces $R$ to 0.0141, and in $H_z$ it reduces $R$ to 0.0197. Calculations from a specialized transfer matrix method [@Botten:2004p5] agree with these results, giving $R = 0.0142$ in $E_z$ polarization and $R = 0.0211$ in $H_z$. 
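The chunked, embarrassingly parallel scan described above can be sketched as follows (threads stand in here for the 16 worker processes, and the reflectance function is again a toy placeholder rather than the real physics):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Split the candidate (r1, d1, r2, d2) coatings into 16 chunks, find each
# chunk's best coating, then take the best of the bests.
rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(20000, 4))   # toy candidate coatings

def reflectance(p):
    """Toy placeholder for the impedance-based stack reflectance."""
    return float(np.sum((p - 0.3) ** 2))

def best_in_chunk(chunk):
    return min((reflectance(p), tuple(p)) for p in chunk)

chunks = np.array_split(params, 16)
with ThreadPoolExecutor(max_workers=16) as ex:
    best_R, best_p = min(ex.map(best_in_chunk, chunks))
```

Because each candidate is independent of the others, the split is trivially load-balanced and the only communication is the per-chunk minimum.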
To verify these results without the aid of our specialized methods, implementations of which are not publicly available, we simulate the structure using COMSOL Multiphysics. Since COMSOL cannot directly calculate reflection coefficients off semi-infinite PCs, we simulate a 20-period section of the uncoated PC surrounded by the background dielectric, and compare the results to a simulation with the antireflection coating on both sides of the PC section. BlochCode calculates the reflectance of the uncoated and coated structures to be 0.407 and 0.0124 respectively in the $E_z$ polarization, and 0.574 and 0.0074 in the $H_z$ polarization. The COMSOL simulations agree with these results, showing that the coating reduces $R$ from 0.407 to 0.0129 in the $E_z$ polarization, and from 0.585 to 0.0055 in the $H_z$ polarization. Discussion & Conclusion {#sec:conclusion} ======================= We have detailed a method for calculating the complex band structure and impedance of PCs. The method takes into account structural symmetries in the PC, and enforces relationships between the fields of forward and backward modes, thus improving the method’s accuracy by eliminating ill-conditioning and constraining modal fields. We have applied the method to three cases, and have demonstrated that it works for a variety of square and triangular lattice 2D photonic crystals, for light in both polarizations and at different incident angles. We have demonstrated that our method works at frequencies both above and below the first Wood anomaly, the frequency above which scalar methods cannot adequately describe light propagation and reflection in PCs. The stronger the excitation of a Bloch mode, the more accurately our method calculates its properties. Thus the method is well-suited to calculating reflection and transmission through arbitrary PC stacks, where the most important modes are those that are strongly excited. 
Since PC impedances make it so easy to calculate the reflection and transmission properties of many combinations of PCs in a stack, it is feasible to search large parameter spaces of PC stacks for particular reflective properties over a range of frequencies, incident angles and polarizations. The method can be used to design not only all-polarization antireflection coatings, but also broadband antireflection coatings [@Lawrence:2009p11], polarization filters, angular filters, and other devices. Ha *et al.* have applied their method to slab PC waveguides [@Ha:2011p2082]. We have not yet applied our method to any 3D structure. As long as the $x-z$ plane mirror symmetry is present, our method for finding the complex band structure remains valid. The field of a slab waveguide might be sampled only over the PC’s surface (as in a SNOM experiment [@Ha:2011p2082]) or throughout the entire volume of the structure (as in a simulation); either case provides sufficient information to determine the modal fields within the sampled region and the associated complex band structure. However, the impedance formalism is yet to be developed for 3D structures. Our method is also valid for finding modes of PC waveguides, using supercells. Calculation of reflection and transmission matrices between PC waveguides is yet to be demonstrated using impedances, but they have previously been calculated directly from the supercell’s $\mathbf{E}$ and $\mathbf{H}$ matrices [@deSterke:09]. Bloch mode analysis is a valuable tool in understanding light’s interactions with PCs. Using an EM solver and our method, for which source code is available [@blochcodeurl], it is straightforward to find a PC’s complex band structure and its impedance. Respectively, these quantities dictate how the Bloch modes travel through the PC, and which modes they couple with at a PC interface. 
If these quantities are known for a set of PCs, then it is fast and efficient to calculate how light travels through arbitrary stacks of the PCs. This research was conducted by the Australian Research Council Centre of Excellence for Ultrahigh bandwidth Devices for Optical Systems (project number CE110001018).
--- abstract: 'We propose a new method to characterize the different phases observed in the non-perturbative numerical approach to quantum gravity known as Causal Dynamical Triangulation. The method is based on the analysis of the eigenvalues and the eigenvectors of the Laplace-Beltrami operator computed on the triangulations: it generalizes previous works based on the analysis of diffusive processes and proves capable of providing more detailed information on the geometric properties of the triangulations. In particular, we apply the method to the analysis of spatial slices, showing that the different phases can be characterized by a new order parameter related to the presence or absence of a gap in the spectrum of the Laplace-Beltrami operator, and deriving an effective dimensionality of the slices at the different scales. We also propose quantities derived from the spectrum that could be used to monitor the running to the continuum limit around a suitable critical point in the phase diagram, if any is found.' author: - Giuseppe Clemente - 'Massimo D’Elia' title: ' Spectrum of the Laplace-Beltrami Operator and the Phase Structure of Causal Dynamical Triangulation ' --- Introduction ============ Causal Dynamical Triangulations (CDT) [@cdt_report] is a numerical Monte-Carlo approach to Quantum Gravity based on the Regge formalism, where the path-integral is performed over geometries represented by simplicial manifolds called “triangulations”. The action employed is a discretized version of the Einstein-Hilbert one, and the causal condition of global hyperbolicity is enforced on triangulations by means of a space-time foliation. One of the main goals of CDT is to find a critical point in the phase diagram where the continuum limit can be performed in the form of a second-order phase transition. 
The phase diagram shows the presence of four different phases [@cdt_secondfirst; @Ambjorn:2014mra; @Ambjorn:2015qja; @cdt_charnewphase; @cdt_newhightrans; @cdt_toroidal_phasediag], and the hope is that the transition lines separating some of these phases could contain such a second-order critical point. Presently, such phases are identified by order parameters which are typically based on the counting of the total number of simplexes of given types or on other similar quantities (e.g., the coordination number of the vertices of the triangulation). The main motivation of the present study is to enlarge the set of observables available for CDT, trying in particular to find new order parameters and to better characterize the geometrical properties of the various phases at different scales. One successful characterization of the geometries of CDT has been achieved by implementing diffusion processes on the triangulations [@cdt_spectdim; @Coumbe:2014noa]. In practice, one analyzes the behavior of [*random walkers*]{} moving around the triangulations: from their properties (e.g., the return probability) one can derive relevant information, such as the effective dimension felt at different stages of the diffusion (hence at different length scales). In this way, estimates of the [*spectral dimension*]{} of the triangulations have been obtained. In this paper we propose and investigate a novel set of observables for CDT configurations, based on spectral methods, namely, the analysis of the properties of the eigenvalues and the eigenvectors of the Laplace–Beltrami (LB) operator. This can be viewed as a generalization of the analysis of the spectral dimension, since the Laplace–Beltrami operator completely specifies the behavior of diffusion processes (see Appendix \[sec:heatkernel\] for a closer comparison). Still, as we will show in the following, the Laplace–Beltrami operator contains more geometric information than just the spectral dimension.
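The diffusion-based approach can be illustrated on a geometry whose dimension is known in advance. On a large ring (a discretized 1D geometry) the return probability follows exactly from the eigenvalues of the walker's transition matrix, and the running spectral dimension $D_S(\tau) = -2\, \mathrm{d}\ln P(\tau)/\mathrm{d}\ln\tau$ comes out close to 1. The sketch below is our own illustration, not the implementation of Ref. [@cdt_spectdim]:

```python
import numpy as np

# Random walk on a ring of N sites: the transition matrix is circulant,
# with eigenvalues cos(2*pi*k/N), so the exact return probability is
# P(tau) = mean_k cos(2*pi*k/N)**tau.  P(tau) ~ tau**(-Ds/2) gives Ds ~ 1.
N = 4001                                    # odd N avoids parity artifacts
lam = np.cos(2 * np.pi * np.arange(N) / N)

def return_probability(tau):
    return np.mean(lam ** tau)

t1, t2 = 50, 100                            # even diffusion times
Ds = -2 * (np.log(return_probability(t2)) - np.log(return_probability(t1))) \
        / (np.log(t2) - np.log(t1))
```

On CDT triangulations the walk runs on the graph of simplexes rather than on a ring, but the logic is the same: fit the decay of $P(\tau)$ at the diffusion scale of interest.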
Nowadays, spectral methods find application in a huge variety of different fields. To mention just a few, we recall shape analysis in computer-aided design and medical physics [@reuter_dna; @reuter_cad], dimensionality reduction and spectral clustering for feature selection/extraction in machine learning [@eigenmaps], optimal ordering in the PageRank algorithm of the Google Search engine [@pagerank], and connectivity and robustness analysis of random networks [@robustnetw]. The application to CDT is therefore just one more use of a well-known analysis tool. On the other hand, some well-known results established in other fields will turn out to be useful in our investigation of CDT. In the present paper, we limit our study to the LB spectrum of spatial slices. Among the various results, we will show that the different phases can be characterized by the presence or absence of a gap in the spectrum of the LB operator, as happens for the spectrum of the Dirac operator in strong interactions, and we will give an interpretation of this fact in terms of the geometrical properties of the slices. The presence/absence of a gap will also serve to better characterize the two different classes of spatial slices which are found in the recently discovered bifurcation phase [@Ambjorn:2014mra; @Ambjorn:2015qja; @cdt_charnewphase; @cdt_newhightrans]. Moreover, we will show how the spectrum can be used to derive an effective dimensionality of the triangulations at different length scales, and to investigate quantities useful for characterizing the critical behavior expected around a possible second-order transition point. The paper is organized as follows. In Section \[sec:ctdreview\] we discuss our numerical setup together with a short review of the CDT approach, summarizing in particular the major features of the phase diagram that will be useful for the discussion of our results.
In Section \[sec:LBproperties\] we describe some of the most relevant properties of the Laplace-Beltrami operator in general, then focusing on its implementation for the spatial slices of CDT configurations and discussing a toy model where the relation between the LB spectrum and the effective dimensionality of the system emerges more clearly. Numerical results are discussed in Section \[sec:numres\_eigenvalues\]. Finally, in Section \[sec:conclusions\], we draw our conclusions and discuss future perspectives. Appendix \[sec:heatkernel\] is devoted to a discussion of the relation existing between the spectrum of the LB operator and the spectral dimension, defined by diffusion processes as in Ref. [@cdt_spectdim]. A brief review of CDT and numerical setup {#sec:ctdreview} ========================================= It is well known that, perturbatively, General Relativity without matter is non-renormalizable already at the two-loop level [@sagnotti]. Nevertheless, interpreted in the framework of the Wilsonian renormalization group approach [@wilsonian_RG], this really means that the Gaussian point in the space of parameters of the theory is not a UV fixed point, as happens, for example, for asymptotically free theories. Indeed, Weinberg's conjecture of *asymptotic safety* for the gravitational interaction [@weinbergsASS] states the existence of a non-Gaussian UV fixed point, which makes the theory well defined in the UV (i.e. renormalizable), but in a region of the phase diagram not accessible by perturbation theory. Various non-perturbative methods have been developed in the last decades to investigate this possibility, like Functional Renormalization Group techniques [@qeg] or the Monte-Carlo simulations of standard Euclidean Dynamical Triangulations (DT) [@edt1; @edt2; @dt_forcrand; @dt_syracuse] or Causal Dynamical Triangulations, the latter being the subject of this study.
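To anticipate the toy model mentioned above, the link between the LB spectrum and effective dimensionality can already be seen on a regular discretized geometry: for a smooth $d$-dimensional space, Weyl scaling gives an eigenvalue counting $N(\lambda) \sim \lambda^{d/2}$, so the slope of log-count versus log-eigenvalue in the low part of the spectrum estimates $d$. A sketch on a periodic 2D square lattice (our own illustration, not the paper's code):

```python
import numpy as np

# Graph Laplacian of an n x n periodic square lattice (a discretized 2D
# torus); Weyl-like scaling N(lambda) ~ lambda**(d/2) of the low part of
# the spectrum gives back the dimension d = 2.
n = 40
L1 = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)
I = np.eye(n)
L = np.kron(L1, I) + np.kron(I, L1)          # 2D torus Laplacian (1600 x 1600)
lam = np.linalg.eigvalsh(L)                  # ascending; lam[0] ~ 0 (zero mode)

k = np.arange(20, 300)                       # low spectrum, below lattice scale
slope = np.polyfit(np.log(lam[k]), np.log(k), 1)[0]
d_eff = 2 * slope
```

On CDT spatial slices the same fit is applied to the LB spectrum of the triangulation, and the effective dimension is then allowed to depend on the eigenvalue scale.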
Monte-Carlo simulations of quantum field theories are based on the path–integral formulation in Euclidean space, where expectation values of any observable $\mathcal{O}$ are estimated as averages over field configurations sampled with probability proportional to $e^{-\frac{S}{\hbar}}$, $S$ being the action functional of the theory. Regarding the Einstein–Hilbert theory of gravity, the action is a functional of the metric field $g_{\mu \nu}$, given by[^1] $$\label{eq:act_cont} S[g_{\mu\nu}] = \frac{1}{16 \pi G} \bigintssss d^dx \,\sqrt{-g}\,(R -2 \Lambda)$$ where $G$ and $\Lambda$ are respectively the *Newton* and *Cosmological* constants, while the path–integral expectation values are formally written as averages over geometries (classes of diffeomorphically equivalent metrics) $$\label{eq:cdtove_EH_PI} \langle \mathcal{O} \rangle = \frac{1}{Z} \bigintssss \mathcal{D}[g_{\mu\nu}]\, \mathcal{O}[g_{\mu\nu}]e^{-\frac{S[g_{\mu\nu}]}{\hbar}\,},$$ where $Z$ is the partition function. The first step in setting up Monte-Carlo simulations is the choice of a specific regularization of the dynamical variables into play. In the case of gravity without matter fields the only variable is the geometry itself, which can be conveniently regularized in terms of *triangulations*, namely a collection of *simplexes*, elementary building blocks of flat spacetime, glued together to form a space homeomorphic to a topological manifold. The simplexes representing (spacetime) volumes in $4$–dimensional spaces are called *pentachorons*, analogous to *tetrahedra* in $3$–dimensional spaces and *triangles* in $2$–dimensional spaces (i.e. surfaces). Besides the general definition, and at variance with standard DT, triangulations employed in CDT simulations are required to satisfy also a causality condition of global hyperbolicity[^2]. 
This is realized by assigning an integer time label to each vertex of the triangulation in order to partition them into distinct sets of constant time called *spatial slices*, and constraining simplexes to fill the spacetime between adjacent slices (i.e. slices with neighbouring integer labels). The resulting triangulation has therefore a *foliated* structure[^3], and the simplexes can be classified by a (time–ordered) pair specifying the number of vertices on the slices involved (e.g., the pairs $(4,1)$, $(3,2)$, $(2,3)$ and $(1,4)$ classify all spacetime pentachorons). In order to ensure both the simplicial manifold property and the foliated structure *at the same time*, spatial slices, considered as simplicial submanifolds composed of glued spatial tetrahedra, need to be topologically equivalent. This basically means that triangulations are always geodesically complete manifolds, and topological obstructions (e.g., singularities) can only be realized in an approximate fashion, with increasing accuracy in the thermodynamic limit (infinite number of simplexes). The numerical results shown in Section \[sec:numres\_eigenvalues\] refer to slices with $S^3$ topology, but other topologies could be investigated as well (e.g., the toroidal one [@cdt_toroidal; @cdt_toroidal_phasediag]). In practice, it is convenient, without loss of generality, to impose a further condition, namely fixing the squared length of every spacelike link (i.e. connecting vertices on the same slice) to a constant value $a^2$, and the squared length of every timelike link (i.e. connecting vertices on adjacent slices) to a constant value $-\alpha a^2$. The constant $a$ takes the role of *lattice spacing*, while $\alpha$ represents a genuinely regularization–dependent asymmetry in the choice of time and space discretizations.
With this prescription, simplexes in the same class (according to the above definition) not only are equivalent topologically, but also geometrically, so that the expression of the discretized action greatly simplifies. Indeed, at the end of the day[^4], the standard $4$–dimensional action employed in CDT simulations with $S^3$ topology of the slices and periodic time conditions becomes a functional of the triangulation $\mathcal{T}$, and takes the relatively simple form $$\label{eq:4Daction} S_E = -k_0 N_0 + k_4 N_4 + \Delta ( N_4 + N_{41}-6 N_0) \, ,$$ where $N_0$ counts the total number of vertices, $N_4$ counts the total number of pentachorons, and $N_{41}$ is the sum of the total numbers of type $(4,1)$ and type $(1,4)$ pentachorons, while $k_4$, $k_0$ and $\Delta$ are free dimensionless parameters, related to the Cosmological constant, the Newton constant, and the freedom in the choice of the time/space asymmetry parameter $\alpha$ (see Ref. [@cdt_report] for more details). We want to stress that, even if CDT configurations are defined by means of triangulations, the ultimate goal of the approach is to perform a continuum limit in order to obtain results describing continuum physics of quantum gravity. Therefore, the specific discretization used in CDT must be meant as artificial, becoming irrelevant in the continuum limit. For this reason, simplexes should not be considered as forming the physical fabric of spacetime: eventually, one would like to find a critical point in the parameter space where the correlation length diverges and the memory about the details of the fine structure is completely lost.\ In standard CDT simulations, configurations are sampled using a Metropolis–Hastings algorithm [@metropolishastings], where local modifications of the triangulation at a given simulation time (i.e. insertions or removals of simplexes) are accepted or rejected according to the probability induced by the action in Eq.  
and complying with the constraints discussed above. Unlike usual lattice simulations of quantum field theories, the total spacetime volume of CDT triangulations changes after a Monte Carlo update. In order to take advantage of finite size scaling methods (i.e. extrapolation of results to the infinite volume limit), it is convenient to control the volume by performing a Legendre transformation from the parameter triple $(k_4,k_0,\Delta)$ to the triple $(V,k_0,\Delta)$, where the parameter $k_4$ is traded for a target volume $V$. In practice, this is implemented by a fine tuning of the parameter $k_4$ to a value that makes the total spacetime or spatial volumes[^5] fluctuate with mean around a chosen target volume (respectively $\overline{N}_{4}$ or $\overline{N}_{41}$), and adding to the sample only configurations whose total volume lies in a narrow range around the target one. Moreover, a (weak) spacetime volume fixing to a target value $\overline{N}_{4}$ can be enforced, for example, by adding a term to the action of the form $\Delta S = \epsilon (N_{4}-\overline{N}_{4})^2$, where $\epsilon$ quantifies how strongly large volume fluctuations are suppressed. A similar relation holds for fixing the total spatial volume (with $N_{41}$ in place of $N_{4}$). Fixing a target total spatial volume $V_{S,tot}=\frac{\overline{N}_{41}}{2}$, one can investigate the properties of configurations sampled at different values of the remaining free parameters $k_0$ and $\Delta$. The general phase structure of CDT found in the $k_0$ - $\Delta$ plane is thoroughly discussed in the literature [@cdt_report; @cdt_secondord; @cdt_newhightrans; @cdt_charnewphase]. Here we will only recall some useful facts. Four different phases have been identified, called $A$, $B$, $C_{dS}$ and $C_{b}$, as sketched in Fig. \[fig:phasediag\], where for the two $C$ phases the labels $dS$ and $b$ stand respectively for [*de Sitter*]{} and [*bifurcation*]{}.
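The sampling and volume-fixing logic described above can be illustrated by a drastically simplified Metropolis sketch, in which the whole triangulation is collapsed to a single volume variable (this shows only the accept/reject step and the quadratic volume fixing, not the actual CDT moves):

```python
import math
import random

# Toy Metropolis: sample a volume N with +-1 moves under
# S(N) = k4*N + eps*(N - Nbar)**2, i.e. only the k4 term and the
# volume-fixing term. The mean settles near Nbar - k4/(2*eps).
random.seed(1)
k4, eps, Nbar = 0.01, 0.002, 400

def S(N):
    return k4 * N + eps * (N - Nbar) ** 2

N, samples = Nbar, []
for step in range(200000):
    Nnew = N + random.choice((-1, 1))
    if Nnew > 0 and random.random() < math.exp(min(0.0, S(N) - S(Nnew))):
        N = Nnew                          # Metropolis accept
    if step >= 50000:                     # discard thermalization
        samples.append(N)
mean_N = sum(samples) / len(samples)
```

In the full simulation the moves are local insertions and removals of simplexes, but the acceptance rule and the effect of the $\epsilon$ term are exactly of this kind.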
At a qualitative level, configurations in the different phases can be characterized by the distribution of their spatial volume $V_S(t)$, which counts the number of spatial tetrahedra (spatial volume $V_S$) in each slice as a function of the slice time $t$. For configurations in the $B$ phase, the spatial volume is concentrated almost entirely in a single slice, leaving the other slices with a minimal volume[^6]. For both the $C_{dS}$ and $C_{b}$ phases, the spatial volume is peaked at some slice–time but then, unlike the case of the $B$ phase, falls off more gently with $t$, so that the majority of the total spatial volume is localized in a so-called “blob” with a finite time extension; also in this case, slices outside this blob have a minimal volume. Finally, configurations in phase $A$ are characterized by multiple and uncorrelated peaks in the spatial volume distribution. From these observations, it is apparent that $C_{dS}$ and $C_{b}$ are the only physically relevant phases. Indeed, the average spatial volume distribution in the $C_{dS}$ phase is in good agreement with the prediction for a de Sitter Universe, having an $S^4$ geometry after analytical continuation to the Euclidean space [@cdt_desitter]. The bifurcation phase, instead, is characterized by the presence of two different classes of slices which alternate with each other in the slice time $t$ [@cdt_charnewphase; @cdt_newhightrans]. The transition lines between the different phases (dashed lines in Fig. \[fig:phasediag\]) have been investigated by means of convenient observables.
Regarding the $B$-$C_b$ and $A$-$C_{dS}$ transition lines, the definitions employed are based on the observation that changes in the qualitative behavior of the spatial volume distribution function $V_S(t)$ occur at almost constant values of $\Delta$ or $k_0$ respectively, suggesting the quantities conjugate to them in the action as candidate order parameters: namely $conj(\Delta)\equiv (N_4+N_{41}-6 N_0)/N_4$ for the $B$-$C_b$ transition and $conj(k_0)\equiv N_0/N_{41}$ for the $A$-$C_{dS}$ transition. Finite size scaling computations using these observables as order parameters suggest a first-order nature for the $A$-$C_{dS}$ transition, while the $B$-$C_b$ transition appears to be of second order [@cdt_secondord]. The definition of observables employed as order parameters for the $C_{b}$-$C_{dS}$ transition is more involved [@cdt_charnewphase; @cdt_newhightrans]: in the $C_b$ phase, one of the two classes of spatial slices is characterized by the presence of vertices with very high coordination number; also in this case there are hints of a second-order transition, even if the results might depend on the topology chosen for the spatial slices [@cdt_toroidal_phasediag]. ![Sketch of the phase diagram of CDT in $4$d with spherical topology of the spatial slices. The results shown in the present paper have been obtained from simulations running at the points marked by a star symbol $*$. The circled and labeled points $a$, $b$, $c$ and $\widetilde{c}$ refer to simulations running deep inside the respective phases (see Table \[tab:simpoints\]).
The position of transition lines is only qualitative.[]{data-label="fig:phasediag"}](blank_plot_wsimpoints_xphasediag_v3){width="1.0\linewidth"}

                  $k_0$   $\Delta$   phase
----------------- ------- ---------- ----------
$b$               2.2     -0.2       $B$
                  2.2     -0.05      $B$
                  2.2     0.022      $B$
                  2.2     0.05       $C_b$
$\widetilde{c}$   2.2     0.1        $C_b$
                  2.2     0.15       $C_b$
                  2.2     0.3        $C_{dS}$
                  2.2     0.45       $C_{dS}$
$c$               2.2     0.6        $C_{dS}$
$a$               5       0.6        $A$

: Coordinates ($k_0$ and $\Delta$) of the chosen simulation points, as shown in Figure \[fig:phasediag\], and the phases in which they are contained. Some of the points are also labeled by a letter for later convenience. The assignment of simulation points to the different phases refers to the total volumes fixed in our runs ($N_{41} = 40k$ and $80k$).[]{data-label="tab:simpoints"}

Global counts of simplices, like those entering the definitions of $conj(\Delta)$ and $conj(k_0)$, are not sufficient to clearly distinguish the different geometrical properties of the various phases. From this point of view, the *spectral dimension* $D_S(\tau)$ (see Appendix \[sec:heatkernel\] for more details) is probably one of the few probes of the geometrical structure of CDT configurations available up to now. It is essentially a measure of the effective dimension of the geometry at different stages of a diffusion process, and it has made it possible to show that, in the bulk of configurations in the de Sitter phase, the spectral dimension tends to a value $D_S\simeq 4$ for large diffusion times [@cdt_spectdim]. In the following, we will show how the analysis of the spectrum of the LB operator, which is discussed in the following section, gives access to new classes of observables, and how some clear characteristic differences among the various phases emerge in this way.\
The code employed for this study is a custom C++ implementation of the standard CDT algorithm discussed in Ref.
[@cdt_report], which was checked against many of the standard results found in the CDT literature. We performed simulations with parameters chosen as shown in Fig. \[fig:phasediag\] by the points marked with a star symbol and reported also in Table \[tab:simpoints\]; for later convenience, four points, each lying deep inside one of the four phases, have been labeled by a letter: $a$, $b$, $c$ and $\widetilde{c}$. For most simulation points we have performed simulations with two different total spatial volumes, $V_{S,tot}=20k$ and $V_{S,tot}=40k$, adopting a volume fixing parameter $\epsilon = 0.005$; we have verified that our results are independent of the actual volume-fixing prescription used. The Laplace-Beltrami operator {#sec:LBproperties} ============================= The LB operator, usually denoted by the symbol $-\Delta$, is a generalization of the standard Laplace operator. Its specific definition depends on the underlying space and on the algebra of functions on which it acts. For a generic smooth Riemannian manifold $(\mathcal{M},g_{\mu \nu})$ the Laplace-Beltrami operator acts on the algebra of smooth functions $f\in \mathcal{C}^\infty(M)$ in the form [@diffgeom]: $$\begin{aligned} \label{eq:LB-Mg} -\Delta f &=& -\frac{1}{\sqrt{|g|}} \partial_\mu (\sqrt{|g|} g^{\mu \nu} \partial_\nu f) \\ &=& - g^{\mu \nu} (\partial_\mu \partial_\nu -\Gamma_{\mu \nu}^\alpha \partial_\alpha) f \, , \nonumber \end{aligned}$$ where $g$ is the metric determinant, $g^{\mu \nu}$ is the inverse metric and $\Gamma_{\mu \nu}^\alpha$ are the Christoffel symbols. It is easily shown that $-\Delta$ is invariant with respect to isometries.
Furthermore, since it is self-adjoint and positive semi-definite, a set of eigenvectors $\mathcal{B}_{M}$ solving the eigenvalue problem $-\Delta f = \lambda f$ forms an orthogonal basis for the algebra $\mathcal{C}^\infty(M,\mathbb{R})$; in the following we will refer to such sets as *spectral bases*, which, for convenience and without loss of generality, we will always consider orthonormal. A spectral basis can then be used to define the Fourier transform as a change of basis from real to momentum space (e.g. sines and cosines in $\mathbb{R}^n$, or spherical harmonics on $S^2$), while the eigenvalues associated to each eigenspace contain information about the characteristic scales of the manifold. We will now elaborate further on the interpretation of the spectrum of eigenvalues, considering a diffusion process on a generic manifold $M$ described by the heat equation $$\begin{aligned} \label{eq:Mdiffeq} \partial_t u(x,x_0;t) - \Delta u(x,x_0;t) = 0 \, . \end{aligned}$$ We can expand the solution in a spectral basis $\mathcal{B}_M = \big\{ e_{n} | \lambda_n \in \sigma_M, \lambda_{n+1} \geq \lambda_n \big\}$ associated to the spectrum of (increasingly ordered) eigenvalues $\sigma_M = \big\{ \lambda_n \big\}$ $$\begin{aligned} u(x;t) = \sum_{n=0}^{|\sigma_M|-1} u_{n}(t) e_{n}(x) \, , \end{aligned}$$ so that the heat equation is transformed (by orthogonality) into a set of *decoupled* equations $$\begin{aligned} \partial_t u_{n}(t) = -\lambda_n u_{n}(t)\;\, \forall\, n\, , \end{aligned}$$ $$\begin{aligned} \label{eq:decmodes} \implies u(x;t) = \sum_{n=0}^{|\sigma_M|-1} e^{-\lambda_n t} u_{n}(0) e_{n}(x) \, . \end{aligned}$$ In this last form the geometric role of the eigenvectors in the diffusion process is evident: $\lambda_n$ represents the diffusion rate of the mode $e_{n}(x)$, so that the smallest eigenvalues are associated with the eigenvectors along the slowest diffusion directions and vice versa.
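The decoupled evolution of Eq. \[eq:decmodes\] can be verified directly on a small example. The sketch below is our own illustration: the role of $-\Delta$ is played by the finite-difference Laplacian on a periodic ring, and at large times only the uniform $\lambda_0 = 0$ mode survives.

```python
import numpy as np

def heat_evolve(L, u0, t):
    """Solve du/dt = -L u spectrally: u(t) = sum_n exp(-lambda_n t) <e_n, u0> e_n."""
    lam, vecs = np.linalg.eigh(L)     # L real symmetric: orthonormal eigenbasis
    coeff = vecs.T @ u0               # components of u0 in the spectral basis
    return vecs @ (np.exp(-lam * t) * coeff)

# finite-difference Laplacian on a periodic ring of N sites (lattice spacing 1)
N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
L = 2.0 * np.eye(N) - A

u0 = np.zeros(N)
u0[0] = 1.0                           # all the initial "heat" on one site
u_inf = heat_evolve(L, u0, 200.0)     # only the lambda = 0 mode survives
```

The total "heat" $\sum_x u(x;t)$ is conserved at all times, since it is proportional to the coefficient of the zero mode, whose decay rate vanishes.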
In this specific sense, the spectrum $\sigma_{M}$ encodes information about the characteristic scales of the manifold, while the set of eigenstates $\mathcal{B}_{M}$ identifies all the possible diffusion modes, and forms a basis for the algebra of functions on the manifold. Similar considerations apply to the problem of wave propagation on the manifold, where the heat equation is replaced by the wave equation; this is the reason behind the famous idea of “hearing the shape of a drum” [@drumshape_rev]. The definition of the Laplace–Beltrami operator can be easily extended to more general algebras, like the graded algebra of differential forms or the algebra of functions on a graph [@spectra_of_graphs; @spectral_graph_theory], the latter being of particular importance in our discussion since, as discussed below, it allows us to implement straightforwardly the spectral analysis on CDT spatial slices, by means of their associated dual graphs. An undirected graph $G$ [@graph_theory] is formally a pair of sets ($V$,$E$), where $V$ contains the *vertices*, which assume the role of lattice sites, whereas the set of edges, $E \subset V \times V$, is a symmetric binary relation on $V$ encoding the connectivity between vertices in the form of ordered pairs of vertices $\{(v_i, v_j)\}$. The reason why, in this first study, we choose to apply spectral methods to the geometry of spatial slices only is that spatial tetrahedra have all link lengths equal to the spatial lattice size $a$, so that the distance between the centers of any two adjacent tetrahedra is the same; therefore, spatial slices can be faithfully represented by dual undirected and unweighted graphs, where the vertex set is the set of tetrahedra, and the edge set is the adjacency relation between tetrahedra.
The algebra on which the Laplace-Beltrami operator acts can be taken as that of the real-valued functions $f:V\to \mathbb{R}$, which can be represented as the vector space $\mathbb{R}^{N}$ (where $N=|V|$), once an ordering of the vertices $i\mapsto v_i \in V\;\forall i \in \{0,1,\dots,N-1\}$ has been arbitrarily chosen, without loss of generality[^7]. In this representation the Laplace–Beltrami operator becomes formally a matrix, named the *Laplace matrix*, and defined as: $$\label{eq:LBmat} L = D - A \, ,$$ where $D$ is the (diagonal) *degree matrix*, whose element $D_{ii}$ counts the number of vertices connected to the vertex $v_i$, while $A$ is the symmetric *adjacency matrix*, whose element $A_{ij}$ is $1$ only if the vertices $v_i$ and $v_j$ are connected (i.e. $\{v_i,v_j\}\in E$) and zero otherwise. For instance, for the graph associated with a one-dimensional hypercubic lattice with $N$ sites and periodic boundary conditions one has $D = 2 \cdot \mathbbm{1}$ and $A_{ij} = \delta_{j,(i+1) \bmod N} + \delta_{j,(i-1) \bmod N}$, and the Laplace matrix can be read off as the lowest order approximation to the Laplace–Beltrami operator estimated by evaluating functions on lattice sites: $$\label{eq:lap1D_f} -\Delta f (x_i) = -\frac{d^2f}{dx^2}(x_i) = \frac{2 f_i-f_{i+1}-f_{i-1}}{a^2} + \mathcal{O}(a^2) \, ,$$ where $a$ is the lattice spacing and $f_n\equiv f(x_{(n\bmod{N})})$. Notice that, since any tetrahedron of a CDT spatial slice is adjacent to exactly $4$ neighboring tetrahedra, the dual graphs are $4$-regular (i.e. each vertex has degree $4$), so that the adjacency matrix suffices to compute eigenvalues and eigenvectors ($L= 4 \cdot \,\mathbbm{1} - A$), and furthermore it is sparse. In practice, we build and save the graphs associated to each slice in the adjacency list representation. Since the adjacency list is already a memory-efficient storage format for the adjacency matrix of the graph, these structures can be directly fed to any numerical solver optimized for the computation of eigenvalues and eigenvectors of sparse, real and symmetric matrices.
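In a scripting language the construction just described reduces to a few lines. The sketch below is an illustration in Python rather than our production code: it builds $L = D - A$ from an adjacency list (dense, for simplicity; in practice one would keep it sparse) and checks it against the exact spectrum $\lambda_m = 2 - 2\cos(2\pi m/N)$ of the periodic one-dimensional lattice, in units $a = 1$.

```python
import numpy as np

def laplace_matrix(adj):
    """Dense Laplace matrix L = D - A from an adjacency list; a sparse
    format fed to an Arpack-like solver would be used in production."""
    n = len(adj)
    L = np.zeros((n, n))
    for v, neighbours in enumerate(adj):
        L[v, v] = len(neighbours)     # degree matrix D
        for w in neighbours:
            L[v, w] -= 1              # adjacency matrix A
    return L

# periodic 1d lattice (cycle graph): 2-regular, so L = 2*I - A
n = 100
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
lam = np.linalg.eigvalsh(laplace_matrix(adj))
exact = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(n) / n))
```

Every row of $L$ sums to zero, which is another way of seeing that the uniform function is always a zero mode.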
The spectra and eigenvectors analyzed in the present paper have been obtained using the ‘Armadillo’ C++ library [@armadillo] with Lapack, Arpack and SuperLU support for sparse matrix computations. By solving the eigensystem for the LB spectrum, we easily obtain the eigenvectors as a byproduct. Even though the spectrum of a graph does contain much geometric information, by itself it is not capable of completely characterizing a geometry, but only its class of isospectral graphs. Conversely, the joint combination of eigenvalues and eigenvectors yields complete information on the graph[^8], decomposed in a way that is useful for the analysis of geometries. General properties of the eigenvalues of the Laplace matrix on graphs --------------------------------------------------------------------- Here we will describe some results from spectral graph theory that allow us to extract the information mentioned above. For convenience, we will always consider the basis of eigenvectors $\mathcal{B}_G = \{\vec{e}_{n}\}$ to be real and orthonormal, since in this case the spectral theorem for real symmetric matrices applies. First of all we observe that, if no boundary is present, the Laplace matrix always has the zero eigenvalue, with a multiplicity equal to the number of connected components[^9]. For graphs made of a single connected component, any eigenfunction associated to the zero eigenvalue is simply a multiple of the uniform function $\vec{e}_{0} = \frac{1}{\sqrt{|V|}} \vec{1}_{|V|}$, where we indicate with $\vec{1}_{|V|}$ the vector in $\mathbb{R}^{|V|}$ with $1$ in each entry. Furthermore, the sum of the components of each eigenvector $\vec{e}_{n}$, with the exception of $\vec{e}_{0}$, is zero, since $\sum_{v\in V} e_{n}(v) = (\vec{e}_{n},\sqrt{|V|} \vec{e}_{0}) = 0$ by orthogonality of the chosen basis $\mathcal{B}_G$. In the following, we will only discuss properties of graphs with a single connected component[^10], like the ones occurring in CDT.
### Spectral gap and connectivity {#subsec:spectralgap} As argued above, geometric information about the large scales comes from the smallest eigenvalues and the associated eigenvectors. The $0$-th eigenvalue has a topological character: in the general case its multiplicity tells us how many connected components the graph is composed of, but for connected graphs its role is trivial and uninteresting. Arguably the most interesting eigenvalue is the first non-zero one, $\lambda_1$, which, depending on the context, is called the *spectral gap* or the *algebraic connectivity*. The latter name comes from the observation that the larger the spectral gap $\lambda_1$, the better connected the graph. A measure of connectivity for a compact Riemannian manifold $\mathcal{M}$ is given by the *Cheeger isoperimetric constant* $h(\mathcal{M})$, defined in terms of the minimal area of a hypersurface $\partial A$ dividing $\mathcal{M}$ into two disjoint pieces $A$ and $\mathcal{M}\setminus{A}$ $$h(\mathcal{M}) \equiv \inf \frac{vol(\partial A)}{\min\left[vol(A),\, vol(\mathcal{M}\setminus{A})\right]} \, ,$$ where the infimum is taken over all possible connected submanifolds $A$. For a graph $G=(V,E)$, the Cheeger constant is usually defined by $$\label{eq:CheegerDef} h(G) \equiv \min \Big\{\frac{|\partial A|}{|A|}\,|\, A\subset V, |A| \leq \frac{|V|}{2}\Big\} \, ,$$ where $\partial A$ is the set of edges connecting $A$ with $V\setminus A$. The relation between the Cheeger constant and the spectral gap for a graph $G$ where all vertices have exactly $d$ neighbours is encoded in the *Cheeger’s inequalities* $$\label{eq:Cheeger_ineq} \frac{1}{2} \lambda_1 \leq h(G) \leq \sqrt{2 d \lambda_1} \, .$$ This property of the spectral gap is interesting for the analysis of the geometries of slices in CDT since, as we will see in the next section, it highlights different behaviors for the various phases.
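For graphs small enough that all vertex subsets can be enumerated, the Cheeger inequalities can be checked explicitly. The following brute-force sketch (ours, and exponentially expensive, hence suitable only for tiny graphs) does this for a $2$-regular cycle.

```python
import itertools

import numpy as np

def cheeger_constant(adj):
    """Brute-force h(G): minimum of |boundary(A)| / |A| over all vertex
    subsets A with |A| <= |V|/2 (exponential cost, tiny graphs only)."""
    n = len(adj)
    best = float('inf')
    for size in range(1, n // 2 + 1):
        for subset in itertools.combinations(range(n), size):
            inside = set(subset)
            boundary = sum(1 for v in subset for w in adj[v] if w not in inside)
            best = min(best, boundary / size)
    return best

def spectral_gap(adj):
    """First non-zero eigenvalue of L = D - A (algebraic connectivity)."""
    n = len(adj)
    L = np.zeros((n, n))
    for v, neighbours in enumerate(adj):
        L[v, v] = len(neighbours)
        for w in neighbours:
            L[v, w] -= 1
    return np.linalg.eigvalsh(L)[1]

# 2-regular example: cycle with 8 vertices (d = 2)
n, d = 8, 2
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
lam1 = spectral_gap(adj)
h = cheeger_constant(adj)
# Cheeger inequalities: lam1 / 2 <= h <= sqrt(2 * d * lam1)
```

For the cycle, the optimal cut is a contiguous arc of half the vertices, crossed by exactly two edges.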
Eigenvalue distribution and a toy model {#subsec:toymodel} --------------------------------------- When one considers the whole spectrum of the LB operator, two particularly interesting quantities are the density $\rho(\lambda)$, defined so that $\rho(\lambda)\, d \lambda$ gives the number of eigenvalues found in the range $[\lambda, \lambda + d \lambda]$, and its integral $n(\lambda)$, which gives the total number of eigenvalues below a given value $\lambda$. Both functions can be defined for single configurations (spatial slices) or can be given as average quantities over the Euclidean path integral ensemble. As we shall see, the latter quantity, $n(\lambda)$, will prove particularly useful to characterize the properties of triangulations at different scales. It is an increasing function of $\lambda$ and its inverse is simply the $n$-th eigenvalue $\lambda_n$. We will usually show $\lambda_n$ as a function of $n$ since, when considering a sample of configurations, taking the average of $\lambda$ at fixed (integer) $n$ is easier. There are various well known results regarding the two quantities above, most of them involving the LB operator on smooth manifolds. In particular, Weyl law [@weylslaw_1; @weylslaw] gives the asymptotic (large $\lambda$) behavior of $n(\lambda)$: $$\label{eq:weyl_law} n(\lambda) = \frac{\omega_d}{(2 \pi)^d} V \lambda^{d/2}$$ where $V$ is the volume of the manifold (which is assumed to be finite, with or without a boundary), $d$ is its dimensionality, and $\omega_d$ is the volume of the $d$-dimensional ball of unit radius. As we shall better discuss below, Weyl law, even if asymptotic, is generally expected to hold with a good approximation in the range of $\lambda$ for which one is not sensitive to the specific infrared properties (i.e. shape, boundaries and/or topology) of the manifold. 
How violations of the Weyl law emerge and how they can be related to a sort of effective dimension at a given scale will be one of the main points of our discussion. ![Plot of $\lambda_n$ against its volume-normalized order $n/V$, for a hypercubic lattice with periodic boundary conditions (i.e., toroidal) and different combinations of sizes $L_i$ for each direction. The straight continuous line is the exact Weyl scaling of Eq. (\[eq:weyl\_law\]) predicted for $d = 3$; the dashed straight lines correspond to effective Weyl scalings for effective dimensions $d=2$ and 3.[]{data-label="fig:toymodel_1"}](toymodel_p_1){width="1.0\linewidth"} In the following we shall consider the LB spectrum computed on discretized manifolds. It is therefore useful to start by analyzing a simplified and familiar model, consisting of a regular and finite 3-dimensional cubic lattice, with respectively $L_x$, $L_y$ and $L_z$ sites along the $x$, $y$ and $z$ directions. All lattice sites are connected to their 6 nearest-neighbor sites, with periodic boundary conditions in all directions: this is therefore the discretized version of a 3-dimensional torus. The Laplacian operator can be simply discretized on this lattice and its eigenvectors coincide with the normal modes of a corresponding system of coupled oscillators: they are plane waves having wave number $\vec k = (k_x, k_y, k_z)$, with $k_i = 2 \pi\, m_i / L_i$ and $m_i$ integers such that $-L_i/2 < m_i \leq L_i/2$, so that $$\lambda_{\vec m} = 4 \pi^2 \left( \frac{m_x^2}{L_x^2} + \frac{m_y^2}{L_y^2} + \frac{m_z^2}{L_z^2} \right) \, .$$ Determining $n(\bar\lambda)$ for a given $\bar\lambda$ now reduces to counting how many vectors $\vec m$ exist such that $ \lambda_{\vec m} \leq \bar\lambda$. That corresponds to counting the triplets of integer numbers, i.e. the cubes of unit side, within the ellipsoid of semiaxes $R_i = \bar \lambda^{1/2} L_i / (2 \pi)$, with the constraint that $ - L_i/2 < m_i \leq L_i/2\ \forall\, i$.
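This mode counting is easy to reproduce numerically. The sketch below (our illustration, using the dispersion relation above) builds the full spectrum of a small symmetric torus and compares the number of modes below a probe scale $\bar\lambda$ with the ellipsoid-volume estimate, i.e. the $d = 3$ Weyl law; the two agree up to lattice-point counting corrections.

```python
import numpy as np

def torus_spectrum(L_sizes):
    """All eigenvalues lambda_m = 4 pi^2 sum_i (m_i/L_i)^2 for integer mode
    numbers -L_i/2 < m_i <= L_i/2, returned sorted in increasing order."""
    grids = np.meshgrid(*[np.arange(-((L - 1) // 2), L // 2 + 1)
                          for L in L_sizes], indexing='ij')
    lam = sum(4 * np.pi ** 2 * (m / L) ** 2 for m, L in zip(grids, L_sizes))
    return np.sort(lam.ravel())

L_sizes = (40, 40, 40)
lam = torus_spectrum(L_sizes)

# count modes with lambda <= lam_bar and compare with the ellipsoid
# (here: sphere) volume, i.e. the d = 3 Weyl law
lam_bar = 1.0                 # chosen so that 1 << R_i << L_i
n_count = np.searchsorted(lam, lam_bar, side='right')
V = np.prod(L_sizes)
n_weyl = (4 * np.pi / 3) * V * lam_bar ** 1.5 / (2 * np.pi) ** 3
```

Repeating the exercise with anisotropic sizes $(L_x, L_y, L_z)$ reproduces the lower-dimensional branches discussed below.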
The latter constraint expresses the particular (cubic) discretization that we have adopted for the 3-dimensional torus, i.e. the structure of the system at the UV scale: if $\bar \lambda$ is low enough that $R_i < L_i\ \forall\ i$, then we are not sensitive to that scale. On the other hand, the discretized structure of the eigenvalues expresses the finiteness of the system, i.e. the properties of the system at the IR scale: if we also have $R_i \gg 1\ \forall\ i$ then we are not sensitive to that scale either, and the counting reduces approximately to estimating the volume of the ellipsoid, so that $$n(\bar \lambda) \simeq \frac{4 \pi}{3} R_x\, R_y\, R_z = \frac{4 \pi}{3} \frac{L_x\, L_y\, L_z}{(2 \pi)^3} \bar \lambda^{3/2} \, ,$$ which is nothing but the Weyl law for $d = 3$. In Fig. \[fig:toymodel\_1\] we show the exact distribution of $\lambda_n$ as a function of $n/V$, for various choices of $L_x$, $L_y$ and $L_z$. The thick line represents the Weyl law prediction, $\lambda = (6 \pi^2\, n/V)^{2/3}$. When $n/V \to 1$, all systems show similar deviations from the law, which are related to the common structure at the UV scale. The Weyl law is a very good approximation for lower values of $n/V$, as expected, and actually down to very small values of $n/V$ for the symmetric lattice where $L_x = L_y = L_z = 50$. ![Same as in Fig. \[fig:toymodel\_1\], for different combinations of the spatial sizes $L_i$ of the toroidal lattice. []{data-label="fig:toymodel_2"}](toymodel_p_2){width="1.0\linewidth"} For the asymmetric lattices, instead, some well structured deviations emerge at low $n/V$, where $\lambda$ follows a Weyl-like power law which is typical of lower-dimensional models and can be easily interpreted as follows.
For the lattice with $L_x = L_y = l = 15$ and $L_z = 600$, one does not find any eigenvalue with $m_x \neq 0$ or $m_y \neq 0$ as long as $\lambda < 4 \pi^2 / l^2 \simeq 0.175$, therefore in this range the distribution of eigenvalues is identical to that of a one-dimensional system, for which $\lambda \propto (n/V)^{2}$; for $\lambda > 4 \pi^2 / l^2$ also eigenvalues for which $m_x$ and/or $m_y$ are non-zero appear, and their distribution goes back to the standard 3-dimensional Weyl law. Making a wave-mechanics analogy, at low energy only longitudinal modes are excited, while transverse modes are frozen until a high enough energy threshold is reached. The point where one crosses from one power law behavior to the other brings information about the size of the shorter transverse scale. Similar considerations apply to the lattice with $L_x = 3$, $L_y = 75$ and $L_z = 600$, which has three different and well separated IR scales: in this case one sees a one-dimensional power law for small $n/V$, which first turns into a two-dimensional one as modes in the $y$-direction start to be excited, and finally ends up in the standard 3d Weyl law when also modes with $m_x \neq 0$ come into play. The argument above can be rephrased at a more general level. Suppose we have a $D$-dimensional manifold where $d$ “transverse” dimensions are significantly shorter than the other $D -d$ “longitudinal” dimensions, with a typical transverse scale $l$. As long as one considers small eigenvalues, the modes in the transverse directions will not be excited, so that the counting of eigenvalues will be given by the Weyl law for $D - d$ dimensions, i.e. $n(\lambda) = \frac{\omega_{D-d}}{(2\pi)^{D-d}} (V/l^d) \lambda^{(D-d)/2} $. The change from one regime to the other will take place when the transverse directions get excited for the first time, i.e.
at $\lambda \simeq \pi^2/l^2$ (the actual prefactor depends on the details of the shorter dimensions), which corresponds to $n \propto V l^{-D}$, with a proportionality constant which depends only on the details of the short transverse scales and is independent of the details of the longer scales. Therefore, different manifolds, sharing the same structure at short scales associated with an effective dimensional reduction, lead to distributions $\lambda_n$ where the change from one power law behavior to the other takes place at the same point in the $(n/V)$-$\lambda$ plane, where $V$ is the global volume of the manifold. The value of $n/V$, being proportional to $l^{-D}$, brings information about the size of the short scale. To better illustrate the concepts above, in Fig. \[fig:toymodel\_2\] we show the distribution of $\lambda_n$ as a function of $n/V$ for three different choices of $L_x$, $L_y$ and $L_z$. The curves obtained for $(L_x,L_y,L_z) = (3,75,600)$ and $(L_x,L_y,L_z) = (3,75,1200)$ fall exactly on top of each other: their short scale structure is the same, and the function $n(\lambda)$ differs only by the number of modes counted along the long direction $L_z$; this difference disappears when one considers the scaling variable $n/V$, leading to a perfect collapse. The collapse is instead not perfect when one considers the lattice $(L_x,L_y,L_z) = (3,15,600)$, which has a different “intermediate” scale: moving from large to small $n/V$, the turning point from dimension 3 to dimension 2 is the same as for the two other lattices, but the turning point from dimension 2 to dimension 1 takes place earlier, because $L_y$ is shorter.\
The possible examples which one can discuss within the toy model are quite limited.
For instance, one cannot consider the case in which there are points where the manifold branches into multiple connected ramifications, something which in general can lead to an increase, instead of a decrease, of the effective dimension. However, extrapolating the arguments given above, we can conjecture the following. $D$-dimensional manifolds having different overall volumes and shape, but sharing a similarity in the structures which are found at intermediate and short scales, will lead to similar (i.e. collapsing onto each other) curves when $\lambda_n$ is plotted against $n/V$, $V$ being the total volume of the manifold. Moreover, the power law taking place at a given value of $n/V$ will give information about the effective dimensionality $d_{EFF}$ of the manifold at a scale of the order $(n/V)^{-1/D}$, with $$\frac{2}{d_{EFF}} = \frac{d \log \lambda}{d \log (n/V)} \, . \label{eq:weyl_dimension}$$ This kind of information is similar to what is obtained by implementing diffusive processes to measure the spectral dimension. Numerical Results {#sec:numres_eigenvalues} ================= In this section we present results regarding mostly the spectrum of the LB operator defined on spatial slices, while a detailed discussion regarding the eigenvectors is postponed to a forthcoming study. We performed the analysis on spatial slices of configurations in each phase; in particular, almost all the results shown come from simulations running deep into each phase, at the points circled and labeled by a letter in Fig. \[fig:phasediag\] and in Table \[tab:simpoints\]. ![Probability distribution of $\lambda_1$ and $\lambda_3$ for slices with $V_S \simeq 2300$, taken from configurations sampled deep in the $C_{dS}$ phase (simulation point $c$), and with total spatial volume $V_{S,tot}=\frac{N_{41}}{2} = 40 k$. 
[]{data-label="fig:dis_l1_l3"}](distrib_lambda1_lambda3){width="1\columnwidth"} While the total spatial volume has been fixed in each simulation to a target value, the spatial volume of single slices, $V_S$, can vary greatly from one slice to the other (apart from phase $B$). That will permit us to access the dependence of the spectrum on $V_S$, an information that will be very important for many aspects. As discussed above, each spatial slice will be associated with a 4-regular undirected graph, with each vertex of the graph corresponding to a spatial tetrahedron. For this reason, it will be frequent in the following discussion to borrow concepts and terminology from graph theory. We will first look at the low lying part of the spectrum, show how the transition from one phase to the other can be associated to the emergence of a gap in the spectrum, and discuss what that means in terms of the geometrical properties of the triangulations. We will then turn to results regarding the whole spectrum and show how one can obtain information on the effective dimension of the geometry at different scales. Finally, we will describe two methods to visualize graphs and apply them to show the appearance of spatial slices. ![Density $\rho(\lambda)$ computed from the first 100 eigenvalues for slices deep in the $C_{dS}$ phase (simulation point $c$) with total spatial volume $V_{S,tot} = 40k$, and for different ranges of the spatial slice volume $V_S$. []{data-label="fig:CdS_100eigs"}](first100_Cds_scaling){width="1\columnwidth"} The low lying spectrum and the emergence of a gap {#subsec:low_spect} ------------------------------------------------- Apart from the zero eigenvalue, $\lambda_0 = 0$, the remaining eigenvalues will fluctuate randomly from one configuration to the other and, moreover, their distribution will depend on $V_S$ in a well defined way that we are going to discuss later on. As an example, in Fig. 
\[fig:dis\_l1\_l3\] we show the distribution of $\lambda_1$ and $\lambda_3$ on a set of around $3 \times 10^3$ slices of approximately equal volume $V_S \simeq 2300$ in the $C_{dS}$ phase. Thus, while the spectrum of each spatial slice is intrinsically discrete, because of the finite number of vertices making up the associated graph, it makes sense to define a continuous distribution $\rho(\lambda)$, such that $\rho (\lambda) d \lambda$ gives the number of eigenvalues which are found on average in the interval $[\lambda, \lambda + d \lambda]$. In general $\rho(\lambda)$ will be a function of the bare parameters chosen to sample the triangulations and, for fixed parameters, of the spatial volume $V_S$ of the chosen slice. ![ Density $\rho(\lambda)$ computed from the first 100 eigenvalues for the maximal slices in the $B$ phase (simulation point $b$) and for different spatial volumes $V_S$. []{data-label="fig:B_100eigs"}](first100_B_scaling){width="1\linewidth"} In Figs. \[fig:CdS\_100eigs\] and \[fig:B\_100eigs\] we show the low lying part of the distribution $\rho(\lambda)$ obtained from simulations performed respectively in the $C_{dS}$ and $B$ phases, selecting in each case three different ranges of spatial volumes[^11]. In order to focus just on the low part of the spectrum, we have limited the input for $\rho$ to just the first few eigenvalues in each case ($n \leq 100$). A striking difference between the two phases emerges. In the $B$ phase there is a gap $\Delta \lambda = \lambda_1 \simeq 0.1$ which does not disappear and is practically constant as the spatial volume $V_S$ grows, i.e. as one approaches the thermodynamical limit. This gap is absent in the $C_{dS}$ phase, where the distribution of the first 100 eigenvalues is instead squeezed more and more towards $\lambda = 0$ as $V_S$ grows.
The presence or absence of a gap in the spectrum is a characteristic which distinguishes different phases in many fields of physics: think for instance of Quantum Chromodynamics, where the absence/presence of a gap in the spectrum of the Dirac operator distinguishes between the phases with spontaneously broken/unbroken chiral symmetry. Let us discuss the meaning of the gap in our context. Graphs which maintain a finite gap as the number of vertices goes to infinity are known as [*expander graphs*]{} [@expgraphs] and play a significant role in many fields, e.g., in computer science. They are characterized by a high connectivity, i.e. the boundary of every subset of vertices is generically large. Such a high connectivity is usually associated with a degree of randomness, i.e. a lack of order, in the connections between vertices: for instance, random regular graphs are [*expanders*]{} with high probability [@alon_second_conjecture]. The close relation between high connectivity and the presence of a finite gap in the spectrum is also encoded in Cheeger’s inequalities, see Eq. (\[eq:Cheeger\_ineq\]). The property which is perhaps most relevant to our context is the fact that the diameter of an expander, defined as the maximum distance[^12] between any pair of vertices, does not grow more than logarithmically with the total number of vertices [@diameters_and_eigenvalues; @diameter_lower_bound]. Therefore, in this phase the spatial slices do not develop a well defined geometry, since the size (diameter) of the Universe remains small as the volume tends to infinity, a fact described also in previous CDT studies in terms of a diverging Hausdorff dimension.
This fact can be easily interpreted in terms of diffusive processes: as argued above (see Section \[sec:LBproperties\]), the value of the spectral gap, $\lambda_1$, can be interpreted as the inverse of the diffusion time of the slowest mode; the fact that the time to diffuse through the whole Universe stays finite means that its size is not growing significantly. ![Scatter plot of the eccentricity of $200$ randomly selected vertices for each slice of about $400$ configurations in the $C_{dS}$ phase (simulation point $c$) with total spatial volume $V_{S,tot}=20k$, and for the maximal slices of about $200$ configurations in the $B$ phase (simulation point $b$) with total volumes $V_{S,tot}=8k,16k,32k,40k$. Results are reported against the slice volume $V_S$.[]{data-label="fig:diam_CdS_B_ecc-vs-V"}](diam_CdS_B_ecc-vs-V){width="1.0\linewidth"} On the contrary, according to the arguments discussed in Section \[subsec:toymodel\], for a graph representing a standard manifold with a finite effective dimension at large scales, one expects the number of eigenvalues found below any given $\lambda$ to grow proportionally to the volume $V_S$, $n \propto V_S\, \lambda^{d_{EFF}/2}$, see for instance Eq. (\[eq:weyl\_law\]). That means that the gap must go to zero as $V_S \to \infty$ and, moreover, that a finite normalized density[^13] of eigenvalues, $\rho(\lambda)/V_S$, must develop around $\lambda = 0$. Instead, as will be shown in more detail below, the presence of a spectral gap for slices in the $B$ phase indicates that the effective dimension is indeed diverging at large scales, in agreement with the high connectivity property. As an independent check, we computed the maximum distance from a randomly chosen vertex to all other vertices in the graph (a quantity usually called the *eccentricity* of the vertex), iterating the procedure for $200$ different starting vertices and for each slice in the $C_{dS}$ and $B$ phases.
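The eccentricity computation amounts to one breadth-first search per starting vertex. A minimal sketch (ours; in the actual analysis this is iterated over $200$ random vertices per slice) on a cycle graph, whose diameter is known exactly:

```python
from collections import deque

def eccentricity(adj, source):
    """BFS from `source`: maximum graph distance from source to any other
    vertex; the eccentricity is a lower bound on the graph diameter."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

# example: cycle of 10 vertices, whose diameter is 10 // 2 = 5
n = 10
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
ecc = eccentricity(adj, 0)
```

Each BFS costs $O(|V| + |E|)$, i.e. $O(V_S)$ for the 4-regular dual graphs, which makes sampling a few hundred starting vertices per slice inexpensive.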
The maximum eccentricity in a graph corresponds to its diameter, so the eccentricity of a random vertex is actually a lower bound on the diameter. Therefore the results, which are shown in the form of a scatter plot in Fig. \[fig:diam\_CdS\_B\_ecc-vs-V\], are consistent with a diameter which, for sufficiently large volumes, grows as a power law of $V_S$ in phase $C_{dS}$, while on the contrary it seems to reach a constant or to grow at most logarithmically in the $B$ phase. The properties of slices in phase $A$ are quite similar to those found in phase $C_{dS}$, i.e. one has evidence for a finite density of eigenvalues around $\lambda = 0$ in the large $V_S$ limit, even if the distribution of slice volumes is significantly different from that found in phase $C_{dS}$. An example of the distribution of the first 30 eigenvalues in this phase is reported in Fig. \[fig:A\_100eigs\]. ![ Density $\rho(\lambda)$ computed from the first 30 eigenvalues for slices deep in the $A$ phase (simulation point $a$) with total spatial volume $V_{S,tot} = 8k$, and for two different ranges of the spatial slice volume $V_S$. []{data-label="fig:A_100eigs"}](A_30eigs){width="1\linewidth"} The spectra of slices in the bifurcation phase $C_b$, instead, need a separate treatment. Indeed, it is well known that the bulk of the configurations is made up of two separate classes of slices, which alternate with each other in slice-time and have different properties [@cdt_newhightrans]: it is reasonable to expect that this is reflected in their spectra as well. This is indeed the case, as can be appreciated by looking at Fig. \[fig:single\_Cb-CdS\_lam1-tslice\], where we report the value of $\lambda_1$ obtained on the different slices (i.e. at different Euclidean times) for a typical configuration sampled in the $C_b$ phase, and compare it to a similar plot obtained for the $C_{dS}$ phase.
For an easier comparison, the time coordinates of the slices have been relabeled in each case so that the slice with the largest volume corresponds to $t_{slice} = 0$; moreover, we restricted the analysis to the bulk of the configurations (i.e. we chose slices with $V_S>200$). Contrary to the $C_{dS}$ phase, in the $C_b$ phase $\lambda_1$ changes abruptly from one slice to the next, with small values alternating with larger ones and differing by as much as two orders of magnitude. This striking difference, which emerges even for single configurations, becomes even clearer when one considers the whole ensemble: Fig. \[fig:aver\_Cb-CdS\_lam1-20-100\_vs\_tslice\] shows the average of $\lambda_1$, $\lambda_{20}$ and $\lambda_{100}$ for configurations in the $C_{b}$ and $C_{dS}$ phases, with slice times relabeled as before. In the $C_{dS}$ phase $\lambda_1$ changes smoothly with $t_{slice}$, and this change is mostly induced by the corresponding change of the slice volume, while in the $C_b$ phase the alternating structure is visible also for higher eigenvalues, even if somewhat reduced and limited to the central region as $n$ grows. Therefore, we conclude that the alternating structure of spatial slices is clearly captured by the low-lying spectra: slices in the bulk of $C_b$ phase configurations can be separated into two distinct classes by the value of their spectral gap, while in the $C_{dS}$ phase there is no sharp distinction apart from a volume-dependent behavior connected to an observed Weyl-like scaling, which will be discussed in more detail in Section \[subsec:scalings\]. ![Spectral gap $\lambda_1$ as a function of the slice-time for single configurations in $C_b$ and $C_{dS}$ phases with total spatial volume $V_{S,tot}=40k$ and with the slice-time of the maximal slice shifted to zero.
Only slices in the bulk (with volume $V_S\ge 200$) have been shown.[]{data-label="fig:single_Cb-CdS_lam1-tslice"}](single_Cb-CdS_lam1-tslice){width="1\linewidth"} ![Averages of $\lambda_1$, $\lambda_{20}$ and $\lambda_{100}$ as a function of the slice-time for configurations in $C_b$ and $C_{dS}$ phases, where the slice-time of maximal slices has been shifted to zero. Only slices in the bulk (with volume $V_S\ge 200$) have been shown.[]{data-label="fig:aver_Cb-CdS_lam1-20-100_vs_tslice"}](aver_Cb-CdS_lam1-20-100_vs_tslice){width="1\linewidth"} In order to get a better perspective on these results, in Fig. \[fig:dt4\_lam1-20-100\_vs\_V\] we show the eigenvalues $\lambda_n$, with $n=1,20,100$, plotted against the volume of the slice on which they are computed, for the slices of all configurations sampled in the $C_b$ phase (in particular at the simulation point labeled $\widetilde{c}$). Slices with volumes larger than a given $V_S$, which we call the *bifurcation volume*[^14], divide into two distinct classes, characterized by $\lambda_n$ taking values in well separated ranges. It is interesting that such bifurcation volume depends on $n$: this also explains why in Fig. \[fig:aver\_Cb-CdS\_lam1-20-100\_vs\_tslice\] the alternating behavior of higher order eigenvalues (e.g., $\lambda_{100}$) drops off earlier than that of lower order ones: spatial volumes shrink away from the slice of maximal volume, and eventually fall below the bifurcation volume at that order. This actually means that the alternating slices found in the $C_b$ phase differ only in the low-lying part of the LB spectrum, while at sufficiently high eigenvalue order they are indistinguishable; high eigenvalues probe small scales, hence we expect that the alternating slices share the same small scale structures and differ only at large scales. We will come back to this point later on.
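The two-class separation by spectral gap can be mimicked with a simple heuristic, splitting the observed $\lambda_1$ values at the widest gap of their sorted logarithms (a minimal sketch on synthetic data; names and the splitting rule are our own, not from this work):

```python
import numpy as np

def split_two_classes(lam1):
    """Split positive values into two groups at the widest gap
    between consecutive sorted log-values."""
    lam1 = np.asarray(lam1)
    logs = np.sort(np.log(lam1))
    i = np.argmax(np.diff(logs))
    threshold = np.exp(0.5 * (logs[i] + logs[i + 1]))
    return lam1[lam1 < threshold], lam1[lam1 >= threshold]

# synthetic spectral gaps: two populations ~2 orders of magnitude apart,
# mimicking alternating dS-type (small gap) and B-type (large gap) slices
rng = np.random.default_rng(1)
low = 1e-3 * rng.uniform(0.5, 2.0, size=30)
high = 1e-1 * rng.uniform(0.5, 2.0, size=30)
ds_like, b_like = split_two_classes(np.concatenate([low, high]))
```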
In view of the close similarity with the properties of slices found in the $C_{dS}$ and in the $B$ phase respectively, we assign to the two classes of slices the names $dS$-type (low spectral gap) and $B$-type (high spectral gap). Looking again at Fig. \[fig:dt4\_lam1-20-100\_vs\_V\], we notice that, for sufficiently large volumes, the two classes populate only specific volume ranges. Furthermore, the maximal slice in the $C_b$ phase is typically observed to be of $B$-type, with a volume ranging in a narrow interval which is separated from the volumes of the other slices. This alternating distribution of spatial volumes was indeed one of the first signals of the presence of the new phase [@Ambjorn:2014mra; @Ambjorn:2015qja]. ![Scatter plot of the values of $\lambda_1$, $\lambda_{20}$ and $\lambda_{100}$ versus the volume of the slice on which they are computed, for slices of configurations deep in the $C_b$ phase (simulation point $\widetilde{c}$) and with total spatial volume fixed to $V_{S,tot} = 40k$.[]{data-label="fig:dt4_lam1-20-100_vs_V"}](dt4_lam1-20-100_vs_V){width="1\linewidth"}

  | $A$    | $B$ | $C_b$                      | $C_{dS}$ |
  |--------|-----|----------------------------|----------|
  | no-gap | gap | gap / no-gap (alternating) | no-gap   |

A summary of the characterization of the phase diagram of CDT according to the (zero or non-zero) gap of the LB operator as an order parameter is reported in Table \[tab:gap\]. To conclude the discussion about the gap, it is interesting to consider how the distribution of $\lambda_1$ changes across the different phases. To this purpose, in Fig. \[fig:lambda1\_op-scatt\] we show a scatter plot of $\lambda_1$ for different values of $\Delta$ at fixed $k_0=2.2$: darker points correspond to more frequent values of $\lambda_1$. As $\Delta$ increases, the gap in the $B$ or $B$-type slices progressively reduces and approaches zero at the point where one enters the $C_{dS}$ phase. A gap in the spectrum is a quantity which has mass dimension two (as the LB operator), i.e.
an inverse length squared: should future studies show that the drop to zero takes place in a continuous way, this would give evidence for a second order phase transition with a diverging correlation length. ![Distribution of $\lambda_1$ (in scatter plot format) for $k_0=2.2$ and variable $\Delta$ for configurations with total spatial volume $V_{S,tot}=\frac{N_{41}}{2}=40k$ and considering only slices with spatial volume $V_S > 2k$.[]{data-label="fig:lambda1_op-scatt"}](lambda1_op-scatt){width="1.1\linewidth"}

Scaling and spectral dimension {#subsec:scalings}
------------------------------

As one expects, and as emerges from some of the results already shown, the typical values obtained for the $n$-th eigenvalue of the LB operator on spatial slices, $\lambda_n$, scale with the volume $V_S$ of the slice, in a way which differs from phase to phase. As an example, in Fig. \[fig:avereig\_vs\_vols\] we show the average values obtained for $\lambda_n$ (for a few selected values of $n$), as a function of the volume, in the $C_{dS}$ phase. ![Averages of eigenvalues $\lambda_n$ for selected orders $n$ and computed in narrow bins of volumes ($\Delta V_S = 20$), for slices of configurations sampled deep into the $C_{dS}$ phase (simulation point $c$), with total spatial volume $V_{S,tot}=40k$.[]{data-label="fig:avereig_vs_vols"}](avereig_vs_vols){width="1\linewidth"} ![Plot of $\lambda_n$ against its volume-normalized order $n/V_S$, for four randomly selected slices with volumes $V_S \simeq 500,1000,2000,3000$, taken from configurations sampled deep into the $C_{dS}$ phase (simulation point $c$) with total spatial volume $V_{S,tot}=40k$.[]{data-label="fig:plot_CdS_lam-k_V_collapse"}](plot_CdS_lam-k_V_collapse){width="1\linewidth"} In order to better interpret this scaling, and inspired by the discussion reported in Section \[subsec:toymodel\], in the following we will consider how $\lambda_n$ depends on the variable $n/V_S$.
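The expected collapse of $\lambda_n$ onto a single function of $n/V_S$ can be checked on a toy graph with a well defined geometry, e.g. a periodic square lattice, whose Laplacian spectrum is known in closed form (an illustrative sketch under our own conventions, not the analysis code of this work):

```python
import numpy as np

def torus_spectrum(L):
    """Sorted graph-Laplacian eigenvalues of an L x L periodic square lattice:
    lambda_{jk} = (2 - 2 cos(2 pi j / L)) + (2 - 2 cos(2 pi k / L))."""
    j, k = np.meshgrid(np.arange(L), np.arange(L))
    lam = (2 - 2 * np.cos(2 * np.pi * j / L)) + (2 - 2 * np.cos(2 * np.pi * k / L))
    return np.sort(lam.ravel())

def lam_at_fraction(spec, q):
    """Eigenvalue lambda_n at order n = q * V_S."""
    return spec[int(q * len(spec))]

small, large = torus_spectrum(20), torus_spectrum(40)  # V_S = 400 vs 1600
# lambda_n, read off at the same n/V_S, should agree between the two sizes
ratios = [lam_at_fraction(large, q) / lam_at_fraction(small, q)
          for q in (0.2, 0.5, 0.8)]
```

For this two-dimensional example the curves collapse closely already at these modest sizes; the analogous comparison between slices of different $V_S$ is what Fig. \[fig:plot\_CdS\_lam-k\_V\_collapse\] shows.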
To show that this may indeed be illuminating, in Fig. \[fig:plot\_CdS\_lam-k\_V\_collapse\] we report $\lambda_n$ as a function of $n/V_S$ for four spatial slices, which have been randomly picked from an ensemble produced in the $C_{dS}$ phase and have quite different volumes, ranging over almost one order of magnitude[^15]. The collapse of the four curves onto each other is impressive and, in view of the discussion in Section \[subsec:toymodel\], can be interpreted as follows: despite the fact that the slices have quite different extensions, they show the same kind of structures at intermediate common scales. This kind of scaling is clearly visible in all phases, as one can appreciate by looking at Fig. \[fig:averbin\_CdS\_A\_B-lam-vs-k\_V\]. For convenience, we have divided all spatial slices into small volume bins, and then averaged $\lambda_n$ for each $n$ over the slices of each bin: such averages are reported in the figure against $n/V_S$, with error bars which are however too small to be visible. ![ Averages of $\lambda_n$ versus $n/V_S$ computed in bins of $n/V_S$ with size $2/V_{S,max}$ for slices taken from configurations sampled deep into the $A$, $B$ and $C_{dS}$ phases (simulation points $a$, $b$ and $c$). The volume is fixed to $V_{S,tot}=40k$ for configurations in the $A$ and $C_{dS}$ phases, and to $V_{S,tot}=8k$ for configurations in the $B$ phase.[]{data-label="fig:averbin_CdS_A_B-lam-vs-k_V"}](averbin_CdS_A_B-lam-vs-k_V){width="1\linewidth"} ![Running dimension obtained from the logarithmic slope $m$ of the curves shown in Fig. \[fig:averbin\_CdS\_A\_B-lam-vs-k\_V\] as ${2}/{m}$ (see Section \[subsec:toymodel\] and Eq. (\[eq:weyl\_dimension\])), computed over bins of different ranges of $n/V_S$ and for configurations sampled in phases $C_{dS}$, $A$ and $B$.
The curve associated with the $B$ phase diverges for $n/V_S \rightarrow 0$ (it is around 30 for $n/V_S \sim 10^{-4}$), but part of it has been omitted from the plot, to improve the readability of the curves obtained for the other two phases.[]{data-label="fig:plot_CdS_A_B-Dk_V"}](plot_CdS_A_B-Dk_V){width="1\linewidth"} Each phase has its own characteristic profile. The profiles of phases $A$ and $C_{dS}$ are quite similar and differ only by tiny deviations: in particular, in both cases one has that $\lambda_n \to 0$ as $n/V_S \to 0$, which is an equivalent way to state the absence of a gap in the spectrum. Instead, the profile of phase $B$ is significantly different and characterized by the fact that $\lim_{n/V_S \to 0} \lambda_n \neq 0$, in agreement with the presence of a gap. In Fig. \[fig:averbin\_CdS\_A\_B-lam-vs-k\_V\] we do not report any data regarding the $C_b$ phase, which is discussed separately because of the particular features that we have already illustrated above. Following the discussion in Section \[subsec:toymodel\], each scaling profile can be associated with a running effective dimensionality $d_{EFF}$ of the spatial triangulations at a scale of the order $(n/V_S)^{-1/3}$: this can be done by taking the logarithmic derivative of $\lambda_n$ with respect to $n/V_S$, see Eq. . For this reason, in Fig. \[fig:plot\_CdS\_A\_B-Dk\_V\] we report $d_{EFF} = 2\, d \log(n/V_S) / d \log \lambda_n$, which has been computed numerically by taking the average derivative of the profile over small bins of the variable $n/V_S$. At very small scales, both the $A$ and the $C_{dS}$ phases are effectively 3-dimensional. However, going to larger scales (smaller $n/V_S$), the effective dimension decreases, going down to values around $d_{EFF} \sim 1.5$, which is approximately the same large scale dimensionality observed by diffusion processes [@cdt_gorlich].
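The numerical extraction of a running dimension from a spectrum via this logarithmic derivative can be sketched as follows, applied here to a periodic square lattice, for which $d_{EFF}\simeq 2$ should be recovered in the small-$\lambda$ window (again an illustrative sketch with names of our own):

```python
import numpy as np

def torus_spectrum(L):
    """Sorted graph-Laplacian eigenvalues of an L x L periodic square lattice."""
    j, k = np.meshgrid(np.arange(L), np.arange(L))
    lam = (2 - 2 * np.cos(2 * np.pi * j / L)) + (2 - 2 * np.cos(2 * np.pi * k / L))
    return np.sort(lam.ravel())

def running_dimension(spec, n_lo, n_hi):
    """d_EFF = 2 dlog(n/V_S)/dlog(lambda_n), fitted over orders n_lo <= n < n_hi."""
    V = len(spec)
    n = np.arange(n_lo, n_hi)
    slope = np.polyfit(np.log(spec[n]), np.log(n / V), 1)[0]
    return 2 * slope

spec = torus_spectrum(64)                  # V_S = 4096
d_eff = running_dimension(spec, 20, 400)   # small-lambda window
```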
The crossover between the two regimes takes place for $n/V_S$ in the range $0.1 - 0.4$, meaning that typical structures of lower dimensionality develop, with a transverse dimension of the order of just a few tetrahedra. Actually, the plot of $d_{EFF}$ shows a difference between phase $A$ and phase $C_{dS}$ which was not clearly visible before: contrary to phase $A$, in phase $C_{dS}$ the effective dimensionality seems to slowly grow again as one approaches larger and larger scales. This slow growth can be interpreted as a progressive ramification of the lower dimensional structures, i.e. as a hint of their fractal-like nature. The effective dimensionality has a completely different behavior in phase $B$: it is smaller than 3 ($d_{EFF} \simeq 2.5$) on small scales, then starts growing and diverges at large scales. This is due to the fact that $d \log \lambda_n / d \log(n/V_S) \to 0 $ as $n/V_S \to 0$, because of the presence of the gap; on the other hand, the diverging dimensionality can be interpreted in terms of the fact that the diameter of the slice grows at most logarithmically with $V_S$. Also the low dimensionality observed at small scales can be interpreted in terms of the large connectivity of the associated graphs: each tetrahedron has 4 links to other tetrahedra, and some of these links are, in some sense, not “local”, i.e. they are shortcuts reaching directly some otherwise “far” tetrahedron; the probability that a couple of neighbouring tetrahedra are adjacent to a common tetrahedron thus gets smaller, leading to a lower effective dimensionality at short scales.\ Regarding the properties of the slices found in the bifurcation phase $C_b$, on the basis of what we have shown and discussed in Section \[subsec:low\_spect\], we have decided to perform a separate analysis for the different classes of spatial slices. In Fig. \[fig:Cb\_scaling\] we report $\lambda_n$ vs.
$n/V_S$ for slices grouped according to their position relative to the central largest $B$-type slice (which corresponds to $t_{slice} = 0$). The differences between the two classes are clearly visible also from the scaling profiles, which resemble, especially at large scales, those found in the $B$ and in the $C_{dS}$ phase for $B$-type and $dS$-type slices, respectively. However, one striking feature emerges: at small scales, in particular for $n/V_S \gtrsim 0.1$, the scaling profiles coincide almost perfectly. We conclude that, at such scales, the two classes of slices present strong similarities, despite the completely different large scale behavior. Hints of this fact were already discussed in Section \[subsec:low\_spect\]. Such similarities are likely induced by the causal structure connecting adjacent spatial slices in CDT triangulations. ![Averages of $\lambda_n$ versus $n/V_S$ for slices taken from the bulk ($V_S >1000$) of configurations sampled in the $C_b$ phase ($k_0=2.2$, $\Delta = 0.10$). The total spatial volume is fixed to $V_{S,tot}=40k$, and the slice times have been relabeled so that the largest $B$-type slice has $t_{slice} = 0$.[]{data-label="fig:Cb_scaling"}](averbin_Cb_D010_classes-lam-vs-k_V){width="1\columnwidth"} ![Averages of $\lambda_n$ versus $n/V_S$ for slices taken from the bulk ($V_S >1000$) of configurations sampled in the $C_b$ phase ($k_0=2.2$, $\Delta = 0.15$). The total spatial volume is fixed to $V_{S,tot}=40k$, and the slice times have been relabeled so that the largest $B$-type slice has $t_{slice} = 0$.[]{data-label="fig:Cb_scaling2"}](averbin_Cb_D015_classes-lam-vs-k_V){width="1\columnwidth"}

Running scales and the search for a continuum limit
---------------------------------------------------

The analysis of the scaling profiles reported above makes it possible to identify well defined scales, in terms of the parameter $n/V_S$, at which something happens, such as a change in the effective dimensionality of the system.
Such scales are given in units of the elementary lattice spacing of the system, i.e. the size of a tetrahedron. On the other hand, the possible presence of a second order critical point, where a continuum limit can be defined for Quantum Gravity, implies that the lattice spacing should run to zero as the bare parameters approach the critical point. This running of the lattice spacing should be visible through the corresponding growth of the value, in lattice units, of some physical scale. This is a standard approach in lattice field theories, where one usually considers correlation lengths, i.e. the inverse masses of physical states. One of the major challenges of the CDT program is to identify and determine physical scales which could provide this kind of information and thus give evidence that the lattice spacing is indeed running. Promising steps in this direction have already been taken by means of diffusive processes, where the scale is fixed by the diffusion time, both in CDT [@Coumbe:2014noa] and in DT [@dt_syracuse]. Here we propose that LB spectra and the observed scaling profiles may be helpful in this direction, and that a careful study of how such profiles change as a function of the bare parameters could provide useful information. A possible second order point is believed to separate the $C_b$ from the $C_{dS}$ phase; it therefore makes sense to analyze how the profiles change in both phases when moving towards the supposed phase transition, and whether the observed changes can be associated with any running scale. A growth of the scale associated with some particular feature of the scaling profile means that its location moves to smaller values of $n/V_S$. As an example, in Fig. \[fig:Cb\_scaling2\] we report the scaling profiles obtained for slices in phase $C_b$ at $\Delta = 0.15$, which is closer to the phase boundary than the case $\Delta = 0.10$ discussed previously and reported in Fig. \[fig:Cb\_scaling\].
An appreciable difference between the two cases is that the region where the profiles of $B$-type and $dS$-type slices coincide is larger (i.e. extends to smaller $n/V_S$) for $\Delta = 0.15$. From a quantitative point of view, one finds that the approximate value of $n/V_S$ where the profiles start differing by more than 5% is around 0.13 for $\Delta = 0.10$ and around 0.074 for $\Delta = 0.15$. In other words, there is a scale up to which $B$-type and $dS$-type slices are similar to each other, and such scale grows as one approaches the $C_b$-$C_{dS}$ phase transition. In a similar way, one can look at how the scaling profiles found in the $C_{dS}$ phase change as one approaches the phase transition from the other side. Such scaling profiles are reported in Fig. \[fig:averbin\_CdS123-lam-vs-k\_V\]. The short-scale region, and in particular the point where the effective dimension starts changing, seems insensitive to changes of $\Delta$. However, the small $n/V_S$ (large-scale) region does change, with the profile undergoing an overall bending towards the left: notice that this implies a change in the effective dimensionality observed at the largest scales, which indeed, for $\Delta = 0.3$, is $d_{EFF}\gtrsim 2$. Finally, as we have already stressed above, the gap itself, which for $B$-type slices seems to approach zero as one gets closer to the $C_b$-$C_{dS}$ phase transition (see Fig. \[fig:lambda1\_op-scatt\]), could be interpreted in terms of a diverging correlation length, provided the behavior is shown to be continuous. The reported examples are only illustrative of the fact that the LB spectrum can provide useful scales giving information on the nature of a possible continuum limit. Such a program should be carried out more systematically by future studies, in particular by approaching the $C_b$-$C_{dS}$ phase transition more closely.
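The quantitative criterion used above (the value of $n/V_S$ below which two profiles differ by more than 5%) is simple to implement; a sketch on synthetic profiles, with the crossover scale, threshold and functional forms chosen purely for illustration:

```python
import numpy as np

def agreement_scale(x, prof_a, prof_b, tol=0.05):
    """Smallest x = n/V_S above which the two profiles agree within `tol`
    (profiles sampled on a common, increasing grid x)."""
    rel = np.abs(prof_a - prof_b) / prof_b
    bad = np.where(rel > tol)[0]
    return x[bad[-1] + 1] if len(bad) else x[0]

# synthetic profiles: identical power law at short scales (large n/V_S),
# with a gap-like offset developing below a crossover scale x0
x = np.logspace(-3, 0, 300)
x0 = 0.13
prof_ds = x ** 1.3                                      # gapless profile
prof_b = x ** 1.3 + 0.5 * np.maximum(x0 - x, 0.0) ** 2  # gapped profile
xc = agreement_scale(x, prof_ds, prof_b)
```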
![ Averages of $\lambda_n$ versus $n/V_S$, computed in bins of $n/V_S$ with size $2/V_{S,max}$, for slices taken from configurations sampled in the $C_{dS}$ phase, with $k_0=2.2$ and different values of $\Delta$. The total spatial volume of each configuration is $V_{S,tot}=40k$. []{data-label="fig:averbin_CdS123-lam-vs-k_V"}](averbin_CdS123-lam-vs-k_V){width="1\linewidth"}

Fine structure of the full spectrum {#subsec:full_spectrum}
-----------------------------------

In this section we will show some details regarding the full distribution of eigenvalues (i.e. over the whole spectrum) in the different phases. Figs. \[fig:fullspectrum\_CdS\_histo\], \[fig:fullspectrum\_A\_histo\] and \[fig:fullspectrum\_B\_histo\] show the normalized distribution of eigenvalues for spatial slices with volumes in selected ranges, and for simulations performed deep into the phases $C_{dS}$, $A$ and $B$ respectively. ![Normalized distribution of all the eigenvalues for slices with volume in the range $V_S\in [200,400]$ for configurations deep into the $C_{dS}$ phase (simulation point $c$), and with total spatial volume $V_{S,tot}=40k$.\[fig:fullspectrum\_CdS\_histo\]](fullspectrum_CdS_histo){width="1\linewidth"} ![Normalized distribution of all the eigenvalues for slices with volume in the range $V_S\in [200,400]$ for configurations deep into the $A$ phase (simulation point $a$), and with total spatial volume $V_{S,tot}=40k$.\[fig:fullspectrum\_A\_histo\]](fullspectrum_A_histo){width="1\linewidth"} The $A$ and the $C_{dS}$ phase present a detailed non-trivial fine structure which is very similar in the two cases.
Even if we are not interested, at least in the present context, in providing a detailed interpretation of the full spectrum, we notice that such fine structure mostly involves eigenvalues which are of order 1 or larger, hence associated with typically small scales; this is confirmed by the fact that, contrary to the low part of the spectrum, such fine structure is left almost invariant by changing the volume of the slice. For instance, it can be noticed that the distributions are sharply peaked around the integer values $\lambda=4,5,6$; indeed, by inspecting the spectra of single configurations and the associated eigenvectors, we observed that these integer eigenvalues often occur with high multiplicity and can be associated with the presence of recurrent regular short-scale structures and with very localized eigenvectors. The normalized distribution for configurations in the $B$ phase does not show particular features, other than the already discussed presence of a spectral gap. The distribution looks in general more regular in this phase, even if some of the peaks around integer values are still present, though much reduced in amplitude. ![ Normalized distribution of all the eigenvalues of the maximal slices for configurations deep into the $B$ phase (simulation point $b$), and with spatial volume about $V_S \simeq 8k$.[]{data-label="fig:fullspectrum_B_histo"}](fullspectrum_B_histo){width="1\linewidth"}

Visualization of spatial slices {#subsec:visualization}
-------------------------------

We have seen how each spatial slice of the triangulations can be associated with a graph with non-trivial properties, i.e. what is usually called a [*complex network*]{}. There are different methods to visualize a complex network, some of which have already been considered in previous studies (see, e.g., Ref. [@dt_syracuse]); here we will briefly discuss only two of them: *Laplace embedding* [@LB_embedding] and *spring embedding* [@force-directed_embeddings; @spring_embedding].
The former makes use of the eigenvectors associated with the smallest eigenvalues, which are already available after solving the eigenvalue problem, while the latter is based on a mapping of the graph to a system of points connected by springs: as we are going to discuss, the two methods are strictly related, but spring embedding proves more useful in giving an intuitive picture of the short-scale structures. The underlying idea, common to both methods, is to represent a graph $G=(V,E)$ in an $m$-dimensional Euclidean space by finding a set of $m$ independent functions $\{\phi_n(v_i)\}_{n=1}^{m}$ which act as coordinates for each vertex $v_i\in V$, in such a way that vertices with smaller graph distance have coordinates as close as possible. The notion of “closeness” can be defined in many ways, corresponding to different optimization problems, and this is what makes the two methods different. We will use the notation $\vec{\phi}_n \equiv (\phi_n(v_i))_{i=1}^{|V|}$ for each $n=1,\dots, m$.

### Laplace embedding

The optimization problem for Laplace embedding [@LB_embedding] consists in minimizing the following functional of the coordinate functions $\{\phi_n\}$: $$\begin{aligned} \mathcal{E}_{LB}[\phi] &\equiv \frac{1}{2} \sum\limits_{n=1}^{m} \sum\limits_{(v_i,v_j)\in E} \Big( \phi_n(v_i)- \phi_n(v_j) \Big)^2 \nonumber \\ &= \frac{1}{2} \sum\limits_{n=1}^{m} \vec{\phi}_n^T L \vec{\phi}_n \, , \end{aligned}$$ subject to the constraints $\vec{\phi}_n \cdot \vec{\phi}_k = \delta_{n,k}$ and $\vec{\phi}_n \cdot \vec{1} = 0$ for each $n,k=1,\dots,m$, where $\vec{1}$ is the uniform vector with unit coordinates and $L$ is the matrix representation of the LB operator.
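Since the constrained minimizer is spanned by the lowest non-trivial Laplacian eigenvectors, the embedding reduces to one call to a symmetric eigensolver. A minimal numpy sketch (function names are ours), checked on a cycle graph, where the first two eigenvectors form a cosine/sine pair and the embedded vertices lie on a circle:

```python
import numpy as np

def laplace_embedding(adj, m=3):
    """Coordinates of each vertex from the m lowest non-trivial eigenvectors
    of the graph Laplacian L = D - A (the constant 0-mode is dropped)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vec = np.linalg.eigh(lap)        # columns: orthonormal eigenvectors
    return vec[:, 1:m + 1]

# cycle graph on n vertices
n = 60
adj = np.zeros((n, n))
idx = np.arange(n)
adj[idx, (idx + 1) % n] = adj[(idx + 1) % n, idx] = 1

coords = laplace_embedding(adj, m=2)
radii = np.linalg.norm(coords, axis=1)  # constant: the embedding is a circle
```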
It is straightforward to prove that a solution to this constrained optimization problem is given by the set of the first $m$ eigenvectors $\{\vec{e}_n\}_{n=1}^{m}$ of the Laplace–Beltrami matrix, where we have excluded the $0$-th mode $\vec{e}_0 = \frac{1}{\sqrt{|V|}}\vec{1}$ (second constraint) and ordered the eigenvectors by increasing eigenvalue (i.e. $\vec{e}_n$ is associated with $\lambda_n$ and $\lambda_n \leq \lambda_{n+1}$). For example, the coordinates associated with each vertex $v\in V$ in a $3$-dimensional Laplace embedding are the values of the first $3$ eigenvectors on that vertex, that is $v \mapsto (e_1(v),e_2(v),e_3(v))\in \mathbb{R}^3$. Fig. \[fig:LB\_bf51\_3D\] shows the $3$-dimensional Laplace embedding of a typical slice in the bulk of a configuration deep in the $C_{dS}$ phase (simulation point $c$). The geometry appears to be made up of filamentous structures, but this really means that the first $3$ eigenvectors, describing the slowest modes of diffusion, are not capable of describing the short scale structures inside the filaments. However, they efficiently describe the largest scale geometry, which in the $C_{dS}$ case is non-trivial and unexpected. ![Laplace embedding in $3$ dimensions for the graph associated to a typical slice in the $C_{dS}$ phase (simulation point $c$) and with volume $V_S\simeq1500$. Here the color identifies the values that the first eigenvector $\vec{e}_1$ takes on each vertex: blue is negative, green is zero and red is positive. A projection of the $3$-dimensional figure is shown on the $xy$ plane.[]{data-label="fig:LB_bf51_3D"}](LB_bf51_3D){width="1.1\linewidth"} ![Spring embedding in $3$ dimensions for the graph associated to a typical slice in the $C_{dS}$ phase; the slice is the same as in Fig. \[fig:LB\_bf51\_3D\]. The rest length has been fixed to $l_0=0.02$. Also in this case the color identifies the values that the first eigenvector $\vec{e}_1$ takes on each vertex, see Fig. \[fig:LB\_bf51\_3D\].
A projection of the $3$-dimensional figure is shown on the $xy$ plane.[]{data-label="fig:spring_bf51_3D_v2"}](spring_bf51_3D_v2){width="1.2\linewidth"}

### Spring embedding

The optimization problem to be solved for the spring embedding of an unweighted undirected graph $G=(V,E)$ consists in the energy minimization of a system of ideal springs with fixed rest length $l_0$, embedded in $\mathbb{R}^m$ and with endpoints connected in the same way as the links of the abstract graph $G$ [@spring_embedding]. Having assigned coordinates $\{\phi_n(v_i)\}_{n=1}^{m}$ to each abstract vertex of the graph $v_i\in V$, the potential energy of the system is defined as: $$\mathcal{E}_{S}[\phi] = \frac{1}{2} \hspace{-0.15cm} \sum\limits_{(v_i,v_j)\in E} \hspace{-0.15cm} \left( \hspace{-0.1cm} l_0-\sqrt{\sum\limits_{n=1}^{m} (\phi_n(v_i)- \phi_n(v_j))^2}\right)^2$$ In the limit $l_0\rightarrow 0$, the functional $\mathcal{E}_{S}$ becomes equal to $\mathcal{E}_{LB}$ but without constraints, so that in this limit the minimization would collapse all vertices to the same point. On the other hand, for $l_0 > 0$, the springs push vertices apart from each other and help resolve even the shortest-scale structures, which are not visible with Laplace embedding. The simplest algorithm to find a (local) minimum is to initialize the coordinates of each vertex to random values, and then relax the system of springs by performing a gradient descent. In Fig. \[fig:spring\_bf51\_3D\_v2\] we show the spring embedding of the same slice represented by Laplace embedding in Fig. \[fig:LB\_bf51\_3D\]. The large scale structure is well represented by both methods, but spring embedding makes it possible to discern short-scale structures at the finest level. Such representations of the spatial slices are illuminating for understanding the properties of the LB spectrum of $C_{dS}$ slices. The slices are extended objects, i.e.
one finds vertices which are far apart from each other, implying the existence of slow diffusion modes and a continuum of quasi-zero eigenvalues at large $V_S$. On the other hand, the large scale structure is made of lower-dimensional substructures, which have a typical transverse size of the order of a few vertices, and which often branch, making the overall spectral dimension (i.e. the diffusion rate) fractional at large scales. For comparison, Fig. \[fig:spring\_B10k\_it500\_3D\_v3\] shows the spring embedding of a typical slice in the $B$ phase. The high connectivity of the graph, which is clearly visible from the figure, does not permit the development of extended large scale structures, so that diffusion modes always remain fast and a finite gap survives even in the $V_S \to \infty$ limit. ![Spring embedding ($l_0=0.015$) in $3$ dimensions for the graph associated to a typical slice deep in the $B$ phase (simulation point $b$) and with volume $V_S\simeq4000$. Also in this case the color identifies the values that the first eigenvector $\vec{e}_1$ takes on each vertex, see Fig. \[fig:LB\_bf51\_3D\].[]{data-label="fig:spring_B10k_it500_3D_v3"}](spring_B10k_it500_3D_v3){width="1.1\linewidth"}

Discussion and Conclusions {#sec:conclusions}
==========================

In this work we have investigated the properties of the different phases of CDT that can be inferred from an analysis of the spectrum of the Laplace-Beltrami operator computed on the triangulations. The present exploratory study has been limited to the properties of spatial slices: these can be associated with regular graphs where each vertex is linked to 4 other vertices. Let us summarize our main results and further discuss them:\ [*i)*]{} We have shown that the different phases can be characterized according to the presence or absence of a gap in the spectrum, which can therefore be considered as a new order parameter for the phase diagram of CDT.
In particular, a gap is found in the $B$ phase, while for the $A$ and the $C_{dS}$ phases one finds a non-zero density of eigenvalues around $\lambda = 0$ in the thermodynamical (large spatial volume $V_S$) limit. The $C_b$ phase, instead, shows the alternation of spatial slices of both types (gapped and non-gapped): this better characterizes the nature of the alternating structures already found in previous works [@cdt_newhightrans; @cdt_charnewphase], which for this reason we have called $B$-type and $dS$-type slices. The presence or absence of a gap in the spectrum is a characteristic which distinguishes different phases in many different fields of physics: think for instance of Quantum Chromodynamics, where the absence/presence of a gap in the spectrum of the Dirac operator characterizes the phases with spontaneously broken/unbroken chiral symmetry. In this context, the presence of a gap tells us that the spatial slices are associated with expander graphs, characterized by a high connectivity. That can be interpreted geometrically as a Universe with an infinite dimensionality at large scales, with a diameter which grows at most logarithmically in the thermodynamical limit; a small diameter in the phases with a gap is consistent with the findings of previous studies and is supported by a direct computation (see Fig. \[fig:diam\_CdS\_B\_ecc-vs-V\]). On the contrary, the closing of the gap can be interpreted as the emergence of a Universe with a standard finite dimensionality at large scales. It is interesting to notice that the value of the gap which is found seems to change continuously as one moves from the $B$ to the $C_b$ phase, and approaches zero as the $C_{dS}$ phase is approached.\
The profile is different for each phase and characterizes it; moreover, from the profile one can deduce information on the effective dimensionality $d_{EFF}$ of the system at different scales, which generalizes the kind of information gained by diffusion processes. The $C_{dS}$ and $A$ phases share a similar profile, corresponding to $d_{EFF} \simeq 3 $ at short scales, which then drops to $d_{EFF} \simeq 1.5 $ for $n/V_S \lesssim 0.1$. At larger scales, the two phases show a different behavior, with $d_{EFF}$ continuing to decrease as $n/V_S$ decreases in the $A$ case, while in the $C_{dS}$ phase it starts growing again at large scales. Slices in the $B$ phase, instead, show an effective dimensionality which, in agreement with their high connectivity, seems to diverge in the large scale limit. An interesting feature has been found for the two different and alternating (in Euclidean time) classes of spatial slices in the $C_b$ phase: despite the different overall structure, they share an identical profile at small length scales, which is likely induced by the causality condition imposed on triangulations and is therefore an essential property of CDT. The profiles remain identical up to a characteristic length scale above which they start to diverge, as expected since one class presents a gap and the other does not.\ [*iii)*]{} We have proposed that the scaling profiles might be used to identify particular length scales which change as a function of the bare parameters, and thus could serve as possible probes of the running to the continuum limit, if any. Among those, we have found of particular interest the characteristic length scale up to which the alternating slices found in the $C_b$ phase share the same profile: we have seen that such a length grows as one approaches the boundary with the $C_{dS}$ phase.
On the other side of the boundary, also the profiles of the slices in the $C_{dS}$ phase show a modification at large scales as the $C_b$ phase is approached, leading in particular to a growing effective dimensionality. Along these lines, one could conjecture that, if a second order critical point is really found between the two phases, at such a point the different profiles found in the $C_b$ phase could merge at all scales and coincide with the profile from the $C_{dS}$ phase. Such a critical point would also be characterized by the vanishing of the gap for the $B$-type slices of the $C_b$ phase. Moreover, it would be interesting to test what the effective dimensionality found at large scales would be at the critical point: is it possible that, just on the critical point where a continuum limit can be defined, the effective dimensionality of spatial slices goes back to $D = 3$ at all physical scales?\ The present work can be continued along many directions. First of all, the region around the transition between the $C_b$ and the $C_{dS}$ phase should be studied in much more detail than has been done in the present exploratory work, to see if some of the conjectures that we have made above can be put on a more solid basis. In addition, a careful study of the critical behavior around the transition of the spectral gap, which is the new order parameter introduced in this study, could provide information about the universality class to which the continuum limit, if any, belongs. Of course, it could well be that one finds a first order transition, i.e. a sudden jump in the gap and in other properties, but then one should perform simulations for lines corresponding to different $k_0$ to see if the first-order line terminates at some critical endpoint. We have not yet considered the information which can be gained by inspecting the eigenvectors of the LB operator; this will be done in a forthcoming study.
In particular, it will be interesting to consider and analyze their localization/delocalization properties, in a way similar to what has been done for the spectrum of the Dirac operator in QCD [@qcd_anderson_multifractal; @qcd_anderson_eigenmodes]. It will be interesting to extend the study of the spectrum to the full triangulations, i.e. not just for spatial slices. That will require some implementation effort: unlike spatial tetrahedra (which are all identical), pentachorons can have edges with different Euclidean lengths, and therefore a regular graph representation does not describe the geometry faithfully. Nevertheless, the Laplace–Beltrami operator for general triangulated manifolds would have a well defined representation in the formalism of the Finite Element Method, as discussed and applied for example in Refs. [@reuter_cad; @reuter_dna]. Finally, it would be interesting to apply spectral methods also to other implementations of dynamical triangulations, like the standard Euclidean Dynamical Triangulations (DT) where no causality condition is imposed. The implementation in this case would be straightforward, as for the spatial slices of CDT, i.e. given in terms of regular undirected graphs. We plan to address the issues listed above in the near future. We thank C. Bonati and M. Campostrini for useful discussions, and A. Görlich for giving us access to a version of CDT code which has served for a comparison with our own code. Numerical simulations have been performed at the Scientific Computing Center at INFN-PISA. Heat-kernel expansion and spectral dimension {#sec:heatkernel} ============================================ Here we describe how to compute the spectral dimension of a graph $G$ from the spectrum of its LB matrix using the heat-kernel expansion. Later we will apply the definition to slices in the $C_{dS}$ phase, making a comparison between the standard definition of spectral dimension (i.e.
via diffusion processes), and the one obtained by the spectrum. Let us consider the fundamental solution to the diffusion equation on a $k$-regular connected graph $G$ with LB matrix $L$: $$\label{eq:HKernel_eq} \begin{cases} \partial_t K_{v,v_0}(t) = -\frac{1}{k} \sum\limits_{v' \in V} L_{v,v'} K_{v',v_0}(t) \\ K_{v,v_0}(0) = \delta_{v,v_0} \, . \end{cases}$$ Discretizing time with unit steps $\Delta t = 1$ [@cdt_spectdim], Eq.  becomes the equation for random walk on the graph, where $K_{v,v_0}(\tau)$ is the probability that a random walker starting from the vertex $v_0$ at time $t=0$ is found at the vertex $v$ at time $t=\tau$. The *return probability* $Z_{v_0}(t) = K_{v_0,v_0}(t)$ is the probability that a random walker comes back to the starting vertex $v_0$ after $t$ steps. Averaging over all starting vertices, the return probability reduces to $Z(t)={\mbox{Tr}}K (t)$, the heat-kernel trace. In practice, the diffusion is performed only for a random subset of starting vertices, from which the return probability is then estimated as an average over explicit diffusion processes. However, $Z(t)$ can also be computed using the spectrum of the LB matrix using the *heat-kernel* expansion for the solution to Eq. : $$\label{eq:HKernel} K_{v,w}(t) = \frac{1}{|V|} \sum\limits_{n=0}^{|V|-1} e_{n}(v) e_{n}(w) e^{-\lambda_n t / k} \, ,$$ where $\{\lambda_n\}$ and $\{\vec{e}_n\}$ are the eigenvalues and associated eigenvectors of the LB matrix of the graph $G$. Notice that the terms in Eq.  corresponding to larger eigenvalues are more suppressed for increasing times than terms corresponding to smaller ones. In particular, for times $t \gg k/\lambda_1$, the only surviving term is given by the $0$-th eigenvalue, and the probability distribution tends to be uniformly distributed amongst all vertices: $\lim_{t \rightarrow +\infty} K_{v,v_0}(t) = \frac{1}{|V|} \;\forall v,v_0 \in V$ (assuming a single connected component). ![Estimates of the running spectral dimension (see Eq. 
) obtained either via diffusion processes (continuous line), or using Eq.  (dashed lines) with the full spectrum or only the lowest $5\%$ part of it, for slices in the volume range $2000 - 2200$ taken from configurations sampled in the $C_{dS}$ phase (simulation point $c$) with total spatial volume $V_{S,tot}=40k$.[]{data-label="fig:sptdim_diff_LB_CdS_v2"}](sptdim_diff_LB_CdS_v2){width="1.0\linewidth"} The return probability, obtained from the spectrum, then takes the form $$\begin{aligned} Z(t) &\equiv {\mbox{Tr}}K(t) = \sum\limits_{v\in V} K_{v,v}(t) \nonumber\\ & = \frac{1}{|V|} \sum\limits_{n=0}^{|V|-1} e^{-\lambda_n t / k} \, , \end{aligned}$$ where we used the decomposition in Eq.  and the orthonormality of eigenvectors. The return probability $Z(t)$ can be nicely interpreted as a statistical *partition function*, for its formal analogy with the concept in statistical physics: the diffusion time takes here the role of the inverse temperature, while the eigenvectors and their associated eigenvalues take the role of microstates and their associated energies respectively. In the case of a compact smooth manifold $\mathcal{M}$, for which the Laplace–Beltrami spectrum $\{\lambda_n\}_{n=0}^{\infty}$ is countable but unbounded, the averaged return probability density $Z(t)$ has the following asymptotic expansion for $t \rightarrow 0^+$ [@drumshape_rev]: $$\begin{aligned} \label{eq:Zmanif} Z(t) &= \frac{1}{\text{vol}(\mathcal{M})} \sum\limits_{n=0}^{\infty} e^{-\lambda_n t} \nonumber\\ &= (4 \pi t)^{-\frac{\text{dim}(\mathcal{M})}{2}} \frac{1}{\text{vol}(\mathcal{M})} \Big( \sum\limits_{i=0}^{l-1} c_i t^{\frac{i}{2}} + O(t^{\frac{l}{2}}) \Big) \, . \end{aligned}$$ The return probability for unidimensional random-walks is $1/\sqrt{4 \pi t}$, so it is reasonable for a smooth manifold to locally decompose the random motion along the $\text{dim}(\mathcal{M})$ directions and get the return probability as a product of independent unidimensional return probabilities. 
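The heat-kernel trace and its logarithmic derivative (the running spectral dimension discussed next) can be evaluated directly from a Laplacian spectrum. Below is a minimal NumPy sketch; the function names and the cycle-graph sanity check are ours, not taken from the paper's code.

```python
import numpy as np

def return_probability(eigvals, t, k):
    # Heat-kernel trace: Z(t) = (1/|V|) * sum_n exp(-lambda_n * t / k)
    return np.mean(np.exp(-np.outer(t, eigvals) / k), axis=1)

def spectral_dimension(eigvals, t, k):
    # Running spectral dimension D_S(t) = -2 dlogZ/dlogt, via finite differences
    Z = return_probability(eigvals, t, k)
    return -2.0 * np.gradient(np.log(Z), np.log(t))

# Sanity check on a cycle graph (2-regular, effectively one-dimensional):
N = 1000
lam = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(N) / N)  # Laplacian eigenvalues
t = np.logspace(0.5, 2.5, 60)
Ds = spectral_dimension(lam, t, k=2)
# At intermediate diffusion times D_S approaches the expected dimension 1.
```

At very small $t$ the estimate is unreliable (the graph spectrum is bounded from above), and at very large $t$ only the zero mode survives, mirroring the limitations discussed in the text.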
In the case of random walks on $\mathbb{R}^d$ the return probability density is equal to $Z(t)=(4 \pi t)^{-\frac{d}{2}}$, so one can infer the values of the coefficients: $c_0 = \text{vol}(\mathcal{M})$ and $c_i = 0 \;\forall i \geq 1$. Corrections to the $t^{-\frac{\text{dim}(\mathcal{M})}{2}}$ behavior must be due to the geometric properties characterizing the manifold under study. For example, the first three coefficients have a geometrical interpretation, as discussed by McKean and Singer [@heatrace_coeffs] $$\begin{aligned} c_0 &= \text{vol}(\mathcal{M}) \, ,\\ c_1 &= -\frac{\sqrt{\pi}}{2} \text{area}(\partial \mathcal{M}) \, ,\\ c_2 &= \frac{1}{3} \bigintsss_{\mathcal{M}} R - \frac{1}{6} \bigintsss_{\partial \mathcal{M}} J \, , \end{aligned}$$ where $\partial \mathcal{M}$ is the (possibly empty) boundary of the manifold $\mathcal{M}$, $R$ is the scalar curvature of the manifold and $J$ is the mean curvature of the boundary. We expect that similar results hold for graphs approximating manifolds, but a first difficulty can be easily detected as shown by the following argument. At a time $t$ only eigenvalues $\lambda \lesssim \frac{1}{t}$ contribute to the sum in Eq., but for $t \rightarrow 0^+$ the full unbounded spectrum of the smooth manifold tends to contribute. The spectrum of a graph $G$, however, is bounded by the largest eigenvalue, so that here the expansion in Eq.  is not numerically reliable for times $t \lesssim (\lambda_{|V|-1})^{-1}$. Nevertheless, one can plot the return probability as a function of time and get an estimate of the dimension $d$ by extrapolation to $\tau \rightarrow 0^+$ using the definition of what is called *spectral dimension* [@cdt_spectdim]: $$\label{eq:sptdim_formula} D_S(\tau) \equiv -2 \frac{d\log Z}{d\log t}\biggr\rvert_{t=\tau}.$$ Fig. \[fig:sptdim\_diff\_LB\_CdS\_v2\] shows the comparison between the estimates of spectral dimension obtained employing explicit diffusion processes (Eq.
integrated with step size $\Delta t=1$) and the spectrum of the Laplace–Beltrami matrix on graphs associated to spatial slices in $C_{dS}$ phase: we applied Eq.  using the average of the return probability $Z(t)$ computed on each slice having volume in the range $2000 - 2200$, and, for the definition via diffusion, averaging the return probability also over $200$ iterations of diffusion processes starting from randomly selected vertices in the slice. Using the definition via diffusion, at small diffusion times the return probability, and therefore also the spectral dimension, is highly fluctuating due to the short scale regularity of the tetrahedral tiling of the space (a phenomenon already discussed in Refs. [@cdt_spectdim; @cdt_report]); this is not present in the definition via the spectrum, where a bump is observed instead. For larger diffusion times ($\tau\gtrsim 100$) the curves obtained using both methods agree even using only the lowest $5\%$ part of the spectrum, which confirms that this regime represents indeed the large scale behavior. Here we observe a spectral dimension $D_S\simeq 1.5$ for the spatial slices of configurations in $C_{dS}$ phase. This fact, already observed in literature using diffusion processes [@cdt_gorlich], seems compatible also with the observations obtained from large scale scaling relations for the eigenvalues discussed in Section \[subsec:scalings\]. [99]{} J. Ambjorn, A. Goerlich, J. Jurkiewicz and R. Loll, Phys. Rept.  [**519**]{}, 127 (2012) \[arXiv:1203.3591 \[hep-th\]\]. J. Ambjorn, S. Jordan, J. Jurkiewicz and R. Loll, Phys. Rev. D [**85**]{} (2012) 124044 \[arXiv:1205.1229 \[hep-th\]\]. J. Ambjorn, J. Gizbert-Studnicki, A. Goerlich and J. Jurkiewicz, JHEP [**1406**]{}, 034 (2014) \[arXiv:1403.5940 \[hep-th\]\]. J. Ambjorn, D. N. Coumbe, J. Gizbert-Studnicki and J. Jurkiewicz, JHEP [**1508**]{}, 033 (2015) \[arXiv:1503.08580 \[hep-th\]\]. J. Ambjorn, J. Gizbert-Studnicki, A. Goerlich, J. Jurkiewicz, N. Klitgaard and R. 
Loll, Eur. Phys. J. C [**77**]{} (2017) no.3, 152 \[arXiv:1610.05245 \[hep-th\]\]. J. Ambjorn, D. Coumbe, J. Gizbert-Studnicki, A. Goerlich and J. Jurkiewicz, Phys. Rev. D [**95**]{} (2017) no.12, 124029 \[arXiv:1704.04373 \[hep-lat\]\]. J. Ambjorn, J. Gizbert-Studnicki, A. Goerlich, J. Jurkiewicz and D. Nemeth, arXiv:1802.10434 \[hep-th\]. J. Ambjorn, J. Gizbert-Studnicki, A. Goerlich, K. Grosvenor and J. Jurkiewicz, Nucl. Phys. B [**922**]{} (2017) 226 \[arXiv:1705.07653 \[hep-th\]\]. D. N. Coumbe and J. Jurkiewicz, JHEP [**1503**]{}, 151 (2015) \[arXiv:1411.7712 \[hep-th\]\]. J. Ambjorn, J. Jurkiewicz, and R. Loll, Phys.Rev.Lett., 95:171301, (2005) \[hep-th/0505113\]. M. Reuter, F. Wolter, M. Shenton and M. Niethammer, Computer-Aided Design [**41**]{} no.10, 739 (2009) M. Reuter, F. Wolter, M. Shenton and M. Niethammer, Computer-Aided Design [**38**]{} no.4, 342 (2006) M. Belkin, P. Niyogi, Neural Comput. [**15**]{} no.6, 1373 (2003) L. Page, S. Brin, R. Motwani and T. Winograd, The PageRank Citation Ranking: Bringing Order to the Web. Stanford InfoLab no.66 (1999) A. Jamakovic, P. Van Mieghem, Lecture Notes in Computer Science, vol 4982, 183–194 (Springer, Berlin, Heidelberg 2008) M. H. Goroff, A. Sagnotti, Nucl. Phys. B [**266**]{} (1986) 709. K. G. Wilson and J. B. Kogut, Phys. Rept.  [**12**]{} (1974) 75. S. Weinberg, General Relativity, an Einstein Centenary Survey, ch.16 (Cambridge Univ. Press, 1979). M. Reuter and F. Saueressig, New J. Phys.  [**14**]{} (2012) 055022 \[arXiv:1202.2274 \[hep-th\]\]. T. Rindlisbacher and P. de Forcrand, JHEP [**1505**]{} (2015) 138 \[arXiv:1503.03706 \[hep-lat\]\]. J. Laiho, S. Bassler, D. Coumbe, D. Du and J. T. Neelakanta, Phys. Rev. D [**96**]{} (2017) no.6, 064015 \[arXiv:1604.02745 \[hep-th\]\]. J. Ambjorn and J. Jurkiewicz, Phys. Lett. B [**278**]{}, 42 (1992). M. E. Agishtein and A. A. Migdal, Mod. Phys. Lett. A [**7**]{}, 1039 (1992). E. Minguzzi and M. Sanchez, EMS Pub.House, 2008, p.299-358 \[gr-qc/0609119\]. S. 
Jordan and R. Loll, Phys. Rev. D [**88**]{} (2013) 044055 \[arXiv:1307.5469 \[hep-th\]\]. N. Metropolis and S. Ulam J. American Statist. Assoc. [**44**]{} (1949) no.247, 335–341 J. Ambjorn, S. Jordan, J. Jurkiewicz and R. Loll, Phys. Rev. Lett.  [**107**]{} (2011) 211303 \[arXiv:1108.3932 \[hep-th\]\]. J. Ambjorn, J. Jurkiewicz and R. Loll, Phys. Rev. Lett.  [**93**]{} (2004) 131301 \[hep-th/0404156\]. J. Jost, Riemannian Geometry and Geometric Analysis (Berlin: Springer-Verlag, 2002) M. Kac, The American Mathematical Monthly, [**73**]{} no.4, 1 (1966) F. R. K. Chung, Spectral Graph Theory. Providence (RI: Amer. Math. Soc., 1997). D. M. Cvetkovic, M. Doob and H. Sachs, Spectra of Graphs: Theory and Applications, 3rd rev. enl. ed. (New York: Wiley, 1998). A. Bondy and M. R. Murty, Graph Theory (London: Springer-Verlag, 2008) C. Sanderson and R. Curtin, Journal of Open Source Software, Vol. 1, pp. 26, 2016. H. Weyl, Nachr. Konigl. Ges. Wiss. Göttingen, 110–117 (1911). V. Ivrii, Bulletin of Mathematical Sciences [**6**]{} (2016) no.3, 379–452 \[arXiv:1608.03963v2 \[math.SP\]\]. S. Hoory, N. Linial and A. Wigderson, Bull. Amer. Math. Soc. (N.S.) 43 (2006), no. 4, 439–561. J. Friedman, Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, 720–724 (San Diego: ACM, 2003) \[arXiv:cs/0405020v1 \[cs.DM\]\] F. R. K. Chung, Journal: J. Amer. Math. Soc. [**2**]{}, 187–196 (1989) J.P. Grossman, Discrete Mathematics, [**300**]{} no.1, 225–228 (2005) A. Goerlich, arXiv:1111.6938 \[hep-th\]. M. Belkin and P. Niyogi, Neural Comput. [**15**]{} no.6, 1373–1396 (2003) S. G. Kobourov, \[arXiv:1201.3011v1 \[cs.CG\]\] T. M. J. Fruchterman and E. M. Reingold, Softw. Pract. Exper. [**21**]{} no.11, 1129–1164 (1991) L. Ujfalusi, M. Giordano, F. Pittler, T. G. Kovacs and I. Varga, Phys. Rev. D [**92**]{} (2015) no.9, 094513 \[arXiv:1507.02162 \[cond-mat.dis-nn\]\]. M. Giordano, T. Kovacs, F. Pittler, L. Ujfalusi and I. 
Varga, PoS LATTICE [**2014**]{} (2014) 212 \[arXiv:1410.8308 \[hep-lat\]\]. H. P. McKean, I. Singer, J. Differential Geometry [**1**]{} no.1, 43 (1967) [^1]: For simplicity, we are not including manifolds with boundaries, so there is no Gibbons–Hawking–York term in the action. [^2]: The global hyperbolicity condition is equivalent to the existence of a Cauchy surface, the strongest causality condition which can be imposed on a manifold [@causconds]. [^3]: The main reason for restricting to foliated triangulations is that it allows one to conveniently define the analytical continuation from Lorentzian to Euclidean space (see Ref. [@cdt_report] for details). However, simulations without preferred foliation in $2+1$ dimensions have been built in Ref. [@cdt_nofoliae], showing results similar to the foliated case. [^4]: A series of steps is needed in order to obtain the CDT action in Eq. : Regge discretization of the continuous action, computation of volumes and dihedral angles, Wick rotation and the use of topological relations between simplex types. [^5]: The total spatial volume of a configuration is the number of spatial tetrahedra, which equals $\frac{N_{41}}{2}$ by elementary geometrical arguments. [^6]: Triangulations with $S^3$ topology, like spatial slices in $3+1$ standard CDT simulations, must have at least 5 tetrahedra. [^7]: Every ordering can be obtained as a permutation of the canonical basis vectors. [^8]: Recall that, by the spectral theorem, the LB matrix can be decomposed as $L=U \Lambda U^t$, where $\Lambda$ is the diagonal matrix of eigenvalues, and $U$ is the matrix with corresponding eigenvectors as columns. The adjacency matrix, which defines the graph, can be simply obtained as minus the off-diagonal part of the LB matrix. [^9]: This observation is not restricted to Laplace matrices of graphs, but applies to the spectra of the Laplace operator in a general space.
[^10]: The more general case of graphs with multiple connected components can be easily treated by studying its components individually: the Laplace matrix can be put in block-diagonal form, one block for each component, so that its spectrum is the union of the individual spectra, and its eigenspaces are direct sums of the individual eigenspaces. [^11]: For the $B$ phase, different spatial volumes correspond actually to different simulations with different constraint on $N_{41}$, since in this case most of the spatial volume is contained in one single slice. [^12]: The distance between a pair of vertices is defined as the length of the shortest path (i.e. the geodesic) connecting the two vertices. [^13]: A finite density of eigenvalues around $\lambda = 0$ is a condition stronger than the simple absence of a gap. Indeed, one might have situations in which isolated quasi-zero eigenvalues develop, while the continuous part of the spectrum maintains a gap: think for instance of two expander graphs connected by a thin bottleneck. [^14]: It is interesting to notice that the $C_b$ phase can be called [*bifurcation phase*]{} for many different reasons. [^15]: Notice that $n/V_S$ can take values in the range $(0,1)$ (recall that $\lambda_0=0$ is excluded from our discussion), while the maximum eigenvalue $\lambda$ is always bounded by $2k = 8$, that is twice the degree of vertices in the $k$-regular graph.
--- abstract: 'The static and dynamic (complex) shear viscosity of a single layer dusty plasma is measured by applying, respectively, a stationary and a periodically modulated shear stress, induced by the light pressure of manipulating laser beams. Under static conditions we observe a decrease of the viscosity with increasing shear rate, the so-called shear-thinning behavior. Under oscillating shear both the magnitude and the ratio of the dissipative and elastic contributions to the complex viscosity show strong frequency dependence, as the system changes from viscous to elastic in nature with increasing excitation frequency. Accompanying molecular dynamics simulations explain and support the experimental observations.' author: - Peter Hartmann - Máté Csaba Sándor - Anikó Kovács - Zoltán Donkó title: Static and dynamic shear viscosity of a single layer complex plasma --- Introduction ============ Transport properties of strongly coupled complex (or dusty) plasmas have attracted considerable attention during the past decade (see e.g. [@Morfill2009; @*Bonitz2010]). The interaction of the dust particles in these systems may be well approximated by the Debye-Hückel (or Yukawa) model potential $\Phi(r) = Q \exp[-r/\lambda_D]~/~4 \pi\varepsilon_0 r$, where $Q$ is the dust particle charge and $\lambda_D$ is the Debye screening length. Equilibrium Yukawa systems can be fully characterized by two dimensionless quantities: the Coulomb coupling parameter $\Gamma = Q^2 / (4 \pi\varepsilon_0 a k_\text{B} T)$, where $T$ is the temperature and $a$ is the Wigner-Seitz radius, and the screening parameter $\kappa=a/\lambda_D$. Viscosity, the measure of the plastic response of matter (primarily liquids and soft matter) to applied forces, is a central quantity in rheology. For a Newtonian fluid a constant shear viscosity $\eta$ relates the shear stress $\sigma$ to the shear rate $\dot{\gamma}$ = $\partial v_x/\partial y$ (velocity gradient) as $\sigma = -\eta \dot{\gamma} = -\eta (\partial v_x/\partial y)$.
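The dimensionless characterization above is straightforward to evaluate numerically. The following sketch uses purely illustrative, hypothetical parameter values (not the experimental ones, which are quoted later), and the function names are ours:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
KB = 1.380649e-23        # Boltzmann constant [J/K]

def yukawa_potential(r, Q, lambda_D):
    # Debye-Hueckel (Yukawa) potential: Phi(r) = Q exp(-r/lambda_D) / (4 pi eps0 r)
    return Q * np.exp(-r / lambda_D) / (4.0 * np.pi * EPS0 * r)

def coupling_parameter(Q, a, T):
    # Coulomb coupling parameter: Gamma = Q^2 / (4 pi eps0 a kB T)
    return Q**2 / (4.0 * np.pi * EPS0 * a * KB * T)

# Hypothetical illustrative values for a strongly coupled dust suspension:
Q = 5000 * 1.602176634e-19   # dust grain charge [C]
a = 3.0e-4                   # Wigner-Seitz radius [m]
lambda_D = 2.5e-4            # Debye screening length [m]
T = 1000.0                   # dust kinetic temperature [K]
kappa = a / lambda_D         # screening parameter
Gamma = coupling_parameter(Q, a, T)  # >> 1 in the strongly coupled regime
```

For such parameters $\Gamma$ comes out of order $10^3$, i.e. deep in the strongly coupled regime relevant for the experiments described below.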
Continuum hydrodynamics successfully uses the concept of viscosity, usually as an input parameter, as an intrinsic property of the material under investigation, in the Navier-Stokes equation. One has to note, however, that the Newtonian concept of viscosity is applicable only (i) at small shear rates, (ii) at long length scales, and (iii) at low frequencies. In many physical systems, these conditions are clearly violated [@Schowalter]. Regarding complex plasmas, the first experiment on a single layer (2D) dust cloud with control over the applied shear, via adjusting the power of a manipulating laser beam, was carried out by the group of Lin I in 2001 [@LinI2001]. Similarly, an experiment making use of a single shearing laser beam was reported in [@Gavrikov05]. In these experiments a sheared velocity profile was created around the beam. In the experiment reported in [@NosenkoPRL04] two displaced, parallel counter-propagating laser beams were used to realize a planar Couette configuration in a 2D dusty plasma layer. In another experiment the non-Newtonian behavior of a 3D complex plasma in the liquid state was identified by Ivlev [*et al.*]{} [@IvlevPRL07]. Motivated by these pioneering studies, more recently, detailed dusty plasma experiments demonstrated the presence of viscoelastic response [@LinI2007] and revealed the wave-number dependence of the viscosity in 2D [@GoreePRL10]. Experiments in the crystalline phase have identified the slipping of individual crystal lines to be the primary mechanism for relaxing an applied shear stress [@Samsonov2011]. Complementing the experimental efforts, the self-diffusion [@Ohta; @Ramazanov06; @Torben08; @*Torben09; @Feng10], shear viscosity [@Saigo; @Salin1; @*Salin2; @Murillo; @DonkoV; @Ramazanov] and thermal conductivity coefficients [@Salin1; @*Salin2; @DonkoT] have been derived in a number of simulation studies, both for three-dimensional (3D) and two-dimensional (2D) settings.
In [@DonkoPRL06; @*DonkoMPL07], besides calculations of the “equilibrium” ($\dot{\gamma} \rightarrow 0$) static viscosity, predictions for the shear-thinning effect (typical for complex molecular liquids) were given at high shear rates. The frequency dependence of the complex shear viscosity, which combines the dissipative and the elastic components of the complex response of matter to oscillating shear stress as $\eta(\omega) = \eta^\prime(\omega) - i\eta^{\prime\prime}(\omega)$, was computed for 3D Yukawa liquids in [@DonkoPRE10]. Fundamental questions about the existence or nonexistence of well defined transport coefficients in 2D were addressed in [@DonkoPRE09]. Here we present laboratory dusty plasma experiments in which either a [*static*]{} or a [*periodic*]{} shear is applied on a single layer dusty plasma in the strongly coupled regime. During the evaluation of the experimental data we adopt an earlier experimental method [@NosenkoPRL04] as well as specific methods used so far only in molecular dynamics (MD) simulations. Our experimental results are supported by, and are combined with simulations and theoretical calculations. To obtain the most complete information about the complex plasma single layer we perform three different experiments on the same dust cloud: (1) analysis of the thermally excited waves, without any applied shear, to determine the principal system parameters, (2) applying a static shear ($\omega_{\rm sh}=0$) to investigate stationary flows at high shear rates, and (3) applying a periodic shear ($\omega_{\rm sh}>0$) to obtain the frequency-dependent complex viscosity. The paper is structured as follows. Section II describes the details of the experimental apparatus. Section III discusses our experiments and the data evaluation, as well as the connection with simulation data. Section IV summarizes the work. 
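To illustrate the decomposition $\eta(\omega) = \eta^\prime(\omega) - i\eta^{\prime\prime}(\omega)$ mentioned above, the dissipative and elastic components can be separated by projecting a stress signal onto the in-phase and out-of-phase parts of an oscillating shear rate. The sketch below uses synthetic data and our own naming conventions; it is not the evaluation pipeline of the cited experiments.

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule (avoids NumPy-version differences around np.trapz)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def complex_viscosity(t, sigma, gdot0, omega):
    """Extract (eta', eta'') assuming a shear rate gdot0*cos(omega*t) and
    a linear response sigma(t) = -gdot0*(eta1*cos(omega*t) + eta2*sin(omega*t)),
    with t spanning an integer number of oscillation periods."""
    T = t[-1] - t[0]
    eta1 = -2.0 / (gdot0 * T) * _trapz(sigma * np.cos(omega * t), t)  # dissipative
    eta2 = -2.0 / (gdot0 * T) * _trapz(sigma * np.sin(omega * t), t)  # elastic
    return eta1, eta2

# Synthetic check with known components eta' = 0.8, eta'' = 0.3:
omega, gdot0 = 2.0 * np.pi * 0.5, 1.0
t = np.linspace(0.0, 10 * 2.0 * np.pi / omega, 20001)  # 10 full periods
sigma = -gdot0 * (0.8 * np.cos(omega * t) + 0.3 * np.sin(omega * t))
eta1, eta2 = complex_viscosity(t, sigma, gdot0, omega)  # recovers (0.8, 0.3)
```

The same Fourier-projection idea underlies any extraction of a complex response coefficient from a periodically driven signal; with noisy data one would average over many periods.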
Experimental apparatus ====================== Our dusty plasma experiments are carried out in a custom designed vacuum chamber with an inner diameter of 25 cm and a height of 18 cm. The lower, powered, 18 cm diameter, flat, horizontal, stainless steel electrode faces the upper, ring shaped, grounded aluminum electrode, which has an inner diameter of 15 cm and is positioned at a height of 13 cm. The experiments are performed in an argon gas discharge at a pressure $p = 1.2 \pm 0.05$ Pa, and at a steady gas flow of $\sim 0.01$ sccm. We apply 13.56 MHz radio frequency excitation of $\sim 7$ W power to establish the discharge in which the melamine-formaldehyde micro-spheres with a diameter $d = 4.38 \pm 0.06~{\rm \mu m}$ and a mass $m=6.64\times10^{-14}$ kg are levitated. For illumination of the particle layer we use a 200 mW, 532 nm laser, the light of which is expanded and enters the chamber from a side window. Our CCD camera has a resolution of 1.4 Megapixels and captures snapshots of the particle ensemble through the top window of the chamber at 29.54 frames per second acquisition rate. The camera is sensitive only at the illuminating laser wavelength, due to the application of an interference filter centered at 532 nm wavelength. The average dust particle number in the field of view is $\sim 2500$, while the total particle number in the dust cloud is about $\sim 15000$. During the evaluation of the raw images, the identification and position measurement of the particles are performed using the method described in [@Feng07]. ![\[fig:opt\] (color online) Scheme of the optical setup used to generate the alternating shearing laser beams. Main parts: L - 200 mW 650 nm diode laser; P1 - polarizing beamsplitter; M - mirror; P2 - linear polarizing filter mounted on a ball bearing and rotated by a motor. Beams with different arrow styles are modulated antiphase.
In the reference beam line: R - 10 mW 532 nm diode laser, M - mirror.](fig1){width="\columnwidth"} The scheme of the optical setup used in the measurements to apply an external shear on the dust layer is shown in Fig. \[fig:opt\]. The light of a 200 mW red diode laser (L) is split into two parallel beams of equal intensity and perpendicular linear polarization. One of these beams is directed towards another polarizer (P2), while the beam with the perpendicular polarization reaches P2 after being reflected from a mirror. By rotating P2 by means of a motor, the intensities of these two beams, passing through P2, can be modulated harmonically, with $180^\circ$ phase shift relative to each other. These beams are further guided by additional mirrors, which are aligned to ensure that the two beams propagate inside the chamber in opposite directions, but on a common axis. This axis is adjusted to be horizontal and its vertical position is set in a way that the beams, which are focused to have a diameter of about 0.5 mm, lie within the dust layer. This setup makes it possible to exert a periodic force on the particles, which, in the center of the beam(s) has a form $F_x(t) = F_0\sin(\omega_\text{sh} t)$, with the direction $x$ defined to coincide with the beams, as shown in Fig. \[fig:opt\]. The amplitude $F_0$ can be tuned by the power of the laser (L), while the frequency $\omega_\text{sh} = 2\pi / T$ is defined by the half-period of rotation ($T$) of P2. As the recording of the images by the camera is not synchronized with the rotation of the polarizer, the angular position of the latter has to be measured. This is accomplished by a reference beam of a green diode laser (R). The polarized beam of this laser passes through P2 as well, and when P2 is rotated, the beam intensity is modulated.
The beam is directed onto the surface of the lower electrode of the discharge, and, as the wavelength is the same as that of the laser illuminating the particle layer, the beam spot is picked up by the camera on the images of the particle layer. Measuring the intensity of this reference beam in the recorded images allows an accurate determination of the angular position of P2. In the experiments aimed at the study of the static viscosity, P2 is not rotated; it is aligned to the position that maximizes the intensity of one of the beams. Experiments =========== This section is divided into three parts. Section III.A explains the “calibration” of the system, which was aimed at the determination of the basic characteristics of the dust layer (particle charge, screening parameter of the potential, plasma frequency), needed in the subsequent data analysis. Section III.B discusses the measurements of the static viscosity covering a wide domain of applied shear rates. These measurements quantify shear thinning at elevated shear rates. A brief introduction of the simulation method is also presented in this section. Section III.C describes the experiments carried out with a harmonic perturbation, which reveal the viscoelastic behavior of the investigated system. Calibration ----------- The aim of this experiment is the determination of the basic parameters of our particle suspension, which will be crucial in the forthcoming experiments aimed at the measurement of viscous properties. Having calibrated the optical system, we determine the Wigner-Seitz radius from the average particle separation. We find an areal density $n=3.21$ mm$^{-2}$ and the Wigner-Seitz radius to be $a=1/\sqrt{\pi n}=0.315$ mm. Fitting the particle velocity distribution with a Maxwellian distribution resulted in a mean thermal velocity $v_{\rm th,0} = \sqrt{2kT/m} = 0.66$ mm/s for the unperturbed system.
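As a numerical aside (ours, not part of the original analysis), the quoted particle mass and the derived layer parameters can be cross-checked from the measured quantities. The melamine-formaldehyde bulk density is an assumed literature value, and the charge and screening length anticipate the dispersion-matching results reported below:

```python
import math

# Assumed bulk density of melamine-formaldehyde (not quoted in the text)
rho_MF = 1510.0                      # kg/m^3
d = 4.38e-6                          # particle diameter, m
m = rho_MF * math.pi * d**3 / 6      # sphere mass -> ~6.64e-14 kg

n = 3.21e6                           # areal density, m^-2 (3.21 mm^-2)
a = 1 / math.sqrt(math.pi * n)       # Wigner-Seitz radius -> ~0.315 mm

# Charge and screening length as obtained from the dispersion analysis below
e, eps0 = 1.602176634e-19, 8.8541878128e-12
Q, lambda_D = 4840 * e, 0.263e-3
kappa = a / lambda_D                                  # -> ~1.2
omega_0 = math.sqrt(n * Q**2 / (2 * eps0 * m * a))    # -> ~72 rad/s

print(f"m = {m:.3g} kg, a = {a*1e3:.3f} mm, "
      f"kappa = {kappa:.2f}, omega_0 = {omega_0:.1f} rad/s")
```

All four values agree with the quoted ones to within rounding.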
![\[fig:disp\] (color online) Longitudinal (a) and transverse (b) current fluctuation spectra from the experiment (color / grayscale map) with overlaid Yukawa ($\kappa=1.2$) lattice dispersion curves for the two principal lattice directions (lines).](fig2){width="1.0\columnwidth"} The determination of the parameters characterizing the interparticle interaction (particle charge, Debye screening length, plasma frequency) is based on the dependence of the longitudinal and transverse wave dispersion relations on these quantities, as introduced in [@Nunomura2002; @*Zhdanov2003]. By tracing the trajectories of the particles, we calculate the longitudinal and transverse current fluctuation spectra from the ${\bf r}_i(t)$ position and ${\bf v}_i(t)$ velocity data, as described in [@DonkoJPC08]. For our system we adopt the Debye-Hückel (Yukawa) type interaction potential, giving a pair interaction energy $U(r) = Q\Phi(r)$. Matching the dispersion relations obtained from these spectra with theoretical dispersion curves of Yukawa lattices [@lattice] (see Fig. \[fig:disp\]), we have obtained the following system parameters (within an uncertainty of $\pm 8$ %): Debye screening length $\lambda_D=0.263$ mm, screening parameter $\kappa=a/\lambda_D=1.2$, particle charge $Q = 4840 e$ (where $e$ is the electron charge), and nominal plasma frequency $\omega_0=\sqrt{n Q^2/2\varepsilon_0 m a} = 72.1$ rad/s. Experiment with static shear ---------------------------- As explained in Section II, in this experiment the polarizer P2 (see Fig. \[fig:opt\]) is not rotated; it is set in a way that one of the laser beams carries the full power. Two methods are available to obtain viscosity data from this type of experiment. Method 1 is based on the solution of the Navier-Stokes (NS) equation. In [@NosenkoPRL04] this method was applied to a system sheared by two (horizontally shifted), parallel, counter-propagating laser beams. We apply the same approach to our system sheared by a single beam.
In this case the NS equation results in a velocity profile $v_x(y) = v_0 \exp(-\sqrt{\nu \rho / \eta}~|y|)$, where $v_0$ is the stationary flow velocity in the beam center, $\nu$ is the effective dust-background collision frequency (frictional drag), $\rho$ is the 2D particle mass density, and $\eta$ is the shear viscosity. The advantage of this method is that it is relatively noise insensitive and does not assume any particular form for the interaction pair potential. The drawbacks are that (i) one obtains spatially averaged data, and (ii) only the ratio of the viscosity to the collision frequency is obtained; thus an estimate or independent measurement of $\nu$ is needed, which can introduce additional uncertainty. ![\[fig:vel\] $v_x(y)$ velocity profiles for different shearing laser power densities in the dust particle layer in units of the average thermal velocity $v_\text{th} = 1.8$ mm/s. Different lines belong to the two sides ($y>0$ and $y<0$) of the sheared region.](fig3){width="\columnwidth"} Figure \[fig:vel\] shows time averaged experimental $v_x(y)$ velocity profiles for different shearing laser powers. A $\nu/\eta$ value of $1.64\times10^{13}$ kg$^{-1}$ is derived from these data. The average thermal velocity, estimated from the average peculiar velocities of particles (defined as the velocities relative to the flow) within the sheared region, was found to lie between $v_{\rm th} = 1.6$ mm/s and 2.0 mm/s for all investigated cases using different shearing laser powers, about three times higher than in the unperturbed case. This increase is due to the energy absorbed from the shearing laser beam. This effect is called shear-induced melting and was studied for dusty plasma crystals in detail in [@Nosenko09; @GoreePRL10b]. As a result of this heating process our system is in the strongly coupled liquid phase within the sheared region. In the following we use the average value $v_{\rm th} = 1.8$ mm/s for the presentation of our results.
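The Method 1 evaluation can be sketched as follows (a minimal illustration on synthetic data, since the raw profiles are not reproduced here; the noise level and fit range are our assumptions): the decay constant of the exponential profile is obtained from a linear fit of $\log v_x$ versus $|y|$, and $\nu/\eta$ then follows as the squared slope divided by $\rho$.

```python
import numpy as np

m, n = 6.64e-14, 3.21e6               # particle mass (kg) and areal density (m^-2)
rho = m * n                           # 2D mass density, kg/m^2

# Synthetic profile mimicking Fig. [fig:vel], built with the quoted nu/eta value
nu_over_eta_true = 1.64e13            # kg^-1
k = np.sqrt(nu_over_eta_true * rho)   # decay constant of the profile, 1/m
y = np.linspace(0.1e-3, 2.0e-3, 40)   # distance from the beam axis, m
rng = np.random.default_rng(0)
v = 1.0e-3 * np.exp(-k * y) * (1 + 0.02 * rng.standard_normal(y.size))

# Linear fit of log(v) vs. y; then nu/eta = slope^2 / rho
slope, _ = np.polyfit(y, np.log(v), 1)
nu_over_eta = slope**2 / rho
print(f"nu/eta ≈ {nu_over_eta:.3g} kg^-1")   # close to the quoted 1.64e13 kg^-1
```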
Method 2 of data acquisition has so far been used only in molecular dynamics simulations [@DonkoPRL06; @Ramazanov; @DonkoPRE10] and is based on the measurement of the applied shear stress, or, equivalently, the off-diagonal element of the pressure tensor: $$\label{eq:P} P_{xy} = \frac{1}{A}\bigg[ \sum_i m v_{i,x} v_{i,y} + \frac{1}{2}\sum_i\sum_{j\ne i} r_{ij,y} F_{ij,x}\bigg],$$ where $A$ is the area of the region of interest, $v_{i,x}$ is the $x$ component of the peculiar velocity of particle $i$, $r_{ij,y}$ is the $y$ component of the distance vector between particles $i$ and $j$, and $F_{ij,x}$ is the $x$ component of the force acting on particle $i$ due to the pair interaction with particle $j$. The summation over $i$ runs over particles within $A$, while the summation over $j$ runs over all particles interacting with particle $i$. The viscosity is obtained by calculating $$\label{eq:eta} \eta = \frac{-P_{xy}}{\dot{\gamma}}.$$ In our experiment the shear rate strongly depends on the $y$-coordinate, due to the exponential velocity profile shown in Fig. \[fig:vel\]. For further evaluation we set up a computational grid along the $y$-axis with a resolution of $\Delta y = 0.16$ mm. In this way $P_{xy}(y)$ and $\dot{\gamma}(y)$ can be measured by reducing the evaluation volume \[$A$ and the summation over $i$ in (\[eq:P\])\] to one grid interval at a time. Intermediate results obtained with 32 mW/mm$^2$ laser power density shear are shown in Fig. \[fig:static\](a-d), illustrating the evaluation process: the observed and time-averaged velocity profile (a) is used to compute its derivative, the shear rate, which is given in (b) in normalized units, $\bar{\gamma} = (\partial v_x/\partial y)(a/v_{\rm th})$ (with $v_{\rm th} = 1.8$ mm/s). The spatial distribution of the off-diagonal element of the pressure tensor follows the trend of the shear rate (c). The viscosity is the ratio of the data in (c) and (b), and is shown in (d).
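A direct transcription of Eq. (\[eq:P\]) for a 2D Yukawa system might look as follows (a sketch; the Yukawa pair force, the function name, and the toy inputs are our assumptions, not part of the original analysis):

```python
import numpy as np

def p_xy(pos, vel, m, Q, lambda_D, area):
    """Off-diagonal pressure tensor element, Eq. (P), for a 2D Yukawa system.
    pos, vel: (N, 2) arrays; vel holds the peculiar velocities."""
    eps0 = 8.8541878128e-12
    kinetic = m * np.sum(vel[:, 0] * vel[:, 1])
    virial = 0.0
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            r = pos[i] - pos[j]                 # r_ij = r_i - r_j
            d = np.hypot(r[0], r[1])
            # Yukawa pair force magnitude, -dU/dr with U(r) = Q^2 e^{-r/lambda}/(4 pi eps0 r)
            f = Q**2 * np.exp(-d / lambda_D) / (4 * np.pi * eps0) \
                * (1 / d**2 + 1 / (lambda_D * d))
            fx = f * r[0] / d                   # x component of the force on i due to j
            virial += 0.5 * r[1] * fx           # (1/2) r_ij,y F_ij,x
    return (kinetic + virial) / area
```

Evaluated per grid interval, the shear viscosity then follows from Eq. (\[eq:eta\]) as $\eta = -P_{xy}/\dot{\gamma}$.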
Substituting the $y$-coordinate with the shear rate taken from (b) results in the [*shear rate dependence*]{} of the viscosity, shown in (e) in normalized units, $\bar{\eta} = \eta/\eta_0$, where $\eta_0=mn\omega_0 a^2 = 1.52\times 10^{-12}$ kg/s. ![\[fig:static\] (color online) Spatial dependence of the (a) velocity profile $v_x(y)$; (b) normalized shear rate $\bar{\gamma}$; (c) off-diagonal element of the pressure tensor $-P_{xy}$; (d) normalized viscosity. (e): shear rate dependence of the viscosity, $\bar{\eta}(\bar{\gamma}^{1/2})$, for different laser power densities. The solid lines show MD simulation results for $\Gamma$ values indicated.](fig4){width="\columnwidth"} We have adapted our molecular dynamics (MD) simulation code [@DonkoPRL06; @*DonkoMPL07; @DonkoPRE10] to the experimental conditions. This non-equilibrium MD method is based on the homogeneous shear algorithm [@Evans], which has frequently been used in studies of the viscosity of liquids. The approach is based on the Gaussian thermostated SLLOD equations of motion and applies the Lees-Edwards sliding periodic boundary conditions (for details, see [@Evans]). The shear rate, which can be stationary or oscillatory, is an input parameter; the off-diagonal element of the pressure tensor is computed based on (\[eq:P\]), where the summation volume $A$ is the whole simulation box. The viscosity is then calculated using (\[eq:eta\]), as is done for the experiment. The simulation results are shown in Fig. \[fig:static\](e). An important difference between the experiment and the simulation is in the working principle of the thermostat. In the simulation a linear velocity profile is achieved together with a uniform temperature distribution in the whole simulation cell [@Evans]. In this case the system can be well characterized with the Coulomb coupling parameter $\Gamma$.
In the experiment the temperature is not uniform: the system is in a crystalline state outside the sheared region and in a liquid state inside. An estimation of an average $\langle\Gamma\rangle \approx 130$ can be given based on the $v_{\rm th}$ data averaged over the investigated region in space. This result confirms non-Newtonian behavior [@DonkoPRL06], the so-called shear-thinning effect, where the viscosity decreases with increasing shear rate. In earlier experiments, like [@LinI2001; @NosenkoPRL04; @Gavrikov05], a variation of the shear viscosity with changing laser intensity was already observed; however, it was attributed to the temperature variation near the shearing laser beam(s). The agreement between our spatially resolved experimental results and our fully thermostated simulations indicates that the effect of temperature in this range is marginal; it is the high shear rate that causes most of the variation of the shear viscosity. Taking the average viscosity value of $\bar{\eta}\approx 0.15$ ($\eta\approx 2.2\times 10^{-13}$ kg/s) for intermediate shear rates and comparing it to the ratio $\nu/\eta = 1.64\times10^{13}$ kg$^{-1}$, as obtained when applying Method 1, we estimate the effective dust-background collision frequency to be $\nu \approx 3.7$ s$^{-1}$. This value is slightly higher than the approximate dust-neutral collision frequency defined as $\nu_{\rm dn}=\delta(4\pi/3) r_d^2 n_n v_n m_n/m_d \approx 3.1$ s$^{-1}$, where $r_d$ and $m_d$ are the dust radius and mass; $n_n$, $v_n$ and $m_n$ are the neutral number density, average speed and mass, respectively; and $\delta=1.26$ [@Liu2003]. Experiment with harmonic shear ------------------------------ Turning on the rotation of the polar filter P2 (see Fig. \[fig:opt\]) results in a sinusoidally modulated periodic shear in the $x$ direction. The amplitude of the modulated laser power density is set to 32 mW/mm$^2$. The frequency is varied between $\omega_{\rm sh}=3.7$ and 44 rad/s in 25 steps.
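Before turning to the harmonic-shear results, the two collision-frequency estimates from the static-shear analysis are easily reproduced numerically (a sketch; the gas temperature is our assumption, all other inputs are the values quoted above):

```python
import math

# Static-shear estimates, using the values quoted in the text
nu_over_eta = 1.64e13            # kg^-1, from Method 1
eta = 2.2e-13                    # kg/s, average viscosity at intermediate shear rates
nu = nu_over_eta * eta           # ~3.6 s^-1, consistent with the quoted 3.7 s^-1
print(f"nu ≈ {nu:.2f} 1/s")

# Dust-neutral collision frequency; the gas temperature T is our assumption
k_B = 1.380649e-23
T, p = 300.0, 1.2                # K (assumed), Pa
m_n = 40 * 1.66054e-27           # argon atom mass, kg
n_n = p / (k_B * T)              # neutral number density, m^-3
v_n = math.sqrt(8 * k_B * T / (math.pi * m_n))   # mean thermal speed, m/s
r_d, m_d, delta = 2.19e-6, 6.64e-14, 1.26
nu_dn = delta * (4 * math.pi / 3) * r_d**2 * n_n * v_n * m_n / m_d
print(f"nu_dn ≈ {nu_dn:.1f} 1/s")  # ~2.9 s^-1 here; the quoted 3.1 s^-1 is sensitive to T
```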
![\[fig:fig5\] (color online) Measured phase-resolved shear rate ($\bar{\gamma}$) and off-diagonal element of the pressure tensor ($-P_{xy}$) for a low (a) and a high (b) frequency periodic shear (symbols) and least-square fitted sine functions (lines). ](fig5){width="\columnwidth"} To optimize the signal-to-noise ratio, phase-resolved averaging is performed: individual oscillation cycles are identified based on the reference signal (as explained in Section II). The oscillation period is divided into ten time slots, and every snapshot is assigned to one of them according to its actual phase angle $\phi$ relative to the reference signal. Despite the low number of particles in narrow slices in space and time, this type of averaging over $\sim 1800$ snapshots at a given frequency makes evaluation with Method 2 (based on the pressure tensor and shear rate) possible. Further, $\dot{\gamma}(\phi,y)$ and $P_{xy}(\phi,y)$ are averaged in space in the sheared region $0.5~\text{mm}<|y|<1.5~\text{mm}$ (see Fig. \[fig:static\]). Examples of the measured $\dot{\gamma}(\phi)$ and $P_{xy}(\phi)$ for selected frequencies are presented in Fig. \[fig:fig5\] together with least-squares fits of the form $f(\phi)=\xi\sin(\phi+\phi_0)$. Performing the sine-function least-squares fitting procedure, we obtain the amplitude and phase of both the pressure and the shear rate for each excitation frequency: $\xi^P(\omega_{\rm sh})$, $\xi^\gamma(\omega_{\rm sh})$, $\phi_0^P(\omega_{\rm sh})$, and $\phi_0^\gamma(\omega_{\rm sh})$. ![\[fig:dyn1\] (color online) Frequency dependence of the shear viscosity: (a) magnitude $|\eta(\omega)|$, (b) complex argument $\varphi(\omega)$, and (c) the real and imaginary parts $\eta^{\prime}(\omega)$ and $\eta^{\prime\prime}(\omega)$.
The lines show MD simulation results for $\Gamma = 200$.](fig6){width="\columnwidth"} The magnitude of the frequency-dependent shear viscosity can be calculated from the amplitudes, while the complex argument is given by the difference of the phases: $$\begin{aligned} |\eta(\omega_{\rm sh})| &=& \frac{\xi^P(\omega_{\rm sh})}{\xi^\gamma(\omega_{\rm sh})},\\ \varphi(\omega_{\rm sh}) &=& \phi_0^P(\omega_{\rm sh}) - \phi_0^\gamma(\omega_{\rm sh}), \nonumber\end{aligned}$$ as shown in Fig. \[fig:dyn1\](a) and (b). The real and imaginary (dissipative and elastic) parts of the complex viscosity $\eta(\omega_{\rm sh}) = \eta^\prime(\omega_{\rm sh}) - i\eta^{\prime\prime}(\omega_{\rm sh})$ are computed using the magnitude and the complex argument as $$\begin{aligned} \eta^\prime(\omega_{\rm sh}) &=& |\eta(\omega_{\rm sh})|\cos\left[\varphi(\omega_{\rm sh})\right],\\ \eta^{\prime\prime}(\omega_{\rm sh}) &=& |\eta(\omega_{\rm sh})|\sin\left[\varphi(\omega_{\rm sh})\right].\nonumber\end{aligned}$$ With increasing $\omega_\text{sh}$ we observe a decrease of $\eta^\prime$ and an increase of $\eta^{\prime\prime}$, as shown in Fig. \[fig:dyn1\](c). The crossover of the real and imaginary parts can be observed at a frequency $\omega_{\rm cross}/\omega_0 = 0.3 \pm 0.05$. Our MD simulations (using input parameters obtained from the experiment) show remarkably good agreement with the experimental data. Summary ======= We have studied the static and dynamic shear viscosity of a complex plasma layer via the combination of different experimental and simulation techniques. We have developed an optical setup by means of which we applied a static, as well as a harmonically modulated, shear on the particle suspension. In the experiments using static shear the viscosity was measured via the calculation of the pressure tensor elements from the recorded particle trajectories.
The measurements have quantified the shear-thinning effect, the decrease of the shear viscosity with increasing shear rate. These results are consistent with an earlier experiment [@NosenkoPRL04]. Molecular dynamics simulations, modeling a homogeneously sheared liquid with system parameters taken from the experiment, support our findings and provide a possibility to rule out the dominance of thermal effects on the observed variations of the shear viscosity. In the experiments with sinusoidally modulated shear the pressure tensor elements have been measured in a time-resolved manner. This way we were able to determine the viscoelastic response of the dust layer. The data have indicated a phase delay between the perturbation and the response, which increases with increasing frequency: we observed a gradual decrease of the viscous term (real part of the viscosity) and a gradual increase of the elastic term (imaginary part of the viscosity). These experimental results are very similar to those found for 3D Yukawa liquids in molecular simulations [@DonkoPRE10]. The appearance of such a rich spectrum of non-Newtonian behavior in complex plasma experiments can perhaps be explained by the multitude of interactions and effects between the dust and the background plasma. Similarly complex rheological responses are observed in molecular fluids, like polymer or organic solutions, paints, etc. The fact that our molecular dynamics simulations, considering an idealized Yukawa system (i.e. neglecting friction, ion wakes, etc.), reproduce the viscoelastic properties of the experimental system well nevertheless raises new questions. First, what microscopic feature is minimally necessary to result in a non-Newtonian macroscopic response? Second, at which point does the real complexity of dusty plasmas introduce qualitatively new macroscopic features, not reproducible by the simplified numerical models based on isotropic Yukawa interaction?
We can point out only two features, namely, the long-range microscopic interaction and the strongly coupled (correlated) macroscopic state of the model system, that may in some way be responsible for the computationally observed non-linearities. This research has been supported by the Grants OTKA PD-75113, K-77653, and the János Bolyai Research Foundation of the Hungarian Academy of Sciences.
--- abstract: 'We construct Hopf algebras whose elements are representations of combinatorial automorphism groups, by generalising a theorem of Zelevinsky on Hopf algebras of representations of wreath products. As an application we attach symmetric functions to representations of graph automorphism groups, generalising and refining Stanley’s chromatic symmetric function.' address: 'Department of Mathematics & Statistics, University of Maine. 5752 Neville Hall, Room 333. Orono, ME 04469 USA' author: - Tyrone Crisp and Caleb Kennedy Hill bibliography: - 'Sn-mod-p.bib' date: 'April 9, 2020' title: Combinatorial Hopf algebras from representations of families of wreath products --- Introduction ============ In this paper we construct Hopf algebras whose elements are linear representations of automorphism groups of certain combinatorial structures. We do this by generalising a theorem of Zelevinsky [@Zelevinsky 7.2] on Hopf algebras of representations of wreath products $S_n\ltimes H^n$ to more general wreath products, and then applying Clifford theory to pass from wreath products to combinatorial automorphism groups. Let us illustrate and motivate the construction with an example. The *chromatic polynomial* of a finite, simple, undirected graph $\Gamma$ is the polynomial $\chi_\Gamma$ satisfying $\chi_\Gamma(m) = \#{\operatorname{Col}}_{\Gamma,m}$, the number of proper $m$-colourings of $\Gamma$ (i.e., labellings of the vertices of $\Gamma$ by numbers $1,\ldots,m$ such that adjacent vertices have distinct labels). This much-studied graph invariant was first introduced by Birkhoff in [@Birkhoff]. A variation on $\chi_\Gamma$, introduced in [@Hanlon] and further studied and generalised in [@Cameron-Kayibi] under the name *orbital chromatic polynomial*, counts the number of orbits in ${\operatorname{Col}}_{\Gamma,m}$ for the natural action of the automorphism group $\operatorname{Aut}\Gamma$. 
To illustrate, the graphs $$\Gamma: \xygraph{ !{(0,0) }*+{\bullet}="a" !{(1,.5) }*+{\bullet}="b" !{(1,-.5) }*+{\bullet}="c" !{(-1,.5)}*+{\bullet}="d" !{(-1,-.5)}*+{\bullet}="e" "a"-"b"-"c"-"a"-"d"-"e"-"a" } \qquad \qquad \textrm{and} \qquad \qquad \Lambda: \xygraph{ !{(-1,0) }*+{\bullet}="a" !{(0,.5) }*+{\bullet}="b" !{(0,-.5) }*+{\bullet}="c" !{(1,0)}*+{\bullet}="d" !{(2,0)}*+{\bullet}="e" "e"-"d"-"b"-"a"-"c"-"d" "b"-"c" }$$ have $\chi_{\Gamma}=\chi_{\Lambda}$, but $\# (\operatorname{Aut}\Gamma\backslash {\operatorname{Col}}_{\Gamma,3})=3$ while $\# (\operatorname{Aut}\Lambda\backslash {\operatorname{Col}}_{\Lambda,3})=6$. (This pair of graphs is taken from [@Stanley-chromatic Figure 1].) One can generalise and refine these invariants using finite harmonic analysis. Letting ${\Bbbk}^{{\operatorname{Col}}_{\Gamma,m}}$ be the permutation representation of $\operatorname{Aut}\Gamma$ corresponding to the action of $\operatorname{Aut}\Gamma$ on ${\operatorname{Col}}_{\Gamma,m}$ (${\Bbbk}$ being some algebraically closed field of characteristic zero), we consider for each finite-dimensional ${\Bbbk}$-linear representation $\gamma$ of $\operatorname{Aut}\Gamma$ the intertwining number $$\chi_{\Gamma,\gamma}(m)\coloneqq \dim\operatorname{Hom}_{\operatorname{Aut}\Gamma}(\gamma,{\Bbbk}^{{\operatorname{Col}}_{\Gamma,m}}) = \frac{1}{\#\operatorname{Aut}\Gamma} \sum_{g\in \operatorname{Aut}\Gamma} \operatorname{ch}_\gamma(g) \cdot \#{\operatorname{Col}}_{\Gamma,m}^g$$ where $\operatorname{ch}_\gamma$ is the character of the representation $\gamma$, and ${\operatorname{Col}}_{\Gamma,m}^g$ is the set of colourings fixed by the automorphism $g$. Hanlon observed in [@Hanlon] that the cardinalities of these fixed sets are themselves chromatic polynomials, and so each $\chi_{\Gamma,\gamma}$ is a polynomial.
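The quoted counts are small enough to check by brute force. The following sketch (our illustration, not part of the original text) enumerates proper $3$-colourings and automorphisms of the two graphs above, and evaluates the displayed intertwining-number formula for the trivial and regular characters; by Burnside's lemma the former yields the orbit counts, while the latter recovers the total colouring counts.

```python
from itertools import permutations, product

def automorphisms(n, edges):
    """All vertex permutations of {0,...,n-1} mapping the edge set to itself."""
    E = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if all(frozenset({p[u], p[v]}) in E for u, v in edges)]

def colourings(n, edges, m):
    """Proper m-colourings, as tuples indexed by vertex."""
    return [c for c in product(range(m), repeat=n)
            if all(c[u] != c[v] for u, v in edges)]

def chi(n, edges, m, character):
    """chi_{Gamma,gamma}(m) via the displayed character formula."""
    aut, cols = automorphisms(n, edges), colourings(n, edges, m)
    total = sum(character(g, aut) *
                sum(1 for c in cols if all(c[g[v]] == c[v] for v in range(n)))
                for g in aut)
    return total // len(aut)

triv = lambda g, aut: 1                                             # trivial character
reg = lambda g, aut: len(aut) if g == tuple(range(len(g))) else 0   # regular character

# Vertex labels 0..4 for the two graphs pictured above
Gamma = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]   # two triangles sharing vertex 0
Lambda = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]  # triangles sharing an edge, plus a pendant

for edges in (Gamma, Lambda):
    print(chi(5, edges, 3, reg), chi(5, edges, 3, triv))
# Gamma: 12 colourings in 3 orbits; Lambda: 12 colourings in 6 orbits
```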
Putting $\gamma={\operatorname{reg}}_{\operatorname{Aut}\Gamma}$, the regular representation, gives the chromatic polynomial, while putting $\gamma={\operatorname{triv}}_{\operatorname{Aut}\Gamma}$, the trivial representation, gives the orbital chromatic polynomial. It is clear from the definition that the map $\gamma\mapsto \chi_{\Gamma,\gamma}$ is additive, and so decomposing the regular representation into irreducibles gives $$\chi_\Gamma= \sum_{\gamma\in \operatorname{Irr}(\operatorname{Aut}\Gamma)} (\dim \gamma) \chi_{\Gamma,\gamma}$$ where the sum runs over the set of isomorphism classes of irreducible representations. The polynomials $\chi_{\Gamma,\gamma}$, and the decomposition of $\chi_\Gamma$ that they afford, deserve closer study. In particular, one would like to understand how these polynomials behave with respect to unions and decompositions of graphs, and it is at this point that Hopf algebras enter the picture. The use of Hopf algebras to study assembly/disassembly constructions is well established, both in combinatorics (see, e.g., [@Joni-Rota], [@Schmitt-HACS], [@Schmitt-IHA], [@Aguiar-Mahajan], [@Grinberg-Reiner]) and in representation theory (see, e.g., [@Geissinger], [@Zelevinsky], [@vanLeeuwen], [@Aguiar-et-al], [@Shelley-Abrahamson]). An example of particular relevance to graph colouring is the *Hopf algebra of graphs* $\mathcal G$ [@Schmitt-HACS]: this is the free abelian group with basis the set of isomorphism classes of finite simple graphs; with multiplication given by disjoint union of graphs; and with comultiplication given by partitions into pairs of subgraphs. 
The character ${\mathcal{G}}\to {\mathbb{Z}}$ given by $\Gamma\mapsto \chi_{\Gamma}(1)$ induces, as shown in [@ABS], a morphism of Hopf algebras ${\mathcal{G}}\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ into the Hopf algebra of symmetric functions with ${\mathbb{Z}}$-coefficients; this map sends $\Gamma$ to the *chromatic symmetric function* $X_\Gamma$ introduced by Stanley in [@Stanley-chromatic], and the chromatic polynomial $\chi_\Gamma$ can be recovered from $X_\Gamma$ by specialisation: $\chi_\Gamma(m) = X_\Gamma(1^m)$. For a discussion of how the Hopf-algebra point of view illuminates certain properties of $\chi_\Gamma$ and $X_\Gamma$, see [@Grinberg-Reiner 7.3]. In Section \[sec:graphs\] of this paper, as an instance of the general results obtained in Sections \[sec:Young\] and \[sec:Hopf\], we construct a Hopf algebra ${\mathcal{A}}$ whose underlying additive group is free abelian with basis $\bigsqcup_{\Gamma}\operatorname{Irr}(\operatorname{Aut}\Gamma)$, the set of isomorphism classes of irreducible representations of the automorphism groups of finite simple graphs (modulo graph isomorphisms). The multiplication/comultiplication in ${\mathcal{A}}$ are given by combining union/decomposition of graphs with induction/restriction of representations. The map sending $\gamma\in \operatorname{Irr}(\operatorname{Aut}\Gamma)$ to $\chi_{\Gamma,\gamma}(1)\in {\mathbb{Z}}$ induces a homomorphism of Hopf algebras ${\mathcal{A}}\to {\operatorname{Sym}}_{{\mathbb{Z}}}$, thus associating a symmetric function $X_{\Gamma,\gamma}$ to each finite-dimensional representation $\gamma$ of $\operatorname{Aut}\Gamma$. The polynomial $\chi_{\Gamma,\gamma}$ defined above is recovered from $X_{\Gamma,\gamma}$ by specialisation. 
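The specialisation $\chi_\Gamma(m)=X_\Gamma(1^m)$ can be made concrete in a small computation (our sketch; the path graph and the truncation to three variables are chosen for brevity): truncating $X_\Gamma$ to $m$ variables amounts to summing one monomial per proper $m$-colouring, and setting all variables to $1$ then counts those colourings.

```python
from itertools import product
from collections import Counter

edges = [(0, 1), (1, 2)]   # a path graph on three vertices (hypothetical example)
n, m = 3, 3                # number of vertices; truncate X_Gamma to m variables

# X_Gamma, truncated: one monomial x_{c(0)}...x_{c(n-1)} per proper m-colouring c,
# stored as a Counter keyed by exponent vectors
X = Counter()
for c in product(range(m), repeat=n):
    if all(c[u] != c[v] for u, v in edges):
        expo = [0] * m
        for colour in c:
            expo[colour] += 1
        X[tuple(expo)] += 1

# X is symmetric: coefficients depend only on the multiset of exponents
symmetric = all(X[e] == X[tuple(sorted(e))] for e in X)
chi_3 = sum(X.values())    # X_Gamma(1^3) = chi_Gamma(3)
print(symmetric, chi_3)    # True 12
```

For the path graph $\chi_\Gamma(3)=3\cdot 2\cdot 2=12$, matching the specialisation.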
The map sending a graph $\Gamma$ to the regular representation of $\operatorname{Aut}\Gamma$ gives an embedding of Hopf algebras ${\mathcal{G}}{\hookrightarrow}{\mathcal{A}}$, and the decomposition of the regular representation into irreducibles yields an equality of symmetric functions $$X_\Gamma = \sum_{\gamma\in \operatorname{Irr}(\operatorname{Aut}\Gamma)}(\dim\gamma)X_{\Gamma,\gamma}.$$ We thus obtain a refinement of the polynomial invariants $\chi_{\Gamma,\gamma}$ by symmetric functions, generalising Stanley’s refinement of $\chi_\Gamma$ by $X_\Gamma$; and we obtain identities among the $X_{\Gamma,\gamma}$ (and, by specialisation, among the $\chi_{\Gamma,\gamma}$) for varying $\Gamma$ and $\gamma$ from the fact that the map $\gamma\mapsto X_{\Gamma,\gamma}$ is a homomorphism of Hopf algebras. The further study of the graph invariants $X_{\Gamma,\gamma}$ and $\chi_{\Gamma,\gamma}$ will be taken up in future work. We now describe the connection with wreath products, still in the example of graph colourings. For each $n\geq 0$ let $E_n$ denote the set of two-element subsets of $\{1,\ldots,n\}$. The symmetric group $S_n$ acts in a natural way on $E_n$, and hence on the group $S_2^{E_n}$ of functions $E_n\to S_2$, and on the set $\operatorname{Irr}(S_2^{E_n})$ of irreducible representations of this abelian group. The $S_n$-orbits in $\operatorname{Irr}(S_2^{E_n})$ can be identified with the isomorphism classes of graphs with $n$ vertices, in such a way that the stabiliser of a point in $\operatorname{Irr}(S_2^{E_n})$ is equal to the automorphism group of the corresponding graph. 
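This identification is easy to check by machine for small $n$ (our illustration, not part of the original text): subsets of $E_4$, i.e. characters of $S_2^{E_4}$, correspond to labelled graphs on four vertices; the number of $S_4$-orbits equals the number of unlabelled graphs; and the stabiliser of the $4$-cycle is its dihedral automorphism group.

```python
from itertools import combinations, permutations

n = 4
E = list(combinations(range(n), 2))     # E_n: two-element subsets of {0,...,n-1}
perms = list(permutations(range(n)))

def act(p, graph):
    """Induced S_n action on a graph, encoded as a frozenset of edges."""
    return frozenset(frozenset({p[u], p[v]}) for u, v in graph)

# Subsets of E_4 <-> characters of S_2^{E_4} <-> labelled graphs on 4 vertices
graphs = [frozenset(frozenset(e) for e in sub)
          for r in range(len(E) + 1) for sub in combinations(E, r)]

orbits = {frozenset(act(p, g) for p in perms) for g in graphs}
print(len(orbits))   # 11 = number of graphs on 4 vertices up to isomorphism

# The stabiliser of the 4-cycle is its automorphism group, dihedral of order 8
C4 = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)])
print(len([p for p in perms if act(p, C4) == C4]))   # 8
```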
Clifford theory (as explained in this context in [@James-Kerber Section 4.3]) then yields an identification $$\label{eq:Clifford-intro}\tag{$\star$} \bigsqcup_{\Gamma} \operatorname{Irr}(\operatorname{Aut}\Gamma) \xrightarrow{\cong} \bigsqcup_{n\geq 0} \operatorname{Irr}(S_n\ltimes S_2^{E_n})$$ between the basis for ${\mathcal{A}}$ and the irreducible representations of the wreath product groups $S_n\ltimes S_2^{E_n}$. The representation theory of wreath products can be quite complicated: indeed, the bijection shows that classifying the irreducible representations of the groups $S_n\ltimes S_2^{E_n}$ for all $n$ means classifying all finite simple graphs up to isomorphism *and* classifying the irreducible representations of all finite groups (since every finite group is, as shown by Frucht [@Frucht], the automorphism group of a graph). There is, however, one aspect of the representation theory of the groups $S_n\ltimes S_2^{E_n}$ that is easily understood: namely, the way in which the representations of these groups fit together for different $n$. Generalising work of Zelevinsky [@Zelevinsky], who considered wreath products of the form $S_n\ltimes H^n$, we prove that for each suitable family of $S_n$-sets $Y_n$, and for each finite group $H$, the free abelian group with basis $\bigsqcup_{n\geq 0} \operatorname{Irr}(S_n\ltimes H^{Y_n})$ can be given a natural Hopf-algebra structure. In fact we obtain three different (in general) Hopf algebra structures: one a positive self-adjoint Hopf algebra as in [@Zelevinsky], and another dual pair of non-PSH but connected, commutative, and cocommutative Hopf algebras. (In the situation studied by Zelevinsky these three Hopf algebras are all identical.) Our Hopf algebras come equipped moreover with a canonical ${\mathbb{Z}}$-valued character; as shown by Aguiar, Bergeron, and Sottile [@ABS] this is equivalent to admitting a canonical homomorphism into the Hopf algebra of symmetric functions. 
Putting $Y_n=E_n$ and $H=S_2$ yields the Hopf algebra ${\mathcal{A}}$ of representations of graph automorphisms, and the symmetric functions $X_{\Gamma,\gamma}$. More examples of this kind are possible: for example, letting $Y_n$ be the set of all nonempty subsets of $\{1,\ldots,n\}$ has the effect of replacing graphs by hypergraphs; letting $Y_n$ be the set of ordered two-element subsets gives directed graphs; and replacing $S_2$ by another group $H$ has the effect of introducing labellings of the edges of our (hyper)graphs by the nontrivial irreducible representations of $H$. The paper is organised as follows. In Section \[sec:Young\] we first describe the families of sets $Y_n$ that go into our construction. The definition is easily stated: we consider endofunctors on the category of finite sets and injective maps that preserve the empty set and preserve intersections; then $Y_n$ is the value of such a functor on the set $\{1,\ldots,n\}$. We then define induction and restriction functors between the representations of the wreath products $S_n\ltimes H^{Y_n}$ for varying $n$; we consider both the standard induction/restriction functors, and a variant of these functors similar to the parabolic induction/restriction functors from the representation theory of reductive groups. These functors become, in Section \[subsec:R-Hopf\], the multiplication and comultiplication maps in our Hopf algebras. In Section \[subsec:Clifford\] we apply Clifford theory to yield a second description of our Hopf algebras in terms of representations of automorphisms of certain combinatorial structures. 
In Section \[subsec:basic\] we show that each of our Hopf algebras contains a sub-Hopf-algebra of representations of the base group $H^{Y_n}$, an example being the subalgebra ${\mathcal{G}}$ of ${\mathcal{A}}$; and in Section \[subsec:zeta\] we compute the canonical maps from our Hopf algebras to ${\operatorname{Sym}}_{{\mathbb{Z}}}$, under the assumption that the coefficient group $H$ is abelian. Section \[sec:graphs\] then presents the example of ${\mathcal{A}}$ that we outlined above. Our constructions bear a resemblance to known constructions of Hopf algebras from species, such as those described in [@Schmitt-HACS] and [@Aguiar-Mahajan] for example, although as far as we are aware the main construction that we study here has not previously appeared in the literature beyond the special cases $Y_N=\emptyset$ and $Y_N=N$. There is however one concrete point of overlap between our construction and [@Schmitt-HACS]: if $H$ is abelian then one of the sub-Hopf-algebras that we construct in Section \[subsec:basic\]—i.e., the subalgebra generalising the subalgebra ${\mathcal{G}}$ of ${\mathcal{A}}$—is isomorphic to the Hopf algebra of a coherent exponential species as defined in [@Schmitt-HACS]; see Proposition \[prop:HACS\]. Acknowledgements {#acknowledgements .unnumbered} ---------------- The first author thanks Ehud Meir and Uri Onn for early discussions on the topic of this project, the idea for which first arose during our joint work on [@CMO1]. Some of the results presented here also appear in the second author’s MA thesis, submitted to the University of Maine in Spring 2020. Young sets and wreath products {#sec:Young} ============================== Young sets ---------- In this section we define the combinatorial objects from which we shall construct families of wreath product groups. 
We let ${\operatorname{Set}}$ denote the category of finite sets, while ${\operatorname{Set}}^\times$ and ${\operatorname{Set}}^{{\mathrm{inj}}}$ denote the subcategories of bijective maps and injective maps, respectively. A functor $Y:{\operatorname{Set}}^{\mathrm{inj}}\to {\operatorname{Set}}^{{\mathrm{inj}}}$ thus assigns to each finite set $N$ a finite set $Y_N$, and to each injective map of finite sets $w:N\to M$ an injective map $Y_w:Y_N\to Y_M$. When $M=N$ we obtain an action of the symmetric group $S_N$ of $N$ on the set $Y_N$. We will often write $w:Y_N\to Y_M$ instead of $Y_w$; and in the case where $w$ is the inclusion of $N$ as a subset of $M$ we shall omit $w$ from the notation entirely and regard $Y_N$ as a subset of $Y_M$. \[def:Young-set\] A *Young set* is a functor $Y: {\operatorname{Set}}^{{\mathrm{inj}}} \to {\operatorname{Set}}^{{\mathrm{inj}}}$ satisfying $Y_\emptyset = \emptyset$ and $Y_K\cap Y_L=Y_{K\cap L}$ for all pairs of subsets $K,L$ of the same finite set $N$. The name ‘Young set’ was chosen because these sets, and the families of groups they give rise to, play an analogous role in our construction to that played by the Young subgroups in the representation theory of the symmetric groups. \[examples:Young-set\] In most of these examples we describe the action of the functor on objects only, the action on morphisms being the obvious one. 1. The empty example: $Y_N=\emptyset$ for all sets $N$. We denote this example by $\emptyset$. 2. The basic example: $Y_N=N$, the identity functor. We denote this example by ${\mathrm{id}}$. 3. Products, coproducts, and composites: if $Y$ and $Y'$ are Young sets, then so are the product $(Y\times Y')_N\coloneqq Y_N\times Y'_N$; the coproduct $(Y\sqcup Y')_N\coloneqq Y_N\sqcup Y'_N$; and the composite $(Y\circ Y')_N\coloneqq Y_{Y'_N}$. For instance, for each fixed $m\geq 1$ we obtain a Young set ${\mathrm{id}}^m:N\mapsto N^m$. 4.
Quotients: let $Y$ be a Young set equipped with an action of a fixed group $G$ by natural transformations; then the quotient by the $G$-action yields a Young set $Y/G$ with $(Y/G)_N = Y_N/G$. For instance, taking $Y_N=N^m$, on which $G=S_m$ acts by permuting coordinates, we obtain the Young set of unordered $m$-tuples of elements of $N$. 5. Subsets and multisets: the functor $N\mapsto \{\text{nonempty subsets of $N$}\}$ is a Young set, as is the functor $N\mapsto \{\text{$r$-element subsets of $N$}\}$ for each $r\geq 1$. More generally, let $m$ be a positive integer and let $\xi:\{1,\ldots,m\} \to {\mathcal P({{\mathbb{N}}})}$ be a function ($\mathcal P$ denotes the power set and ${\mathbb{N}}=\{0,1,\ldots\}$). To these data we associate the Young set $$Y_N\coloneqq \left\{ f: N\to \{0,1,\ldots,m\}\ \left|\ \begin{gathered} \exists n\in N\text{ with $f(n)\neq 0$, and } \\ \text{$\#f^{-1}(i)\in \xi(i)$ for every $i\geq 1$}\end{gathered}\right.\right\}.$$ The map $Y_K\to Y_N$ associated to an injection of sets $K{\hookrightarrow}N$ is given by extending functions by $0$. Taking $m=1$ and $\xi(1)=\{1,2,3,\ldots\}$ gives the Young set of nonempty subsets of $N$; taking $m=1$ and $\xi(1)=\{r\}$ gives the $r$-element subsets; while taking $m\geq 2$ gives multisets of elements of $N$ with up to $m$ repetitions of each element, with the function $\xi$ imposing restrictions on the number of times each allowed multiplicity occurs. 6. Permutations: $Y_N=S_N\setminus \{{\mathrm{id}}_N\}$, the set of nontrivial permutations of $N$.
This is a Young set with the action on morphisms given by assigning to each injective map of sets $w:K\to N$ and each permutation $g$ of $K$ the permutation $w_*g$ of $N$ defined by $$w_*g (n)\coloneqq \begin{cases} wg(k) & \text{if }n=w(k)\in w(K)\\ n & \text{otherwise.}\end{cases}$$ Families of wreath products from Young sets ------------------------------------------- We begin with some generalities on wreath products; see, e.g., [@James-Kerber Chapter 4] for more information. Let $W$ be a finite group acting on a finite set $Y$, and let $H$ be another finite group. The set $H^Y$ of functions $f:Y\to H$ is made into a group by setting $(f_1 f_2)(y)\coloneqq f_1(y) f_2(y)$. The group $W$ acts on $H^Y$ by $(w f)(y) \coloneqq f(w^{-1}y)$, for $w\in W$, $f\in H^Y$, and $y\in Y$. The [wreath product]{} of $W$ with $H$ over $Y$ is defined to be the semidirect product $W\ltimes H^Y$. Thus as a set $W\ltimes H^Y= W\times H^Y$, with group operation $$(w_1,f_1)(w_2,f_2) = \big(w_1 w_2, (w_2^{-1}f_1) f_2 \big).$$ The maps $w\mapsto (w,1)$ and $f\mapsto (1,f)$ identify $W$ and $H^Y$ with subgroups of $W\ltimes H^Y$. The *support* of a function $f\in H^Y$ is defined by $\operatorname{supp}(f)=\{y\in Y\ |\ f(y)\neq 1_H\}$. For each subset $Y'$ of $Y$ we regard $H^{Y'}$ as a subgroup of $H^Y$, namely the subgroup $\{f:Y\to H\ |\ \operatorname{supp}(f) \subseteq Y'\}$. If $W'$ is a subgroup of $W$ such that $Y'$ is $W'$-invariant, then the embeddings $W'\subseteq W\subseteq W\ltimes H^Y$ and $H^{Y'}\subseteq H^Y \subseteq W\ltimes H^Y$ give an embedding $W'\ltimes H^{Y'}\subseteq W\ltimes H^Y$. If $W_1$ acts on $Y_1$, and $W_2$ acts on $Y_2$, then there is an obvious isomorphism $$(W_1\ltimes H^{Y_1})\times (W_2\ltimes H^{Y_2}) \xrightarrow{\cong} (W_1\times W_2)\ltimes H^{Y_1\sqcup Y_2}.$$ So much for generalities.
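As a concrete check on the group law above (an illustration added here, not code from the paper), the following sketch builds the wreath product $W\ltimes H^Y$ with $W=S_2$ acting on $Y=\{0,1\}$ and $H={\mathbb{Z}}/3$ written additively, and verifies exhaustively that the operation $(w_1,f_1)(w_2,f_2)=(w_1w_2,(w_2^{-1}f_1)f_2)$ is associative and unital.

```python
from itertools import product

# Illustrative sketch (ours): W = S_2 acting on Y = {0,1}, H = Z/3 additively.
W = [(0, 1), (1, 0)]                      # permutations of {0,1} as lookup tuples
G = [(w, f) for w in W for f in product(range(3), repeat=2)]

def mul(a, b):
    """(w1,f1)(w2,f2) = (w1*w2, (w2^{-1} f1) f2); note (w2^{-1} f1)(y) = f1(w2 y)."""
    (w1, f1), (w2, f2) = a, b
    w = tuple(w1[w2[i]] for i in (0, 1))                  # composition w1 after w2
    f = tuple((f1[w2[y]] + f2[y]) % 3 for y in (0, 1))    # twisted product in H^Y
    return (w, f)

e = ((0, 1), (0, 0))                      # identity: (id_W, constant 1_H)
assert len(G) == 2 * 3**2                 # |W x H^Y| = |W| * |H|^|Y| = 18
assert all(mul(e, g) == g == mul(g, e) for g in G)
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in G for b in G for c in G)
```

The check over all $18^3$ triples confirms that the twist $(w_2^{-1}f_1)f_2$ is exactly what makes the semidirect product associative.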
We shall study the representation theory of the family of wreath product groups $$G_N(Y,H) \coloneqq S_N \ltimes H^{Y_N}$$ associated to a Young set $Y$ and an auxiliary finite group $H$. 1. $G_\emptyset(Y,H)=S_\emptyset \ltimes H^\emptyset$ is always the trivial group. 2. $G_N(\emptyset,H)=S_N$ for every $H$, and $G_N(Y,S_1)=S_N$ for every $Y$. 3. $G_N({\mathrm{id}},H)=S_N\ltimes H^N$, the standard wreath product studied in [@Zelevinsky §7]. For example, $G_{[n]}({\mathrm{id}}, S_2)$ (where $[n]=\{1,\ldots,n\}$) is the hyperoctahedral group, whose representations were first worked out by Young in [@Young]. 4. $G_{[n]}({\mathrm{id}}^2,C_p)$ (where $C_p$ is the cyclic group of prime order $p$) is isomorphic to the group of those invertible $n\times n$ matrices over the ring ${\mathbb{Z}}/p^2{\mathbb{Z}}$ that are congruent, modulo $p$, to a permutation matrix. One can think of this group as a simplified model of $\operatorname{GL}_n({\mathbb{Z}}/p^2{\mathbb{Z}})$ (which is an extension of $\operatorname{GL}_n({\mathbb{Z}}/p{\mathbb{Z}})$ by $C_p^{n^2}$). The induction functors that we shall consider below are, in this example, the analogues of the functors used to study the representations of $\operatorname{GL}_n({\mathbb{Z}}/p^n{\mathbb{Z}})$ in [@CMO1] and [@CMO2]. Young subgroups of $G_N(Y,H)$ ----------------------------- We are going to define analogues, in the wreath products $G_N(Y,H)=S_N\ltimes H^{Y_N}$, of the Young subgroups of the symmetric groups. We begin with some notation related to set partitions. A *weak partition* of a finite set $N$ is a finite unordered collection $\lambda=(L_i\ |\ i\in I)$ of subsets $L_i\subseteq N$, called the *blocks* of $\lambda$, having $L_i\cap L_j=\emptyset$ for $i\neq j$, and $N=\bigcup_{i\in I} L_i$. One or more of the $L_i$ may be empty. We denote by ${\operatorname{Part}}^w_N$ the set of all weak partitions of $N$. 
To each $\lambda=(L_i)\in {\operatorname{Part}}^w_N$ we associate the Young subgroup $S_\lambda\subseteq S_N$ consisting of those permutations that leave invariant each of the blocks $L_i$. There is an obvious isomorphism $\prod_{i} S_{L_i} \cong S_{\lambda}$. Inserting or removing $\emptyset$s from a weak partition does not change the Young subgroup. For $\lambda,\mu\in {\operatorname{Part}}^w_N$ we write $\lambda\leq \mu$ to mean that each block of $\mu$ is a union of blocks of $\lambda$. This is not a partial order, but it restricts to a partial order on partitions (i.e., weak partitions with no empty blocks). We have $\lambda \leq \mu$ if and only if $S_\lambda \subseteq S_\mu$. Given $\lambda,\mu\in {\operatorname{Part}}^w_N$ we let $\lambda \wedge \mu\in {\operatorname{Part}}^w_N$ be the weak partition whose blocks are the intersections $L_i\cap M_j$ of the blocks of $\lambda$ and $\mu$. We have $S_{\lambda\wedge\mu}=S_{\lambda}\cap S_{\mu}$. Each bijective map of sets $w:N\to M$ induces a bijective map ${\operatorname{Part}}^w_N\to{\operatorname{Part}}^w_M$, by applying $w$ to each block of each partition. In particular, the group $S_N$ acts on ${\operatorname{Part}}^w_N$. The group $S_\lambda$ fixes the partition $\lambda$, but the isotropy group of $\lambda$ may be larger than $S_\lambda$. For each bijection $w:N\to M$ and each $\lambda \in {\operatorname{Part}}^w_N$ we have ${}^wS_\lambda = S_{w\lambda}$, where ${}^wS_\lambda=wS_\lambda w^{-1}\subset S_M$. We now return to our groups $G_N(Y,H)$. The Young set $Y$ and auxiliary group $H$ will be fixed throughout this section, so we drop them from the notation and just write $G_N$. For each weak partition $\lambda=(L_i)$ of $N$ the subsets $Y_{L_i}$ of $Y_N$ are pairwise disjoint, by our assumption that $Y_K\cap Y_L=Y_{K\cap L}$. We denote by $Y_\lambda \coloneqq \bigsqcup_{i} Y_{L_i}\subseteq Y_N$ the union of these subsets. 
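The disjointness just invoked can be verified concretely. The sketch below (our illustration, with ad-hoc names) implements the Young set of nonempty subsets from Example \[examples:Young-set\] and checks the axiom $Y_K\cap Y_L=Y_{K\cap L}$, the pairwise disjointness of the sets $Y_{L_i}$ for a weak partition $\lambda$, and that $Y_\lambda$ can be a proper subset of $Y_N$.

```python
from itertools import combinations

# Illustrative sketch (ours): the Young set Y_N = {nonempty subsets of N}.
def Y(N):
    N = sorted(N)
    return {frozenset(c) for r in range(1, len(N) + 1) for c in combinations(N, r)}

N = {1, 2, 3, 4}
K, L = {1, 2, 3}, {2, 3, 4}
assert Y(K) & Y(L) == Y(K & L)            # the Young-set axiom Y_K ∩ Y_L = Y_{K∩L}

lam = [{1, 2}, {3}, {4}]                  # a weak partition λ of N
blocks = [Y(B) for B in lam]              # the sets Y_{L_i}, pairwise disjoint:
assert all(blocks[i].isdisjoint(blocks[j])
           for i in range(len(blocks)) for j in range(i + 1, len(blocks)))

Y_lam = set().union(*blocks)              # Y_λ = ⊔_i Y_{L_i}
assert Y_lam < Y(N)                       # proper inclusion in general
assert frozenset({1, 3}) in Y(N) - Y_lam  # e.g. {1,3} lies in Y_N but not in Y_λ
```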
We have $Y_\lambda \cap Y_\mu = Y_{\lambda \wedge\mu}$ for all $\lambda,\mu\in {\operatorname{Part}}^w_N$, and $Y_\lambda \subseteq Y_\mu$ for all $\lambda \leq \mu\in {\operatorname{Part}}^w_N$. When $Y={\mathrm{id}}$ we have $Y_\lambda=Y_N$ for every $\lambda$ and every $N$, but in general $Y_\lambda \subsetneq Y_N$. The functoriality of $Y$ ensures that for each bijection $w:N\to M$ and each $\lambda \in {\operatorname{Part}}^w_N$ we have $wY_\lambda = Y_{w\lambda}$ as subsets of $Y_M$. In particular, $Y_\lambda$ is $S_\lambda$-invariant. We now have subgroups $S_\lambda \subseteq S_N$ and $H^{Y_\lambda}\subseteq H^{Y_N}$. We take the semidirect product of these groups to obtain a subgroup $G_\lambda\subseteq G_N$: $$G_\lambda(Y,H) \coloneqq S_\lambda \ltimes H^{Y_\lambda}.$$ We have a canonical isomorphism of groups $\prod_i G_{L_i}\cong G_\lambda$, coming from the corresponding isomorphisms $\prod_i S_{L_i}\cong S_\lambda$ and $\prod_i H^{Y_{L_i}} \cong H^{\sqcup_i Y_{L_i}} = H^{Y_\lambda}$. Let us list some properties of the groups $G_\lambda$; all of these are immediate consequences of the corresponding facts about the groups $S_\lambda$ and the sets $Y_\lambda$. \[lem:G-properties\] Let $N$ and $M$ be finite sets. 1. If $\lambda \leq \mu\in {\operatorname{Part}}^w_N$ then $G_\lambda \subseteq G_\mu$. 2. For all $\lambda,\mu\in {\operatorname{Part}}^w_N$ we have $G_\lambda \cap G_\mu = G_{\lambda \wedge \mu}$. 3. For each bijective map $w:N\to M$ and each $\lambda \in {\operatorname{Part}}^w_N$ we have ${}^wG_\lambda = G_{w\lambda}$. For each $\lambda \leq \mu \in {\operatorname{Part}}^w_N$ we consider the following subgroups of $G_\mu$: $$P_\lambda^\mu \coloneqq S_\lambda \ltimes H^{Y_\mu}\qquad\text{and}\qquad U_\lambda^\mu \coloneqq H^{Y_\mu \setminus Y_\lambda}.$$ Let us list some properties of these groups; all of these follow easily from the definitions. \[lem:P-U-properties\] Let $N$ and $M$ be finite sets. 1. 
If $\lambda \leq \mu$ in ${\operatorname{Part}}^w_N$ then the group $U_\lambda^\mu$ is normalised by $G_\lambda$, and the map $G_\lambda \times U_\lambda^\mu \to P_\lambda^\mu$ given by multiplication in the group $G_\mu$ is a bijection; thus $P_\lambda^\mu$ is the internal semidirect product $G_\lambda \ltimes U_\lambda^\mu$. 2. If $w:N\to M$ is a bijection of sets then for all $\lambda\leq \mu\in {\operatorname{Part}}^w_N$ we have ${}^wP_\lambda^\mu = P_{w\lambda}^{w\mu}$ and ${}^wU_\lambda^\mu= U_{w\lambda}^{w\mu}$. 3. For all $\lambda,\mu\in {\operatorname{Part}}^w_N$ we have $G_\lambda \cap U_\mu^N = U_{\lambda \wedge\mu}^\lambda$. We next recall some terminology from [@Zelevinsky]: for each $\lambda \in {\operatorname{Part}}^w_N$ we say that a subgroup $G\subseteq G_N$ is *decomposable* with respect to $(G_\lambda,U_\lambda^N)$ if the intersection $P_\lambda^N\cap G$ decomposes as the semidirect product $(G_\lambda\cap G)\ltimes ( U_\lambda^N\cap G)$. \[lem:decomposability\] Let $\lambda,\mu\in {\operatorname{Part}}^w_N$ be weak partitions of a finite set $N$. Each of the subgroups $P_\mu^N$, $G_\mu$, and $U_\mu^N$ of $G_N$ is decomposable with respect to $(G_\lambda,U_\lambda^N)$. Compute as follows: $$\begin{aligned} & P_\lambda^N \cap P_\mu^N = (S_\lambda \cap S_\mu) \ltimes H^{Y_N} = (S_{\lambda\wedge\mu}\ltimes H^{Y_\lambda})\ltimes H^{Y_N\setminus Y_\lambda}= (G_\lambda\cap P_\mu^N)\ltimes (U_\lambda^N\cap P_\mu^N),\\ & P_\lambda^N\cap G_\mu = (S_\lambda \cap S_\mu) \ltimes H^{Y_\mu} = ( S_{\lambda\wedge\mu}\ltimes H^{Y_{\lambda\wedge\mu}})\ltimes H^{Y_\mu\setminus Y_{\lambda\wedge\mu}} = (G_\lambda \cap G_\mu) \ltimes (U_\lambda^N \cap G_\mu), \\ & P_\lambda^N\cap U_\mu^N = U_\mu^N = H^{Y_N\setminus Y_\mu} = H^{Y_\lambda \setminus Y_{\lambda\wedge\mu}} \times H^{Y_N \setminus (Y_\lambda \cup Y_\mu)} = (G_\lambda \cap U_\mu^N) \ltimes (U_\lambda^N\cap U_\mu^N). 
\end{aligned}$$ Induction and restriction functors ---------------------------------- We continue to fix a Young set $Y$ and a finite group $H$, and write $G_N$ for the wreath product $G_N(Y,H)=S_N\ltimes H^{Y_N}$. We also fix an algebraically closed field ${\Bbbk}$ of characteristic zero. For each finite group $G$ we let $\operatorname{Rep}(G)$ denote the category whose objects are the finite-dimensional ${\Bbbk}$-linear representations of $G$, and whose morphisms are the $G$-equivariant linear maps. For all ordered pairs of weak partitions $\lambda\leq \mu\in {\operatorname{Part}}^w_N$ we consider the following functors: 1. Let $ \operatorname{i}_\lambda^\mu : \operatorname{Rep}(G_\lambda) \to \operatorname{Rep}(G_\mu) $ be the functor of inflation from $G_\lambda$ to $P_\lambda^\mu$ (i.e., let $U_\lambda^\mu$ act trivially), followed by induction from $P_\lambda^\mu$ to $G_\mu$. 2. Let $ \operatorname{r}^\mu_\lambda: \operatorname{Rep}(G_\mu) \to \operatorname{Rep}(G_\lambda) $ be the functor that sends each $G_\mu$-representation $V$ to the $G_\lambda$-invariant subspace $V^{U_\lambda^\mu}$ of $U_\lambda^\mu$-fixed vectors. 3. Let $ \operatorname{res}^\mu_\lambda: \operatorname{Rep}(G_\mu)\to \operatorname{Rep}(G_\lambda) $ be the usual restriction functor. For the Young set $Y_N=N$ the groups $U_\lambda^\mu$ are trivial, and so there is no difference between the functors $\operatorname{r}^\mu_\lambda$ and $\operatorname{res}^\mu_\lambda$. In general $\operatorname{r}_\lambda^\mu$ is a subfunctor of $\operatorname{res}^\mu_\lambda$, and the inclusion can be proper. \[lem:adjoints\] The functors $\operatorname{i}_\lambda^\mu$ and $\operatorname{r}_\lambda^\mu$ are two-sided adjoints to one another. 
Since the groups in question are all finite and ${\Bbbk}$ has characteristic zero, induction from $P_\lambda^\mu$ to $G_\mu$ is a two-sided adjoint to restriction from $G_\mu$ to $P_\lambda^\mu$, while inflation from $G_\lambda$ to $P_\lambda^\mu$ is a two-sided adjoint to the functor of $U_\lambda^\mu$-invariants. \[lem:transitivity\] For each ordered triple $\lambda \leq \mu \leq \nu \in {\operatorname{Part}}^w_N$ there are natural isomorphisms of functors $$\operatorname{i}_\lambda^\nu \cong \operatorname{i}_\mu^\nu \operatorname{i}_\lambda^\mu \qquad \text{and}\qquad \operatorname{r}_\lambda^\nu \cong \operatorname{r}^\mu_\lambda \operatorname{r}^\nu_\mu.$$ By the uniqueness of adjoint functors it will suffice to prove the assertion about $\operatorname{r}^\nu_\lambda$. For each representation $V$ of $G_\nu$ we have $\operatorname{r}^\nu_\lambda(V)= V^{U^\nu_\lambda}$, while $\operatorname{r}^\mu_\lambda \operatorname{r}^\nu_\mu (V)= \left( V^{U^\nu_\mu}\right)^{U^\mu_\lambda}$. The decomposition of sets $Y_\nu \setminus Y_\lambda = (Y_\nu \setminus Y_\mu) \sqcup (Y_\mu\setminus Y_\lambda)$ leads to an internal direct-product decomposition of groups $U^\nu_\lambda = U^\nu_\mu \times U^\mu_\lambda$, showing that $\operatorname{r}^\nu_\lambda(V)$ and $\operatorname{r}^\mu_\lambda\operatorname{r}^\nu_\mu(V)$ are in fact equal as $G_\lambda$-invariant subspaces of $V$. 
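The set decomposition driving this proof is also easy to confirm in a small case. Continuing with the nonempty-subsets Young set (again our illustration, not the paper's code), the sketch below takes a chain of weak partitions $\lambda\leq\mu\leq\nu$ and checks that $Y_\nu\setminus Y_\lambda=(Y_\nu\setminus Y_\mu)\sqcup(Y_\mu\setminus Y_\lambda)$.

```python
from itertools import combinations

# Illustration (ours): Y_N = nonempty subsets of N, and Y_λ = ⊔_i Y_{L_i}.
def Y(N):
    N = sorted(N)
    return {frozenset(c) for r in range(1, len(N) + 1) for c in combinations(N, r)}

def Y_part(lam):
    return set().union(*(Y(B) for B in lam))

# a chain λ ≤ μ ≤ ν of weak partitions of N = {1,2,3,4}
lam = [{1}, {2}, {3}, {4}]
mu  = [{1, 2}, {3, 4}]
nu  = [{1, 2, 3, 4}]
Yl, Ym, Yn = Y_part(lam), Y_part(mu), Y_part(nu)

upper, lower = Yn - Ym, Ym - Yl           # index sets of U^ν_μ and U^μ_λ
assert upper.isdisjoint(lower)
assert upper | lower == Yn - Yl           # Y_ν \ Y_λ = (Y_ν \ Y_μ) ⊔ (Y_μ \ Y_λ)
```

This disjoint decomposition of index sets is precisely what yields the internal direct product $U^\nu_\lambda = U^\nu_\mu \times U^\mu_\lambda$.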
For each bijection of sets $w:N\to M$ we have an equivalence $$\label{eq:Adw-R} \operatorname{Ad}_w:\operatorname{Rep}(G_N)\xrightarrow{\rho\mapsto \rho(w^{-1}\cdot w)} \operatorname{Rep}(G_M).$$ \[lem:w-i-r\] For each bijection of sets $w:N\to M$ and each ordered pair of weak partitions $\lambda \leq \mu \in {\operatorname{Part}}^w_N$ we have natural isomorphisms of functors $$\operatorname{Ad}_w \operatorname{i}_\lambda^\mu \cong \operatorname{i}_{w\lambda}^{w\mu} \operatorname{Ad}_w,\quad \operatorname{Ad}_w\operatorname{r}^\mu_\lambda \cong \operatorname{r}^{w\mu}_{w\lambda} \operatorname{Ad}_w,\quad \text{and}\quad \operatorname{Ad}_w \operatorname{res}^\mu_\lambda \cong \operatorname{res}^{w\mu}_{w\lambda} \operatorname{Ad}_w.$$ Once again by the uniqueness of adjoints it suffices to consider the functors $\operatorname{r}$ and $\operatorname{res}$. The statement about $\operatorname{res}$ is clearly true, while the assertion about $\operatorname{r}$ follows from the equality ${}^wU_\lambda^\mu=U_{w\lambda}^{w\mu}$ observed in Lemma \[lem:P-U-properties\]. We conclude this section by establishing Mackey-type formulas for the composition of our restriction and induction functors. The formulas are instances of [@Zelevinsky Theorem A3.1]. \[prop:Mackey\] For each finite set $N$ and each pair of weak partitions $\lambda,\mu\in {\operatorname{Part}}^w_N$ we have isomorphisms of functors $$\begin{aligned} \operatorname{r}^N_{\lambda}\operatorname{i}_{\mu}^N & \cong \bigoplus_{S_\lambda w S_\mu \in S_\lambda \backslash S_N/S_\mu} \operatorname{i}^{\lambda}_{\lambda \wedge w\mu} \operatorname{Ad}_w \operatorname{r}^{\mu}_{w^{-1}\lambda \wedge \mu}\quad \text{and} \\ \operatorname{res}^N_{\lambda}\operatorname{i}_{\mu}^N & \cong \bigoplus_{S_\lambda w S_\mu \in S_\lambda \backslash S_N/S_\mu} \operatorname{i}^{\lambda}_{\lambda\wedge w\mu} \operatorname{Ad}_w \operatorname{res}^{\mu}_{w^{-1}\lambda \wedge \mu}. 
\end{aligned}$$ We first establish the formula for $\operatorname{r}^N_{\lambda}\operatorname{i}_{\mu}^N$ by applying [@Zelevinsky Theorem A3.1] to the following choices of groups: $${\mathtt{G}}=G_N,\quad {\mathtt{M}}=G_{\mu},\quad {\mathtt{U}}=U_{\mu}^N,\quad {\mathtt{P}}=P_{\mu}^N,\quad {\mathtt{N}}=G_{\lambda},\quad {\mathtt{V}}=U_{\lambda}^N, \quad {\mathtt{Q}}=P_{\lambda}^N.$$ (${\mathtt{G}}$, ${\mathtt{P}}$, etc., designate the objects denoted by those letters in [@Zelevinsky A3], while $G$, $P$, etc., refer to objects defined in this paper.) The characters ${\theta}$ and $\psi$ appearing in [@Zelevinsky] are here taken to be trivial. The double-coset space ${\mathtt{Q}}\backslash {\mathtt{G}}/{\mathtt{P}}$ is computed thus: $$P_{\lambda}^N \backslash G_N / P_{\mu}^N = (S_\lambda \ltimes H^{Y_N})\backslash (S_N\ltimes H^{Y_N}) / (S_{\mu}\ltimes H^{Y_N}) \cong S_\lambda \backslash S_N / S_{\mu}.$$ For each $w\in S_N$ the groups ${}^w{\mathtt{P}}=P_{w\mu}^N$, ${}^w{\mathtt{M}}=G_{w\mu}$, and ${}^w{\mathtt{U}}=U_{w\mu}^N$ are decomposable with respect to $({\mathtt{N}}, {\mathtt{V}})= (G_\lambda, U_\lambda^N)$ by Lemma \[lem:decomposability\]. Likewise ${}^w{\mathtt{Q}}$, ${}^w{\mathtt{N}}$, and ${}^w{\mathtt{V}}$ are decomposable with respect to $({\mathtt{M}},{\mathtt{U}})$, so the hypothesis (D) of [@Zelevinsky p. 168] is satisfied. Lemmas \[lem:G-properties\] and \[lem:P-U-properties\] yield the following identifications of the groups ${\mathtt{M}}'$, ${\mathtt{N}}'$, etc., appearing on [@Zelevinsky p. 168]: $${\mathtt{M}}'= G_{w^{-1}\lambda \wedge \mu},\quad {\mathtt{N}}' = G_{\lambda \wedge w\mu},\quad {\mathtt{V}}'=U_{w^{-1}\lambda\wedge\mu}^\mu,\quad {\mathtt{U}}'=U_{\lambda \wedge w\mu}^{\lambda}.$$ Having made these identifications, an application of [@Zelevinsky Theorem A3.1] gives the stated formula for $\operatorname{r}^N_{\lambda}\operatorname{i}_{\mu}^N$.
The proof of the formula for $\operatorname{res}^N_{\lambda}\operatorname{i}_{\mu}^N$ is very similar: we now take $${\mathtt{G}}=G_N,\quad {\mathtt{M}}=G_{\mu},\quad {\mathtt{U}}=U_{\mu}^N,\quad {\mathtt{P}}=P_{\mu}^N,\quad {\mathtt{N}}={\mathtt{Q}}=G_{\lambda},\quad {\mathtt{V}}=\{1\}.$$ We still have ${\mathtt{Q}}\backslash {\mathtt{G}}/{\mathtt{P}}\cong S_\lambda \backslash S_N/ S_\mu$, and the decomposability hypothesis is again satisfied by virtue of Lemma \[lem:decomposability\]. We now have $${\mathtt{M}}'= G_{w^{-1}\lambda \wedge \mu},\quad {\mathtt{N}}' = G_{\lambda\wedge w\mu},\quad {\mathtt{V}}'=\{1\},\quad {\mathtt{U}}' =U_{\lambda \wedge w\mu}^{\lambda},$$ and the formula from [@Zelevinsky Theorem A3.1] becomes in this case the proposed formula for $\operatorname{res}^N_{\lambda}\operatorname{i}_{\mu}^N$. Hopf algebras associated to Young sets {#sec:Hopf} ====================================== Construction of the Hopf algebras {#subsec:R-Hopf} --------------------------------- We continue to consider the family of groups $G_N=G_N(Y,H)= S_N\ltimes H^{Y_N}$ associated to a Young set $Y$ and a finite group $H$. For each finite group $G$ we let $R(G)$ denote the Grothendieck group of $\operatorname{Rep}(G)$. Thus $R(G)$ is a free abelian group with basis $\operatorname{Irr}(G)$, the set of isomorphism classes of irreducible ${\Bbbk}$-linear representations of $G$. For all pairs $K,L$ of finite sets the isomorphism $G_K\times G_L\xrightarrow{\cong} G_{K,L}$ and the bijection $\operatorname{Irr}(G_K)\times \operatorname{Irr}(G_L)\xrightarrow{(V,V')\mapsto V\otimes_{{\Bbbk}} V'} \operatorname{Irr}(G_K\times G_L)$ yield a canonical isomorphism $R(G_K)\otimes_{{\mathbb{Z}}} R(G_L) \xrightarrow{\cong} R(G_{K,L})$ which we shall frequently invoke without further comment.
We define $\mathcal R_{Y,H}$, or $\mathcal R$ for short, to be the free abelian group $$\mathcal R = \big(\bigoplus_{N\in {\operatorname{Set}}} R(G_N)\big)_{{\operatorname{Set}}^\times}$$ where the subscript indicates that we take coinvariants for the groupoid of set bijections; that is to say, we impose the relation $\rho = \operatorname{Ad}_w \rho$ whenever $\rho$ and $w$ are as in \[eq:Adw-R\]. Thus $\mathcal R$ is a free abelian group with basis $\left(\bigsqcup_{N}\operatorname{Irr}(G_N)\right)_{{\operatorname{Set}}^\times}$. We grade $\mathcal R$ so that $R(G_N)$ sits in degree $\#N$. We consider the following graded, ${\mathbb{Z}}$-linear maps: 1. multiplication: $m:{\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R}\to {\mathcal R}$ defined as the direct sum of the maps $$R(G_K)\otimes_{{\mathbb{Z}}} R(G_L) \xrightarrow{\cong} R(G_{K,L}) \xrightarrow{\operatorname{i}_{K,L}^{K\sqcup L}} R(G_{K\sqcup L}).$$ 2. comultiplication: $\Delta:{\mathcal R}\to {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R}$ defined on $\rho\in R(G_N)$ by $$\label{eq:Delta-R-definition} \Delta(\rho) = \sum_{S_N(K)\in S_N\backslash {\mathcal P({N})}} \operatorname{res}^N_{K,K^c} \rho.$$ Here the sum is over a set of representatives for the $S_N$-orbits of subsets of $N$, and $K^c=N\setminus K$. The representation $\operatorname{res}^N_{K,K^c}\rho$ of $G_{K,K^c}$ is regarded as an element of $\mathcal R\otimes_{{\mathbb{Z}}}\mathcal R$ via the canonical isomorphism $R(G_{K,K^c})\cong R(G_K)\otimes_{{\mathbb{Z}}} R(G_{K^c})$. 3. another comultiplication: $\delta:{\mathcal R}\to {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R}$ defined on $\rho\in R(G_N)$ by $$\delta(\rho) = \sum_{S_N(K)\in S_N\backslash {\mathcal P({N})}} \operatorname{r}^N_{K,K^c} \rho.$$ This is to be understood in the same way as \[eq:Delta-R-definition\]. 4. unit: $e:{\mathbb{Z}}\to {\mathcal R}$ defined by setting $e(1)\coloneqq {\operatorname{triv}}_{G_\emptyset}$, the unique element of $\operatorname{Irr}(G_\emptyset)$. 5.
counit: $\epsilon:{\mathcal R}\to {\mathbb{Z}}$ defined by setting $\epsilon({\operatorname{triv}}_{G_\emptyset})=1$, and $\epsilon(\rho)=0$ for all other irreducible representations $\rho$. Note that Lemma \[lem:w-i-r\] ensures that $m$, $\Delta$, and $\delta$ are well-defined on ${\operatorname{Set}}^\times$-coinvariants. \[thm:PSH\] Fix a Young set $Y$ and a finite group $H$. 1. The maps $m$, $\Delta$, $e$, and $\epsilon$ make ${\mathcal R}_{Y,H}$ into a graded, connected, commutative, and cocommutative Hopf algebra over ${\mathbb{Z}}$. We denote this Hopf algebra by ${\mathcal R}^{\Delta}_{Y,H}$. 2. The maps $m$, $\delta$, $e$, and $\epsilon$ and the basis $\big(\bigsqcup_{N} \operatorname{Irr}(G_N)\big)_{{\operatorname{Set}}^\times}$ make ${\mathcal R}_{Y,H}$ into a *PSH algebra*: a graded, connected, positive, self-adjoint Hopf algebra over ${\mathbb{Z}}$ (cf. [@Zelevinsky 1.4]). We denote this PSH algebra by ${\mathcal R}^{\delta}_{Y,H}$. $\mathcal R^{\delta}_{\emptyset,1}$ and $\mathcal R^{\Delta}_{\emptyset,1}$ are both the Hopf algebra of representations of the symmetric groups, which is isomorphic to the Hopf algebra ${\operatorname{Sym}}_{{\mathbb{Z}}}$ of symmetric functions with ${\mathbb{Z}}$ coefficients; see [@Zelevinsky §5, §6]. Both $\mathcal R^{\delta}_{{\mathrm{id}},H}$ and $\mathcal R^{\Delta}_{{\mathrm{id}},H}$ are the Hopf algebras constructed in [@Zelevinsky §7]. Unlike in these examples, the Hopf algebras $\mathcal R^\delta_{Y,H}$ and $\mathcal R^\Delta_{Y,H}$ are generally not isomorphic to one another: see for instance Section \[sec:graphs\]. Taking the dual of the Hopf algebra ${\mathcal R}^{\Delta}_{Y,H}$ gives a third Hopf-algebra structure on ${\mathcal R}_{Y,H}$, in which the multiplication is given by the usual induction functors $\operatorname{ind}_{G_{K,L}}^{G_{K\sqcup L}}$, while the comultiplication is given by the functors $\operatorname{r}^{K\sqcup L}_{K,L}$.
Of course, the PSH algebra ${\mathcal R}^{\delta}_{Y,H}$ is its own dual. The proof that ${\mathcal R}^\Delta$ and ${\mathcal R}^\delta$ satisfy the Hopf axioms—that is, the axioms listed in [@Zelevinsky 1.3]—is similar for the two cases, and both are similar to the case of $Y =\emptyset$ and $H=\{1\}$ established in [@Zelevinsky 6.2]. Most of the axioms follow in a very straightforward way from the basic properties of the functors $\operatorname{i}$, $\operatorname{r}$, and $\operatorname{res}$ observed in the previous section; for example, the associativity of multiplication follows from Lemma \[lem:transitivity\]. The compatibility between multiplication and comultiplication—that is, the commutativity of the diagram $$\xymatrix@C=60pt{ {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R} \ar[r]^-{m} \ar[d]_-{\Delta\otimes \Delta} & {\mathcal R} \ar[r]^-{\Delta} & {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R} \\ {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R} \otimes_{{\mathbb{Z}}} {\mathcal R} \ar[rr]^-{w\otimes x\otimes y\otimes z\longmapsto w\otimes y\otimes x\otimes z} & & {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R} \otimes_{{\mathbb{Z}}} {\mathcal R}\otimes_{{\mathbb{Z}}} {\mathcal R} \ar[u]_-{m\otimes m} }$$ and of the corresponding diagram for $\delta$—follows from Proposition \[prop:Mackey\] just as in [@Zelevinsky A3.2]. Thus ${\mathcal R}^\Delta$ and ${\mathcal R}^\delta$ are connected, graded Hopf algebras. The proofs of parts (1) and (2) diverge at this point; let us handle (2) first. We must verify that ${\mathcal R}^\delta$ satisfies the additional axioms from [@Zelevinsky 1.4]; again, the argument closely follows that of [@Zelevinsky 6.2]. The fact that $m$ and $\delta$ are adjoints to one another with respect to the inner products induced by our choice of basis follows from the fact that $\operatorname{i}_{K,L}^{K\sqcup L}$ and $\operatorname{r}^{K\sqcup L}_{K,L}$ are adjoint functors (Lemma \[lem:adjoints\]). 
The positivity of all of the structure maps relative to our chosen basis follows immediately from the fact that all of these maps are defined via functors between representation categories. This completes our proof of part (2). To complete the proof of part (1) we must show that the Hopf algebra ${\mathcal R}^\Delta$ is commutative and cocommutative. We have ${\mathcal R}^\Delta={\mathcal R}^\delta$ as algebras, and the PSH algebra ${\mathcal R}^\delta$ is automatically commutative (see [@Zelevinsky Proposition 1.6]). So we are left to prove that ${\mathcal R}^\Delta$ is cocommutative, which amounts to the assertion that for all subsets $K\subseteq N$ the diagram $$\label{eq:cocommutative-proof-1} \xymatrix@R=5pt{ & R(G_{K,K^c})\ar[r]^-{\cong} & R(G_K)\otimes_{{\mathbb{Z}}} R(G_{K^c}) \ar[dd]^-{\text{flip}}\\ R(G_{N})\ar[ur]^-{\operatorname{res}^{N}_{K,K^c}} \ar[dr]_-{\operatorname{res}^{N}_{K^c,K}} & & \\ & R(G_{K^c,K}) \ar[r]^-{\cong} & R(G_{K^c})\otimes_{{\mathbb{Z}}} R(G_K) }$$ commutes. But this is obvious because $G_{K,K^c}$ and $G_{K^c,K}$ are the same subgroup of $G_N$. Application of Clifford theory {#subsec:Clifford} ------------------------------ Let $Y$ be a Young set, let $H$ be a finite group, and let $G_N=G_N(Y,H)=S_N\ltimes H^{Y_N}$ as before. An application of Clifford theory gives a description of the set $\operatorname{Irr}(G_N)$ in terms of orbits and isotropy groups for the action of $S_N$ on the set $\operatorname{Irr}(H^{Y_N})$. In this section we shall briefly recall how this correspondence works (referring to [@James-Kerber Section 4.3], for instance, for the details); and we then use this correspondence to give another description of the Hopf algebras $\mathcal R^{\Delta}_{Y,H}$ and $\mathcal R^{\delta}_{Y,H}$. Fix a set $\widehat{H}$ of representatives for the isomorphism classes of irreducible representations of $H$. 
We then have, for each finite set $N$, a bijection $$\widehat{H}^{Y_N} \xrightarrow{F\mapsto \pi_F} \operatorname{Irr}(H^{Y_N}),\quad \pi_F(f)\coloneqq \bigotimes_{y\in Y_N} F(y)\left(f(y)\right) \in \operatorname{GL}\left(\bigotimes_{y\in Y_N} V_{F(y)}\right) \qquad (f\in H^{Y_N}).$$ Here $V_{F(y)}$ is the vector space underlying the representation $F(y)\in \widehat{H}$, and $F(y)\left(f(y)\right)$ is the linear map $V_{F(y)}\to V_{F(y)}$ by which $f(y)\in H$ acts under the representation $F(y)$. For each bijection of sets $w:N\to M$ and each $F\in \widehat{H}^{Y_N}$ we define $wF\in \widehat{H}^{Y_M}$ by $wF(y)\coloneqq F(w^{-1}y)$. Setting $M=N$ gives an action of the group $S_N$ on the function space $\widehat{H}^{Y_N}$. For each $F\in \widehat{H}^{Y_N}$ we define $$\operatorname{Aut}F \coloneqq \{w\in S_N\ |\ wF=F\} \quad \text{and}\quad G_F\coloneqq (\operatorname{Aut}F)\ltimes H^{Y_N}\subseteq G_N.$$ For each representation $\gamma$ of $\operatorname{Aut}F$ we let $\gamma\ltimes \pi_F$ be the representation of $G_F=\operatorname{Aut}F\ltimes H^{Y_N}$ on the tensor product vector space $V_\gamma\otimes_{{\Bbbk}} \bigotimes_{y\in Y_N} V_{F(y)}$, where $H^{Y_N}$ acts trivially on $V_\gamma$ and by $\pi_F$ on $\bigotimes_y V_{F(y)}$, and where $\operatorname{Aut}F$ acts on $V_\gamma$ by $\gamma$ and on $\bigotimes_y V_{F(y)}$ by permuting the factors: $$w \bigotimes_{y\in Y_N} v_y \coloneqq \bigotimes_{y\in Y_N} v_{w^{-1}y} \qquad (v_y\in V_{F(y)}).$$ This is well-defined because $V_{F(y)}$ and $V_{F(w^{-1}y)}$ are the same vector space.
For each bijection of sets $w:N\to M$ and each $F\in \widehat{H}^{Y_N}$ we have ${}^w\operatorname{Aut}F=\operatorname{Aut}wF$, giving an equivalence $$\label{eq:Adw-M} \operatorname{Ad}_w: \operatorname{Rep}(\operatorname{Aut}F)\xrightarrow{\gamma\mapsto \gamma(w^{-1}\cdot w)} \operatorname{Rep}(\operatorname{Aut}wF).$$ Clifford theory, in this case, says that the maps $$\Phi_F: \operatorname{Irr}( \operatorname{Aut}F ) \to \operatorname{Irr}(G_N),\qquad \Phi_F(\gamma)\coloneqq \operatorname{ind}_{G_F}^{G_N} (\gamma\ltimes \pi_F ),$$ defined for each $F\in \widehat{H}^{Y_N}$, assemble into a bijective map $$\label{eq:Clifford-Mackey-bijection} \Bigg( \bigsqcup_{\substack{N\in {\operatorname{Set}}, \\ F\in \widehat{H}^{Y_N}}} \operatorname{Irr}(\operatorname{Aut}F) \Bigg)_{{\operatorname{Set}}^\times} \xrightarrow{\quad \bigsqcup \Phi_F\quad} \Big( \bigsqcup_{N\in {\operatorname{Set}}} \operatorname{Irr}(G_N)\Big)_{{\operatorname{Set}}^\times}.$$ As before, the subscript ${\operatorname{Set}}^\times$ indicates the quotient space for the actions \[eq:Adw-R\] and \[eq:Adw-M\] of the groupoid ${\operatorname{Set}}^\times$. Now consider $$\label{eq:M-definition} \mathcal M_{Y,H} \coloneqq \Big( \bigoplus_{\substack{N\in {\operatorname{Set}}\\ F\in \widehat{H}^{Y_N}}} R( \operatorname{Aut}F)\Big)_{{\operatorname{Set}}^\times}$$ which is a free abelian group with graded basis $$\label{eq:M-basis} \Big( \bigsqcup_{\substack{N\in {\operatorname{Set}}, \\ F\in \widehat{H}^{Y_N}}} \operatorname{Irr}(\operatorname{Aut}F) \Big)_{{\operatorname{Set}}^\times} .$$ The bijection of bases from \[eq:Clifford-Mackey-bijection\] gives an isomorphism of groups $\Phi:\mathcal M_{Y,H} \xrightarrow{\cong} \mathcal R_{Y,H}$, and hence Theorem \[thm:PSH\] furnishes $\mathcal M_{Y,H}$ with two Hopf-algebra structures. Our purpose in this section is to describe these structures explicitly. Since $Y$ and $H$ will be fixed we shall henceforth drop them from the notation when convenient, writing $\mathcal M$ for $\mathcal M_{Y,H}$ and $\mathcal R$ for $\mathcal R_{Y,H}$.
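For the basic Young set $Y={\mathrm{id}}$ with $N=\{0,1,2\}$ and $H={\mathbb{Z}}/2$, the counting consequence of the bijection \[eq:Clifford-Mackey-bijection\] can be tested numerically. The sketch below (our illustration, not the paper's code) counts conjugacy classes of $G_N=S_3\ltimes({\mathbb{Z}}/2)^3$ by brute force and compares the total with $\sum_F \#\operatorname{Irr}(\operatorname{Aut}F)$, summed over $S_3$-orbit representatives $F\in\widehat{H}^{N}$; here $\#\operatorname{Irr}$ of a finite group is computed as its number of conjugacy classes.

```python
from itertools import product, permutations

n = 3
perms = list(permutations(range(n)))
def inv(w): return tuple(w.index(i) for i in range(n))

# G = S_3 x (Z/2)^3 with the wreath-product law; order 48 (hyperoctahedral group)
G = [(w, f) for w in perms for f in product((0, 1), repeat=n)]
def mul(a, b):
    (w1, f1), (w2, f2) = a, b
    return (tuple(w1[w2[i]] for i in range(n)),
            tuple((f1[w2[y]] + f2[y]) % 2 for y in range(n)))
def g_inv(g):
    return next(h for h in G if mul(g, h) == (tuple(range(n)), (0,) * n))

def num_classes(S, prod_fn, invert):
    seen, count = set(), 0
    for g in S:
        if g in seen: continue
        count += 1
        seen |= {prod_fn(prod_fn(x, g), invert(x)) for x in S}
    return count

lhs = num_classes(G, mul, g_inv)          # #Irr(G_N) = #conjugacy classes of G_N

# sum of #Irr(Aut F) over S_3-orbit representatives F in {triv, sgn}^N = {0,1}^3
def act(w, F): return tuple(F[inv(w)[y]] for y in range(n))
p_mul = lambda a, b: tuple(a[b[i]] for i in range(n))
rhs, seen = 0, set()
for F in product((0, 1), repeat=n):
    if F in seen: continue
    seen |= {act(w, F) for w in perms}
    stab = [w for w in perms if act(w, F) == F]   # Aut F, a Young subgroup of S_3
    rhs += num_classes(stab, p_mul, inv)

assert lhs == rhs == 10                   # both sides count Irr(S_3 x (Z/2)^3)
```

The four orbits (constant and mixed labellings) have stabilizers $S_3$, $S_2$, $S_2$, $S_3$, contributing $3+2+2+3=10$ irreducibles, in agreement with the direct count.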
For each pair of finite sets $K,L$ we have an $S_{K,L}$-equivariant embedding $$\label{eq:F-sqcup} \widehat{H}^{Y_K} \times \widehat{H}^{Y_L} \xrightarrow{\cong} \widehat{H}^{Y_K\sqcup Y_L} = \widehat{H}^{Y_{K,L}} {\hookrightarrow}\widehat{H}^{Y_{K\sqcup L}}$$ where the last arrow is defined by extending each function $F:Y_{K,L}\to \widehat{H}$ to a function $Y_{K\sqcup L}\to \widehat{H}$ by defining $F(y)={\operatorname{triv}}_H$ for all $y\in Y_{K\sqcup L}\setminus (Y_K\sqcup Y_L)$. Here ${\operatorname{triv}}_H$ denotes the one-dimensional trivial representation of $H$. We shall denote the embedding by $(F_K,F_L)\mapsto F_K\sqcup F_L$. The standard embedding $S_K\times S_L {\hookrightarrow}S_{K\sqcup L}$ restricts to an embedding $\operatorname{Aut}F_K \times \operatorname{Aut}{F_L}{\hookrightarrow}\operatorname{Aut}(F_K\sqcup F_L)$, and so we have an induction functor $$\operatorname{ind}_{\operatorname{Aut}F_K \times \operatorname{Aut}F_L}^{\operatorname{Aut}(F_K\sqcup F_L)} : \operatorname{Rep}(\operatorname{Aut}F_K\times \operatorname{Aut}F_L) \to \operatorname{Rep}(\operatorname{Aut}(F_K\sqcup F_L)).$$ On the free abelian group $\mathcal M$ we define a graded multiplication $\mathcal M\otimes_{{\mathbb{Z}}} \mathcal M\to \mathcal M$ as the direct sum of the maps $$\label{eq:M-mult-definition} R(\operatorname{Aut}F_K)\otimes_{{\mathbb{Z}}} R(\operatorname{Aut}F_L) \xrightarrow{\cong} R(\operatorname{Aut}F_K \times \operatorname{Aut}F_L) \xrightarrow{\operatorname{ind}} R(\operatorname{Aut}(F_K\sqcup F_L)).$$ The transitivity of induction ensures that $\mathcal M$ becomes an associative graded algebra with this multiplication; the unit is the trivial representation of the trivial automorphism group $\operatorname{Aut}F_\emptyset$, where $F_\emptyset\in \widehat{H}^{Y_\emptyset}$ is the empty function. \[prop:Clifford-Mackey-multiplication\] The map $\Phi : \mathcal M\to \mathcal R$ is an isomorphism of graded algebras. 
We already know that $\Phi$ is a graded isomorphism of abelian groups. The map $\Phi_\emptyset: R(\operatorname{Aut}F_\emptyset) \to R(G_\emptyset)$ sends the trivial representation to the trivial representation, which is to say, $\Phi$ sends the unit of $\mathcal M$ to the unit of $\mathcal R$. The proposition thus amounts to the assertion that for all finite sets $K,L$, and all $F_K\in \widehat{H}^{Y_K}$ and $F_L\in \widehat{H}^{Y_L}$, the diagram $$\label{eq:Clifford-Mackey-multiplication-diag} \xymatrix@C=40pt{ \operatorname{Rep}(G_{K,L}) \ar[r]^-{\operatorname{i}_{K,L}^{K\sqcup L}} & \operatorname{Rep}(G_{K\sqcup L}) \\ \operatorname{Rep}(\operatorname{Aut}F_K \times \operatorname{Aut}F_L) \ar[u]^-{\Phi \otimes \Phi } \ar[r]^-{\operatorname{ind}} & \operatorname{Rep}(\operatorname{Aut}( F_K \sqcup F_L)) \ar[u]_-{\Phi} }$$ commutes. This commutativity is a special case of [@CMO1 Theorems 3.6 & 3.14]. To be specific, set ${\mathtt{G}}=G_{K\sqcup L}$, ${\mathtt{L}}=G_{K,L}$, ${\mathtt{U}}={\mathtt{U_0}}=U_{K,L}^{K\sqcup L}$, ${\mathtt{V}}={\mathtt{V_0}}=\{1\}$, ${\mathtt{G_0}}=H^{Y_{K\sqcup L}}$, ${\mathtt{L_0}}=H^{Y_{K,L}}$, and ${\mathtt{\psi}} = \pi_{F_K}\otimes_{{\Bbbk}}\pi_{F_L} \in \operatorname{Irr}(H^{Y_{K,L}})$. (We are writing ${\mathtt{G}}$ (etc.) to designate the object called $G$ (etc.) in [@CMO1 Section 3].) We then have, still in the notation of [@CMO1], ${\mathtt{{\varphi}}}=\pi_{F_K\sqcup F_L}$, ${\mathtt{L(\psi)}}=(\operatorname{Aut}F_K\times \operatorname{Aut}F_L)\ltimes H^{Y_{K,L}}$, ${\mathtt{G({\varphi})}}=\operatorname{Aut}(F_K \sqcup F_L)\ltimes H^{Y_{K\sqcup L}}$, ${\mathtt{U({\varphi})}}={\mathtt{U}}$, ${\mathtt{V({\varphi})}}={\mathtt{V}}$, ${\mathtt{\overline{L}(\psi)}}=\operatorname{Aut}F_K \times \operatorname{Aut}F_L$, ${\mathtt{\overline{G}({\varphi})}}=\operatorname{Aut}(F_K \sqcup F_L)$, and ${\mathtt{\overline{U}({\varphi})}}={\mathtt{\overline{V}({\varphi})}}=\{1\}$. 
The cocycles appearing in [@CMO1 Theorem 3.14] are trivial in this instance. All of these identifications being made, the functor $\operatorname{i}$ appearing in [@CMO1 Theorem 3.6] is the functor $\operatorname{i}_{K,L}^{K\sqcup L}$, while the functor ${\mathtt{\operatorname{i}_{\overline{U}({\varphi}),\overline{V}({\varphi})}^{{\varphi}'}}}$ appearing in [@CMO1 Theorem 3.14] is the usual induction functor $\operatorname{Rep}(\operatorname{Aut}F_K \times \operatorname{Aut}F_L)\to \operatorname{Rep}(\operatorname{Aut}(F_K\sqcup F_L))$. Pasting together the two commuting diagrams from [@CMO1 Theorems 3.6 & 3.14] then yields the commuting diagram \[eq:Clifford-Mackey-multiplication-diag\]. To describe the comultiplication maps on $\mathcal M$ we need some more notation. For each $F\in \widehat{H}^{Y_N}$ and each subset $K\subseteq N$ we let $F{\vert}_{Y_K}$ be the restriction of the function $F$ to the subset $Y_K\subseteq Y_N$, and we let $$(\operatorname{Aut}F)_K=\operatorname{Aut}F\cap S_{K,K^c} = \{w\in \operatorname{Aut}F\ |\ wK=K\}$$ be the stabiliser of $K$ for the action of $\operatorname{Aut}F\subseteq S_N$ on the power set ${\mathcal P({N})}$. 
The group $(\operatorname{Aut}F)_K$ leaves the subsets $Y_K,Y_{K^c}\subseteq Y_N$ invariant, and we obtain an embedding of groups $$(\operatorname{Aut}F)_{K} \xrightarrow{w\mapsto (w{\vert}_{Y_K}, w{\vert}_{Y_{K^c}})} \operatorname{Aut}(F{\vert}_{Y_K})\times \operatorname{Aut}(F{\vert}_{Y_{K^c}}) \subseteq S_K\times S_{K^c}.$$ Now given $F\in \widehat{H}^{Y_N}$ and a representation $\gamma$ of $\operatorname{Aut}F$ we define $$\label{eq:Delta-M-definition} \Delta_{\mathcal M}\gamma \coloneqq \sum_{\operatorname{Aut}F(K)\in \operatorname{Aut}F\backslash {\mathcal P({N})}} \operatorname{ind}_{(\operatorname{Aut}F)_K}^{\operatorname{Aut}(F{\vert}_{Y_K})\times \operatorname{Aut}(F{\vert}_{Y_{K^c}})}\big( ( \operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_K} \gamma)\otimes_{{\Bbbk}}\pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}\big).$$ We are summing over a set of representatives for the $\operatorname{Aut}F$-orbits in ${\mathcal P({N})}$; the group $(\operatorname{Aut}F)_K $ acts on the representation $\pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}=\bigotimes_{y\in Y_N\setminus Y_{K,K^c}} F(y)$ by permuting the tensor factors; and each summand on the right-hand side of the formula is regarded as an element of $\mathcal M\otimes_{{\mathbb{Z}}}\mathcal M$ via the canonical isomorphisms $R(G\times G')\cong R(G)\otimes_{{\mathbb{Z}}} R(G')$. 
Note that when $H$ is abelian, so that each of its irreducible representations is one-dimensional, the representation $\pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}$ is the trivial one-dimensional representation of $(\operatorname{Aut}F)_K$, so the above formula simplifies to $$\label{eq:Delta-M-abelian} \Delta_{\mathcal M}\gamma \coloneqq \sum_{\operatorname{Aut}F(K)\in \operatorname{Aut}F\backslash {\mathcal P({N})}} \operatorname{ind}_{(\operatorname{Aut}F)_K}^{\operatorname{Aut}(F{\vert}_{Y_K})\times \operatorname{Aut}(F{\vert}_{Y_{K^c}})} \operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_K} \gamma \qquad (H\text{ abelian}).$$ To define the second comultiplication $\delta_{\mathcal M}$ we need one more piece of terminology: the *support* of a function $F\in \widehat{H}^{Y_N}$ is defined by $\operatorname{supp}F = \{y\in Y_N\ |\ F(y)\neq {\operatorname{triv}}_H\}$. This is an $\operatorname{Aut}F$-invariant subset of $Y_N$. Now for $F\in \widehat{H}^{Y_N}$ and $\gamma\in \operatorname{Rep}(\operatorname{Aut}F)$ we define $$\label{eq:delta-M-definition} \delta_{\mathcal M}\gamma \coloneqq \sum_{\substack{\operatorname{Aut}F(K)\in \operatorname{Aut}F\backslash {\mathcal P({N})}\\ \operatorname{supp}F \subseteq Y_{K,K^c}}} \operatorname{ind}_{(\operatorname{Aut}F)_K}^{\operatorname{Aut}(F{\vert}_{Y_K})\times \operatorname{Aut}(F{\vert}_{Y_{K^c}})} \operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_K} \gamma.$$ Finally, we define $\epsilon_{\mathcal M}:\mathcal M\to {\mathbb{Z}}$ by declaring that for the empty function $F_\emptyset\in \widehat{H}^{Y_\emptyset}$ the map $\epsilon_{\mathcal M}:R(\operatorname{Aut}F_\emptyset)\to {\mathbb{Z}}$ sends the trivial representation to $1$, while for all $N\neq\emptyset$ and all $F\in \widehat{H}^{Y_N}$ the map $\epsilon_{\mathcal M}:R(\operatorname{Aut}F)\to {\mathbb{Z}}$ is identically zero. 
\[cor:M-Hopf\] The graded algebra isomorphism $\Phi:\mathcal M\to \mathcal R$ of Proposition \[prop:Clifford-Mackey-multiplication\] relates the maps $\Delta_{\mathcal M}$, $\delta_{\mathcal M}$, and $\epsilon_{\mathcal M}$ defined above to the structure maps $\Delta$, $\delta$, and $\epsilon$ on $\mathcal R$ as follows: $$\Delta\Phi = (\Phi\otimes \Phi)\Delta_{\mathcal M}, \qquad \delta\Phi=(\Phi\otimes\Phi)\delta_{\mathcal M},\qquad \text{and}\qquad \epsilon\Phi=\epsilon_{\mathcal M}.$$ Consequently the graded algebra $\mathcal M$ equipped with the comultiplication $\Delta_{\mathcal M}$ and counit $\epsilon_{\mathcal M}$ becomes a connected, graded, commutative, and cocommutative Hopf algebra; while the graded algebra $\mathcal M$ equipped with the comultiplication $\delta_{\mathcal M}$, the counit $\epsilon_{\mathcal M}$, and the basis \[eq:M-basis\] becomes a PSH algebra. The identity $\epsilon\Phi=\epsilon_{\mathcal M}$ is easily verified: $\Phi$ is an isomorphism of unital graded algebras, and $\epsilon$ and $\epsilon_{\mathcal M}$ are the inverses of the respective unit maps. To verify the formula for $\Delta_{\mathcal M}$ we fix a function $F\in \widehat{H}^{Y_N}$, a representation $\gamma$ of $\operatorname{Aut}F$, and a subset $K\subseteq N$. 
We will prove that the term $$\label{eq:M-Hopf-1} (\Delta \Phi \gamma)_K \coloneqq \operatorname{res}^N_{K,K^c} \operatorname{ind}_{G_F}^{G_N}(\gamma\ltimes \pi_F)$$ corresponding to the orbit $S_N(K)$ in the definition of $\Delta$ is equal to the sum $$\label{eq:M-Hopf-2} \begin{aligned} & (\Phi\otimes\Phi)(\Delta_{\mathcal M}\gamma)_K \\ & = \sum_{\operatorname{Aut}F(L)\in \operatorname{Aut}F\backslash S_N(K)} (\Phi\otimes\Phi)\operatorname{ind}_{(\operatorname{Aut}F)_L}^{\operatorname{Aut}(F{\vert}_{Y_L})\times \operatorname{Aut}(F{\vert}_{Y_{L^c}})}\big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big) \end{aligned}$$ of the images, under $\Phi\otimes\Phi$, of the terms in the sum \[eq:Delta-M-definition\] associated to the $\operatorname{Aut}F$-orbits in $S_N(K)$. Choose a set $W$ of representatives for the double-coset space $\operatorname{Aut}F \backslash S_N / S_{K,K^c}$. Observing that $$\begin{aligned} G_F\backslash G_N/ G_{K,K^c} & = (\operatorname{Aut}F\ltimes H^{Y_{N}})\backslash (S_N\ltimes H^{Y_N}) / (S_{K,K^c} \ltimes H^{Y_{K,K^c}}) \\ & \cong \operatorname{Aut}F\backslash S_N/ S_{K,K^c} \end{aligned}$$ shows that $W$ is also a set of representatives for $G_F \backslash G_N /G_{K,K^c}$. Now $S_{K,K^c}$ is precisely the isotropy group of $K$ for the action of $S_N$ on ${\mathcal P({N})}$, and so the map $w\mapsto \operatorname{Aut}F(wK)$ gives a bijection $W\cong \operatorname{Aut}F\backslash S_N(K)$. 
Applying the standard Mackey formula [@Mackey Theorem 1] to \[eq:M-Hopf-1\], using the set of double-coset representatives $W$ and recalling that the relation $\operatorname{Ad}_w\rho=\rho$ holds in $\mathcal R$, we find $$\label{eq:Mackey-coproduct-step2} \begin{aligned} \big(\Delta\Phi\gamma\big)_{K} & = \sum_{w\in W} \operatorname{Ad}_{w^{-1}} \operatorname{ind}_{{}^wG_{K,K^c}\cap G_F}^{{}^w G_{K,K^c}} \operatorname{res}^{G_F}_{{}^wG_{K,K^c}\cap G_F} (\gamma\ltimes \pi_F) \\ & = \sum_{\operatorname{Aut}F(L)\in \operatorname{Aut}F\backslash S_N(K)} \operatorname{ind}_{G_{L,L^c}\cap G_F}^{G_{L,L^c}} \operatorname{res}^{G_F}_{G_{L,L^c}\cap G_F}(\gamma\ltimes \pi_F). \end{aligned}$$ For each $L\in S_N(K)$ we have $G_{L,L^c}\cap G_F = (\operatorname{Aut}F)_L \ltimes H^{Y_{L,L^c}}$. The restriction of the representation $\gamma\ltimes \pi_F$ to this group is equal to $$(\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma)\ltimes (\pi_{F{\vert}_{Y_L}}\otimes_{{\Bbbk}}\pi_{F{\vert}_{Y_{L^c}}}\otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}}).$$ The group $H^{Y_{L,L^c}}$ acts trivially in the representation $\pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}}$, so we may rewrite this last displayed representation as $$\big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big) \ltimes (\pi_{F{\vert}_{Y_L}}\otimes_{{\Bbbk}}\pi_{F{\vert}_{Y_{L^c}}}).$$ Writing $A_L\coloneqq \operatorname{Aut}(F{\vert}_{Y_L})\times \operatorname{Aut}(F{\vert}_{Y_{L^c}})$ to compactify the notation, we continue the computation from \[eq:Mackey-coproduct-step2\] to find $$\label{eq:Mackey-coproduct-step3} \begin{aligned} & \big(\Delta\Phi\gamma\big)_{K} = \sum_{\operatorname{Aut}F(L)} \operatorname{ind}_{(\operatorname{Aut}F)_L \ltimes H^{Y_{L,L^c}}}^{G_{L,L^c}} \big(\big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big)\ltimes 
(\pi_{F{\vert}_{Y_L}}\otimes_{{\Bbbk}}\pi_{F{\vert}_{Y_{L^c}}}) \big) \\ & = \sum_{\operatorname{Aut}F(L)} \operatorname{ind}_{A_L\ltimes H^{Y_{L,L^c}}}^{G_{L,L^c}} \operatorname{ind}_{(\operatorname{Aut}F)_L \ltimes H^{Y_{L,L^c}}}^{A_L\ltimes H^{Y_{L,L^c}}} \big(\big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big) \ltimes (\pi_{F{\vert}_{Y_L}}\otimes_{{\Bbbk}}\pi_{F{\vert}_{Y_{L^c}}}) \big) \\ & = \sum_{\operatorname{Aut}F(L)} \operatorname{ind}_{A_L\ltimes H^{Y_{L,L^c}}}^{G_{L,L^c}} \bigg( \Big( \operatorname{ind}_{(\operatorname{Aut}F)_L}^{A_L} \big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big)\Big) \ltimes (\pi_{F{\vert}_{Y_L}}\otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_{L^c}}})\bigg) \\ & = \sum_{\operatorname{Aut}F(L)} (\Phi\otimes\Phi) \operatorname{ind}_{(\operatorname{Aut}F)_L}^{A_L} \big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big) = (\Phi\otimes\Phi)(\Delta_{\mathcal M}\gamma)_K \end{aligned}$$ as required. We turn now to the relation $\delta\Phi=(\Phi\otimes\Phi)\delta_{\mathcal M}$, keeping all of the notation established so far. To obtain $(\delta\Phi\gamma)_{K}$ from $(\Delta\Phi\gamma)_{K}$ we must project the latter onto the space of $U_{K,K^c}^N$-fixed vectors. Equivalently, we must project each of the representations $\rho_L\coloneqq \operatorname{ind}_{G_{L,L^c}\cap G_F}^{G_{L,L^c}} \operatorname{res}^{G_F}_{G_{L,L^c}\cap G_F}(\gamma\ltimes \pi_F)$ occurring in the last line of \[eq:Mackey-coproduct-step2\] onto its subspace of $U_{L,L^c}^N$-invariants. 
The group $U_{L,L^c}^N$ acts on $\rho_L$ by a sum of $S_{L,L^c}$-conjugates of the irreducible representation $\pi_{F{\vert}_{Y_N\setminus {Y_{L,L^c}}}}$, and so the space of $U_{L,L^c}^N$-fixed vectors will be zero if one of the factors $F(y)$ (for $y\in Y_N\setminus Y_{L,L^c}$) is nontrivial; while on the other hand this space of invariants will be all of $\rho_L$ if all of the $F(y)$ are trivial. Since $F(y)={\operatorname{triv}}_H$ for all $y\in Y_N\setminus Y_{L,L^c}$ precisely when $\operatorname{supp}F\subseteq Y_{L,L^c}$, we obtain $$\begin{aligned} \big( \delta \Phi\gamma\big)_{K} & = \sum_{\substack{\operatorname{Aut}F(L)\in \operatorname{Aut}F\backslash S_N(K) \\ \operatorname{supp}F\subseteq Y_{L,L^c}}} (\Phi\otimes\Phi)\operatorname{ind}_{(\operatorname{Aut}F)_L}^{\operatorname{Aut}(F{\vert}_{Y_L})\times \operatorname{Aut}(F{\vert}_{Y_{L^c}})}\big( (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma) \otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{L,L^c}}} \big) \\ & = \sum_{\substack{\operatorname{Aut}F(L)\in \operatorname{Aut}F\backslash S_N(K) \\ \operatorname{supp}F\subseteq Y_{L,L^c}}} (\Phi\otimes\Phi)\operatorname{ind}_{(\operatorname{Aut}F)_L}^{\operatorname{Aut}(F{\vert}_{Y_L})\times \operatorname{Aut}(F{\vert}_{Y_{L^c}})} \operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_L} \gamma \\ & = (\Phi\otimes\Phi)(\delta_{\mathcal M}\gamma)_K \end{aligned}$$ as required. Zelevinsky’s structure theorem for PSH algebras [@Zelevinsky Theorems 2.2 & 3.1] identifies the PSH algebra $\mathcal M^\delta_{Y,H}$ with a tensor product of copies of the Hopf algebra ${\operatorname{Sym}}_{{\mathbb{Z}}}$ of symmetric functions, indexed by the set of primitive irreducible elements of $\mathcal M^\delta_{Y,H}$. The set of primitive irreducibles can readily be identified from the formula for the comultiplication $\delta_{\mathcal M}$. 
For each finite set $N\neq \emptyset$ let us call a function $F\in \widehat{H}^{Y_N}$ *primitive* if $\operatorname{supp}F \not\subseteq Y_{K,K^c}$ for any nonempty $K\subsetneq N$. We denote by $\widehat{H}^{Y_N}_{{\operatorname{prim}}}$ the set of all such functions. The empty function $F_\emptyset \in \widehat{H}^{Y_\emptyset}$ is, by definition, not primitive. For example: 1. If $\#N=1$ then every function in $\widehat{H}^{Y_N}$ is primitive. If $Y={\mathrm{id}}$ then these are the only primitive functions. 2. For $Y_N=N^2$, if we identify $\widehat{H}^{Y_N}$ with the set of $N\times N$ matrices with entries in $\widehat{H}$, then the non-primitive functions are those whose corresponding matrix can be put into block-diagonal form $\left[\begin{smallmatrix} {\mbox{\large$\ast$}}& {\operatorname{triv}}_H \\ {\operatorname{triv}}_H & {\mbox{\large$\ast$}}\end{smallmatrix}\right]$ by conjugating by a permutation matrix. 3. For $Y_N=S_N\setminus \{{\mathrm{id}}_N\}$ a function $F\in \widehat{H}^{Y_N}$ is primitive if and only if its support generates a transitive subgroup of $S_N$. The set of irreducible primitive elements of $\mathcal M^\delta_{Y,H}$ is now the following subset of the canonical basis: $${\operatorname{Prim}}(Y,H) \coloneqq \bigg( \bigsqcup_{\substack{N\neq\emptyset\\ F\in \widehat{H}^{Y_N}_{{\operatorname{prim}}}}} \operatorname{Irr}(\operatorname{Aut}F) \bigg)_{{\operatorname{Set}}^\times}.$$ As noted in [@Zelevinsky 4.19, 7.4], the structure theory of PSH algebras gives a parametrisation of the irreducible representations of the groups $G_N(Y,H)$ in terms of partition-valued functions on the set ${\operatorname{Prim}}(Y,H)$. In contrast to the cases $Y_N=\emptyset$ and $Y_N=N$ considered in [@Zelevinsky], this parametrisation for a general Young set does not necessarily reduce the classification of the irreducible representations of the $G_N(Y,H)$s to a manageable problem. 
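To make the matrix picture of example 2 concrete, here is a small brute-force count (our own sketch, not part of the text; we encode ${\operatorname{triv}}_H$ as $0$ and write $h$ for $\#\widehat{H}$):

```python
from itertools import product, combinations

def primitive_count(n, h):
    """Count primitive functions F: Y_N -> Irr(H) for Y_N = N x N,
    where N = {0, ..., n-1} and #Irr(H) = h (triv_H encoded as 0).
    F is primitive iff supp F is NOT contained in Y_K + Y_{K^c}
    for every nonempty proper subset K of N."""
    N = range(n)
    cells = [(i, j) for i in N for j in N]  # Y_N = N^2
    count = 0
    for values in product(range(h), repeat=len(cells)):
        F = dict(zip(cells, values))
        supp = {c for c in cells if F[c] != 0}
        prim = True
        for k in range(1, n):  # nonempty proper subsets K
            for K in combinations(N, k):
                K = set(K)
                # Y_K + Y_{K^c}: both coordinates in K, or both outside K
                block = {(i, j) for (i, j) in cells if (i in K) == (j in K)}
                if supp <= block:
                    prim = False
        count += prim
    return count

# #N = 1: no nonempty proper subsets, so all h functions are primitive.
# #N = 2, #Irr(H) = 2: a 2x2 matrix over {triv, chi} is primitive iff
# some off-diagonal entry is nontrivial, giving 16 - 4 = 12 functions.
print(primitive_count(1, 3), primitive_count(2, 2))
```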
In the example considered in Section \[sec:graphs\], for instance, the set of primitive irreducibles contains all irreducible representations of all finite groups; see Remark \[rem:graph-PSH\]. The basic subalgebra {#subsec:basic} -------------------- We continue to fix a Young set $Y$ and auxiliary group $H$, and often omit them from the notation. We are going to construct Hopf subalgebras $\mathcal B^{\Delta/\delta}$ of our Hopf algebras $\mathcal M^{\Delta/\delta}\cong \mathcal R^{\Delta/\delta}$ from the representations of the base group $H^{Y_N}\subseteq G_N$. When $H$ is abelian the algebra $\mathcal B^{\Delta}$ is the Hopf algebra associated by Schmitt in [@Schmitt-HACS Section 3.3] to the coherent exponential species $N\mapsto \widehat{H}^{Y_N}$; see Proposition \[prop:HACS\]. As an additive group we define $$\mathcal B=\mathcal B_{Y,H}\coloneqq \big(\bigoplus_{N\in {\operatorname{Set}}}R(H^{Y_N}) \big)_{{\operatorname{Set}}^\times}$$ where the subscript ${\operatorname{Set}}^\times$ again indicates coinvariants by set isomorphisms: that is, we impose the relation $\pi=\operatorname{Ad}_w\pi$ in $\mathcal B$ for all representations $\pi$ of $H^{Y_N}$ and all bijective maps $w:N\to M$. Thus $\mathcal B$ is a free abelian group with basis $\big( \bigsqcup_{N} \{\pi_F\ |\ F\in\widehat{H}^{Y_N}\}\big)_{{\operatorname{Set}}^\times}$. We grade $\mathcal B$ by putting $R(H^{Y_N})$ in degree $\#N$. The operation $$\widehat{H}^{Y_K}\times \widehat{H}^{Y_L} \xrightarrow{(F_K,F_L)\mapsto F_K\sqcup F_L} \widehat{H}^{Y_{K\sqcup L}}$$ of \[eq:F-sqcup\] induces a multiplication $R(H^{Y_K})\otimes_{{\mathbb{Z}}} R(H^{Y_L}) \to R(H^{Y_{K\sqcup L}})$, turning $\mathcal B$ into an associative graded algebra, with unit $\pi_{F_\emptyset}$ (the one-dimensional trivial representation of the trivial group $H^{Y_\emptyset}$). We define the counit $\epsilon_{\mathcal B}$ by $\epsilon_{\mathcal B}\pi_{F_\emptyset}=1$ and $\epsilon_{\mathcal B}\pi_F=0$ for all other $F$. 
Given $F\in \widehat{H}^{Y_N}$ we define $$\label{eq:Delta-B-def} \begin{aligned} \Delta_{\mathcal B} \pi_F & \coloneqq \sum_{K\subseteq N} \left(\dim \pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}\right) \pi_{F{\vert}_{Y_{K}}} \otimes \pi_{F{\vert}_{Y_{K^c}}}\qquad \text{and}\\ \delta_{\mathcal B} \pi_F & \coloneqq \sum_{\substack{K\subseteq N \\ \operatorname{supp}F \subseteq Y_{K,K^c}}} \pi_{F{\vert}_{Y_{K}}} \otimes \pi_{F{\vert}_{Y_{K^c}}}. \end{aligned}$$ Here we have $$\dim \pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}} = \prod_{y\in Y_N\setminus Y_{K,K^c}} \dim F(y)$$ where $\dim$ denotes the dimension of the underlying ${\Bbbk}$-vector space. When $H$ is abelian all of these dimensions are $1$ and so the formula for $\Delta_{\mathcal B}$ simplifies to $$\label{eq:Delta-B-abelian} \qquad \qquad \qquad \Delta_{\mathcal B} \pi_F = \sum_{K\subseteq N} \pi_{F{\vert}_{Y_{K}}} \otimes \pi_{F{\vert}_{Y_{K^c}}} \qquad\qquad \text{($H$ abelian)}.$$ \[example:binomial-Hopf-algebra\] Taking $H=1$ the trivial group, and $Y$ an arbitrary Young set, there is a unique $F_N\in \widehat{H}^{Y_N}$ for each finite set $N$, and the map $\pi_{F_N}\mapsto x^{\#N}$ induces an isomorphism $\mathcal B^{\delta}_{Y,1}=\mathcal B^{\Delta}_{Y,1}\xrightarrow{\cong} {\mathbb{Z}}[x]$ to the binomial Hopf algebra over ${\mathbb{Z}}$, i.e., ${\mathbb{Z}}[x]$ with its usual multiplication and with comultiplication $\Delta(x^n) = \sum_{k=0}^n \binom{n}{k} x^k \otimes x^{n-k}$. For each finite group $G$ we let ${\operatorname{reg}}_G$ be the regular representation: i.e., the representation on ${\Bbbk}^G$ by permuting coordinates. 
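As a numerical sanity check on Example \[example:binomial-Hopf-algebra\] (an illustrative sketch of ours, with hypothetical helper names): the fact that $\Delta$ is an algebra map, $\Delta(x^m x^n)=\Delta(x^m)\Delta(x^n)$, unwinds to the Vandermonde convolution $\binom{m+n}{k}=\sum_j \binom{m}{j}\binom{n}{k-j}$.

```python
from math import comb

def coprod(n):
    """Delta(x^n) = sum_k C(n,k) x^k (tensor) x^{n-k} in the binomial
    Hopf algebra, stored as a dict {(k, n-k): coefficient}."""
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def tensor_mult(a, b):
    """Multiply in Z[x] (tensor) Z[x]:
    (x^i (x) x^j) * (x^k (x) x^l) = x^{i+k} (x) x^{j+l}."""
    out = {}
    for (i, j), c in a.items():
        for (k, l), d in b.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return out

# Delta is an algebra homomorphism: Delta(x^{m+n}) = Delta(x^m) Delta(x^n).
m, n = 3, 4
assert coprod(m + n) == tensor_mult(coprod(m), coprod(n))
```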
\[prop:regular-embedding\] The map $${\operatorname{reg}}: \mathcal B \to \mathcal M,\qquad \pi_F \mapsto {\operatorname{reg}}_{\operatorname{Aut}F}$$ is an embedding of unital graded algebras, and it satisfies $$\Delta_{\mathcal M} {\operatorname{reg}}= ({\operatorname{reg}}\otimes {\operatorname{reg}})\Delta_{\mathcal B},\quad \delta_{\mathcal M}{\operatorname{reg}}= ({\operatorname{reg}}\otimes{\operatorname{reg}})\delta_{\mathcal B},\quad \text{and} \quad \epsilon_{\mathcal M}{\operatorname{reg}}= \epsilon_{\mathcal B}.$$ Thus the comultiplication maps $\Delta_{\mathcal B}$ and $\delta_{\mathcal B}$ each equip $\mathcal B$ with the structure of a connected, commutative and cocommutative graded Hopf algebra. Note that the map ${\operatorname{reg}}:\mathcal B\to \mathcal M$ does not send irreducibles to irreducibles. In particular, $\mathcal B^\delta$ is not a PSH algebra, as is already evident in Example \[example:binomial-Hopf-algebra\]. The map ${\operatorname{reg}}$ is clearly injective, graded, and intertwines the units and counits. It is also easy to see that ${\operatorname{reg}}$ is a morphism of algebras: given $F_K\in \widehat{H}^{Y_K}$ and $F_L\in \widehat{H}^{Y_L}$, the tensor product ${\operatorname{reg}}_{\operatorname{Aut}F_K}\otimes_{{\Bbbk}} {\operatorname{reg}}_{\operatorname{Aut}F_L}$ is the regular representation of $\operatorname{Aut}F_K \times \operatorname{Aut}F_L$, and performing the multiplication in $\mathcal M$—i.e., inducing this representation up to $\operatorname{Aut}(F_K\sqcup F_L)$—gives the regular representation of $\operatorname{Aut}(F_K\sqcup F_L)$. It remains to prove that $\Delta_{\mathcal M}{\operatorname{reg}}= ({\operatorname{reg}}\otimes{\operatorname{reg}})\Delta_{\mathcal B}$ and $\delta_{\mathcal M}{\operatorname{reg}}=({\operatorname{reg}}\otimes{\operatorname{reg}})\delta_{\mathcal B}$. 
To do this we first note that for each $w\in \operatorname{Aut}F$ and each $K\subseteq N$ we have $\pi_{F{\vert}_{Y_{wK}}}=\pi_{F{\vert}_{wY_K}} = \operatorname{Ad}_{w} \pi_{(w^{-1}F){\vert}_{Y_K}} = \pi_{F{\vert}_{Y_K}}$ in $\mathcal B$. So the summands in the definitions \[eq:Delta-B-def\] are constant on the $\operatorname{Aut}F$-orbits in ${\mathcal P({N})}$. The number of sets $wK$ in the orbit $\operatorname{Aut}F(K)$ is equal to the index $[\operatorname{Aut}F : (\operatorname{Aut}F)_K]$, and so we may rewrite the definitions as follows: $$\begin{aligned} \Delta_{\mathcal B} \pi_F & \coloneqq \sum_{\operatorname{Aut}F(K) \in \operatorname{Aut}F\backslash {\mathcal P({N})}} \left(\dim \pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}\right) [\operatorname{Aut}F: (\operatorname{Aut}F)_K] \pi_{F{\vert}_{Y_{K}}} \otimes \pi_{F{\vert}_{Y_{K^c}}}\\ \delta_{\mathcal B} \pi_F & \coloneqq \sum_{\substack{\operatorname{Aut}F(K)\in \operatorname{Aut}F\backslash {\mathcal P({N})} \\ \operatorname{supp}F \subseteq Y_{K,K^c}}} [\operatorname{Aut}F: (\operatorname{Aut}F)_K] \pi_{F{\vert}_{Y_{K}}} \otimes \pi_{F{\vert}_{Y_{K^c}}}. \end{aligned}$$ Comparing the above formulas with the definitions \[eq:Delta-M-definition\] and \[eq:delta-M-definition\] of $\Delta_{\mathcal M}$ and $\delta_{\mathcal M}$, we see that we must prove, for all $K\subseteq N$ and all $F\in \widehat{H}^{Y_N}$, that $$\begin{aligned} & \operatorname{ind}_{(\operatorname{Aut}F)_K}^{\operatorname{Aut}(F{\vert}_{Y_K})\times \operatorname{Aut}(F{\vert}_{Y_{K^c}})}\big( ( \operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_K} {\operatorname{reg}}_{\operatorname{Aut}F})\otimes_{{\Bbbk}}\pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}\big)\\ & = (\dim \pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}) [\operatorname{Aut}F : (\operatorname{Aut}F)_{K}] {\operatorname{reg}}_{\operatorname{Aut}(F{\vert}_{Y_K})\times \operatorname{Aut}( F{\vert}_{Y_{K^c}})}. 
\end{aligned}$$ Since $\operatorname{ind}$ sends regular representations to regular representations it will suffice to prove that $$\label{eq:B-proof-last} (\operatorname{res}^{\operatorname{Aut}F}_{(\operatorname{Aut}F)_{K}} {\operatorname{reg}}_{\operatorname{Aut}F})\otimes_{{\Bbbk}} \pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}} = (\dim \pi_{F{\vert}_{Y_N\setminus Y_{K,K^c}}}) [\operatorname{Aut}F : (\operatorname{Aut}F)_{K}] {\operatorname{reg}}_{(\operatorname{Aut}F)_K}.$$ For every group $G$, subgroup $G'$, and representation $\rho\in \operatorname{Rep}(G')$ we have $$\operatorname{res}^{G}_{G'}{\operatorname{reg}}_{G} = [G: G'] {\operatorname{reg}}_{G'} \quad \text{ and }\quad {\operatorname{reg}}_{G'}\otimes_{{\Bbbk}} \rho = (\dim \rho){\operatorname{reg}}_{G'}$$ so the equality does hold. When $H$ is abelian the Hopf algebra $\mathcal B^{\Delta}$ is the same as one constructed in [@Schmitt-HACS], as we shall now explain. \[prop:HACS\] Let $Y$ be a Young set and let $H$ be a finite *abelian* group. 1. The contravariant functor $E:{\operatorname{Set}}^{{\mathrm{inj}}}\to {\operatorname{Set}}$ defined on objects by $N\mapsto \widehat{H}^{Y_N}$ and on morphisms by $w \mapsto (F\mapsto F\circ Y_w)$ is a *coherent exponential $R$-species* as defined in [@Schmitt-HACS 3.3]: it is the exponential of the contravariant functor ${\operatorname{Set}}^\times\to {\operatorname{Set}}^\times$ given by $N\mapsto \widehat{H}^{Y_N}_{{\operatorname{prim}}}$. 2. The Hopf algebra $\mathcal B_{Y,H}^{\Delta}$ is isomorphic to the Hopf algebra $\mathcal B_E$ associated to the coherent exponential species $E$ in [@Schmitt-HACS 3.3]. Fix a finite set $N$ and a function $F\in \widehat{H}^{Y_N}$. 
There is a unique partition $\lambda=(L_i\ |\ i\in I)\in {\operatorname{Part}}_N$ and primitive functions $F_i\in \widehat{H}^{Y_{L_i}}_{{\operatorname{prim}}}$ such that $F=\bigsqcup_{i\in I} F_i$: namely, take $\lambda \coloneqq \bigwedge_{\operatorname{supp}F\subseteq Y_{\lambda'}} \lambda'$ and, writing $\lambda=(L_i \ |\ i\in I)$, take $F_i\coloneqq F{\vert}_{L_i}$. The map sending $F$ to the assembly $\{F_i\ |\ i\in I\}$ then identifies $E$ with the exponential species $\exp \widehat{H}^{Y}_{{\operatorname{prim}}}$. The coherence of this species amounts to the property that for each subset $K\subseteq N$ we have $F{\vert}_K = \bigsqcup_i F_i{\vert}_{K\cap L_i}$, where the $L_i$ and $F_i$ are as above; this is clear, since $\operatorname{supp}(F{\vert}_K) = K\cap \operatorname{supp}F \subseteq K\cap Y_\lambda$, and $F_i=F{\vert}_{L_i}$. Now the identification between Schmitt’s $\mathcal B_E$ and our $\mathcal B_{Y,H}^\Delta$ follows immediately from a comparison of the definitions of multiplication and comultiplication in these two Hopf algebras. The canonical character and symmetric functions {#subsec:zeta} ----------------------------------------------- We conclude our general study of the Hopf algebras $\mathcal R_{Y,H}^{\Delta/\delta}$, $\mathcal M_{Y,H}^{\Delta/\delta}$, and $\mathcal B_{Y,H}^{\Delta/\delta}$ by observing that they all carry a canonical ${\mathbb{Z}}$-valued character, and hence a canonical Hopf-algebra homomorphism into the Hopf algebra of symmetric functions. \[def:zeta\] Let $Y$ be a Young set and let $H$ be a finite group, and consider the algebras $\mathcal R=\mathcal R_{Y,H}$, $\mathcal M=\mathcal M_{Y,H}$, and $\mathcal B=\mathcal B_{Y,H}$. We define ${\mathbb{Z}}$-linear maps $\zeta_{\mathcal R}:\mathcal R\to {\mathbb{Z}}$, $\zeta_{\mathcal M}:\mathcal M\to {\mathbb{Z}}$ and $\zeta_{\mathcal B}:\mathcal B\to {\mathbb{Z}}$ as follows: 1. 
For each finite set $N$ and each $\rho\in \operatorname{Irr}(G_N(Y,H))$, $$\zeta_{\mathcal R}(\rho) = \begin{cases} 1 & \text{if }\rho={\operatorname{triv}}_{G_N} \\ 0 & \text{otherwise.}\end{cases}$$ 2. For each $F\in \widehat{H}^{Y_N}$ and each $\gamma\in \operatorname{Irr}(\operatorname{Aut}F)$, $$\zeta_{\mathcal M}(\gamma) = \begin{cases} 1 & \text{if $\operatorname{supp}F=\emptyset$ and $\gamma={\operatorname{triv}}_{\operatorname{Aut}F}$}\\ 0 & \text{otherwise.} \end{cases}$$ 3. For each $F\in \widehat{H}^{Y_N}$, $$\zeta_{\mathcal B}(\pi_F) = \begin{cases} 1 & \text{if }\operatorname{supp}F=\emptyset \\ 0 & \text{otherwise.}\end{cases}$$ \[lem:zeta\] Each of the maps $\zeta$ defined above is an algebra homomorphism, and the diagram $$\xymatrix@C=60pt@R=30pt{ \mathcal B \ar[r]^-{{\operatorname{reg}}} \ar[dr]_-{\zeta_{\mathcal B }} & \mathcal M \ar[d]^-{\zeta_{\mathcal M }} \ar[r]^-{\Phi}_-{\cong} & \mathcal R \ar[dl]^-{\zeta_{\mathcal R }}\\ & {\mathbb{Z}}& }$$ commutes. Here $\Phi$ is the isomorphism of Proposition \[prop:Clifford-Mackey-multiplication\]. We will prove that the diagram commutes and that $\zeta_{\mathcal R}$ is an algebra homomorphism. Since the maps $\mathcal B\to \mathcal M \to \mathcal R$ are algebra homomorphisms, this will imply that $\zeta_{\mathcal M}$ and $\zeta_{\mathcal B}$ are also algebra homomorphisms. For each $F\in \widehat{H}^{Y_N}$ the regular representation of $\operatorname{Aut}F$ decomposes as one copy of the trivial representation plus some nontrivial representations. We thus have $$\zeta_{\mathcal M}{\operatorname{reg}}(\pi_F) = \zeta_{\mathcal M}({\operatorname{triv}}_{\operatorname{Aut}F}) + \sum \zeta_{\mathcal M} {\small\left(\begin{array}{c}\text{nontrivial}\\ \text{representations} \end{array} \right) }= \begin{cases} 1 & \text{if $\operatorname{supp}F=\emptyset$} \\ 0 & \text{otherwise}\end{cases}$$ which is equal to $\zeta_{\mathcal B}(\pi_F)$. So the left-hand triangle in the diagram commutes. 
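The facts about regular representations used in this argument, that each irreducible occurs in ${\operatorname{reg}}_G$ with multiplicity equal to its dimension, and that ${\operatorname{reg}}_G\otimes\rho=(\dim\rho)\,{\operatorname{reg}}_G$, can be checked on characters for a small concrete group such as $S_3$ (a sketch of ours; the character table of $S_3$ is standard):

```python
from fractions import Fraction

# Character table of S3 on the conjugacy classes [e, (12), (123)],
# with class sizes [1, 3, 2]; irreducibles: trivial, sign, 2-dim standard.
classes = [1, 3, 2]
order = sum(classes)  # |S3| = 6
chars = {
    "triv": [1, 1, 1],
    "sign": [1, -1, 1],
    "std":  [2, 0, -1],
}
reg = [order, 0, 0]  # character of the regular representation

def inner(chi, psi):
    """<chi, psi> = (1/|G|) * sum over classes of size * chi * psi
    (all characters here are real-valued)."""
    return sum(Fraction(s * a * b, order)
               for s, a, b in zip(classes, chi, psi))

# Each irreducible occurs in reg_G with multiplicity equal to its dimension.
for chi in chars.values():
    assert inner(reg, chi) == chi[0]

# reg_G (tensor) rho = (dim rho) reg_G, checked as a pointwise product
# of characters:
rho = chars["std"]
assert [r * c for r, c in zip(reg, rho)] == [rho[0] * r for r in reg]
```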
Next, given $F\in \widehat{H}^{Y_N}$ and $\gamma\in \operatorname{Irr}(\operatorname{Aut}F)$, recall that the isomorphism $\Phi:\mathcal M\to \mathcal R$ of Proposition \[prop:Clifford-Mackey-multiplication\] sends $\gamma\in \operatorname{Irr}(\operatorname{Aut}F)$ to the representation $\operatorname{ind}_{G_F}^{G_N} (\gamma\ltimes \pi_F)$ of $G_N$. This representation is trivial precisely when $\pi_F$ is the trivial representation of $H^{Y_N}$—i.e., when $\operatorname{supp}F=\emptyset$—and when $\gamma$ is the trivial representation of $\operatorname{Aut}F=S_N$. Thus $\zeta_{\mathcal R} \Phi(\gamma) = \zeta_{\mathcal M}(\gamma)$, and so the diagram in the lemma commutes. Finally, to show that $\zeta_{\mathcal R}$ is an algebra homomorphism, fix finite sets $K$ and $L$ and irreducible representations $\rho_K\in \operatorname{Irr}(G_K)$ and $\rho_L\in \operatorname{Irr}(G_L)$. The product $\rho_K\rho_L$ of these representations in $\mathcal R$ is the representation $\operatorname{i}_{K,L}^{K\sqcup L}(\rho_K\otimes_{{\Bbbk}}\rho_L)$ of $G_{K\sqcup L}$. Since $\operatorname{i}_{K,L}^N$ is adjoint to $\operatorname{r}_{K,L}^N$ (Lemma \[lem:adjoints\]), and since $\operatorname{r}^N_{K,L}({\operatorname{triv}}_{G_N}) = {\operatorname{triv}}_{G_{K,L}}$ (obviously), we have $$\zeta_{\mathcal R}(\rho_K\rho_L) = \dim \operatorname{Hom}_{G_N} \left({\operatorname{triv}}_{G_N}, \operatorname{i}_{K,L}^N(\rho_K\otimes_{{\Bbbk}}\rho_L)\right) = \dim \operatorname{Hom}_{G_{K,L}} \left({\operatorname{triv}}_{G_{K,L}}, \rho_K\otimes_{{\Bbbk}} \rho_L \right) .$$ The last intertwining space is one-dimensional if both $\rho_K$ and $\rho_L$ are trivial, and it is zero otherwise. Thus $\zeta_{\mathcal R}(\rho_K\rho_L)=\zeta_{\mathcal R}(\rho_K)\zeta_{\mathcal R}(\rho_L)$ as required. Let ${\operatorname{Sym}}_{{\mathbb{Z}}}$ denote the Hopf algebra of symmetric functions, in variables $x_1,x_2,\ldots$, with ${\mathbb{Z}}$ coefficients. 
It follows from [@ABS Theorem 4.3] and Lemma \[lem:zeta\] that there is a commuting diagram of morphisms of Hopf algebras $$\label{eq:Psi-diagram} \xymatrix@C=60pt@R=30pt{ \mathcal B^{\Delta} \ar[r]^-{{\operatorname{reg}}} \ar[dr]_-{\Psi_{\mathcal B}} & \mathcal M^\Delta \ar[d]^-{\Psi_{\mathcal M}} \ar[r]^-{\Phi}_-{\cong} & \mathcal R^\Delta\ar[dl]^-{\Psi_{\mathcal R }}\\ & {\operatorname{Sym}}_{{\mathbb{Z}}} & }$$ determined uniquely by the requirement that $$\Psi_{\mathcal R}(\rho)(1,0,0,\ldots) = \zeta_{\mathcal R}(\rho)\quad \text{for all }\rho\in \mathcal R.$$ (There is also a corresponding diagram for $\mathcal B^\delta$, $\mathcal M^\delta$, and $\mathcal R^\delta$, but here we shall focus on the $\Delta$ Hopf algebras.) We are going to compute the maps $\Psi$ explicitly in terms of monomial symmetric functions, assuming the auxiliary group $H$ to be abelian. First we shall need some more notation. Consider the set ${\operatorname{Comp}}_N$ of *compositions* of $N$: these are *ordered* lists $\kappa=(K_1,\ldots, K_{\ell})$ of mutually disjoint, nonempty blocks $K_i\subseteq N$ satisfying $\bigcup_i K_i=N$. Each composition $\kappa$ determines a partition $\overline{\kappa}\in {\operatorname{Part}}_N$ by forgetting the order of the blocks, and we shall accordingly extend the notation previously established for partitions to compositions: thus $G_\kappa$ means $G_{\overline{\kappa}}$, and so on. As with partitions, the group $S_N$ acts on ${\operatorname{Comp}}_N$. The isotropy group of $\kappa\in {\operatorname{Comp}}_N$ is precisely the Young subgroup $S_{\kappa}\subseteq S_N$. The $S_N$-orbits in ${\operatorname{Comp}}_N$ are parametrised by the set of *integer compositions* ${\operatorname{Comp}}_{\# N}$—i.e., the set of ordered lists of positive integers summing to $\#N$—via the map sending a set composition $\kappa=(K_1,\ldots,K_\ell)$ to the integer composition $\#\kappa=(\#K_1,\ldots,\#K_\ell)$.
For each integer composition $\alpha \in {\operatorname{Comp}}_{\#N}$ we let ${\operatorname{Comp}}_{N,\alpha}$ denote the corresponding orbit: $${\operatorname{Comp}}_{N,\alpha}\coloneqq \{\kappa\in {\operatorname{Comp}}_N\ |\ \#\kappa=\alpha\}.$$ Now for each function $F\in \widehat{H}^{Y_N}$, and each integer composition $\alpha \in {\operatorname{Comp}}_{\#N}$, we define $${\operatorname{Comp}}_{F, \alpha} \coloneqq \{\kappa \in {\operatorname{Comp}}_{N,\alpha}\ |\ \operatorname{supp}F\cap Y_{{\kappa}}=\emptyset\}.$$ The action of $S_N$ on ${\operatorname{Comp}}_N$ restricts to an action of $\operatorname{Aut}F$ on ${\operatorname{Comp}}_{F, \alpha}$, and we let $\rho_{F,\alpha}$ be the corresponding permutation representation of $\operatorname{Aut}F$ on ${\Bbbk}^{{\operatorname{Comp}}_{F,\alpha}}$. For each integer composition $\alpha\in {\operatorname{Comp}}_n$ we let $M_\alpha$ denote the associated monomial quasisymmetric function [@Stanley-EC2 7.19]. Although we will ultimately be writing down formulas for symmetric functions, the expressions are more natural when written in terms of the quasisymmetric functions $M_{\alpha}$. We now return to the diagram \[eq:Psi-diagram\]. An explicit formula, involving iterated comultiplication, is given in [@ABS (4.2)] for the Hopf-algebra morphism to ${\operatorname{Sym}}_{{\mathbb{Z}}}$ induced by a character.
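The orbit structure of set compositions is easy to check by brute force: the orbit ${\operatorname{Comp}}_{N,\alpha}$ has size $\#N!/\prod_i \alpha_i!$, the index of the Young subgroup $S_{\kappa_\alpha}$ in $S_N$. A minimal Python sketch (the helper name `set_compositions` is ours, purely for illustration):

```python
from itertools import permutations
from math import factorial, prod

def set_compositions(N, alpha):
    """Enumerate the orbit of set compositions of N whose ordered
    block sizes form the integer composition alpha."""
    N = list(N)
    assert sum(alpha) == len(N) and all(a > 0 for a in alpha)
    seen = set()
    for perm in permutations(N):
        blocks, i = [], 0
        for a in alpha:
            blocks.append(frozenset(perm[i:i + a]))
            i += a
        seen.add(tuple(blocks))
    return seen

# The orbit size equals the index of the Young subgroup S_alpha in S_N:
for alpha in [(2, 1, 1), (2, 2), (4,)]:
    orbit = set_compositions(range(4), alpha)
    assert len(orbit) == factorial(4) // prod(factorial(a) for a in alpha)
```

For example, $N=\{0,1,2,3\}$ and $\alpha=(2,1,1)$ give an orbit of size $4!/2!=12$.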
In the case of the Hopf algebra $\mathcal R^\Delta$, where the comultiplication $\Delta$ is given by a simple formula, the iterates of the comultiplication are easily computed, and the formula [@ABS (4.2)] for the map $\Psi_{\mathcal R}:\mathcal R\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ takes a correspondingly simple form: for each representation $\rho$ of $G_N$ we have $$\label{eq:Psi-R} \Psi_{\mathcal R}(\rho) = \sum_{\alpha\in {\operatorname{Comp}}_{\#N}} \left(\dim \rho^{G_{\kappa_\alpha}}\right) M_{\alpha}$$ where $\kappa_\alpha$ is any element of the orbit ${\operatorname{Comp}}_{N,\alpha}$, and $\dim \rho^{G_{\kappa_{\alpha}}}$ is the ${\Bbbk}$-dimension of the space of $G_{\kappa_\alpha}$-fixed vectors in the representation $\rho$. The formula \[eq:Psi-R\] is valid for all auxiliary groups $H$. We shall now use this formula and the diagram \[eq:Psi-diagram\] to compute the maps $\Psi_{\mathcal M}$ and $\Psi_{\mathcal B}$, under the assumption that $H$ is abelian. \[prop:Psi-M\] Let $Y$ be a Young set, let $H$ be a finite *abelian* group, and consider the Hopf algebras $\mathcal M=\mathcal M^{\Delta}_{Y,H}$ and $\mathcal B=\mathcal B_{Y,H}^{\Delta}$.
The Hopf-algebra morphism $\Psi_{\mathcal M}:\mathcal M\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ induced by the character $\zeta_{\mathcal M}$ is given, for each finite set $N$, each function $F\in \widehat{H}^{Y_N}$, and each representation $\gamma$ of $\operatorname{Aut}F$, by $$\Psi_{\mathcal M}(\gamma) = \sum_{\alpha \in {\operatorname{Comp}}_{\#N}} \left(\dim\operatorname{Hom}_{\operatorname{Aut}F}(\rho_{F,\alpha},\gamma)\right) M_\alpha.$$ The Hopf-algebra morphism $\Psi_{\mathcal B}:\mathcal B\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ is given, for each $F\in \widehat{H}^{Y_N}$, by $$\Psi_{\mathcal B}(\pi_F) = \sum_{\alpha\in {\operatorname{Comp}}_{\#N}} \left( \# {\operatorname{Comp}}_{F,\alpha} \right) M_\alpha.$$ Fix $N$, $F$, and $\gamma$, let $\alpha\in {\operatorname{Comp}}_{\#N}$ be an integer composition, and let $\kappa_\alpha$ be any element of ${\operatorname{Comp}}_{N,\alpha}$. Since $\Psi_{\mathcal M} = \Psi_{\mathcal R}\circ \Phi$, the formula \[eq:Psi-R\] shows that the coefficient of $M_\alpha$ in $\Psi_{\mathcal M}(\gamma)$ is the dimension of $ (\operatorname{ind}_{G_F}^{G_N}(\gamma\ltimes \pi_F))^{G_{\kappa_\alpha}}$.
The Mackey formula for $\operatorname{ind}$ and $\operatorname{res}$ implies that this dimension is equal to $$\label{eq:Phi-M-proof-1} \sum_{G_FwG_{\kappa_\alpha}\in G_F\backslash G_N/ G_{\kappa_\alpha}} \dim\operatorname{Hom}_{ {}^wG_{{\kappa_\alpha}}\cap G_F} ({\operatorname{triv}}, \gamma\ltimes \pi_F).$$ Considering the double-coset space indexing the sum, we find $$G_F\backslash G_N/G_{\kappa_\alpha} = (\operatorname{Aut}F\ltimes H^{Y_N})\backslash (S_N\ltimes H^{Y_N})/ (S_{\kappa_\alpha}\ltimes H^{Y_{\kappa_\alpha}}) \cong \operatorname{Aut}F\backslash S_N / S_{\kappa_\alpha}.$$ Now recall that $S_{\kappa_\alpha}$ is the isotropy group and ${\operatorname{Comp}}_{N,\alpha}$ is the orbit of $\kappa_\alpha$ for the action of $S_N$ on ${\operatorname{Comp}}_N$; thus the map $w\mapsto w\kappa_\alpha$ induces a bijection $$\operatorname{Aut}F\backslash S_N/S_{\kappa_\alpha} \xrightarrow{\cong} \operatorname{Aut}F\backslash {\operatorname{Comp}}_{N,\alpha}.$$ For each $w\in S_N$, setting $\kappa=w\kappa_\alpha$, we have $${}^wG_{\kappa_\alpha}\cap G_F = (S_{\kappa} \cap \operatorname{Aut}F) \ltimes H^{Y_{\kappa}} = (\operatorname{Aut}F)_{\kappa}\ltimes H^{Y_{\kappa}}$$ where $(\operatorname{Aut}F)_{\kappa}$ indicates the isotropy group of $\kappa$ in $\operatorname{Aut}F$. Making these identifications, \[eq:Phi-M-proof-1\] becomes $$\label{eq:Phi-M-proof-2} \sum_{\operatorname{Aut}F(\kappa)\in \operatorname{Aut}F\backslash {\operatorname{Comp}}_{N,\alpha}} \dim\operatorname{Hom}_{ (\operatorname{Aut}F)_{\kappa}\ltimes H^{Y_{\kappa}}}({\operatorname{triv}},\gamma\ltimes \pi_F).$$ For each $\kappa\in {\operatorname{Comp}}_{N,\alpha}$ the abelian group $H^{Y_{\kappa}}$ acts on the representation $\gamma\ltimes \pi_F$ by the character $\pi_{F{\vert}_{Y_\kappa}}$. So the representation $\gamma\ltimes \pi_F$ is either trivial on $H^{Y_{\kappa}}$, or else it contains no nonzero $H^{Y_{\kappa}}$-fixed vectors.
The former possibility occurs precisely when $\operatorname{supp}F \cap Y_{\kappa} =\emptyset$; recall that this is, by definition, the condition that $\kappa$ belong to ${\operatorname{Comp}}_{F,\alpha}$. For such $\kappa$ the character $\pi_F$ is trivial on $H^{Y_{\kappa}}$, and so \[eq:Phi-M-proof-2\] is equal to $$\label{eq:Phi-M-proof-3} \begin{aligned} & \sum_{\operatorname{Aut}F(\kappa)\in \operatorname{Aut}F\backslash {\operatorname{Comp}}_{F,\alpha}} \dim\operatorname{Hom}_{(\operatorname{Aut}F)_{\kappa}}({\operatorname{triv}},\gamma) \\ & = \sum_{\operatorname{Aut}F(\kappa)\in \operatorname{Aut}F\backslash {\operatorname{Comp}}_{F,\alpha}} \dim \operatorname{Hom}_{\operatorname{Aut}F}\left(\operatorname{ind}_{(\operatorname{Aut}F)_{\kappa}}^{\operatorname{Aut}F}{\operatorname{triv}}_{(\operatorname{Aut}F)_{\kappa}},\gamma\right) \end{aligned}$$ where the equality is Frobenius reciprocity. The $\operatorname{Aut}F$-representation $\operatorname{ind}_{(\operatorname{Aut}F)_{\kappa}}^{\operatorname{Aut}F}{\operatorname{triv}}_{(\operatorname{Aut}F)_{\kappa}}$ is the permutation representation associated to the orbit $\operatorname{Aut}F(\kappa)\subseteq {\operatorname{Comp}}_{F,\alpha}$, and so summing over these orbits gives the permutation representation $\rho_{F,\alpha}$ as claimed.
Turning to $\Psi_{\mathcal B}$, using the formula for $\Psi_{\mathcal M}$ just established and the equality $\Psi_{\mathcal B}=\Psi_{\mathcal M}\circ {\operatorname{reg}}$, we find that for $F\in \widehat{H}^{Y_N}$ and $\alpha\in {\operatorname{Comp}}_{\#N}$ the coefficient of $M_\alpha$ in $\Psi_{\mathcal B}(\pi_F)$ is $$\dim\operatorname{Hom}_{\operatorname{Aut}F}(\rho_{F,\alpha}, {\operatorname{reg}}_{\operatorname{Aut}F}) = \dim \rho_{F,\alpha} = \#{\operatorname{Comp}}_{F,\alpha}.\qedhere$$ Graph automorphisms and colourings {#sec:graphs} ================================== For certain choices of Young set $Y$ and auxiliary group $H$ the Hopf algebras $\mathcal M^{\delta,\Delta}_{Y,H}$ and $\mathcal B^{\delta,\Delta}_{Y,H}$, and the associated symmetric functions, admit descriptions in terms of isomorphism classes, automorphism groups, and colourings of familiar combinatorial objects. In this section we shall examine one such example. The Hopf algebra of graphs and chromatic symmetric functions ------------------------------------------------------------ Our graphs are finite, simple, and undirected: so a graph $\Gamma$ is a finite set $V(\Gamma)$ of vertices, and a finite set $E(\Gamma)\subseteq \{ \text{$2$-element subsets of $V(\Gamma)$}\}$ of edges. An isomorphism of graphs $\Gamma\to \Lambda$ is a bijection of vertex-sets $V(\Gamma)\to V(\Lambda)$ whose induced map on the power sets ${\mathcal P({V(\Gamma)})}\to {\mathcal P({V(\Lambda)})}$ restricts to a bijection $E(\Gamma)\to E(\Lambda)$. The disjoint union of graphs is defined by taking disjoint unions of vertex- and edge-sets. For each graph $\Gamma$ and each subset $U\subseteq V(\Gamma)$ the induced graph $\Gamma{\vert}_U$ is defined by $V(\Gamma{\vert}_U)=U$ and $E(\Gamma{\vert}_U)=E(\Gamma)\cap {\mathcal P({U})}$.
Given a graph $\Gamma$ and an integer composition $\alpha=(\alpha_1,\ldots,\alpha_\ell)\in {\operatorname{Comp}}_{\#V(\Gamma)}$, a *proper $\alpha$-colouring* of $\Gamma$ is a function $\kappa:V(\Gamma) \to \{1,\ldots,\ell\}$ satisfying $\#\kappa^{-1}(i)=\alpha_i$ for all $i$, and $\kappa(v)\neq\kappa(w)$ for all $\{v,w\}\in E(\Gamma)$. The set of all such colourings is denoted by ${\operatorname{Col}}_{\Gamma,\alpha}$. The *Hopf algebra of graphs* ([@Schmitt-IHA Section 12]) is $${\mathcal{G}}= \bigoplus_{[\Gamma]} {\mathbb{Z}}[\Gamma]$$ where the sum is over the set of isomorphism classes $[\Gamma]$ of finite graphs. We grade ${\mathcal{G}}$ so that $[\Gamma]$ sits in degree $\#V(\Gamma)$. The multiplication in ${\mathcal{G}}$ is $[\Gamma]\otimes_{{\mathbb{Z}}} [\Lambda] \mapsto [\Gamma\sqcup\Lambda]$, and the comultiplication is $$\Delta_{{\mathcal{G}}}[\Gamma] = \sum_{U\in {\mathcal P({V(\Gamma)})}} [\Gamma{\vert}_U]\otimes_{{\mathbb{Z}}} [\Gamma{\vert}_{U^c}]$$ where $U^c=V(\Gamma)\setminus U$. The unit of ${\mathcal{G}}$ is the empty graph, and the counit is the map ${\mathcal{G}}\to {\mathbb{Z}}$ sending the empty graph to $1$ and all other graphs to zero. These operations make ${\mathcal{G}}$ a connected, commutative and cocommutative Hopf algebra. The algebra ${\mathcal{G}}$ has a canonical character $\zeta_{{\mathcal{G}}}:{\mathcal{G}}\to {\mathbb{Z}}$, given by $$\zeta_{{\mathcal{G}}}[\Gamma]=\begin{cases} 1 & \text{if }E(\Gamma)=\emptyset \\ 0 & \text{otherwise.}\end{cases}$$ The associated Hopf morphism $\Psi_{{\mathcal{G}}}:{\mathcal{G}}\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ sends $[\Gamma]$ to the *chromatic symmetric function* $$X_\Gamma \coloneqq \sum_{\alpha\in {\operatorname{Comp}}_{\#V(\Gamma)}} (\#{\operatorname{Col}}_{\Gamma,\alpha}) M_\alpha.$$ This symmetric function was first defined by Stanley in [@Stanley-chromatic]. 
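Since the coefficient of $M_\alpha$ in $X_\Gamma$ is just the number $\#{\operatorname{Col}}_{\Gamma,\alpha}$ of proper $\alpha$-colourings, the chromatic symmetric function of a small graph can be computed by brute force. A minimal Python sketch (the helper names are ours, not standard):

```python
from itertools import product

def compositions(n):
    # All integer compositions of n, as tuples of positive parts.
    if n == 0:
        return [()]
    return [(k,) + rest for k in range(1, n + 1) for rest in compositions(n - k)]

def proper_colourings(vertices, edges, alpha):
    # Col_{Gamma,alpha}: proper colourings kappa with #kappa^{-1}(i) = alpha_i.
    l, vs = len(alpha), list(vertices)
    found = []
    for kappa in product(range(l), repeat=len(vs)):
        colour = dict(zip(vs, kappa))
        if any(colour[v] == colour[w] for v, w in edges):
            continue  # adjacent vertices share a colour: not proper
        if all(kappa.count(i) == alpha[i] for i in range(l)):
            found.append(colour)
    return found

def chromatic_data(vertices, edges):
    # Coefficient of the monomial quasisymmetric function M_alpha in X_Gamma.
    return {alpha: len(proper_colourings(vertices, edges, alpha))
            for alpha in compositions(len(list(vertices)))}

# Path a-b-c: the only stable partition with two blocks is {a,c},{b}.
X = chromatic_data("abc", [("a", "b"), ("b", "c")])
assert X[(1, 1, 1)] == 6 and X[(2, 1)] == 1 and X[(1, 2)] == 1 and X[(3,)] == 0
```

So for the path on three vertices, $X_\Gamma = 6M_{(1,1,1)} + M_{(2,1)} + M_{(1,2)}$, which is indeed symmetric.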
The connection to the Hopf algebra ${\mathcal{G}}$ and the character $\zeta_{{\mathcal{G}}}$ was pointed out in [@ABS Example 4.5]. A Hopf algebra and symmetric functions associated to representations of graph automorphisms ------------------------------------------------------------------------------------------- We are going to study an enlargement of ${\mathcal{G}}$. For each graph $\Gamma$ we let $\operatorname{Aut}\Gamma$ denote the group of graph-automorphisms of $\Gamma$. For each isomorphism of graphs $w:\Gamma\to \Lambda$ we have an isomorphism of Grothendieck groups $$\label{eq:graph-isomorphism} \operatorname{Ad}_w :R(\operatorname{Aut}\Gamma) \xrightarrow{\ \gamma\mapsto \gamma(w^{-1}\cdot w)\ } R(\operatorname{Aut}\Lambda).$$ Consider the free abelian group $${\mathcal{A}}\coloneqq \Big(\bigoplus_{\Gamma} R(\operatorname{Aut}\Gamma)\Big)_{\mathrm{Graph}^\times}$$ where the sum is over finite simple graphs $\Gamma$, and the subscript indicates that we impose the relation $\gamma=\operatorname{Ad}_w\gamma$ for all $\gamma$ and $w$ as in \[eq:graph-isomorphism\] (that is, we take the coinvariants of the groupoid of graph isomorphisms). We grade ${\mathcal{A}}$ so that $R(\operatorname{Aut}\Gamma)$ sits in degree $\#V(\Gamma)$.
Given graphs $\Gamma$ and $\Lambda$ there is an obvious inclusion of groups $$\operatorname{Aut}\Gamma \times \operatorname{Aut}\Lambda {\hookrightarrow}\operatorname{Aut}(\Gamma \sqcup \Lambda)$$ whence an induction functor $$\label{eq:induction-graph-automorphisms} \operatorname{ind}_{\operatorname{Aut}\Gamma\times \operatorname{Aut}\Lambda}^{\operatorname{Aut}(\Gamma\sqcup\Lambda)}:\operatorname{Rep}(\operatorname{Aut}\Gamma \times \operatorname{Aut}\Lambda) \to \operatorname{Rep}(\operatorname{Aut}(\Gamma \sqcup \Lambda)).$$ On the free abelian group ${\mathcal{A}}$ we define a graded multiplication ${\mathcal{A}}\otimes_{{\mathbb{Z}}} {\mathcal{A}}\to {\mathcal{A}}$ as the direct sum of the maps $$\label{eq:graph-algebra-product} R(\operatorname{Aut}\Gamma)\otimes_{{\mathbb{Z}}} R(\operatorname{Aut}\Lambda)\xrightarrow{\cong} R(\operatorname{Aut}\Gamma \times \operatorname{Aut}\Lambda) \xrightarrow{\operatorname{ind}} R(\operatorname{Aut}(\Gamma \sqcup\Lambda)).$$ This product makes ${\mathcal{A}}$ into an associative graded algebra; the unit is the trivial representation of the automorphism group of the empty graph. Next, for each graph $\Gamma$ and each subset $U\subseteq V(\Gamma)$ we define $$(\operatorname{Aut}\Gamma)_U\coloneqq \{w\in \operatorname{Aut}\Gamma\ |\ w(U)=U\}$$ to be the isotropy group of $U$ for the action of $\operatorname{Aut}\Gamma$ on the power set ${\mathcal P({V(\Gamma)})}$. This is by definition a subgroup of $\operatorname{Aut}\Gamma$; it is also in a natural way a subgroup of the product $\operatorname{Aut}(\Gamma{\vert}_U)\times \operatorname{Aut}(\Gamma{\vert}_{U^c})$, via the map $w\mapsto (w{\vert}_U,w{\vert}_{U^c})$. 
Let $\Delta_{{\mathcal{A}}}:{\mathcal{A}}\to {\mathcal{A}}\otimes_{{\mathbb{Z}}}{\mathcal{A}}$ be the graded ${\mathbb{Z}}$-linear map defined, for each graph $\Gamma$ and each representation $\gamma$ of $\operatorname{Aut}\Gamma$, by $$\label{eq:Delta-Graph-definition} \Delta_{{\mathcal{A}}}\gamma = \sum_{\operatorname{Aut}\Gamma(U)\in \operatorname{Aut}\Gamma\backslash {\mathcal P({V(\Gamma)})}} \operatorname{ind}_{(\operatorname{Aut}\Gamma)_U}^{\operatorname{Aut}(\Gamma{\vert}_U)\times \operatorname{Aut}(\Gamma{\vert}_{U^c})} \operatorname{res}^{\operatorname{Aut}\Gamma}_{(\operatorname{Aut}\Gamma)_U} \gamma.$$ Here the sum is over the $\operatorname{Aut}\Gamma$-orbits of subsets of $V(\Gamma)$, and we are using the canonical isomorphisms $R(G)\otimes_{{\mathbb{Z}}}R(G')\xrightarrow{\cong} R(G\times G')$ to view each summand as an element of $R(\operatorname{Aut}(\Gamma{\vert}_U))\otimes_{{\mathbb{Z}}} R(\operatorname{Aut}(\Gamma{\vert}_{U^c}))\subset{\mathcal{A}}\otimes_{{\mathbb{Z}}}{\mathcal{A}}$. We also let $\epsilon_{{\mathcal{A}}}:{\mathcal{A}}\to {\mathbb{Z}}$ be the map sending the trivial representation of $\operatorname{Aut}\emptyset$ to $1$, and all other irreducible representations to $0$. Let $\zeta_{{\mathcal{A}}}:{\mathcal{A}}\to {\mathbb{Z}}$ be the ${\mathbb{Z}}$-linear map defined, for each graph $\Gamma$ and each representation $\gamma$ of $\operatorname{Aut}\Gamma$, by $$\zeta_{{\mathcal{A}}}(\gamma) = \begin{cases} 1 & \text{if $E(\Gamma)=\emptyset$ and $\gamma={\operatorname{triv}}_{\operatorname{Aut}\Gamma}$}\\ 0 & \text{otherwise.}\end{cases}$$ Finally, for each graph $\Gamma$ and each integer composition $\alpha\in {\operatorname{Comp}}_{\#V(\Gamma)}$ recall that ${\operatorname{Col}}_{\Gamma,\alpha}$ is the set of proper $\alpha$-colourings of $\Gamma$. 
The group $\operatorname{Aut}\Gamma$ acts on this set by $w\kappa(v)\coloneqq \kappa(w^{-1}v)$, and we let $\rho_{\Gamma,\alpha}$ be the corresponding permutation representation of $\operatorname{Aut}\Gamma$ on ${\Bbbk}^{{\operatorname{Col}}_{\Gamma,\alpha}}$. \[thm:GraphAlg\] 1. The comultiplication $\Delta_{{\mathcal{A}}}$ and counit $\epsilon_{{\mathcal{A}}}$ make ${\mathcal{A}}$ into a connected, commutative, and cocommutative graded Hopf algebra. 2. The map ${\operatorname{reg}}:{\mathcal{G}}\to {\mathcal{A}}$ sending $[\Gamma]$ to the regular representation of $\operatorname{Aut}\Gamma$ is an embedding of Hopf algebras. 3. The map $\zeta_{{\mathcal{A}}}:{\mathcal{A}}\to {\mathbb{Z}}$ is an algebra homomorphism, and the induced Hopf-algebra homomorphism ${\mathcal{A}}\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ sends each representation $\gamma$ of $\operatorname{Aut}\Gamma$ to the symmetric function $$X_{\Gamma,\gamma} \coloneqq \sum_{\alpha\in {\operatorname{Comp}}_{\#V(\Gamma)}} \left( \dim\operatorname{Hom}_{\operatorname{Aut}\Gamma}(\rho_{\Gamma,\alpha},\gamma)\right) M_\alpha.$$ 4. For each graph $\Gamma$ we have $X_{\Gamma,{\operatorname{reg}}_{\operatorname{Aut}\Gamma}}=X_{\Gamma}$, Stanley’s chromatic symmetric function; and $$X_\Gamma = \sum_{\gamma\in \operatorname{Irr}(\operatorname{Aut}\Gamma)} (\dim \gamma)X_{\Gamma,\gamma}$$ where $\dim\gamma$ is the dimension of the ${\Bbbk}$-vector space underlying the representation $\gamma$. Let $E$ be the Young set with $E_N=\{\text{two-element subsets of $N$}\}$ (cf. Examples \[examples:Young-set\]). We are going to prove the theorem by identifying ${\mathcal{A}}$ and ${\mathcal{G}}$ with the Hopf algebras $\mathcal M_{E,S_2}^{\Delta}$ and $\mathcal B_{E,S_2}^{\Delta}$ of Section \[sec:Hopf\]. We have $\widehat{S_2}=\{{\operatorname{triv}},\operatorname{sign}\}$, so a function $F\in \widehat{S_2}^{E_N}$ is completely determined by its support, $\operatorname{supp}F = F^{-1}(\operatorname{sign})$. 
The map sending a function $F\in \widehat{S_2}^{E_N}$ to the graph $\Gamma_F$ with $V(\Gamma_F)=N$ and $E(\Gamma_F)=\operatorname{supp}F$ is a natural isomorphism between $\widehat{S_2}^{E_N}$ and the set of graphs with vertex-set $N$, where ‘natural’ means with respect to set bijections $N\to M$. In particular we have the equality $\operatorname{Aut}F=\operatorname{Aut}\Gamma_F$ of subgroups of $S_N$. These identifications yield grading-preserving bijections between the canonical bases of $\mathcal M_{E,S_2}$ and of ${\mathcal{A}}$, and between the canonical bases of $\mathcal B_{E,S_2}$ and of ${\mathcal{G}}$. We thus have isomorphisms of graded abelian groups $$\label{eq:M-G-iso} \mathcal M_{E,S_2} \xrightarrow[\cong]{\ R(\operatorname{Aut}F)\ni \gamma\mapsto \gamma\in R(\operatorname{Aut}\Gamma_F)\ } {\mathcal{A}}\quad \text{and}\quad \mathcal B_{E,S_2} \xrightarrow[\cong]{\pi_F\mapsto [\Gamma_F]} {\mathcal{G}}$$ making the diagram $$\xymatrix{ \mathcal B_{E,S_2} \ar[r]^-{{\operatorname{reg}}} \ar[d]_-{\cong} & \mathcal M_{E,S_2} \ar[d]^-{\cong} \\ {\mathcal{G}}\ar[r]^-{{\operatorname{reg}}} & {\mathcal{A}}}$$ commute. It is now an easy matter to match up the definitions of the Hopf-algebra structures and conclude that the isomorphisms intertwine the units, the counits, the multiplications, the comultiplications, and the characters $\zeta$ on either side. To prove the formula for $X_{\Gamma,\gamma}$ in part (3) it suffices to note that for each finite set $N$, each function $F\in \widehat{S_2}^{E_N}$, and each integer composition $\alpha\in {\operatorname{Comp}}_{\#N}$, the map ${\operatorname{Comp}}_{F,\alpha} \to {\operatorname{Col}}_{\Gamma_F,\alpha}$ sending the composition $(K_1,\ldots, K_\ell)$ to the proper $\alpha$-colouring $K_i\to \{i\}$ is a bijection that is equivariant for the action of $\operatorname{Aut}F=\operatorname{Aut}\Gamma_F$. Thus the given formula for $X_{\Gamma,\gamma}$ is an instance of Proposition \[prop:Psi-M\]. 
Finally, for part (4), recall that the chromatic symmetric function $X_\Gamma$ is the image of $[\Gamma]$ under the Hopf-algebra morphism ${\mathcal{G}}\to {\operatorname{Sym}}_{{\mathbb{Z}}}$ induced by the character $\zeta_{\mathcal{G}}$, while $X_{\Gamma,{\operatorname{reg}}_{\operatorname{Aut}\Gamma}}$ is the image of $[\Gamma]$ under the morphism induced by the character $\zeta_{{\mathcal{A}}}\circ {\operatorname{reg}}$. Since $\zeta_{{\mathcal{G}}}= \zeta_{{\mathcal{A}}}\circ {\operatorname{reg}}$ these two symmetric functions coincide. Now the map $\gamma\mapsto X_{\Gamma,\gamma}$ is additive, and so the asserted decomposition of $X_\Gamma$ follows from the decomposition of the regular representation of ${\operatorname{Aut}\Gamma}$ into irreducibles. The symmetric function $X_{\Gamma,{\operatorname{triv}}}$ associated to the trivial representation of $\operatorname{Aut}\Gamma$ is the generating function for the number of $\operatorname{Aut}\Gamma$-orbits of proper colourings; its specialisation $X_{\Gamma,{\operatorname{triv}}}(1^m)$ is the orbital chromatic polynomial [@Hanlon], [@Cameron-Kayibi]. Consider, for instance, the following pair of graphs from [@Stanley-chromatic Figure 1]: $$\xygraph{ !{(0,0) }*+{\bullet}="a" !{(1,.5) }*+{\bullet}="b" !{(1,-.5) }*+{\bullet}="c" !{(-1,.5)}*+{\bullet}="d" !{(-1,-.5)}*+{\bullet}="e" "a"-"b"-"c"-"a"-"d"-"e"-"a" } \qquad\qquad \textrm{and} \qquad\qquad \xygraph{ !{(-1,0) }*+{\bullet}="a" !{(0,.5) }*+{\bullet}="b" !{(0,-.5) }*+{\bullet}="c" !{(1,0)}*+{\bullet}="d" !{(2,0)}*+{\bullet}="e" "e"-"d"-"b"-"a"-"c"-"d" "b"-"c" }$$ Stanley observed that these nonisomorphic graphs have the same chromatic symmetric function. As we observed in the introduction, the symmetric function $X_{\Gamma,{\operatorname{triv}}}$ distinguishes these graphs.
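Both claims (the equality of the two chromatic symmetric functions, and the fact that $X_{\Gamma,{\operatorname{triv}}}$ separates the pair) can be checked by brute force. In the following Python sketch the vertex labellings $0,\dots,4$ are our reading of the figures, with the first graph the bowtie; `orbital` computes the coefficients of $X_{\Gamma,{\operatorname{triv}}}$, i.e. the numbers of $\operatorname{Aut}\Gamma$-orbits of proper $\alpha$-colourings:

```python
from itertools import permutations, product

def compositions(n):
    if n == 0:
        return [()]
    return [(k,) + rest for k in range(1, n + 1) for rest in compositions(n - k)]

def proper_colourings(n, edges, alpha):
    # Proper colourings kappa: {0,...,n-1} -> {0,...,l-1} with colour-class
    # sizes given by the integer composition alpha.
    l = len(alpha)
    return [kappa for kappa in product(range(l), repeat=n)
            if not any(kappa[v] == kappa[w] for v, w in edges)
            and all(kappa.count(i) == alpha[i] for i in range(l))]

def automorphisms(n, edges):
    E = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[v], p[w])) for v, w in edges} == E]

def csf(n, edges):
    # Coefficient of M_alpha in the chromatic symmetric function X_Gamma.
    return {a: len(proper_colourings(n, edges, a)) for a in compositions(n)}

def orbital(n, edges):
    # Coefficient of M_alpha in X_{Gamma,triv}: the number of
    # Aut(Gamma)-orbits of proper alpha-colourings.
    aut = automorphisms(n, edges)

    def act(p, kappa):  # (p.kappa)(v) = kappa(p^{-1} v)
        out = [0] * n
        for v in range(n):
            out[p[v]] = kappa[v]
        return tuple(out)

    data = {}
    for a in compositions(n):
        remaining = set(proper_colourings(n, edges, a))
        orbits = 0
        while remaining:
            kappa = remaining.pop()
            remaining -= {act(p, kappa) for p in aut}
            orbits += 1
        data[a] = orbits
    return data

# Our vertex labellings of Stanley's pair; G1 is the bowtie.
G1 = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
G2 = [(4, 3), (3, 1), (1, 0), (0, 2), (2, 3), (1, 2)]
assert csf(5, G1) == csf(5, G2)          # equal chromatic symmetric functions
assert orbital(5, G1) != orbital(5, G2)  # X_{Gamma,triv} separates them
```

The two automorphism groups have orders $8$ and $2$ respectively, so already the coefficient of $M_{(1,1,1,1,1)}$ differs: $5!/8 = 15$ orbits of injective colourings for the bowtie against $5!/2 = 60$ for the other graph.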
The function $X_{\Gamma,{\operatorname{triv}}}$ likewise distinguishes the graphs $$\xygraph{ !{(-1,.5)}*+{\bullet}="a" !{(-1,-.5)}*+{\bullet}="b" !{(0,.5)}*+{\bullet}="c" !{(0,-.5)}*+{\bullet}="d" !{(1,.5)}*+{\bullet}="e" !{(1,-.5)}*+{\bullet}="f" "a"-"b"-"d"-"c"-"e" "d"-"f" "a"-"d" } \qquad\qquad \textrm{and}\qquad\qquad \xygraph{ !{(-1,.5)}*+{\bullet}="a" !{(-1,-.5)}*+{\bullet}="b" !{(0,.5)}*+{\bullet}="c" !{(0,-.5)}*+{\bullet}="d" !{(1,.5)}*+{\bullet}="e" !{(1,-.5)}*+{\bullet}="f" "a"-"b"-"d"-"c"-"e" "d"-"f" "b"-"c" }$$ which were shown in [@Orellana-Scott] to have equal chromatic symmetric functions. \[rem:graph-PSH\] We have concentrated on the $\Delta$ Hopf algebras, but all of the above applies equally well to the $\delta$ Hopf algebras. The coproducts $\delta_{{\mathcal{G}}}$ and $\delta_{{\mathcal{A}}}$ are defined by restricting the sums in the definitions of $\Delta_{{\mathcal{G}}}$ and $\Delta_{{\mathcal{A}}}$ to subsets $U\subseteq V(\Gamma)$ that are unions of connected components of $\Gamma$. The resulting PSH algebra ${\mathcal{A}}^\delta$ has for its set of primitive elements the union of the sets $\operatorname{Irr}(\operatorname{Aut}\Gamma)$ over the set of isomorphism classes of *connected* graphs $\Gamma$. Note that every finite group arises as the automorphism group of a connected graph ([@Frucht] again), so the set of primitive irreducibles is still extremely complicated.
--- abstract: 'Weyl semimetals (WSM) are a new class of topological materials that exhibit a bulk Hall effect and a chiral magnetic effect. The topological contribution of these unusual electromagnetic responses can be characterized by an axion term $\theta \textbf{E} \cdot \textbf{B}$ with space and time dependent axion angle $\theta (\textbf{r} ,t)$. In this paper we compute the electromagnetic fields produced by an electric charge near a topological Weyl semimetal with two Weyl nodes, in the equilibrium state, at zero electric chemical potential, and with broken time-reversal symmetry. We find that, as in ordinary metals and dielectrics, outside the WSM the electric field is mainly determined by the optical properties of the material. The magnetic field is, on the contrary, of topological origin due to the magnetoelectric effect of topological phases. We show that the magnetic field exhibits an interesting behavior above the WSM as compared with that induced above a topological insulator: the field lines begin at the surface and then end at the surface (but not at the same point). This distinctive behavior of the magnetic field is an experimentally observable signature of the anomalous Hall effect in the bulk of the WSM. We discuss two experimental setups for testing our predictions of the induced magnetic field.' author: - 'A. Martín-Ruiz' - 'M. Cambiaso' - 'L. F. Urrutia' title: Electromagnetic fields induced by an electric charge near a Weyl semimetal --- Introduction ============ Materials characterized by topological order, or simply topological materials have attracted great attention recently both from the theoretical and experimental fronts. The best studied of these are the topological insulators (TIs), which are characterized by a gapped bulk and protected boundary modes that are robust against disorder [@Qi-Review; @Hassan-Review]. 
Until recently, topologically nontrivial properties were usually associated with gapped systems; however, we have learned that gapless (semi)metallic states may be topologically nontrivial in the same sense as gapped insulators. A particularly interesting state of matter is the topological Weyl semimetal (WSM), which may be thought of as a 3D analog of graphene. These are phases with broken time-reversal (TR) or inversion (I) symmetry, whose electronic structure contains a pair of Weyl nodes (band-crossing points) in the Brillouin zone (BZ), with the Fermi level close to the nodes. WSMs possess protected gapless surface states on disconnected Fermi arcs with end points at the projection of the bulk nodes onto the surface BZ [@Armitage-Review]. The WSM phase was first theoretically predicted in pyrochlore iridates (such as Y$_2$Ir$_2$O$_7$) in 2011 [@TaAsTheo] and experimentally discovered in TaAs four years later [@TaAs-Huang; @TaAs-Lv; @TaAs-Xu; @TaAs-Yang; @TaAs-Xu2]. Besides their spectroscopic distinguishing features, topological phases also exhibit unusual electromagnetic (EM) responses that are a direct macroscopic manifestation of the nontrivial topology of their band structure. It has been shown that the EM response of both topological insulators [@Qi-TFT; @Essin-TFT; @Wu-TFT] and Weyl semimetals [@Burkov-TFT; @Burkov2-TFT; @Tewari-TFT] is described by the so-called $\theta$ term in the EM action, $S _{\theta} \propto \int \theta (\textbf{r} , t) \, \textbf{E} \cdot \textbf{B} \, d ^{3} \textbf{r} dt$. For TIs, the only nonzero value compatible with TR symmetry is $\theta = \pi$, which thus has no effect on Maxwell equations in the bulk. Its only real effect, a half-quantized Hall effect on the sample’s surfaces, becomes manifest only in the presence of surface magnetization.
When TR and I symmetries are broken in the bulk, such as in a topological Weyl semimetal, the axion field $\theta$ may also acquire linear space and time dependence $\theta \left( \textbf{r} , t \right) = 2 \textbf{b} \cdot \textbf{r} - 2 b _{0} t$, where $2 \textbf{b}$ is the separation between the Weyl nodes in momentum space and $2 \hbar b _{0}$ is their energy offset. Unlike the $\theta$ term for TIs, the analogous term for WSMs modifies Maxwell equations in the bulk and thus has observable physical consequences, namely the anomalous Hall effect (AHE) and the chiral magnetic effect (CME). A number of physical effects, mainly optical, have been predicted on the basis of this theory. For example, the magneto-optical Faraday and Kerr rotation [@Kerr-Faraday/WSM] and the Casimir effect [@Casimir/WSM], and the appearance of plasmon polaritons [@Plasmons/WSM] and helicons [@Helicons/WSM] at the sample’s surface. In this paper we are concerned with a particular physical effect associated with the anomalous Hall effect. One striking consequence of the $\theta$ term in topological insulators is the image magnetic monopole effect, namely, the appearance of a magnetic field that resembles the one produced by a magnetic monopole when an electric charge is put near the material’s surface [@Qi-Monopole; @Karch; @MCU-GreenTI]. Physically, the monopole magnetic field is induced by a circulating Hall current on the TI surface, centered at the position of the charge projected onto the TI. In this paper we tackle the analogous effect in topological Weyl semimetals. To be precise, we investigate the electromagnetic fields induced by an electric charge above a WSM in the equilibrium state, at zero electric chemical potential, and with broken TR symmetry. We assume the charge to be located along the axis defined by the separation between the Weyl nodes in the BZ, i.e. near the surface without Fermi arcs. 
What is relevant in our configuration is that due to the magnetoelectric effect in WSMs, a magnetic field is induced. Outside the material, the magnetic field is noteworthy as it is not radial (as that produced by a magnetic monopole). Indeed, its physical origin is the anomalous Hall effect in the bulk, which as we will see, can be interpreted in terms of a family of $(2+1)$-dimensional subsystems parametrized by the coordinate along the nodal separation. Each subsystem exhibits a quantum-like Hall effect, such that a WSM can be effectively understood as a chain of 2D Dirac surface states. The rest of this paper is organized as follows. In Sec. \[EM-Response\] we briefly review the electromagnetic response of topological Weyl semimetals. The central part of this paper is presented in Sec. \[CalEMfields\], where we compute the EM fields produced by an electric charge above a WSM. In Sec. \[Force\] we compute the interaction energy and the force that the material exerts upon the static charge. We close with a brief summary of our results and conclusions in Sec. \[Conclusions\], where we also discuss two possible experimental setups to eventually measure the resulting magnetic field. Appendix \[DetSol\] contains the details of the calculation of the required scalar and vector potentials determining the electromagnetic fields. Throughout the paper we use Gaussian units. Electromagnetic response of Weyl semimetals {#EM-Response} =========================================== The low energy physics of a Weyl semimetal with two nodes is described by the linearized Hamiltonian [@Armitage-Review] $$\begin{aligned} H = v _{F} \hbar \tau ^{z} \boldsymbol{\sigma} \cdot \left( \textbf{k} + \tau ^{z} \textbf{b} \right) + \hbar \tau ^{z} b _{0} , \label{Hamiltonian}\end{aligned}$$ where $v _{F}$ is the Fermi velocity and $\textbf{k} = - i \nabla$. 
The operator $\boldsymbol{\tau}$ describes the node degree of freedom, while $\boldsymbol{\sigma}$ describes the conduction-valence band degree of freedom. The separation of Weyl nodes in the BZ is governed by the broken symmetries in the bulk. A broken TR symmetry implies $\mathbf{b} \neq 0$ and this will produce a separation of the Weyl nodes in momentum by an amount $2 \mathbf{b}$, each node located at $\pm \mathbf{b}$. On the other hand, a broken I symmetry implies $b_0 \neq 0$, which will produce a separation of the Weyl nodes in energy, by an amount $2 \hbar b_0$. The terms proportional to $b _{0}$ and $\textbf{b}$ in the Hamiltonian (\[Hamiltonian\]) can be gauged away, and it reduces to $H = v _{F} \hbar \tau ^{z} \boldsymbol{\sigma} \cdot \textbf{k}$. The chiral transformation in Euclidean space $\psi \to \psi ^{\prime} = e^{-i \tau^z \theta/2} \psi$, with $\theta (\textbf{r}, t = i \tau) = 2 \textbf b \cdot \textbf r - 2 i b_0 \tau$ (and correspondingly for $\psi^\dag$), indeed gauges away the terms $b_0 \tau^z$ and $\tau^z \boldsymbol{b} \cdot \boldsymbol{\sigma}$, but it also changes the integration measure in the path integral; thus the seeming chiral symmetry of the fermionic field is broken, which is nothing more than the chiral anomaly. This gives rise to an unusual EM response described by an additional $\theta$ term in the action of the electromagnetic field [@Burkov-TFT; @Burkov2-TFT; @Tewari-TFT] $$\begin{aligned} S _{\theta} = \frac{\alpha}{4 \pi ^{2}} \int \theta (\textbf{r} , t) \, \textbf{E} \cdot \textbf{B} \, dt \, d ^{3} \textbf{r} , \label{ThetaTerm}\end{aligned}$$ where $\alpha = e ^{2} / \hbar c$ is the fine-structure constant and $\theta (\textbf{r} , t) = 2 \textbf{b} \cdot \textbf{r} - 2 b _{0} t$ is the so-called axion field [@PQ; @Wilczek]. The topological response of WSMs is thus described by an action similar to that of axion-electrodynamics.
It is useful to compare this with the $\theta$ term in the effective action of 3D topological insulators. In that case $\theta = \pi$ is the only nonzero value consistent with TR symmetry [@Qi-TFT; @Essin-TFT; @Wu-TFT]. The EM response of 3D TIs is rather simple, since the only nontrivial physical effect is to generate a half-quantized quantum Hall effect on the sample’s surfaces. Indeed, a general method to describe the topological magnetoelectric effect in 3D TIs has been elaborated in Refs. [@MCU-GreenTI; @Martin; @MCU1; @MCU2; @MCU4] by means of Green’s function techniques. Unlike the $\theta$ term in 3D TIs, Eq. (\[ThetaTerm\]) does modify Maxwell equations in the bulk of a Weyl semimetal and thus provides additional observable consequences. The physical manifestation of the Chern-Simons-like term (\[ThetaTerm\]) can be best understood from the associated equations of motion. Varying the full action $S _{0} + S _{\theta}$, where $S _{0} = \frac{1}{8 \pi} \int \left[ \epsilon \textbf{E} ^{2} - (1 / \mu) \textbf{B} ^{2} \right] dt \, d ^{3} \textbf{r}$ is the usual nontopological Maxwell action for electromagnetic fields in matter, we find that the axionic term (\[ThetaTerm\]) changes two of the four Maxwell equations, i.e. $$\begin{aligned} \nabla \cdot \textbf{D} &= 4 \pi \left( \rho - \frac{\alpha}{2 \pi ^{2}} \textbf{b} \cdot \textbf{B} \right) , \label{Gauss}\end{aligned}$$ and $$\begin{aligned} \nabla \times \textbf{H} - \frac{1}{c} \frac{\partial \textbf{D}}{\partial t} &= \frac{4 \pi}{c} \left( \textbf{J} + \frac{\alpha}{2 \pi ^{2}} c \textbf{b} \times \textbf{E} - \frac{\alpha}{2 \pi ^{2}} b _{0} \textbf{B} \right) , \label{Ampere}\end{aligned}$$ with the constitutive relations $\textbf{D} = \tilde{\epsilon} \, \textbf{E}$ and $\textbf{H} = \textbf{B} / \tilde{\mu}$. 
Faraday’s law, $\nabla \times \textbf{E} = - c ^{-1} \partial \textbf{B} / \partial t$, and the equation stating the absence of magnetic monopoles, $\nabla \cdot \textbf{B} = 0$, remain unaltered. Here, $\tilde{\epsilon} = \epsilon + i \sigma _{xx} (\omega) / \omega$ and $\tilde{\mu} = 1 + \chi _{m}$, where $\epsilon $ is the static permittivity, $\sigma _{xx} (\omega)$ is the longitudinal conductivity and $\chi _{m}$ is the magnetic susceptibility that we assume is negligible for the WSM . In general, the electric current $\textbf{J}$ depends on both the electric and magnetic fields. As in ordinary metals, in the linear response regime, the electric field-dependent current is given by $\textbf{J} ^{\mbox{\scriptsize (E)}} = \sigma _{ij} (\omega) E _{j} \hat{\textbf{e}} _{i}$, where the frequency-dependent conductivity tensor $\sigma _{ij} (\omega)$ can be derived by using, for example, the semiclassical Boltzmann transport theory. In addition, if we have chiral fermions in a magnetic field with chemical potentials $\mu _{\mbox{\scriptsize L}}$ and $\mu _{\mbox{\scriptsize R}}$ for left- and right-handed fermions, respectively, there are two additional $\textbf{B}$-dependent current terms, namely, $$\begin{aligned} \textbf{J} ^{\mbox{\scriptsize (B)}} = \frac{\alpha}{2 \pi ^{2}} \mu _{5} \textbf{B} \quad , \quad \textbf{J} _{5} ^{\mbox{\scriptsize (B)}} = \frac{\alpha}{2 \pi ^{2}} \mu \textbf{B} , \label{JB}\end{aligned}$$ where $\mu _{5} = ( \mu _{\mbox{\scriptsize L}} - \mu _{\mbox{\scriptsize R}} ) / 2$ and $ \mu = ( \mu _{\mbox{\scriptsize L}} + \mu _{\mbox{\scriptsize R}} ) / 2$ are the chiral and the electric chemical potentials, respectively. The most salient features of Weyl physics are fully contained in the inhomogeneous Maxwell equations (\[Gauss\]) and (\[Ampere\]). 
For example, the $\textbf{b}$-dependent terms encode the anomalous Hall effect that is expected to occur in a Weyl semimetal with broken TR symmetry [@AHE-Yang; @AHE-Burkov; @AHE-Grushin; @AHE-Gorbar]. The $b _{0}$-dependent term, which arises in Weyl semimetals with broken I symmetry, describes only one part of the celebrated chiral magnetic effect, namely, the generation of an electric current driven by an applied magnetic field. The second part of the CME is given by $\textbf{J} ^{\mbox{\scriptsize (B)}} $ in Eq. (\[JB\]), which arises from an imbalance between the chemical potentials of right- and left-handed fermions. The total contribution to the CME current is $$\label{eq: cme curr} \mathbf{J}_{\mathrm{CME}} = -\frac{\alpha}{2 \pi ^{2}} \left( b _{0} - \mu _{5} \right) \mathbf{B},$$ which vanishes for $b _{0} = \mu _{5} $, in which case the WSM is said to be in equilibrium. On the other hand, $\textbf{J} _{5} ^{\mbox{\scriptsize (B)}}$ in Eq. (\[JB\]), which is identified with the chiral separation effect, vanishes for $\mu = 0$, a condition that defines the neutrality point. For a detailed discussion of the chiral magnetic effect and the chiral separation effect see Ref. [@Landsteiner]. The vanishing of the CME in the solid-state context is addressed in Refs. [@CME-Vazifeh; @CME-Kharzeev; @CME-Ma]. Calculation of the EM fields {#CalEMfields} ============================ Statement of the problem ------------------------ An electric charge near the surface of a 3D TI induces a vortex Hall current (because of the in-plane component of the electric field produced by the charge), generating a magnetic field that resembles the one produced by a magnetic monopole [@Qi-Monopole; @Karch; @MCU-GreenTI]. A similar image monopole is predicted when a charge is near the surface of a linear magnetoelectric material [@Dzyaloshinskii]. In this paper we consider an electric charge near the surface of a topological Weyl semimetal.
Due to the broken symmetries in the bulk, additional nontrivial topological effects may arise as compared to the case of TIs. Specifically, we are concerned with the anomalous Hall effect of WSMs in the equilibrium state and at the neutrality point. Charge neutrality can be attained for some WSMs under specific circumstances, so it is not an unrealistic assumption. Theoretical and experimental studies involving WSMs at neutrality have attracted considerable interest, as the following cases show. In Ref. [@Sbierski_2014] numerical calculations of transport properties were performed. Longitudinal and transversal conductivities, as well as topological Kerr and Faraday rotations, were reported in Ref. [@Kerr-Faraday/WSM]. In Ref. [@Sun_2016] the authors report that TaAs exhibits a strong spin Hall effect precisely at neutrality. In Ref. [@Xu_2016], an optical conductivity with linear frequency dependence (indicative of the Fermi level intersecting the Weyl nodes) was confirmed experimentally for TaAs at $T = 5\,$K. A theoretical study of light propagation in a WSM found unconventional electromagnetic modes [@Ferreiros_2016]. In Ref. [@Holder_2017] neutral WSMs in the presence of strong disorder were studied, finding that the residual conductivity is qualitatively larger than previously estimated. Realistic studies of transversal magnetoresistance and Shubnikov-de Haas oscillations in WSMs, both away from and at the neutrality point, were considered in Ref. [@Klier_2017]. Finally, in Ref. [@Zhang_2018] the authors carried out a theoretical *ab initio* study of the Berry curvature dipole in WSMs. These examples show that neutrality is not only a simplifying assumption, but a physically relevant regime. ![Illustration of an electric charge above the surface of a Weyl semimetal.
We also represent the $\textbf{k}$-space picture showing the location of the Weyl nodes (blue and red dots as sources and sinks of Berry curvature) along the $k _{z}$ axis in the bulk BZ and the Fermi arcs (lines ending at the projection of the Weyl nodes) on the surface BZ.[]{data-label="figure"}](FigV1.pdf) Let us consider the geometry shown in Fig. \[figure\]. The lower half-space ($z<0$) is occupied by a topological Weyl semimetal with a pair of nodes separated along the $k _{z}$-direction in the bulk BZ, while the upper half-space ($z>0$) is occupied by a dielectric fluid. An electric charge is brought near the surface that does not support Fermi-arc electronic states, in this case the $xy$-plane for $\textbf{b} = b \hat{\textbf{e}} _{z}$. Since this is a static problem, it is appropriate to neglect the frequency dependence of the conductivities, such that the EM response of the WSM is fully captured by Eqs. (\[Gauss\]) and (\[Ampere\]), with $b _{0} = \mu _{5}$ and $\mu = 0$. Since $\theta (z = 0) = 0$, there are no surface currents, and the resulting material is just a bulk Hall medium with current responses given by the transverse Hall conductivity $$\begin{aligned} \sigma _{xy} = - \sigma _{yx} = \frac{e ^{2} b}{2 \pi ^{2} \hbar} . \label{sigmaxy}\end{aligned}$$ The analogous problem of a charge located in front of a surface that supports Fermi arcs would also be of interest. However, from a practical point of view, we start from the assumption that the WSM phase has been properly characterized, such that the surfaces with and without Fermi arcs have been identified, and then we can choose the configuration depicted in Fig. \[figure\]. In fact, when a WSM phase is produced from a Dirac semimetal by applying an external magnetic field, the separation between the nodes will be along the field direction and thus the identification of the faces supporting surface states is possible.
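To get a feel for the magnitude of Eq. (\[sigmaxy\]), the following sketch (plain Python; the node separation $b$ is an assumed, purely illustrative value, not a parameter of any particular material) evaluates $\sigma _{xy}$ in SI units:

```python
import math

# SI constants
e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
h = 2 * math.pi * hbar   # Planck constant [J s]

# ASSUMED, illustrative Weyl-node separation: chosen only to set the scale.
b = 5e8  # [1/m], i.e. 0.05 1/Angstrom

# Anomalous bulk Hall conductivity, sigma_xy = e^2 b / (2 pi^2 hbar).
sigma_xy = e**2 * b / (2 * math.pi**2 * hbar)

# Same quantity regrouped as (e^2/h) * (b/pi):
# a conductance quantum times an inverse length.
sigma_alt = (e**2 / h) * (b / math.pi)

print(f"sigma_xy ~ {sigma_xy:.0f} S/m ~ {sigma_xy / 100:.0f} S/cm")
```

Written as $(e^{2}/h)(b/\pi)$, the anomalous Hall conductivity is manifestly a conductance quantum per unit length, the natural form for a bulk (3D) Hall response; for the assumed $b$ it comes out in the range of tens of S/cm.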
Hereafter we concentrate on the surface without Fermi arcs, and we leave the complementary problem for future investigation. For the sake of generality, in Sec. \[GenSol\] we solve Maxwell equations (\[Gauss\]) and (\[Ampere\]) by considering two semi-infinite bulk Hall materials, characterized by the parameters $(\epsilon _{1} , b _{1})$ for $z<0$ and $(\epsilon _{2} , b _{2})$ for $z>0$, separated by the surface $z = 0$. The inhomogeneity in $\epsilon (\textbf{r})$ and $\sigma _{xy} (\textbf{r})$ is therefore limited to a finite discontinuity across the interface. Our results correctly reproduce the ones reported in Ref. [@ChiralMatter] for an infinite chiral medium, as well as the well-known electrostatic field produced by a charge near a dielectric medium [@Schwinger]. In Sec. \[PartSol\] we take the limit $b _{2} = 0$, which yields the electromagnetic fields produced by an electric charge in a dielectric fluid above the surface of a WSM. General solution and consistency checks {#GenSol} --------------------------------------- Since the homogeneous Maxwell equations relating the potentials to the fields are not modified by the $\theta$ term, the static electric and magnetic fields can be written in terms of the scalar $\Phi$ and vector $\textbf{A}$ potentials according to $\textbf{E} = - \nabla \Phi$ and $\textbf{B} = \nabla \times \textbf{A}$.
In the Coulomb gauge $\nabla \cdot \textbf{A} = 0$, for a pointlike electric charge of strength $q$ at $\textbf{r} ^{\prime} = z ^{\prime} \hat{\textbf{e}} _{z}$ with $z ^{\prime} >0$ (that is the charge lies in medium 2), the electromagnetic potentials satisfy the equations $$\begin{aligned} - \nabla \cdot \left[ \epsilon (z) \nabla \Phi \right] + \frac{4 \pi}{c} \sigma _{xy} (z) \, \hat{\textbf{e}} _{z} \cdot \nabla \times \textbf{A} &= 4 \pi \rho (\textbf{r}) , \label{Maxwell1} \\ - \nabla ^{2} \textbf{A} + \frac{4 \pi}{c} \sigma _{xy} (z) \, \hat{\textbf{e}} _{z} \times \nabla \Phi &= 0 \label{Maxwell2} ,\end{aligned}$$ where $\rho (\textbf{r}) = q \delta (\textbf{r} - \textbf{r} ^{\prime})$ is the charge density. To obtain the general solution for the EM potentials, we must solve equations (\[Maxwell1\]) and (\[Maxwell2\]) in the bulk Hall systems and satisfy the appropriate boundary conditions. Working in cylindrical coordinates $(\rho, \varphi ,z)$ to exploit the axial symmetry of the problem, we introduce the reduced scalar potential $\phi (z , z ^{\prime} ; k _{\perp})$ through the $2+1$ representation $$\begin{aligned} \Phi (\textbf{r}) = 4 \pi \int \frac{ d ^{2} \textbf{k} _{\perp}}{(2 \pi) ^{2}} e ^{i \textbf{k} _{\perp} \cdot \boldsymbol{\rho}} \phi (z , z ^{\prime} ; k _{\perp}) , \label{RedEscPot}\end{aligned}$$ where $\textbf{k} _{\perp} = (k _{x} , k _{y})$ and $\boldsymbol{\rho} = (x ,y)$ are the momentum and position parallel to the $xy$-plane. 
Expressing the area element in polar coordinates, $d ^{2} \textbf{k} _{\perp} = k _{\perp} dk _{\perp} d \varphi$, and choosing the $k _{x}$ axis in the direction of the vector $\boldsymbol{\rho}$, so that $\textbf{k} _{\perp} \cdot \boldsymbol{\rho} = k _{\perp} \rho \cos \varphi$, the angular integration can be performed to obtain $$\begin{aligned} \Phi (\rho , z) = 2 \int _{0} ^{\infty} k _{\perp} \, J _{0} (k _{\perp} \rho) \phi (z , z ^{\prime} ; k _{\perp}) dk _{\perp} , \label{RedEscPot2}\end{aligned}$$ where $J _{n}$ is the $n$th order Bessel function of the first kind. Inserting this ansatz into the equations of motion and assuming axial symmetry of the vector potential, we introduce the analogous $2+1$ dimensional representation $$\begin{aligned} \Psi (\rho , z) = 2 \int _{0} ^{\infty} k _{\perp} \, J _{1} (k _{\perp} \rho) \psi (z , z ^{\prime} ; k _{\perp}) dk _{\perp} , \label{RedVecPot}\end{aligned}$$ which defines the vector potential through $\textbf{A} = \Psi (\rho , z) \hat{\textbf{e}} _{\varphi}$, a choice that naturally satisfies the Coulomb gauge $\nabla \cdot \textbf{A} = \rho ^{-1} \partial _{\varphi} \Psi = 0$. The problem now consists of determining the reduced functions $\phi$ and $\psi$. Inserting the above $2+1$ representations into Eqs. (\[Maxwell1\]) and (\[Maxwell2\]) we obtain $$\begin{aligned} - \frac{\partial}{\partial z} \left( \epsilon \frac{\partial \phi}{\partial z} \right) + k ^{2} _{\perp} \epsilon \phi + \frac{4 \pi}{c} k _{\perp} \sigma _{xy} \psi &= q \delta (z - z ^{\prime}) , \label{MaxRed1} \\ - \frac{\partial ^{2} \psi}{\partial z ^{2}} + k ^{2} _{\perp} \psi - \frac{4 \pi}{c} k _{\perp} \sigma _{xy} \phi &= 0 , \label{MaxRed2} \end{aligned}$$ where we have expressed the charge density as $\rho (\textbf{r}) = \frac{q}{2 \pi} \delta (z - z ^{\prime}) \int _{0} ^{\infty} k _{\perp} J _{0} (k _{\perp} \rho) dk _{\perp}$.
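Representations such as Eq. (\[RedEscPot2\]) are also convenient for numerical evaluation. As a minimal sanity check (a sketch assuming SciPy is available; it is not part of the derivation), the trivial reduced function $\phi = \frac{q}{2 \epsilon k_\perp} e^{-k_\perp \vert z - z^{\prime} \vert}$, which will emerge below in the topologically trivial limit, must give back the Coulomb potential $q / \epsilon r$:

```python
import math
from scipy.integrate import quad
from scipy.special import j0

# With phi = q/(2 eps k) exp(-k |z - z'|), Eq. (RedEscPot2) reduces to
#   Phi = (q/eps) Int_0^inf J0(k rho) exp(-k |z - z'|) dk = q/(eps r).
def coulomb_via_hankel(q, eps, rho, dz):
    integrand = lambda k: j0(k * rho) * math.exp(-k * abs(dz))
    val, _err = quad(integrand, 0.0, math.inf, limit=200)
    return q * val / eps

q, eps, rho, dz = 1.0, 2.0, 0.7, 1.3   # illustrative values
exact = q / (eps * math.hypot(rho, dz))
print(coulomb_via_hankel(q, eps, rho, dz), exact)  # the two values agree
```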
In Eqs. (\[MaxRed1\]) and (\[MaxRed2\]) we omit the $z$-dependence of the dielectric function $\epsilon$ and of the Hall conductivity $\sigma _{xy}$ for brevity. The differential equations (\[MaxRed1\]) and (\[MaxRed2\]), along with the appropriate boundary conditions at the interface $z = 0$ and at the singular point $z = z ^{\prime}$, constitute a complete boundary value problem. Before dealing with the solutions of the above equations let us comment on the effects on the induced electric and magnetic fields when we interchange the positions of the Weyl nodes in momentum space. In the following we momentarily denote the solutions to Eqs. (\[MaxRed1\]) and (\[MaxRed2\]) by $\phi_\mathbf{b}$ and $\psi_\mathbf{b}$. There are two possible arrangements of the Weyl nodes, according to whether the source (sink) of the Berry curvature is located at $+\mathbf{b} \,\, (-\mathbf{b})$ or the other way around, which amounts to changing $\mathbf{b} \to - \mathbf{b}$ in our equations. This interchange implies $\sigma _{xy} \to - \sigma _{xy}$ in Eqs. (\[MaxRed1\]) and (\[MaxRed2\]), with new solutions we now denote by $\phi_{-\mathbf{b}}$ and $\psi_{-\mathbf{b}}$. However, given the way in which these equations are coupled, we obtain $\phi_{-\mathbf{b}} = \phi_\mathbf{b}$ and $\psi_{-\mathbf{b}} = - \psi_\mathbf{b}$. In other words, the electrostatic potential remains the same, but the vector potential (and thus the magnetic field) flips sign under the interchange of the Weyl nodes in momentum space. To solve equations (\[MaxRed1\]) and (\[MaxRed2\]), we employ standard techniques of electromagnetism [@Schwinger].
Leaving the details of the calculations for Appendix \[DetSol\], we obtain the following expressions for the reduced functions beneath the surface ($z<0$) $$\begin{aligned} \phi _{z<0} = & \; \frac{q}{\epsilon _{1} Q} e ^{\alpha _{1} z - \alpha _{2} z ^{\prime}} \Big\{ \left( \epsilon _{1} \alpha _{1} + \epsilon _{2} \alpha _{2} \right) \cos \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) + \left( \epsilon _{1} \beta _{1} + \epsilon _{2} \beta _{2} \right) \sin \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) + \left( \epsilon _{1} - \epsilon _{2} \right) \notag \\ & \times \cos \left( \beta _{1} z \right) \left[ \alpha _{2} \cos \left( \beta _{2} z ^{\prime} \right) - \beta _{2} \sin \left( \beta _{2} z ^{\prime} \right) \right] \Big\} , \label{RedEscz<0} \\ \psi _{z<0} = & \; \frac{q}{Q} e ^{\alpha _{1} z - \alpha _{2} z ^{\prime}} \Big\{ \left( \epsilon _{1} \beta _{1} + \epsilon _{2} \beta _{2} \right) \cos \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) - \left( \epsilon _{1} \alpha _{1} + \epsilon _{2} \alpha _{2} \right) \sin \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) + \left( \epsilon _{1} - \epsilon _{2} \right) \notag \\ & \times \sin \left( \beta _{1} z \right) \left[ \beta _{2} \sin \left( \beta _{2} z ^{\prime} \right) - \alpha _{2} \cos \left( \beta _{2} z ^{\prime} \right) \right] \Big\} , \label{RedVecz<0}\end{aligned}$$ and, above the surface ($z>0$), we obtain $$\begin{aligned} \phi _{z>0} = & \; \frac{q}{2 \epsilon _{2} r _{2} ^{2}} e ^{- \alpha _{2} \vert z - z ^{\prime} \vert} \Big[ \alpha _{2} \cos \left( \beta _{2} \vert z - z ^{\prime} \vert \right) - \beta _{2} \sin \left( \beta _{2} \vert z - z ^{\prime} \vert \right) \Big] - \frac{q (\epsilon _{1} - \epsilon _{2})}{2 \epsilon _{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \alpha _{1} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \notag \\ & + \beta _{1} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q}{2 \epsilon 
_{2} r _{2} ^{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \Gamma \cos \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] - \Delta \sin \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] \Big\} , \label{RedEscz>0} \\ \psi _{z>0} = & \; \frac{q}{2 r _{2} ^{2}} e ^{- \alpha _{2} \vert z - z ^{\prime} \vert} \Big[ \beta _{2} \cos \left( \beta _{2} \vert z - z ^{\prime} \vert \right) + \alpha _{2} \sin \left( \beta _{2} \vert z - z ^{\prime} \vert \right) \Big] + \frac{q (\epsilon _{1} - \epsilon _{2})}{2 Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \beta _{1} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \notag \\ & - \alpha _{1} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q}{2 r _{2} ^{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \Delta \cos \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] + \Gamma \sin \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] \Big\} , \label{RedVecz>0}\end{aligned}$$ where we have defined $$\begin{aligned} \Gamma & = \alpha _{2} \left( \epsilon _{2} r _{2} ^{2} - \epsilon _{1} r _{1} ^{2} \right) + \beta _{2} \left( \epsilon _{1} + \epsilon _{2} \right) \left( \alpha _{1} \beta _{2} - \beta _{1} \alpha _{2} \right) , \notag \\ \Delta & = \beta _{2} \left( \epsilon _{2} r _{2} ^{2} - \epsilon _{1} r _{1} ^{2} \right) - \alpha _{2} \left( \epsilon _{1} + \epsilon _{2} \right) \left( \alpha _{1} \beta _{2} - \beta _{1} \alpha _{2} \right) , \notag \\ Q &= \epsilon _{1} r _{1} ^{2} + \epsilon _{2} r _{2} ^{2} + \left( \epsilon _{1} + \epsilon _{2} \right) \left( \alpha _{1} \alpha _{2} + \beta _{1} \beta _{2} \right) , \label{Defs}\end{aligned}$$ and $r _{j} ^{2} = k _{jz} k _{j z} ^{\ast}$. 
Here, $k _{j z} = \alpha _{j} ( k _{\perp}) + i \beta _{j} ( k _{\perp})$ is the complex wave number in the medium $j$, with $$\begin{aligned} \alpha _{j} ( k _{\perp}) = \sqrt{\frac{k _{\perp}}{2} \left( \sqrt{k ^{2} _{\perp} + \Sigma _{j} ^{2}} + k _{\perp} \right)} , \notag \\ \beta _{j} ( k _{\perp}) = \sqrt{\frac{k _{\perp}}{2} \left( \sqrt{k ^{2} _{\perp} + \Sigma _{j} ^{2}} - k _{\perp} \right)} , \label{kappa}\end{aligned}$$ and $\Sigma _{j} = \frac{4 \pi}{c} \frac{\sigma _{xy} ^{j}}{\epsilon _{j}}$ is an effective bulk Hall conductivity (with dimensions of inverse length). The imaginary part $\beta _{j}$ of $k _{jz}$ implies that the electromagnetic fields are attenuated in the bulk, as in ordinary metals. The final expressions for the scalar and vector potentials in coordinate representation are obtained by inserting the reduced functions (\[RedEscz&lt;0\])-(\[RedVecz&gt;0\]) into the $2+1$ representations (\[RedEscPot2\])-(\[RedVecPot\]) and computing the $k _{\perp}$-integrals. Below we present two consistency checks of our results. First, we consider the limit in which the two materials are topologically trivial (i.e., with vanishing bulk Hall conductivities, $\Sigma _{1} = \Sigma _{2} = 0$), with however $\epsilon _{1} \neq \epsilon _{2}$. In this case we find that $k _{1z} = k _{2z} = k _{\perp}$, thus yielding $Q = 2 k ^{2} _{\perp} \left( \epsilon _{1} + \epsilon _{2} \right)$, $\Gamma = k ^{3} _{\perp} \left( \epsilon _{2} - \epsilon _{1} \right)$ and $\Delta = 0$. Therefore, the reduced scalar potential can be written as $$\begin{aligned} \phi &= \frac{q}{2 \epsilon _{2} k _{\perp}} \left[ e ^{- k _{\perp} \vert z - z ^{\prime} \vert} - \frac{\epsilon _{1} - \epsilon _{2}}{\epsilon _{1} + \epsilon _{2}} e ^{- k _{\perp} \vert z \vert} e ^{- k _{\perp} \vert z ^{\prime} \vert} \right] ,\end{aligned}$$ which we recognize as that of an electric charge in front of a dielectric interface [@Schwinger]. 
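Returning to the complex wave number, note that the definitions (\[kappa\]) are equivalent to the single complex relation $k _{jz} ^{2} = k _{\perp} ^{2} + i k _{\perp} \Sigma _{j}$, so that $\alpha _{j} ^{2} - \beta _{j} ^{2} = k _{\perp} ^{2}$ and $2 \alpha _{j} \beta _{j} = k _{\perp} \Sigma _{j}$. A minimal numerical check (plain Python; the sample values are arbitrary):

```python
import math

def alpha_beta(k_perp, Sigma):
    # Real and imaginary parts of k_z = alpha + i*beta, Eq. (kappa)
    root = math.hypot(k_perp, Sigma)          # sqrt(k_perp^2 + Sigma^2)
    alpha = math.sqrt(0.5 * k_perp * (root + k_perp))
    beta = math.sqrt(0.5 * k_perp * (root - k_perp))
    return alpha, beta

# Check k_z^2 = k_perp^2 + i k_perp Sigma, i.e.
# alpha^2 - beta^2 = k_perp^2  and  2 alpha beta = k_perp Sigma.
k_perp, Sigma = 0.8, 2.5   # arbitrary illustrative values
a, bt = alpha_beta(k_perp, Sigma)
print(a**2 - bt**2, k_perp**2)
print(2 * a * bt, k_perp * Sigma)
# bt > 0 whenever Sigma != 0: fields decay in the bulk over a length ~ 1/beta.
```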
Besides, this limit yields $\psi _{z>0} = \psi _{z<0} = 0$, as expected, since there is no magnetoelectric effect in the absence of the $\theta$ term. Second, we consider the case in which the electric charge is embedded in an infinite chiral medium, namely $\epsilon_1 = \epsilon_2 \equiv \epsilon$ and $\Sigma_1 = \Sigma_2 \equiv \Sigma$, then $Q = 4 \epsilon k _{\perp} \sqrt{k _{\perp} ^{2} + \Sigma ^{2}}$ and $\Gamma = \Delta = 0$. The resulting scalar and vector potentials due to a charge $q$ located at $z ^{\prime} = 0$ are: $$\begin{aligned} \Phi &= \frac{q}{\epsilon} \int _{0} ^{\infty} \!\!\! k_\perp J _{0} \left( k_\perp \rho \right) \left[ \frac{1}{2 k_z} e ^{- k_z \vert z \vert} + \frac{1}{2 k_z ^{\ast}} e ^{- k_z ^{\ast} \vert z \vert} \right] dk_\perp , \notag \\ \textbf{A} &= q \!\int _{0} ^{\infty}\!\!\! k_\perp J _{1} (k_\perp \rho) \left[ \frac{i}{2 k_z} e ^{- k_z \vert z \vert} - \frac{i}{2 k_z ^{\ast}} e ^{- k_z ^{\ast} \vert z \vert} \right] dk_\perp \, \hat{\textbf{e}} _{\varphi} ,\label{VectorApp}\end{aligned}$$ where $k_z \equiv \alpha + i \beta$ and $\alpha$, $\beta$ are given by Eq. (\[kappa\]) with $\Sigma_j = \Sigma$. As expected, these expressions coincide with the ones obtained from the Green’s functions in Ref. [@ChiralMatter]. EM fields induced by a charge near a WSM {#PartSol} ---------------------------------------- The case of an electric charge located in a dielectric fluid, above the surface of a topological Weyl semimetal, as shown in Fig. \[figure\], is described by the reduced functions (\[RedEscz&lt;0\])-(\[RedVecz&gt;0\]) in the limit $\Sigma _{2} = 0$. First we discuss the resulting electric field. Taking $\beta _{2} = 0$ in Eqs. 
(\[RedEscz&lt;0\]) and (\[RedEscz&gt;0\]) and inserting the result into the $2+1$ representation (\[RedEscPot2\]) we find that, in coordinate representation, the electrostatic potential beneath the surface becomes $$\begin{aligned} \Phi _{z<0} = & \; 2q \int _{0} ^{\infty} \frac{\left( \alpha _{1} + k _{\perp} \right) \cos \left( \beta _{1} z \right) + \beta _{1} \sin \left( \beta _{1} z \right)}{\epsilon _{1} \left( \alpha _{1} ^{2} + \beta _{1} ^{2} \right) + \epsilon _{2} k ^{2} _{\perp} + k _{\perp} \alpha _{1} \left( \epsilon _{1} + \epsilon _{2} \right)} \notag \\ & \times k _{\perp} J _{0} (k _{\perp} \rho) e ^{\alpha _{1} z - k _{\perp} z ^{\prime}} dk _{\perp} , \label{EscPotWSM}\end{aligned}$$ and, above the surface, we find $$\begin{aligned} \Phi _{z>0} &= \frac{q}{\epsilon _{2}} \frac{1}{\sqrt{\rho ^{2} + (z - z ^{\prime}) ^{2}}} + \frac{q}{\epsilon _{2}} \frac{\epsilon _{2} - \epsilon _{1}}{\epsilon _{2} + \epsilon _{1}} \frac{1}{\sqrt{\rho ^{2} + (z + z ^{\prime}) ^{2}}} \notag \\ & \phantom{=} - \frac{2 q \epsilon _{1}}{\epsilon _{1} + \epsilon _{2}} \int _{0} ^{\infty} \!\!\!\! \frac{\alpha _{1} ^{2} + \beta _{1} ^{2} - k ^{2} _{\perp}}{\epsilon _{1} \! \left( \alpha _{1} ^{2} + \beta _{1} ^{2} \right) + \epsilon _{2} k ^{2} _{\perp} + k _{\perp} \alpha _{1} \! \left( \epsilon _{1} \! + \! \epsilon _{2} \right)} \notag \\ & \phantom{=} \times J _{0} (k _{\perp} \rho) e ^{- k _{\perp} ( z + z ^{\prime} )} dk _{\perp} . \label{EscPotDielectric}\end{aligned}$$ We observe that in the dielectric fluid ($z>0$), the electric potential can be interpreted as due to the original electric charge of strength $q$ at $z ^{\prime}$, an image electric charge of strength $q (\epsilon _{2} - \epsilon _{1}) / (\epsilon _{2} + \epsilon _{1})$ at $-z ^{\prime}$, and an additional term arising from the nontrivial topology of the WSM. Inside the Weyl semimetal ($z<0$), the electric potential has no simple interpretation. In Fig. 
\[EMPlots\]a we plot the electrostatic potential $\Phi$ (in units of $q \Sigma _{1}$) as a function of the dimensionless distance $\Sigma _{1} z$, for $\rho = 0$ and $z ^{\prime} = 1 / \Sigma _{1}$. Consider the reference value $\epsilon _{1} \sim 6 $, appropriate for the dielectric constant of the Weyl semimetal TaAs [@TaAs-Huang; @TaAs-Lv; @TaAs-Xu; @TaAs-Yang; @TaAs-Xu2]. In Fig. \[EMPlots\]a the continuous line represents the case when the semispace above the WSM is also filled with a dielectric medium of $\epsilon _{2} = 6$, and the dashed line represents the case when the space above the WSM is vacuum, namely $\epsilon _{2} = 1$. As expected, we observe that the electrostatic potential is attenuated inside the WSM due to the metallic character of the material. ![image](HallCurrent2.png) From Eq. (\[EscPotWSM\]), one can further see that, in the limit $\epsilon _{1} \rightarrow \infty$, $\Phi _{z<0} = 0$, as in a perfect conductor. Even in this simplified model of a WSM, this decay of the electrostatic potential inside the material reflects an additional contribution to the screening length of the material [@ZangWill], which is distinct from the penetration depth defined for electromagnetic waves. A proper estimation of these parameters demands a more realistic model of the WSM and deserves further investigation. The electric field is obtained from the electrostatic potential (\[EscPotWSM\])-(\[EscPotDielectric\]) as $\textbf{E} = - \nabla \Phi$. In Fig. \[EMPlots\]b we illustrate the electric field $\textbf{E}$ (in units of $q \Sigma _{1} ^{2}$) generated by an electric charge in vacuum ($\epsilon _{2} = 1$) at $z ^{\prime} = 1 / \Sigma _{1}$ (red sphere) close to the WSM TaAs as a function of the dimensionless coordinates $\Sigma _{1} \rho$ and $\Sigma _{1} z$.
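Profiles such as the one in Fig. \[EMPlots\]a follow from direct quadrature of Eq. (\[EscPotDielectric\]). The sketch below (SciPy assumed; the function and variable names are ours, not the paper's) also confirms the trivial limit, in which the image and topological terms drop out and the bare Coulomb potential is recovered:

```python
import math
from scipy.integrate import quad
from scipy.special import j0

def alpha_beta(k, Sigma):
    # Eq. (kappa) for medium 1
    root = math.hypot(k, Sigma)
    return (math.sqrt(0.5 * k * (root + k)),
            math.sqrt(0.5 * k * (root - k)))

def phi_above(rho, z, zp, q=1.0, eps1=6.0, eps2=1.0, Sigma1=1.0):
    """Eq. (EscPotDielectric): direct charge + image charge + topological term."""
    direct = q / (eps2 * math.hypot(rho, z - zp))
    image = (q / eps2) * (eps2 - eps1) / (eps2 + eps1) / math.hypot(rho, z + zp)

    def integrand(k):
        if k <= 0.0:          # guard the removable endpoint singularity
            return 0.0
        a1, b1 = alpha_beta(k, Sigma1)
        denom = eps1 * (a1**2 + b1**2) + eps2 * k**2 + k * a1 * (eps1 + eps2)
        return (a1**2 + b1**2 - k**2) / denom * j0(k * rho) * math.exp(-k * (z + zp))

    topo, _err = quad(integrand, 0.0, math.inf, limit=200)
    return direct + image - 2 * q * eps1 / (eps1 + eps2) * topo

# Screening lowers the potential relative to the bare charge (eps1 > eps2) ...
print(phi_above(0.5, 1.0, 1.0))
# ... and the limit Sigma1 -> 0 with eps1 = eps2 recovers pure Coulomb:
print(phi_above(0.5, 1.0, 1.0, eps1=1.0, eps2=1.0, Sigma1=1e-9))  # ~ 2.0
```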
We observe that the electric field outside the WSM is similar to that generated by the original electric charge, with deviations close to the interface due to the screening of the field inside the material. In fact, this electric field is practically indistinguishable from that produced by an electric charge close to an ordinary metal or a dielectric. Nevertheless, the electric field beneath the surface is more complicated than in the nontopological cases. For example, the electric field within a uniform and isotropic dielectric is a radially directed field (with the charge outside the material as its source); while the field inside an ordinary metal is zero. In the present case, as shown in Fig. \[EMPlots\]b, the electric field is remarkably different as evidenced by the curved field lines inside. Now we discuss the induced magnetic field. The vector potential is given by $\textbf{A} = \Psi \hat{\textbf{e}} _{\varphi}$, with the function $\Psi$ defined by Eq. (\[RedVecPot\]) and the reduced function $\psi$ given by Eqs. (\[RedVecz&lt;0\]) and (\[RedVecz&gt;0\]) in the limit $\beta _{2} = 0$. 
In coordinate representation, the function $\Psi$ beneath the surface is $$\begin{aligned} \Psi _{z<0} = & \, 2 q \epsilon _{1} \int _{0} ^{\infty} \frac{ \beta _{1} \cos \left( \beta _{1} z \right) - \left( \alpha _{1} + k _{\perp} \right) \sin \left( \beta _{1} z \right) }{\epsilon _{1} \left( \alpha _{1} ^{2} + \beta _{1} ^{2} \right) + \epsilon _{2} k ^{2} _{\perp} + k _{\perp} \alpha _{1} \left( \epsilon _{1} + \epsilon _{2} \right)} \notag \\ & \times k _{\perp} J _{1} (k _{\perp} \rho) e ^{\alpha _{1} z - k _{\perp} z ^{\prime}} dk _{\perp} ,\end{aligned}$$ and, above the surface, we obtain $$\begin{aligned} \Psi _{z>0} = & \, 2 q \epsilon _{1} \int _{0} ^{\infty} \frac{\beta _{1}}{\epsilon _{1} \left( \alpha _{1} ^{2} + \beta _{1} ^{2} \right) + \epsilon _{2} k ^{2} _{\perp} + k _{\perp} \alpha _{1} \left( \epsilon _{1} + \epsilon _{2} \right)} \notag \\ & \times k _{\perp} J _{1} (k _{\perp} \rho) e ^{- k _{\perp} ( z + z ^{\prime} )} dk _{\perp} . \label{Psi>}\end{aligned}$$ The magnetic field is obtained from these expressions as $\textbf{B} = \nabla \times \textbf{A} = - \left( \partial _{z} \Psi \right) \hat{\textbf{e}} _{\rho} + \rho ^{-1} \partial _{\rho} \left( \rho \Psi \right) \hat{\textbf{e}} _{z}$. In Fig. \[EMPlots\]c we show the magnetic field $\textbf{B}$ (in units of $q \Sigma _{1} ^{2}$) induced by an electric charge in vacuum at $z ^{\prime} = 1 / \Sigma _{1}$ close to the WSM TaAs as a function of the dimensionless coordinates $\Sigma _{1} \rho$ and $\Sigma _{1} z$. Clearly, the field lines do not have a simple form. The magnetic field generated by an electric charge close to a TI should serve as the benchmark for understanding the subtlety of our result. In that case, the monopole magnetic field beneath (above) the surface is radially directed with the magnetic monopole above (beneath) the surface as its origin [@Qi-Monopole; @Karch; @MCU-GreenTI]. In the present case, however, the behavior of the field lines is radically different. 
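These field configurations can be explored by evaluating Eq. (\[Psi&gt;\]) numerically. The sketch below (SciPy assumed; the function names are ours) also confirms that the induced magnetoelectric response switches off smoothly with the topological term, $\Psi \to 0$ as $\Sigma _{1} \to 0$:

```python
import math
from scipy.integrate import quad
from scipy.special import j1

def alpha_beta(k, Sigma):
    # Eq. (kappa) for medium 1
    root = math.hypot(k, Sigma)
    return (math.sqrt(0.5 * k * (root + k)),
            math.sqrt(0.5 * k * (root - k)))

def psi_above(rho, z, zp, q=1.0, eps1=6.0, eps2=1.0, Sigma1=1.0):
    """Azimuthal vector-potential profile Psi(rho, z) for z > 0, Eq. (Psi>)."""
    def integrand(k):
        if k <= 0.0:          # guard the removable endpoint singularity
            return 0.0
        a1, b1 = alpha_beta(k, Sigma1)
        denom = eps1 * (a1**2 + b1**2) + eps2 * k**2 + k * a1 * (eps1 + eps2)
        return b1 / denom * k * j1(k * rho) * math.exp(-k * (z + zp))
    val, _err = quad(integrand, 0.0, math.inf, limit=200)
    return 2 * q * eps1 * val

# The induced magnetic field switches off with the topological response:
print(psi_above(0.5, 1.0, 1.0, Sigma1=1.0))    # finite
print(psi_above(0.5, 1.0, 1.0, Sigma1=1e-9))   # negligible
```

The field itself then follows by numerical differentiation, $\textbf{B} = - (\partial _{z} \Psi) \hat{\textbf{e}} _{\rho} + \rho ^{-1} \partial _{\rho} (\rho \Psi) \hat{\textbf{e}} _{z}$, e.g. with central differences on a grid of $(\rho, z)$ points.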
Above the surface, the magnetic field lines begin at the surface and end at the surface (but not at the same point). The situation beneath the surface also differs from that of the topological insulator. In Sec. \[Conclusions\] we discuss two experimental setups which could be used to test this nontrivial magnetic field. To understand the physical origin of the induced magnetic field above the surface we rewrite the Maxwell equation (\[Ampere\]) as $\nabla \times \textbf{B} _{z>0} = \frac{4 \pi}{c} \textbf{J} _{\mbox{\scriptsize bHall}}$, where the bulk Hall current, given by $\textbf{J} _{\mbox{\scriptsize bHall}} = \frac{\alpha c}{2 \pi ^{2}} \textbf{b} \times \textbf{E} _{z<0}$, is induced by the in-plane component of the electric field produced by the charge. Having taken $\textbf{b} = b \hat{\textbf{e}} _{z}$, the current circulates around the symmetry axis, i.e. $\textbf{J} _{\mbox{\scriptsize bHall}} = \sigma _{xy} \left( \hat{\textbf{e}} _{\rho} \cdot \textbf{E} _{z<0} \right) \, \hat{\textbf{e}} _{\varphi}$. In Fig. \[HallCurrentPlots\] we show a stream density plot of the bulk Hall current $\textbf{J} _{\mbox{\scriptsize bHall}}$ (in units of $q \sigma _{xy} \Sigma _{1} ^{2}$) for $z ^{\prime} = 1 / \Sigma _{1}$ and different values of $\Sigma _{1} z$. We observe that each cross section of the bulk Hall current resembles the surface Hall current induced by an electric charge near a topological insulator. Naively, this suggests that a 3D Weyl semimetallic phase can be understood as an infinite number of 2+1 Dirac subsystems (one for each value of $z$ in the bulk), each supporting a surface Hall current. According to Fig. \[EMPlots\]c, we do not expect an induced magnetic monopole in the bulk, in contrast with the case of an electric charge in front of a TI. A close inspection of Fig.
\[EMPlots\]c reveals that below the surface of the WSM, centered at the position of the image charge, the $\mathbf{B}$-field lines wind in an axisymmetric way as if about a loop of current, similar to those of a “physical” magnetic dipole of finite radius. This suggests that we consider a multipole expansion of the magnetic field and determine the dominant contribution. Still, we recall that the source of the magnetic field is not localized: it is the bulk Hall current $\textbf{J} _{\mbox{\scriptsize bHall}}$, which is proportional to the electric field $\mathbf{E}_{z<0}$ produced by the charge. Therefore, the standard multipole expansion for localized sources does not necessarily apply. To determine the dominant contribution we examine the large-distance behavior of the magnetic potential $\mathbf{A}=\Psi (\rho ,z)\hat{\mathbf{e}}_{\varphi }$ in the region $z>0$. It is convenient to rewrite Eq. (\[Psi&gt;\]) in the form $\Psi _{z>0} = \,2q\epsilon _{1}\int_{0}^{\infty }F(k_\perp;\epsilon_1, \epsilon_2)J_{1}(k_\perp \rho )e^{-k_\perp(z+z^{\prime })}dk_\perp $, where $$\begin{aligned} F (k _{\perp} ; \epsilon _{1} , \epsilon _{2}) = \frac{\beta_1}{\epsilon _{1} \sqrt{k _{\perp} ^{2} + \Sigma _{1} ^{2}} + \epsilon _{2} k _{\perp} + (\epsilon _{1} + \epsilon _{2}) \alpha_1} .\end{aligned}$$ Due to the exponential factor $e^{-k_\perp(z+z')}$, together with the rapidly oscillating nature of $J_{1}(k_\perp \rho )$ in the far zone, the integral (\[Psi&gt;\]) is dominated by the behavior of the integrand for small values of $k_\perp$. A series expansion of $F(k_\perp;\epsilon_1, \epsilon_2)$ in powers of $k_\perp/\Sigma_1$ results in an expansion in powers of $(k_\perp/\Sigma_1)^{1/2}$.
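Both Hankel-type integrals generated by this expansion, $\int _{0} ^{\infty} k _{\perp} ^{1/2} J _{1} (k _{\perp} \rho) e ^{- k _{\perp} (z + z ^{\prime})} dk _{\perp}$ and $\int _{0} ^{\infty} k _{\perp} J _{1} (k _{\perp} \rho) e ^{- k _{\perp} (z + z ^{\prime})} dk _{\perp}$, have closed forms [@Gradshteyn]. They can be spot-checked numerically at a sample point (SciPy assumed; our own helper implements $P ^{-1} _{1/2}$ through its hypergeometric representation):

```python
import math
from scipy.integrate import quad
from scipy.special import j1, hyp2f1

def P_half_m1(x):
    # P^{-1}_{1/2}(x) via its hypergeometric representation (Gamma(2) = 1)
    return math.sqrt((1 - x) / (1 + x)) * hyp2f1(-0.5, 1.5, 2.0, (1 - x) / 2)

rho, Z = 1.0, 1.0                 # Z stands for z + z'; values illustrative
r = math.hypot(rho, Z)
cos_t, sin_t = Z / r, rho / r

# Leading term: Int_0^inf k^{1/2} J1(k rho) e^{-k Z} dk
lhs1, _ = quad(lambda k: math.sqrt(k) * j1(k * rho) * math.exp(-k * Z),
               0.0, math.inf, limit=200)
rhs1 = math.gamma(2.5) * r**-1.5 * P_half_m1(cos_t)

# Subleading (dipole) term: Int_0^inf k J1(k rho) e^{-k Z} dk = sin(theta)/r^2
lhs2, _ = quad(lambda k: k * j1(k * rho) * math.exp(-k * Z),
               0.0, math.inf, limit=200)
rhs2 = sin_t / r**2

print(lhs1, rhs1)   # agree
print(lhs2, rhs2)   # agree
```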
The leading terms are: $$\begin{aligned} \Psi _{z>0} &\approx 2 q \epsilon _{1} \int _{0} ^{\infty} \left[ \frac{1}{\sqrt{2} \, \epsilon _{1}} \left(\frac{k_\perp}{\Sigma_1}\right)^{1/2} - \frac{\left( \epsilon _{1} + \epsilon _{2} \right) }{2 \epsilon _{1} ^{2}} \left(\frac{ k _{\perp}}{\Sigma _{1}} \right) \right. \notag \\ & \hspace{0.9cm} \left. + O \left(\frac{k_\perp}{\Sigma_1} \right)^{3/2} \right] J_{1}(k_\perp\rho )e^{-k_\perp(z+z^{\prime})}dk_\perp. \label{EQ3}\end{aligned}$$ Just from dimensional arguments, it is clear that the term proportional to $k_\perp^\sigma$ (with $\sigma >0 $) under the integral yields a contribution of the order $(1/L)^{(\sigma +1)}$, with $L$ being a characteristic length in the integrand. In fact, the required integrals are given in closed form [@Gradshteyn], yielding $$\mathbf{A}^{(1)} \approx \frac{\sqrt{2} \, \Gamma(5/2) \,q}{\sqrt{\Sigma_1}}\frac{1}{r^{3/2}}P^{-1}_{1/2}(\cos \theta) \hat{\mathbf{e}}_\varphi,$$ for the dominant contribution in the far zone, that arises from the term proportional to $k_\perp^{1/2}$ in Eq. (\[EQ3\]). Here $\theta$ is the angle from the $oz$ axis to the observation point, with $\mathbf{r} = \rho \hat{\mathbf{e}}_\rho + (z + z ^{\prime}) \hat{\mathbf{e}}_z$, $r = \sqrt{\rho^2 + (z + z ^{\prime}) ^2}$ and $r \cos \theta = \mathbf{r} \cdot \hat{\mathbf{e}}_z$. The associated Legendre function $P^{-1}_{1/2}(x)$ is $$P^{-1}_{1/2}(x)=\frac{1}{\Gamma(2)}\left(\frac{1-x}{1+x}\right)^{1/2} \!\!\! {}_{2}F _{1} \left(-1/2,\, 3/2 \, ; \, 2 \, ; \,\frac{1-x}{2}\right),$$ where ${}_{2}F _{1} (a, b ; c ; z)$ is the hypergeometric function. The next term in Eq. 
(\[EQ3\]) produces $$\mathbf{A}^{(2)} \approx -q\frac{\left( \epsilon _{1}+\epsilon _{2}\right) }{\epsilon _{1} \Sigma _{1}}\frac{\sin \theta }{r^{2}} \hat{\mathbf{e}}_{\varphi}.$$ Comparing with the magnetic potential $\mathbf{A}= \left(m \sin \theta/r^{2}\right)\hat{\mathbf{e}}_{\varphi }$ produced by a magnetic dipole $m \hat{\mathbf{e}} _z$ at the origin, we identify this contribution as that of a magnetic dipole with $$m =-q \frac{\left( \epsilon _{1}+\epsilon _{2}\right) }{\epsilon _{1}\Sigma _{1}},$$ located at the image point $-z ^{\prime}$. Thus we confirm the qualitative expectation that a magnetic dipole is induced, appearing as the subleading term of the vector potential. Interaction energy and force {#Force} ============================ To compute the force between the electric charge and the Weyl semimetal we need the interaction energy between a charge distribution and a WSM as given by [@Schwinger] $$\begin{aligned} E _{\mbox{\scriptsize int}} = \frac{1}{2} \int \left[ \Phi (\textbf{r}) - \Phi _{0} (\textbf{r}) \right] \rho (\textbf{r}) d ^{3} \textbf{r} ,\end{aligned}$$ where $\Phi _{0} (\textbf{r}) = \lim _{\Sigma _{1} \rightarrow 0} \Phi (\textbf{r})$ is the electrostatic potential in the absence of the $\theta$ term. The term involving $\Phi (\textbf{r})$ represents the total energy of the charge distribution in the presence of the WSM, including mutual interactions. We evaluate this energy for the problem of an electric charge above a WSM. Making use of Eq. 
(\[EscPotDielectric\]), the interaction energy becomes $$\begin{aligned} E _{\mbox{\scriptsize int}} (z ^{\prime}) = & - \frac{q ^{2}}{4 \epsilon _{2} z ^{\prime}} \frac{\epsilon _{1} - \epsilon _{2}}{\epsilon _{1} + \epsilon _{2}} - \frac{q ^{2} \epsilon _{1}}{\epsilon _{1} + \epsilon _{2}} \int _{0} ^{\infty} e ^{-2 k _{\perp} z ^{\prime}} \notag \\ & \times \frac{\alpha _{1} ^{2} + \beta _{1} ^{2} - k ^{2} _{\perp}}{\epsilon _{1} \left( \alpha _{1} ^{2} + \beta _{1} ^{2} \right) + \epsilon _{2} k ^{2} _{\perp} + k _{\perp} \alpha _{1} \left( \epsilon _{1} + \epsilon _{2} \right)} dk _{\perp} , \label{IntEnergy}\end{aligned}$$ which we interpret as follows. The first term corresponds to the interaction energy between the original charge at $z ^{\prime}$ and the image charge at $- z ^{\prime}$ [@Schwinger]. The second term does not admit an immediate interpretation in a similar fashion; however, it is certainly a consequence of the nontrivial bulk topology of the material, since it vanishes as the bulk Hall conductivity goes to zero. We observe that as the charge approaches the interface ($z ^{\prime} \rightarrow 0$), the nontopological contribution will dominate the interaction energy (\[IntEnergy\]) provided $\epsilon _{1} \neq \epsilon _{2}$, and therefore $E _{\mbox{\scriptsize int}} \rightarrow - \infty$, as usual. However, this trivial contribution vanishes for $\epsilon _{1} = \epsilon _{2}$, which is achieved by embedding the charge in a dielectric fluid with the same permittivity as that of the Weyl semimetal. This idea was recently employed in Refs. [@MU; @MC] to cancel out the trivial electrostatic effects when studying the interaction between a hydrogen-like ion and a planar topological insulator. To isolate the topological effects we focus on this case. A distinguishing feature of this interaction energy is that it does not diverge as the charge approaches the interface. 
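The finiteness of the interaction energy at the surface can be checked numerically from Eq. (\[IntEnergy\]). The sketch below (Python with NumPy/SciPy; an illustration, not part of the derivation) sets $\epsilon _{1} = \epsilon _{2} = \epsilon$ and works in units $\Sigma _{1} = 1$. The dispersion $k _{1z} ^{2} = k _{\perp} ^{2} + i k _{\perp} \Sigma _{1}$ is an assumed form, chosen so that $\alpha _{1} ^{2} + \beta _{1} ^{2} = k _{\perp} \sqrt{k _{\perp} ^{2} + \Sigma _{1} ^{2}}$ as in Eq. (\[IntEnergy\]); under the further assumption $\Sigma _{1} = 2 \alpha b / (\pi \epsilon _{1})$, the closed-form surface value $E _{\mbox{\scriptsize surf}} = - \alpha q ^{2} b / (8 \epsilon ^{2})$ quoted below translates into $E _{\mbox{\scriptsize int}} (0) = - (\pi / 16) \, q ^{2} \Sigma _{1} / \epsilon$.

```python
import numpy as np
from scipy.integrate import quad

def alpha_beta(k):
    # real and imaginary parts of the complex wave number k_1z;
    # the dispersion k_1z**2 = k**2 + 1j*k (units Sigma_1 = 1) is an
    # assumed form consistent with alpha**2 + beta**2 = k*sqrt(k**2 + 1)
    kz = np.sqrt(k**2 + 1j * k)
    return kz.real, kz.imag

def E_int(zp, q=1.0, eps=1.0):
    # Eq. (IntEnergy) for eps1 = eps2 = eps (the image-charge term vanishes);
    # zp is the dimensionless height Sigma_1 * z'
    def integrand(k):
        a, b = alpha_beta(k)
        return np.exp(-2 * k * zp) * (a**2 + b**2 - k**2) / ((a + k)**2 + b**2)
    return -q**2 / (2 * eps) * quad(integrand, 0, np.inf)[0]

print(E_int(0.0))              # finite at the surface, close to -pi/16
print(E_int(1.0), E_int(5.0))  # decays monotonically toward zero
```

The first value reproduces the finite surface energy; the exponential suppression of the integrand for $z ^{\prime} > 0$ produces the monotonic decay toward zero seen in Fig. \[InteractionEnergy\].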
Indeed, we can compute the surface interaction energy analytically, with the result (setting $\epsilon _{1} = \epsilon _{2} \equiv \epsilon$) $$\begin{aligned} E _{\mbox{\scriptsize surf}} \equiv E _{\mbox{\scriptsize int}} (z ^{\prime} = 0) &= - \frac{\alpha q ^{2} b}{8 \epsilon ^{2}}.\end{aligned}$$ This finite value of the interaction energy at the interface is a signature that the electric field cannot be interpreted in terms of a symmetrically located image charge, as in metals, dielectrics and topological insulators. In Fig. \[InteractionEnergy\] we show a plot of the ratio between the interaction energy $E _{\mbox{\scriptsize int}}$ and the surface energy $E _{\mbox{\scriptsize surf}}$ as a function of the dimensionless distance $\Sigma _{1} z ^{\prime}$. We observe that the maximum value is precisely at the surface, and it decreases asymptotically to zero as the charge moves away from the surface. The force that the Weyl semimetal exerts upon the charge can be computed as $F _{z} (z ^{\prime}) = - \partial _{z ^{\prime}} E _{\mbox{\scriptsize int}} (z ^{\prime})$. To gain insight into the magnitude of this force, in the inset of Fig. \[InteractionEnergy\] we plot the force $F _{z}$ (in units of $F _{0} = - \frac{q ^{2}}{ \epsilon (2 z ^{\prime}) ^{2}}$, which is the force that a perfect metallic surface exerts upon the charge) as a function of the dimensionless coordinate $\Sigma _{1} z ^{\prime}$. As we can see, the force between the Weyl semimetal and the charge tends asymptotically to the force between the charge and a perfect metallic surface. ![Interaction energy $E _{\mbox{\scriptsize int}}$ (in units of the surface energy $E _{\mbox{\scriptsize surf}}$) as a function of the dimensionless distance $\Sigma _{1} z ^{\prime}$. 
The inset shows the force (in units of $F _{0}$) as a function of $\Sigma _{1} z ^{\prime}$.[]{data-label="InteractionEnergy"}](EnergyForce2.pdf) Summary and Discussion {#Conclusions} ====================== In summary, we have computed the electromagnetic fields produced by an electric charge near a topological Weyl semimetal in the equilibrium state, at the neutrality point, and with two nodes in the bulk Brillouin zone, when the charge is located in front of the face without surface states. We found that, outside the WSM, the electric field behaves as that generated by the original electric charge, with deviations close to the interface due to the screening of the field inside the material (see Fig. \[EMPlots\]b). This behavior is dominated by the dielectric properties of the semimetal, in such a way that the topological contribution is always hidden. The magnetic field is, on the contrary, of topological origin due to the magnetoelectric effect of topological phases. In particular, we showed that the magnetic field exhibits a characteristic behavior above the WSM: the field lines begin at the surface and then end at the surface (but not at the same point), as depicted in Fig. \[EMPlots\]c. In fact, we showed that this peculiar magnetic field for $z \gg z'$ includes a nonleading contribution corresponding to a magnetic dipole moment induced beneath the WSM’s surface. This magnetic field is different from the radially directed one produced by an electric charge near the surface of a TI, interpreted in terms of an image magnetic monopole located beneath the surface. As in the case of the charge in front of the TI, the interpretation in terms of a dipole magnetic moment is only an artifact. The physical origin of this field is the circulating Hall currents induced in the bulk of the WSM, obtained at the end of Sec. \[PartSol\]. Again, the comparison with the situation of a charge in front of a TI is useful. 
As we see from the stream density plot of the bulk Hall current in Fig. \[HallCurrentPlots\], for each $z<0$, the current resembles the surface Hall current induced by a charge near a TI, suggesting that a 3D WSM can be interpreted as an infinite number of 2+1 Dirac subsystems supporting a surface Hall current. The distinctive behavior of the magnetic fields obtained here is an experimentally observable signature of the anomalous Hall effect in the bulk, and thus its detection is in order. We must recall that our model is based on a simplified description of Weyl semimetals. Nevertheless, the physical realization of materials with generic WSM phases amenable to experimental measurement is rather subtle. For example, Weyl semimetals may have more than a single pair [@NN1; @NN2; @NN3] of Weyl nodes and possibly not all aligned with each other. In this case a different approach must be employed to solve the field equations and our results cannot be directly applied since axial symmetry no longer holds. We point out that our model and results still apply to systems where the Weyl nodes appear once time-reversal symmetry is broken by an external magnetic field. For instance, in the Dirac materials Cd$_{3}$As$_{2}$ [@DiracCdAs1; @DiracCdAs2; @DiracCdAs3] and Na$_{3}$Bi [@DiracNaBi1; @DiracNaBi2; @DiracNaBi3], each Dirac point is expected to split into two Weyl nodes with a separation in momentum proportional (in magnitude and direction) to the magnetic field. An interesting theoretical proposal of the WSM phase with two nodes is the multilayer structure comprised of topologically trivial and nontrivial insulators proposed in Refs. [@Burkov-Balents; @Burkov; @Burkov2], which, to the best of our knowledge, has not yet been realized experimentally. Now we discuss two specific fingerprints of the induced magnetic field above our particular WSM which could, in principle, be measured. 
*Angle-resolved measurement.* The force that the Weyl semimetal exerts upon the charge is $F _{z} = - \partial _{z ^{\prime}} E _{\mbox{\scriptsize int}} (z ^{\prime})$, where the interaction energy is given by Eq. (\[IntEnergy\]). This force corresponds to $\textbf{F} _{e} = q \textbf{E} _{z>0} ( \textbf{r} ^{\prime})$, where $\textbf{E} _{z>0}$ is the electric field above the WSM evaluated at the position $\textbf{r} ^{\prime} = z ^{\prime} \hat{\textbf{e}} _{z}$ of the original charge, and it attracts the charge toward the surface in the direction perpendicular to it. However, interesting phenomena appear when we examine a moving external charge. For example, consider a steady electron beam drifting at a distance $z ^{\prime}$ above the surface of the WSM. If the motion of the electrons is slow enough with respect to the Fermi velocity in the solid, the induced polarization and magnetization of the material rearrange essentially instantaneously, so that the electromagnetic fields we have computed remain valid. In this case, where the charge $q$ is moving with a uniform velocity $\textbf{v}$ above the surface of the WSM, the force acting upon the charge will acquire an additional term of the form $\textbf{F} _{m} = q {\textbf{v} \over c} \times \textbf{B} _{z>0} (\textbf{r} ^{\prime})$ due to the induced magnetic field. For an electron beam moving along the $x$-direction (with velocity $\textbf{v} = v _{x} \hat{\textbf{e}} _{x}$) we find $$\begin{aligned} \textbf{F} _{m} = - \hat{\textbf{e}} _{y} \int _{0} ^{\infty} \!\!\!\! \frac{2 q ^{2} \epsilon _{1} (v _{x}/c) \; k ^{2} _{\perp} \beta _{1} e ^{- 2 k _{\perp} z ^{\prime}} dk _{\perp} }{\epsilon _{1} \left( \alpha _{1} ^{2} + \beta _{1} ^{2} \right) + \epsilon _{2} k ^{2} _{\perp} + k _{\perp} \alpha _{1} \left( \epsilon _{1} + \epsilon _{2} \right)}. 
\label{Force2}\end{aligned}$$ Remarkably, this anomalous force is orthogonal to the electrons’ motion as well as to the electric contribution $\textbf{F} _{e}$. As a result, these effects can be distinguished from each other. Experimentally, the required probe can be provided by the steady electron beam emitted from a low-energy electron gun (low-energy electron diffraction). While drifting above the WSM, the anomalous force (\[Force2\]) will deflect the trajectory of the electron beam. To estimate the size of this deflection we consider the proposal in Ref. [@ZangNagaosa] of a similar experimental setup involving TIs. In that case the authors take $v _{x} \sim 10 ^{7}$ cm/s, $z ^{\prime} \sim 1 \, \mu$m and $L \sim 1$ cm for the sample’s size (which in their case coincides with the $ox$-displacement). We assume these parameters are also feasible when the sample is a WSM. An estimate of the transverse displacement $\Delta$ produced by the anomalous force $F_{m}$ is $$\begin{aligned} \Delta & \approx \frac{4 \alpha ^{2}}{\pi ^{2} \epsilon _{1}} \, \frac{q ^{2} L ^{2} b ^{2}}{m _{e} v _{x}c } \, f(\epsilon _{1} , \epsilon _{2}; \Sigma z ^{\prime}), \label{AnomalousDrift}\end{aligned}$$ where $f(\epsilon _{1} , \epsilon _{2}; \Sigma z ^{\prime})$ is obtained from Eq. (\[Force2\]). Taking $\epsilon_1 \sim 6$ and $b \sim 10^9\, \mathrm{m}^{-1}$ as for the genuine Weyl semimetal TaAs [@TaAs-Huang; @TaAs-Lv; @TaAs-Xu; @TaAs-Yang; @TaAs-Xu2], we find $\Delta \approx 3.2 \, \mu$m. This deflection is of the same order of magnitude as that reported in Ref. [@ZangNagaosa], and thus it can be traced by angle-resolved measurements. If this experiment were carried out with a Dirac semimetal by applying an external magnetic field, instead of a genuine WSM, the induced magnetic field would be overwhelmed by the external one, and so would its contribution to the Lorentz force on a moving charge. 
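A quick numerical sketch (Python with NumPy/SciPy; illustrative, not from the text) shows how the magnitude of the anomalous force (\[Force2\]) decays with the beam height. The values $\epsilon _{1} = 6$ (as quoted for TaAs), $\epsilon _{2} = 1$ (vacuum above the surface) and the dispersion $k _{1z} ^{2} = k _{\perp} ^{2} + i k _{\perp} \Sigma _{1}$ are assumptions:

```python
import numpy as np
from scipy.integrate import quad

def alpha_beta(k):
    # real and imaginary parts of the complex wave number; the dispersion
    # k_1z**2 = k**2 + 1j*k (units Sigma_1 = 1) is an assumed form consistent
    # with alpha_1**2 + beta_1**2 = k*sqrt(k**2 + 1)
    kz = np.sqrt(k**2 + 1j * k)
    return kz.real, kz.imag

def F_m(zp, eps1=6.0, eps2=1.0):
    # magnitude of the anomalous force, Eq. (Force2), in units of
    # 2 q^2 Sigma_1^2 (v_x/c); zp is the dimensionless height Sigma_1 z'
    def integrand(k):
        a, b = alpha_beta(k)
        return eps1 * k**2 * b * np.exp(-2 * k * zp) / (
            eps1 * (a**2 + b**2) + eps2 * k**2 + k * a * (eps1 + eps2))
    return quad(integrand, 0, np.inf)[0]

# the force points along -e_y for v_x > 0; its magnitude decays with height
print([F_m(z) for z in (0.5, 1.0, 2.0)])
```

The monotonic decay simply reflects the $e ^{- 2 k _{\perp} z ^{\prime}}$ factor in Eq. (\[Force2\]).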
In fact, we can estimate by how much the Lorentz force of the external field exceeds that of the anomalous force. Taking $\textbf{B} _{\scriptsize \mbox{ext}} = B _{0} \hat{\textbf{e}} _{z}$, the Lorentz force $\textbf{F} _{0} = q \frac{\textbf{v}}{c} \times \textbf{B} _{\scriptsize \mbox{ext}}$ will deflect the trajectory of each electron by an amount $\Delta _{0} \approx \frac{qL ^{2}}{2m _{e} v _{x} c} B _{0}$. Thus, the total transverse drift will be $\Delta _{0} + \Delta $, where $\Delta$ is the drift in Eq. (\[AnomalousDrift\]) produced by the anomalous force. To compare the magnitudes of these contributions we focus on $\Delta / \Delta _{0}$, which is $$\begin{aligned} \frac{\Delta}{\Delta _{0}} = \frac{8 \alpha ^{2}}{ \pi ^{2} \epsilon _{1}} \frac{qb ^{2}}{B _{0}} f(\epsilon_1, \epsilon_2; \Sigma _{1} z') .\end{aligned}$$ A numerical estimation is obtained by considering the Dirac semimetal Cd$_{3}$As$_{2}$ in the presence of a magnetic field of $1$T. According to Ref. [@DiracCdAs3], this induces a separation of the Weyl nodes of $b = 5 \times 10 ^{8} \mbox{m} ^{-1}$ and $\epsilon _{1} = 12$. Therefore, for an electron at a distance $z ^{\prime} = 1 \mu$m above the material we obtain $\Delta / \Delta _{0} \approx 10 ^{-7}$, thus implying that the external field overwhelms the topological contribution by seven orders of magnitude. We then conclude that an angle-resolved measurement is appropriate for experimental realization only if it were possible to consider a genuine WSM, for which no external magnetic field is needed. ![Magnetic flux generated by an electric charge $q$ at a distance $z ^{\prime} = 1 / \Sigma _{1}$ above the Weyl semimetal TaAs as a function of $\Sigma _{1} R$ for different values of $\Sigma _{1} z$.[]{data-label="FluxPlot"}](FluxWSM.pdf) *Scanning SQUID magnetometry.* Another possible technique for measuring the induced magnetic field could be scanning SQUID (Superconducting Quantum Interference Device) magnetometry. 
Roughly speaking, a SQUID is a very sensitive magnetometer used to measure extremely subtle magnetic fields (as low as $5 \times 10 ^{-18}$T [@SQUID1; @SQUID2; @SQUID3]), based on superconducting loops containing Josephson junctions. Technically, we have to compute the magnetic flux through a loop (of radius $R$ and parallel to the surface) placed at a distance $z$ above the Weyl semimetal, i.e. $\Phi _{\textbf{B}} = \int _{S} \textbf{B} \cdot d \textbf{S}$, where $S$ is the surface of the loop. The magnetic flux induced by a charge in front of a topological insulator through such a loop, $\Phi _{\textbf{B}} ^{\mbox{\scriptsize{TI}}}$, serves as the benchmark for comparing our result. In that case, the magnetic flux grows from 0 (at $R=0$) to the constant value $2 \pi g$ (as $R \rightarrow \infty$), where $g = \frac{2q \alpha}{2 (\epsilon _{1} + \epsilon _{2}) + \alpha ^{2}}$ is the magnetic monopole strength [@MCU-GreenTI]. This is so because the magnetic field is radially directed away from the image magnetic monopole beneath the surface and therefore the loop will always enclose field lines. This interesting tendency of the magnetic flux to a constant value can be thought of as a distinctive feature of the induced magnetic field in topological insulators. The case of a Weyl semimetal is quite different, as we discuss below. ![Critical radius $\Sigma _{1} R _{c}$ of the loop corresponding to the maximal magnetic field flux as a function of $\Sigma _{1} (z+z ^{\prime})$ for the Weyl semimetal TaAs. The dots correspond to the numerical calculation and the continuous line is a linear fit.[]{data-label="CriticalR"}](CriticalR.pdf) In our case, a simple calculation produces $\Phi _{\textbf{B}} (R , z) = 2 \pi R \Psi _{z > 0} (R , z)$, where the function $\Psi _{z > 0}$ is given by Eq. (\[Psi&gt;\]). In Fig. 
\[FluxPlot\] we show a plot of the magnetic flux $\Phi _{\textbf{B}} (R , z)$ (in units of $q$) due to a pointlike charge $q$ located at a distance $z ^{\prime} = 1/ \Sigma _{1}$ above the surface of the Weyl semimetal TaAs (for which $\epsilon _{1} \sim 6$ and $b \sim 10 ^{9} \mbox{m} ^{-1}$) as a function of the dimensionless radius $\Sigma _{1} R$ and for different values of $\Sigma _{1} z$. Of course, $\Phi _{\textbf{B}} = 0$ at $R = 0$. Furthermore, in the limit $R \to \infty$, the function $\Psi _{z>0}$ (\[Psi&gt;\]) is a highly oscillatory integral and therefore $\Phi _{\textbf{B}} \to 0$. This behavior implies the existence of a maximum flux at a critical radius $R _{c}$, as shown in Fig. \[FluxPlot\]. The fact that the magnetic flux tends to zero as the radius goes to infinity can be easily understood from the fact that the magnetic field lines, which start at the WSM surface, return to the surface, as discussed before. The existence of a maximum flux at $R _{c}$, as well as the asymptotic vanishing of the flux, are distinguishing features of the induced magnetic field due to a genuine WSM. One can further determine the critical radius $R _{c}$ corresponding to the maximal magnetic field flux in the usual manner (i.e. by solving $\partial _{R} \Phi _{\textbf{B}} \vert _{R = R _{c}} = 0$ for $R _{c}$). In Fig. \[CriticalR\] we show a plot of $\Sigma _{1} R _{c}$ as a function of $\Sigma _{1} (z+z ^{\prime})$ for the case in which the WSM is TaAs. While the dots represent the numerical solution of $\partial _{R} \Phi _{\textbf{B}} \vert _{R _{c}} = 0$, the continuous line is a linear fit. Unexpectedly, we find the equation of a straight line, $\Sigma _{1} R _{c} = s \, \Sigma _{1} (z+z ^{\prime}) + c$, where the values of the slope $s$ and the intercept $c$ on the $\Sigma _{1} R _{c}$-axis depend on the permittivities of the two media. For TaAs we find $s = 2.71$ and $c = 3.69$. 
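The qualitative features just described — zero flux at $R = 0$, a single maximum at $R _{c}$, a slow decay at large $R$, and a critical radius that grows with the height — can be reproduced with a short numerical sketch (Python with NumPy/SciPy). The parameters $\epsilon _{1} = 6$, $\epsilon _{2} = 1$ and the dispersion $k _{1z} ^{2} = k _{\perp} ^{2} + i k _{\perp} \Sigma _{1}$ entering the kernel of Eq. (\[Psi&gt;\]) are assumptions; lengths are in units $1 / \Sigma _{1}$ and the charge is $q = 1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import j1

EPS1, EPS2 = 6.0, 1.0   # permittivities (eps2 = 1 above the surface: an assumption)

def F(k):
    # kernel of Eq. (Psi>); k_z**2 = k**2 + 1j*k is an assumed dispersion
    kz = np.sqrt(k**2 + 1j * k)
    return kz.imag / (EPS1 * np.sqrt(k**2 + 1.0) + EPS2 * k + (EPS1 + EPS2) * kz.real)

def Psi(rho, Z):
    # magnetic potential of Eq. (Psi>), with Z = z + z'; the e^{-kZ} factor
    # makes a finite upper limit sufficient for Z >~ 2
    val, _ = quad(lambda k: F(k) * j1(k * rho) * np.exp(-k * Z), 0.0, 30.0, limit=500)
    return 2.0 * EPS1 * val

def flux(R, Z):
    # Phi_B(R, z) = 2*pi*R*Psi(R, z)
    return 2.0 * np.pi * R * Psi(R, Z)

def Rc(Z):
    # critical radius maximizing the flux at fixed height
    return minimize_scalar(lambda R: -flux(R, Z),
                           bounds=(0.1, 40.0), method='bounded').x
```

Evaluating `flux` along $R$ at fixed $Z$ reproduces the rise-and-fall profile of Fig. \[FluxPlot\], and `Rc` grows with $Z$, consistent with the trend of Fig. \[CriticalR\].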
For a numerical estimate of the magnetic flux we consider a charge $q = n _{e} \vert e \vert$ placed at a distance $z ^{\prime} = 1 \mu$m above the surface of the Weyl semimetal TaAs and a SQUID of radius $R = 10 \mu$m located at $z = 10 \mu$m. We find $\Phi _{\textbf{B}} \approx 7 n _{e} \times 10 ^{-14} \mbox{T} \cdot \mbox{cm} ^{2}$, which is measurable with present-day sensitivities of SQUIDs [@SQUID1; @SQUID2; @SQUID3]. One of the key challenges for the experimental detection of this flux profile would be to find a way to fix and localize the charge above the surface. As expected, if this experiment were carried out with a Dirac semimetal instead of a genuine WSM, the required external magnetic field would overwhelm the topological contribution to the total magnetic flux. Nevertheless, in this case it is still possible to disentangle these effects by using the fact that the contribution to the flux produced by the external magnetic field, say $\Phi _{\textbf{B}} ^{\scriptsize \mbox{ext}}$, is constant in space and time. Contrary to this, the contribution from the induced field $\Phi _{\textbf{B}}$ is also constant in time, but it varies in space. A magnetometer as sensitive as a SQUID can measure small variations of the flux, which amounts to measuring the induced electromotive force $\mathcal{E}$ in the loop. Therefore, this allows for isolating the topological contribution, for example, by producing a controlled displacement of the SQUID along the $z$-axis at speed $v _{z}$, namely: $\mathcal{E} = - \frac{d}{dt} \left( \Phi _{\textbf{B}} ^{\scriptsize \mbox{ext}} + \Phi _{\textbf{B}} \right) = - v _{z} \frac{d \Phi _{\textbf{B}}}{dz}$, where the $z$-dependence is read off from Eq. (\[Psi&gt;\]). We thank Alberto Cortijo for useful comments and suggestions, and also the anonymous referees for their recommendations. A. M. was supported by the CONACyT postdoctoral Grant No. 234774. L.F.U. 
has been supported in part by the project CONACyT (México) \# 237503. M.C. has been partially supported by UNAB DGID under grant \# DI-33-17/RG and wishes to thank A. Martín-Ruiz and L. F. Urrutia at Instituto de Ciencias Nucleares, UNAM for their kind hospitality throughout the preparation of the manuscript. Detailed solution {#DetSol} ================= In this section we present the detailed solution of the equations of motion Eqs. (\[MaxRed1\]) and (\[MaxRed2\]) in the general case where two bulk Hall materials are in contact, as described in the main text. First we derive the corresponding boundary conditions at the interface $z = 0$ and at the singular point $z = z ^{\prime}$, and then we use standard electromagnetic techniques to obtain the general solution in the whole space. *Boundary Conditions*. The boundary conditions for $\phi$ and $\psi$ can be determined as usual. Assuming that the reduced functions are bounded when $z$ is in the infinitesimal neighborhood of $z = 0$, integration of Eqs. (\[MaxRed1\]) and (\[MaxRed2\]) over the interval between $- \varepsilon$ and $+ \varepsilon$ with $\varepsilon \rightarrow 0 ^{+}$ yields the continuity of $\epsilon \partial _{z} \phi$ and $\partial _{z} \psi$ there. Then, the continuity of $\phi$ and $\psi$ at $z = 0$ follows. In a similar fashion, one can show that the singularity in Eq. (\[MaxRed1\]) requires, at $z = z ^{\prime}$, that $\phi$ be continuous, while $\partial _{z} \phi$ be discontinuous, i.e. $\partial _{z} \phi \vert _{z = z ^{\prime} - 0} ^{z = z ^{\prime} + 0} = - q / \epsilon (z ^{\prime})$. Analogously, integrating Eq. (\[MaxRed2\]) twice yields the continuity of $\psi$ and $\partial _{z} \psi$ at the singular point. *General solution*. 
The solutions to equations (\[MaxRed1\]) and (\[MaxRed2\]) subject to the above boundary conditions can be expressed in terms of the solutions, $e ^{k _{jz} z}$, $e ^{k _{jz} ^{\ast} z}$, $e ^{- k _{jz} z}$ and $e ^{- k _{jz} ^{\ast} z}$, of the corresponding homogeneous equations. Here, $j$ labels the two media and $k _{jz} = \alpha _{j} + i \beta _{j}$ is the complex wave number, with $\alpha _{j}$ and $\beta _{j}$ given by Eq. (\[kappa\]). To compute the reduced functions $\phi$ and $\psi$ we first partition the $z$-axis into three regions: (I) $z<0$, (II) $0 < z < z ^{\prime}$ and (III) $z ^{\prime} < z$. Next, we write an appropriate linear combination of the solutions to the homogeneous equation for each region, and finally we apply the corresponding boundary conditions. On the one hand, for the reduced scalar potential $\phi$, the forms of the solutions in the three regions are as follows: $$\begin{aligned} \phi _{I} &= a _{1} e ^{k _{1z} z} + a _{2} e ^{k _{1z} ^{\ast} z} \label{phiI} , \\ \phi _{II} &= b _{1} e ^{ k _{2z} z} + b _{2} e ^{ k _{2z} ^{\ast} z} + c _{1} e ^{ - k _{2z} z} + c _{2} e ^{ - k _{2z} ^{\ast} z} , \label{phiII} \\ \phi _{III} &= d _{1} e ^{ - k _{2z} z} + d _{2} e ^{ - k _{2z} ^{\ast} z} , \label{phiIII}\end{aligned}$$ where the signs in the exponentials (\[phiI\]) and (\[phiIII\]) are required by the boundary condition that $\phi$ goes to zero for $\vert z \vert \rightarrow \infty$. On the other hand, we observe that Eq. (\[MaxRed2\]) dictates the relation between $\psi$ and $\phi$; namely, $\psi \sim i \epsilon _{j} \phi$ for $\phi \sim e ^{\pm k _{jz} z}$ and $\psi \sim - i \epsilon _{j} \phi$ for $\phi \sim e ^{\pm k _{jz} ^{\ast} z}$. Using this result we find that Eq. (\[phiI\]) implies that, for region I, $\psi _{I} = i \epsilon _{1} \left( a _{1} e ^{ k _{1z} z} - a _{2} e ^{ k _{1z} ^{\ast} z} \right)$. In a similar fashion we obtain the corresponding expressions for $\psi _{II}$ and $\psi _{III}$. 
Imposing the boundary conditions and solving the resulting system of equations we find for the coefficients $$\begin{aligned} a _{1} &= a _{2} ^{\ast} = \frac{q}{2 \epsilon _{1}} \frac{k _{2z} \left( \epsilon _{1} - \epsilon _{2} \right) e ^{- k _{2z} ^{\ast} z ^{\prime}} + \left[ k _{2z} ^{\ast} \left( \epsilon _{1} + \epsilon _{2} \right) + 2 k _{1z} ^{\ast} \epsilon _{1} \right] e ^{- k _{2z} z ^{\prime}}}{2 \left( \epsilon _{1} k _{1z} k _{1z} ^{\ast} + \epsilon _{2} k _{2z} k _{2z} ^{\ast} \right) + \left( \epsilon _{1} + \epsilon _{2} \right) \left( k _{1z} ^{\ast} k _{2z} + k _{1z} k _{2z} ^{\ast} \right) } , \\ b _{1} &= b _{2} ^{\ast} = \frac{q}{4 \epsilon _{2} k _{2z}} e ^{- k _{2z} z ^{\prime}} , \\ c _{1} &= c _{2} ^{\ast} = \frac{q}{4 \epsilon _{2} k _{2z}} \frac{\left[ 2 \left( \epsilon _{2} k _{2z} k _{2z} ^{\ast} - \epsilon _{1} k _{1z} k _{1z} ^{\ast} \right) + \left( \epsilon _{1} + \epsilon _{2} \right) \left( k _{2z} k _{1z} ^{\ast} - k _{2z} ^{\ast} k _{1z} \right) \right] e ^{- k _{2z} z ^{\prime}} - 2 k _{1z} k _{2z} \left( \epsilon _{1} - \epsilon _{2} \right) e ^{- k _{2z} ^{\ast} z ^{\prime}} }{2 \left( \epsilon _{1} k _{1z} k _{1z} ^{\ast} + \epsilon _{2} k _{2z} k _{2z} ^{\ast} \right) + \left( \epsilon _{1} + \epsilon _{2} \right) \left( k _{1z} ^{\ast} k _{2z} + k _{1z} k _{2z} ^{\ast} \right)} , \\ d _{1} &= d _{2} ^{\ast} = \frac{q}{4 \epsilon _{2} k _{2z}} \left\lbrace e ^{ k _{2z} z ^{\prime}} \!\! + \! \frac{\left[ 2 \left( \epsilon _{2} k _{2z} k _{2z} ^{\ast} - \epsilon _{1} k _{1z} k _{1z} ^{\ast} \right) + \left( \epsilon _{1} + \epsilon _{2} \right) \left( k _{2z} k _{1z} ^{\ast} - k _{2z} ^{\ast} k _{1z} \right) \right] e ^{- k _{2z} z ^{\prime}} \! \! 
- 2 k _{1z} k _{2z} \left( \epsilon _{1} - \epsilon _{2} \right) e ^{- k _{2z} ^{\ast} z ^{\prime}}}{2 \left( \epsilon _{1} k _{1z} k _{1z} ^{\ast} + \epsilon _{2} k _{2z} k _{2z} ^{\ast} \right) + \left( \epsilon _{1} + \epsilon _{2} \right) \left( k _{1z} ^{\ast} k _{2z} + k _{1z} k _{2z} ^{\ast} \right)} \right\rbrace .\end{aligned}$$ Using these results, we can write the reduced functions beneath the surface as $\phi _{I} = 2 \mbox{Re} \left( a _{1} e ^{k _{1z} z} \right)$ and $\psi _{I} = - 2 \epsilon _{1} \mbox{Im} \left( a _{1} e ^{k _{1z} z } \right)$, whose explicit forms are $$\begin{aligned} \phi _{I} = & \; \frac{q}{\epsilon _{1} Q} e ^{\alpha _{1} z - \alpha _{2} z ^{\prime}} \Big[ \left( \epsilon _{1} \alpha _{1} + \epsilon _{2} \alpha _{2} \right) \cos \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) + \left( \epsilon _{1} \beta _{1} + \epsilon _{2} \beta _{2} \right) \sin \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) + \left( \epsilon _{1} - \epsilon _{2} \right) \notag \\ & \times \cos \left( \beta _{1} z \right) \left[ \alpha _{2} \cos \left( \beta _{2} z ^{\prime} \right) - \beta _{2} \sin \left( \beta _{2} z ^{\prime} \right) \right] \Big] , \\ \psi _{I} = & \; \frac{q}{Q} e ^{\alpha _{1} z - \alpha _{2} z ^{\prime}} \Big[ \left( \epsilon _{1} \beta _{1} + \epsilon _{2} \beta _{2} \right) \cos \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) - \left( \epsilon _{1} \alpha _{1} + \epsilon _{2} \alpha _{2} \right) \sin \left( \beta _{1} z - \beta _{2} z ^{\prime} \right) + \left( \epsilon _{1} - \epsilon _{2} \right) \notag \\ & \times \sin \left( \beta _{1} z \right) \left[ \beta _{2} \sin \left( \beta _{2} z ^{\prime} \right) - \alpha _{2} \cos \left( \beta _{2} z ^{\prime} \right) \right] \Big] ,\end{aligned}$$ which are the ones we present in the main text in Eqs. (\[RedEscz&lt;0\]) and (\[RedVecz&lt;0\]). 
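The coefficients above can be cross-checked numerically: substituting them into the region-wise solutions must reproduce all the boundary conditions derived earlier (continuity of $\phi$, $\epsilon \partial _{z} \phi$, $\psi$ and $\partial _{z} \psi$ at $z = 0$; continuity of $\phi$, $\psi$ and $\partial _{z} \psi$, and the jump $- q / \epsilon _{2}$ of $\partial _{z} \phi$, at $z = z ^{\prime}$). The sketch below (Python with NumPy; the parameter values are arbitrary illustrations, and the dispersion $k _{jz} ^{2} = k _{\perp} ^{2} + i k _{\perp} \Sigma _{j}$ is an assumed form consistent with $\alpha _{j} ^{2} + \beta _{j} ^{2} = k _{\perp} \sqrt{k _{\perp} ^{2} + \Sigma _{j} ^{2}}$) performs this check at fixed $k _{\perp}$:

```python
import numpy as np

# illustrative numerical parameters (q = 1, units Sigma_1 = 1); both media are
# taken as bulk Hall materials, as in the general case treated in this appendix
q, zp = 1.0, 0.7
eps1, eps2 = 6.0, 2.0
Sig1, Sig2 = 1.0, 0.4
k = 0.9                                   # transverse wave number k_perp

def kz(kperp, Sig):
    # assumed complex wave number k_z = alpha + i*beta,
    # with k_z**2 = k_perp**2 + 1j*k_perp*Sig
    return np.sqrt(kperp**2 + 1j * kperp * Sig)

k1, k2 = kz(k, Sig1), kz(k, Sig2)
c = np.conj

# coefficients as printed above
D = 2 * (eps1 * abs(k1)**2 + eps2 * abs(k2)**2) \
    + (eps1 + eps2) * (c(k1) * k2 + k1 * c(k2))
a1 = q / (2 * eps1) * (k2 * (eps1 - eps2) * np.exp(-c(k2) * zp)
        + (c(k2) * (eps1 + eps2) + 2 * c(k1) * eps1) * np.exp(-k2 * zp)) / D
b1 = q * np.exp(-k2 * zp) / (4 * eps2 * k2)
num = (2 * (eps2 * abs(k2)**2 - eps1 * abs(k1)**2)
       + (eps1 + eps2) * (k2 * c(k1) - c(k2) * k1)) * np.exp(-k2 * zp) \
      - 2 * k1 * k2 * (eps1 - eps2) * np.exp(-c(k2) * zp)
c1 = q / (4 * eps2 * k2) * num / D
d1 = q / (4 * eps2 * k2) * (np.exp(k2 * zp) + num / D)

# region-wise reduced functions and their analytic z-derivatives
def phi(z):
    if z < 0:
        return 2 * np.real(a1 * np.exp(k1 * z))
    if z < zp:
        return 2 * np.real(b1 * np.exp(k2 * z) + c1 * np.exp(-k2 * z))
    return 2 * np.real(d1 * np.exp(-k2 * z))

def psi(z):
    if z < 0:
        return -2 * eps1 * np.imag(a1 * np.exp(k1 * z))
    if z < zp:
        return -2 * eps2 * np.imag(b1 * np.exp(k2 * z) + c1 * np.exp(-k2 * z))
    return -2 * eps2 * np.imag(d1 * np.exp(-k2 * z))

def dphi(z):
    if z < 0:
        return 2 * np.real(k1 * a1 * np.exp(k1 * z))
    if z < zp:
        return 2 * np.real(k2 * (b1 * np.exp(k2 * z) - c1 * np.exp(-k2 * z)))
    return -2 * np.real(k2 * d1 * np.exp(-k2 * z))

def dpsi(z):
    if z < 0:
        return -2 * eps1 * np.imag(k1 * a1 * np.exp(k1 * z))
    if z < zp:
        return -2 * eps2 * np.imag(k2 * (b1 * np.exp(k2 * z) - c1 * np.exp(-k2 * z)))
    return 2 * eps2 * np.imag(k2 * d1 * np.exp(-k2 * z))

h = 1e-12
print(abs(phi(-h) - phi(h)), abs(eps1 * dphi(-h) - eps2 * dphi(h)))
print(abs(dphi(zp + h) - dphi(zp - h) + q / eps2))  # jump -q/eps2 at z = z'
```

All mismatches come out at the level of floating-point round-off, confirming that the quoted coefficients solve the full boundary-value problem.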
Now we follow similar steps to derive the reduced functions $\phi _{II} = 2 \mbox{Re} \left( b _{1} e ^{k _{2z} z} + c _{1} e ^{- k _{2z} z} \right)$ and $\psi _{II} = - 2 \epsilon _{2} \mbox{Im} \left( b _{1} e ^{k _{2z} z} + c _{1} e ^{- k _{2z} z} \right)$. The results are $$\begin{aligned} \phi _{II} = & \; \frac{q}{2 \epsilon _{2} r _{2} ^{2}} e ^{\alpha _{2} (z - z ^{\prime})} \Big\{ \alpha _{2} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] + \beta _{2} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} - \frac{q (\epsilon _{1} - \epsilon _{2})}{2 \epsilon _{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \alpha _{1} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \notag \\ & + \beta _{1} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q}{2 \epsilon _{2} r _{2} ^{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \Gamma \cos \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] - \Delta \sin \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] \Big\} , \\ \psi _{II} = & \; \frac{q}{2 r _{2} ^{2}} e ^{\alpha _{2} (z - z ^{\prime})} \Big\{ \beta _{2} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] - \alpha _{2} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q (\epsilon _{1} - \epsilon _{2})}{2 Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \beta _{1} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \notag \\ & - \alpha _{1} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q}{2 r _{2} ^{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \Delta \cos \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] + \Gamma \sin \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] \Big\} ,\end{aligned}$$ where $\Gamma$, $\Delta$ and $Q$ are defined in Eq. (\[Defs\]). 
Finally, for $\phi _{III} = 2 \mbox{Re} \left( d _{1} e ^{- k _{2z} z} \right)$ and $\psi _{III} = - 2 \epsilon _{2} \mbox{Im} \left( d _{1} e ^{- k _{2z} z} \right)$, we obtain $$\begin{aligned} \phi _{III} = & \; \frac{q}{2 \epsilon _{2} r _{2} ^{2}} e ^{\alpha _{2} (z ^{\prime} - z)} \Big\{ \alpha _{2} \cos \left[ \beta _{2} \left( z ^{\prime} - z \right) \right] + \beta _{2} \sin \left[ \beta _{2} \left( z ^{\prime} - z \right) \right] \Big\} - \frac{q (\epsilon _{1} - \epsilon _{2})}{2 \epsilon _{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \alpha _{1} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \notag \\ & + \beta _{1} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q}{2 \epsilon _{2} r _{2} ^{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \Gamma \cos \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] - \Delta \sin \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] \Big\} , \\ \psi _{III} = & \; \frac{q}{2 r _{2} ^{2}} e ^{\alpha _{2} (z ^{\prime} - z)} \Big\{ \beta _{2} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] + \alpha _{2} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q (\epsilon _{1} - \epsilon _{2})}{2 Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \beta _{1} \cos \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \notag \\ & - \alpha _{1} \sin \left[ \beta _{2} \left( z - z ^{\prime} \right) \right] \Big\} + \frac{q}{2 r _{2} ^{2} Q} e ^{- \alpha _{2} (z + z ^{\prime})} \Big\{ \Delta \cos \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] + \Gamma \sin \left[ \beta _{2} \left( z + z ^{\prime} \right) \right] \Big\} .\end{aligned}$$ We observe that the reduced functions above the WSM surface can be written in a unified fashion. For example, $\phi _{II}$ and $\phi _{III}$ are both contained in Eq. (\[RedEscz&gt;0\]); and $\psi _{II}$ and $\psi _{III}$ are both contained in Eq. (\[RedVecz&gt;0\]). [99]{} X.-L. Qi and S.-C. 
--- author: - 'J.M. Diego[^1]' - 'S.M. Molnar' - 'C. Cerny' - 'T. Broadhurst' - 'R. Windhorst' - 'A. Zitrin' - 'R. Bouwens' - 'D. Coe' - 'C. Conselice' - 'K. Sharon' bibliography: - 'MyBiblio.bib' title: 'Free-form lens model and mass estimation of the high redshift galaxy cluster ACT-CL J0102-4915, “El Gordo”' --- Introduction ============ The galaxy cluster ACT-CL J0102-4915, also known as [*El Gordo*]{}, is a relatively high-redshift ($z=0.870$) cluster with a rich, bimodal galaxy distribution [@Williamson2011; @Menanteau2012]. Its mass has been estimated in earlier work using different techniques, including combined dynamical, X-ray and Sunyaev-Zeldovich (SZ) data [@Menanteau2012], strong lensing data [@Zitrin2013; @Cerny2018], and weak lensing data [@Jee2014]. El Gordo has also been the subject of several dynamical studies, including numerical N-body simulations [@Donnert2014] and hydrodynamical simulations [@Molnar2015; @Zhang2015; @Zhang2018]. These studies have highlighted the impressively large-scale cometary structure, clearly visible in X-ray images, which appears to imply that El Gordo is being observed right after a collision of two subgroups [@Molnar2015; @Zhang2015], similar to the iconic [*Bullet*]{} cluster. This interpretation is supported by the presence of two radio relics ahead of and behind the X-ray cometary structure [@Molnar2015; @Lindner2014]. Based on the X-ray and radio morphology, as well as on a preliminary lens model for the mass distribution, [@Ng2015] argue that El Gordo is in a return phase after first core passage. This means that the cluster is being observed after the phase of maximum separation (apocenter), and that the two groups are moving towards each other rather than away from each other. Part of this conclusion is based on a lens model that relies on weak lensing and that assigns more mass to the NW clump than to the SE clump.
This interpretation is however challenged by lens models based on strong lensing data that place most of the mass in the SE group [@Zitrin2013; @Cerny2018]. The El Gordo cluster is extreme at several levels. It is the most massive cluster at $z\approx 0.9$, with an estimated mass ranging from $M_{200c} = 1.8\times 10^{15}M_{\odot}$ to $M_{200c} = 2 \times 10^{15}M_{\odot}$. $M_{200c}$ is the mass within a sphere of radius $r_{200}$, defined as the radius within which the mean enclosed density, centered on the object, is 200 times the critical density of the universe at the cluster redshift. Some authors estimate the overdensity relative to the mean matter density of the universe, $\rho_m = \rho_c\Omega_m$; in this case the mass is denoted as $M_{200\rho_m}$. Using SZ data, [@Williamson2011] estimate a mass of $M_{200\rho_m} = 1.89 \pm 0.45\times 10^{15}M_{\odot}$. [@Menanteau2012] obtained a mass estimate of $M_{200\rho_m} = 2.16 \pm 0.32 \times 10^{15}M_{\odot}$ based on different scaling relations. Based on an extrapolation of the strong lensing mass model, [@Zitrin2013] estimate a total mass of $M_{200\rho_m} \sim 2.3 \times 10^{15}M_{\odot}$. [@Jee2014] use weak lensing measurements obtained with HST and estimate $M_{200c} = 3.13 \pm 0.56 \times 10^{15}M_{\odot}$. At the high end of the mass range for El Gordo, these masses are in tension with the standard LCDM model [see for instance @Jee2014], which predicts that the maximum mass at this redshift should be less than $M_{200\rho_m} \approx 1.7 \times 10^{15}M_{\odot}$ [@Harrison2012]. A similar conclusion is reached when studying the results obtained by large N-body simulations. Using the very large 630 Gpc$^3$ N-body simulation Jubilee (based on a standard LCDM model), [@Watson2014] (in their Figure 5) find that the most massive cluster in the simulation at $z=0.9$ has $M_{200\rho_m} \approx 1.5\times 10^{15}M_{\odot}$.
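The spherical-overdensity definition above translates directly into a numerical recipe: given a density profile, find the radius at which the mean enclosed density equals $200\rho_c(z)$ and read off the enclosed mass. A minimal sketch for an NFW profile follows; the concentration and scale radius are illustrative choices, not a fit to El Gordo.

```python
import math

def nfw_mass(r, rho0, rs):
    # NFW enclosed mass: M(r) = 4 pi rho0 rs^3 [ln(1 + x) - x/(1 + x)], x = r/rs
    x = r / rs
    return 4.0 * math.pi * rho0 * rs**3 * (math.log(1.0 + x) - x / (1.0 + x))

def r_delta(rho0, rs, delta, rho_ref, lo=1e-3, hi=100.0):
    """Bisect for the radius where the mean enclosed density is delta * rho_ref."""
    mean_density = lambda r: nfw_mass(r, rho0, rs) / (4.0 / 3.0 * math.pi * r**3)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_density(mid) > delta * rho_ref:
            lo = mid          # mean density still too high: r_delta lies further out
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers (not a fit to El Gordo): z = 0.87, h = 0.7, Om = 0.3
rho_c0 = 2.775e11 * 0.7**2                       # critical density today [Msun / Mpc^3]
Ez2 = 0.3 * (1 + 0.87)**3 + 0.7                  # E(z)^2 in flat LCDM
rho_cz = rho_c0 * Ez2                            # critical density at the cluster redshift
c_nfw, rs = 5.0, 0.4                             # assumed concentration and scale radius [Mpc]
m_c = math.log(1 + c_nfw) - c_nfw / (1 + c_nfw)
rho0 = (200.0 / 3.0) * c_nfw**3 / m_c * rho_cz   # NFW characteristic density for this c

r200 = r_delta(rho0, rs, 200.0, rho_cz)          # recovers c * rs = 2.0 Mpc by construction
M200c = nfw_mass(r200, rho0, rs)
```

By construction of the NFW characteristic density, the bisection recovers $r_{200} = c\,r_s = 2.0$ Mpc, and the corresponding $M_{200c} \approx 2.4 \times 10^{15} M_{\odot}$ for these assumed parameters, comparable to the estimates quoted above.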
Note that in [@Watson2014], the masses are defined as $M_{178\rho_m}$ rather than $M_{200\rho_m}$ or $M_{200c}$. For an NFW profile, $M_{178\rho_m} \approx 1.2\,M_{200c}$, and $M_{178\rho_m}$ is $\approx 4\%$ higher than $M_{200\rho_m}$ [@Waizmann2012]. The fact that El Gordo was found in a relatively small area of the sky raises the question of its significance as possible evidence for tension with the LCDM model. At the source of this apparent tension could be an overestimation of its mass. Hence, it is important to improve its mass estimate using the latest lensing data. As of 2014, El Gordo was also the highest-redshift cluster known to host radio relics [@Lindner2014]. The X-ray emission exhibits an interesting offset between the peak of the X-ray emission and the position of the BCG. Contrary to what happens in the Bullet cluster, the X-ray peak seems to be ahead of the BCG. However, in the interpretation of [@Ng2015], the BCG would be moving towards the second group, so the X-ray peak would be trailing the BCG. The returning-phase interpretation of El Gordo is challenged by results from dedicated N-body/hydrodynamical simulations reproducing most of the observations of El Gordo [@Molnar2015; @Zhang2015]. Also, [@Molnar2018] demonstrate that the speed of the outgoing shocks can be very large (4000–5000 km s$^{-1}$) in a massive merging cluster like El Gordo, so the shocks can leave the system before the first turnaround. El Gordo is also unique in the sense that it is a powerful lens at relatively high redshift. One of the features that makes El Gordo an attractive target for lensing studies is the fact that for sources at high redshift, critical curves form at relatively large distances from the member galaxies. This is particularly true in the gap between the two clusters, where the critical curves are relatively undisturbed by nearby member galaxies.
Having undisturbed critical curves is relevant for observing caustic crossing events of distant stars [@Kelly2018; @Diego2018], since in this case the maximum magnification can be larger than in situations where critical curves are affected by microlenses in member galaxies or in the intracluster medium. Caustic crossing events have been proposed by [@Windhorst2018] as a technique to study Pop III stars and stellar-mass black hole accretion discs with JWST. Because El Gordo is the highest-redshift known cluster with potentially such significant transverse motion (based on the X-ray morphology and the two lensing mass centers discussed in this paper), it is an ideal target for JWST follow-up to search for caustic transits at $z\gg1$, and possibly for First Light caustic transits at $z>7$. For this reason, El Gordo is a JWST GTO target that will be observed in Cycle 1 (JWST program \# 1176; PI: Windhorst). It is our sincere hope that JWST Guest Observers will propose to observe El Gordo in many successive epochs, among other goals to find caustic transits at $z\gg1$. In this paper we derive the mass distribution and study this interesting cluster using the latest data from the RELICS program and newly identified strong lensing systems. We use our free-form lensing reconstruction code WSLAP+ [@Diego2005; @Diego2007; @Sendra2014; @Diego2016], which does not rely on assumptions about the distribution of dark matter. We pay special attention to the integrated mass as a function of radius and to the effect that extrapolations of the derived mass profile up to the virial radius have on the inferred total mass of the cluster. This paper is organized as follows. In section \[sect\_data\] we describe the data and simulations used in this work. In section \[sec\_math\] we briefly describe the algorithm used to perform the lensing reconstruction. Results are presented in section \[sect\_result\] and discussed in section \[sect\_disc\].
We summarize and conclude in section \[sect\_concl\]. We adopt a standard flat cosmological model with $\Omega_m=0.3$ and $h=0.7$. At the redshift of the lens, and for this cosmological model, one arcsecond corresponds to 7.8 kpc. ![HST Optical+IR composite image with overlaid contours from Chandra. Multiply lensed images are marked with their corresponding IDs. The color of the IDs indicates the quality of the family identification. Images with yellow IDs are category A (reliable); IDs in white are still reliable, but not as confident as A. Images with IDs in orange are less reliable, although still valid candidates. The blue and red curves show the critical curves at $z=3.3$ for the driver model (derived from images in category A) and the full model (derived from images in categories A and B), respectively. []{data-label="Fig_CC_Arcs_Chandra"}](Figs/ElGordo_Arcs_CC_Chandra.pdf){width="9cm"} Observational and simulated data {#sect_data} ================================ In this section we briefly describe the data used in this work as well as previous N-body simulations of the El Gordo cluster that will be used for comparison with our results. Optical data ------------ We use public Hubble imaging data from programs GO 12755 (PI J. Hughes), GO 12477 (PI F. High), and GO 14096 (PI D. Coe). These ACS and WFC3/IR observations include data in 10 filters spanning wavelengths $\sim$0.4–1.7 $\mu$m. The Reionization Lensing Cluster Survey [RELICS; @Coe2019] delivered reduced images combining data from all of these HST programs, including their own (14096). RELICS also delivered photometric redshift catalogs of objects detected in these images. We retrieved these data products from the Mikulski Archive for Space Telescopes (MAST). From the reduced images, we produce colour images by combining the optical and IR bands. X-ray data ---------- To explore the dynamical state of El Gordo, we also produce an X-ray image using public Chandra data.
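The angular scale quoted above (7.8 kpc per arcsecond at $z=0.870$ for $\Omega_m=0.3$, $h=0.7$) can be checked from the angular diameter distance. A minimal pure-Python sketch using a Simpson-rule integration of the comoving distance gives $\approx 7.7$ kpc per arcsecond, within about 1% of the rounded value quoted above:

```python
import math

C_KMS = 299792.458  # speed of light [km / s]

def kpc_per_arcsec(z, h=0.7, om=0.3, n=1000):
    """Proper kpc per arcsecond in flat LCDM (Simpson-rule comoving distance)."""
    H0 = 100.0 * h                                   # km / s / Mpc
    inv_E = lambda zz: 1.0 / math.sqrt(om * (1.0 + zz)**3 + (1.0 - om))
    step = z / n                                     # n must be even for Simpson's rule
    s = inv_E(0.0) + inv_E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * inv_E(i * step)
    d_c = (C_KMS / H0) * (step / 3.0) * s            # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)                            # angular diameter distance [Mpc]
    arcsec = math.pi / (180.0 * 3600.0)              # one arcsecond in radians
    return d_a * 1000.0 * arcsec

scale = kpc_per_arcsec(0.870)                        # ~7.7 kpc per arcsec
```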
In particular, we used data from the ACIS instrument acquired in 2011–2012, with Obs ID 12258, Obs ID 14022 and Obs ID 14023 (PI J. Hughes), totaling $\approx 350$ ks. The X-ray data is smoothed using the code [ASMOOTH]{} [@Ebeling2006]. A false color image from the HST imaging, overlaid with contours of the smoothed X-ray data, is shown in figure \[Fig\_CC\_Arcs\_Chandra\]. The distribution of X-rays shows a cometary structure similar to the Bullet cluster. The peak of the X-ray emission is offset with respect to the BCG by $\approx 70$ kpc. Simulated data -------------- In order to study the impact of extrapolating a strong-lensing-derived profile (i.e., a profile that extends over a limited range in radius) up to the virial radius, we use dedicated N-body/hydrodynamical simulations that mimic El Gordo [for details of the simulations see @Molnar2015]. Our simulations were constrained by multi-frequency data: X-ray, radio (Sunyaev–Zel’dovich observations), and optical (for gravitational lensing and dynamics). Our best model for El Gordo assumed initial total masses of $1.4 \times 10^{15}$ ${\rm M}_{\odot}$ and $7.5 \times 10^{14}$ ${\rm M}_{\odot}$ for the main and the infalling cluster, respectively, an impact parameter of 300 kpc, and a relative initial infall velocity of 2250 km s$^{-1}$ when separated by the sum of the two virial radii. This model explains most of the observed features of El Gordo: the distinctive cometary feature with a [*twin-tailed*]{} wake observed in the X-ray morphology, the locations of the two peaks of the dark matter components, and the position of the SZ peak. In this paper we use the total mass distribution from our best model to derive the surface mass distribution, and compare that to the surface mass distribution we derived from gravitational lensing. Multiply lensed galaxies ------------------------ In this section we discuss the lensed systems used to constrain the mass model.
The identification of multiply lensed galaxy systems in El Gordo is particularly challenging, especially because no multiply lensed galaxies with spectroscopic redshifts have been confirmed so far. However, with the new RELICS data, we can improve upon previous identifications. As a default, we adopt for the lensed system candidates the original naming scheme of [@Zitrin2013], who identified the first families of strongly lensed galaxies and performed the first strong lensing analysis of this cluster based on the three HST bands available at the time (the compilation of systems is detailed in the appendix). Notably, two systems (1 and 2 in Fig. \[Fig\_CC\_Arcs\_Chandra\]) exhibit well resolved morphological features that, together with robust photometric redshifts, allow us to unambiguously confirm these two families of images. Systems 3, 4 and 5 also contain morphological information and reliable photometric redshifts, which makes the identification of these systems equally robust. Systems 1 through 4 are similar to the systems defined in [@Zitrin2013] and [@Cerny2018]. We note that systems 10 and 20 in [@Cerny2018] are part of our systems 1 and 2, where we identify different knots in the systems that are used as additional constraints. Our new system 5 was also independently identified by [@Cerny2018] as system 13 in their work. In the original scheme of [@Zitrin2013], system 4 was composed of 3 counterimages and two possible radial images. A preliminary model clearly disfavors the radial image 4.2 and candidate image 4.3 [in @Zitrin2013] as part of system 4. We note that [@Cerny2018] also rejected these two counterimages as part of system 4 (as do some updated models by Zitrin et al.; *private communication*). Instead, we suggest that the two alleged arclets 4.2 and 4.3 in [@Zitrin2013] are likely features in the galaxy cluster associated with the cooling of the plasma (see Section \[Sect\_Filament\]).
Using the robust systems 1 through 5, we derive a first model. This model is later used to unveil new system families (listed in Table \[tab\_arcs\]). We refer to this first model for the mass distribution as the [*driver model*]{}.\ Table \[tab\_arcs\] lists all the arclets, including also some candidates listed here for completeness, but not used in the lens reconstruction. Also for completeness, we include system 8 as originally defined in [@Zitrin2013]. The driver model disfavors this system, so we do not include it in our lens reconstruction; [@Cerny2018] also discarded this system. The systems in table \[tab\_arcs\] are divided into three categories, A, B and C. Arclets in category A are robustly confirmed based on their color, morphology and photometric redshift. As mentioned above, we use these arclets to derive the driver model. Systems in category B are highly compatible with the driver model. In addition, the color, the morphology and, when available, the photometric redshift are also consistent among the different members of the same family of images. Systems labeled A and B are used to derive an alternative model that we name the [*full model*]{}. Arclets in category C are still consistent with the driver model, but a lack of morphological information, a mismatch in the alignment of the predicted image (compared with the observed one), or tension between the predicted and observed magnification ratios reduces the reliability of the identification. Arclets marked with label C are not used in the mass reconstruction, but are still included in table \[tab\_arcs\]. Future data will confirm or reject these system candidates. The systems in table \[tab\_arcs\] that are new identifications are marked in boldface. Systems that were fully included in previous work are indicated in the [*Comments*]{} column. Our new system 6 has an estimated redshift (from the lens model) of $z \approx 4.3$, which is consistent with the photometric redshift.
For system 7, we identify a new candidate for 7c that differs from the candidate in [@Zitrin2013]. System 10 is a new system with a photometric redshift of 5.1 (for 10a). The driver model is fully consistent with this system and redshift. System 11 is a redefinition of system 5 in [@Zitrin2013]. The driver model suggests that the big arclet forming part of system 5 in [@Zitrin2013] consists of two images merging at the critical curve. The corresponding third image is identified with the tail of a bright galaxy (see Figure \[Fig\_System11\]). Based on the driver model, the redshift of this galaxy should be $z \approx 3.1$, while the photometric redshift for the arc is $z \approx 2.2$. When this system is included in the lens reconstruction (i.e., in the full model), we adopt the photometric redshift for this system. System 12 is a redefinition of system 14 in [@Cerny2018], based on the driver model and color+morphological information. Our 12a matches 14a in [@Cerny2018], but we identify two different counterimages. The driver model predicts a redshift of $z \approx 3$, consistent with the photometric redshift of $z=3.4$ of 12a. System 13 has no photometric redshift. The driver model predicts a redshift $z=3$ for this system. System 14 has a photometric redshift $z=2.7$ (14a), but we adopt the redshift predicted by the driver model, i.e., $z=4$, for this system. System 15 corresponds to system 8 in [@Cerny2018], which was also independently identified. Both the photometric redshift ($z=2.7$) and the redshift predicted by the driver model ($z=2.65$) agree reasonably well. Systems 17, 18 and 19 are all new candidates, but the lack of morphological features does not allow us to confirm their association based on the morphology of the predicted images. Finally, we do not consider system 5 in [@Cerny2018].
Although the driver model is consistent with the positions of system 5 in [@Cerny2018], a third image is clearly predicted, but not observed, casting doubt on the reliability of this system. However, we should note that it is also possible that the driver model fails to correctly predict the position of the third counterimage, and that this image could be hidden underneath one of the bright member galaxies. Formalism {#sec_math} ========= The mass reconstruction is based on our method WSLAP+. The reader can find the details of the method in our previous papers [@Diego2005; @Diego2007; @Sendra2014; @Diego2016]. Here we give a brief summary of the most essential elements.\ The lens equation is defined as follows, $$\beta = \theta - \alpha(\theta,\Sigma), \label{eq_lens}$$ where $\theta$ is the observed position of the source, $\alpha$ is the deflection angle, $\Sigma(\theta)$ is the surface mass density of the cluster at the position $\theta$, and $\beta$ is the position of the background source. Both the strong lensing and weak lensing observables can be expressed in terms of derivatives of the lensing potential:[^2] $$\psi(\theta) = \frac{4 G D_{l}D_{ls}}{c^2 D_{s}} \int d^2\theta' \Sigma(\theta')\ln(|\theta - \theta'|), $$ where $D_l$, $D_s$, and $D_{ls}$ are the angular diameter distances to the lens, to the source, and from the lens to the source, respectively. The unknowns of the lensing problem are in general the surface mass density and the positions of the background sources in the source plane. The surface mass density is described by the combination of two components: i) a soft (or diffuse) component (usually parameterized as a superposition of Gaussians); and ii) a compact component that accounts for the mass associated with the individual halos (galaxies) in the cluster.\ For the diffuse component, different bases can be used, but we find that Gaussian functions provide a good compromise between the desired compactness and smoothness of the basis functions.
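For intuition, the lens equation (\[eq\_lens\]) admits a closed-form solution in the simplest case of a point-mass lens (a toy illustration only, not the cluster model used here). Working in one dimension and in units of the Einstein radius, $\beta = \theta - 1/\theta$, which always yields two images:

```python
import math

def alpha_point(theta):
    # deflection of a point-mass lens, in units of the Einstein radius
    return 1.0 / theta

def image_positions(beta):
    """Solve beta = theta - 1/theta: a quadratic with two real roots (two images)."""
    disc = math.sqrt(beta**2 + 4.0)
    return (beta + disc) / 2.0, (beta - disc) / 2.0

beta = 0.5                        # source offset, in Einstein-radius units
theta_plus, theta_minus = image_positions(beta)
# both image positions satisfy the lens equation
residual_plus = theta_plus - alpha_point(theta_plus) - beta
residual_minus = theta_minus - alpha_point(theta_minus) - beta
```

For $\beta = 0$ the two images merge into the Einstein ring at $\theta = \pm 1$; for $\beta \neq 0$ one image lies outside the Einstein radius and the other inside, on the opposite side of the lens.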
A Gaussian basis offers several advantages, including a fast analytical computation of the integrated mass for a given radius, a smooth, nearly constant amplitude between overlapping Gaussians (with equal amplitudes) located at the right distances, and an approximate orthogonality between relatively distant Gaussians that helps reduce unwanted correlations. For the compact component, we directly adopt the light distribution in the IR band (F160W). For each galaxy, we assign a mass proportional to its surface brightness. This mass is later re-adjusted as part of the optimization process. ![Contours of the mass distribution for the full model compared with the optical image. The circles mark the position of the BCG in the SE and the center of the NW group. []{data-label="Fig_Mass_Optical"}](Figs/ElGordo_MassContour_vs_Optical.pdf){width="9cm"} As shown by [@Diego2005; @Diego2007], the strong and weak lensing problem can be expressed as a system of linear equations that can be represented in a compact form, $${\bf {\it \Theta}}= {\bf \Gamma}{\bf {\it X}}, \label{eq_lens_system}$$ where the measured strong lensing observables (and weak lensing if available) are contained in the array ${\bf {\it \Theta}}$ of dimension $N_{\Theta }=2N_{\rm sl}$, the unknown surface mass density and source positions are in the array ${\bf {\it X}}$ of dimension $$N_{\rm X}=N_{\rm c} + N_{\rm g} + 2N_{\rm s} \label{eq_Nx}$$ and the matrix ${\bf \Gamma}$ is known (for a given grid configuration and fiducial galaxy deflection field) and has dimension $N_{\Theta }\times N_{\rm X}$. $N_{\rm sl}$ is the number of strong lensing observables (each one contributing two constraints, $x$ and $y$), and $N_{\rm c}$ is the number of grid points (or cells) that we use to divide the field of view. Each grid point contains a Gaussian function.
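The near-constant plateau produced by overlapping Gaussians of equal amplitude is easy to verify numerically. In this 1D sketch the grid spacing and width are illustrative choices, not the values used in the actual reconstruction:

```python
import math

def gaussian_sum(x, spacing, sigma, nmax=8):
    """Sum of equal-amplitude unit Gaussians placed on a 1D grid."""
    return sum(math.exp(-(x - k * spacing)**2 / (2.0 * sigma**2))
               for k in range(-nmax, nmax + 1))

d = 1.0            # grid spacing (arbitrary units)
sigma = 1.2 * d    # illustrative width; not the value used by WSLAP+
at_node = gaussian_sum(0.0, d, sigma)      # on top of a grid point
at_mid = gaussian_sum(0.5 * d, d, sigma)   # halfway between grid points
flatness = at_mid / at_node                # ~1: near-constant plateau
```

For widths comparable to or larger than the grid spacing, the sum at a grid point and halfway between grid points agree to much better than a percent, which is the plateau property exploited by the diffuse component.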
The widths of the Gaussians are chosen in such a way that two neighbouring grid points with the same amplitude produce a horizontal plateau in between the two overlapping Gaussians. In this work, we consider only regular grid configurations. Irregular grids are useful when there is a clear peak in the mass distribution, for instance when the cluster has a well defined centre or a single BCG. $N_{\rm g}$ (in Eq. \[eq\_Nx\]) is the number of deflection fields (from cluster members) that we consider. $N_{\rm g}$ can be seen as a number of mass layers, each one containing one or several galaxies at the distance of the cluster. In this work we set $N_{\rm g}$ equal to 1, i.e., all galaxies are forced to have the same mass-to-light ratio. ![image](Figs/ElGordo_Xray_vs_Mass_DRIVER.pdf){width="9cm"} ![image](Figs/ElGordo_Xray_vs_Mass.pdf){width="9cm"} Finally, $N_{\rm s}$ in Eq. \[eq\_Nx\] is the number of background sources (each one contributing two unknowns, $\beta_x$ and $\beta_y$), which in our particular case ranges from $N_{\rm s}=5$, when only the subset of reliable systems is used (driver model in section \[sect\_data\]), to $N_{\rm s}=16$, when all systems labeled A or B in Table \[tab\_arcs\] are used in the reconstruction (full model). The solution, $X$, of the system of equations \[eq\_lens\_system\] is found after minimising a quadratic function of $X$ [derived from the system of equations \[eq\_lens\_system\] as described in @Diego2005]. The minimisation of the quadratic function is done with the constraint that the solution, ${\bf {\it X}}$, must be positive. The vector ${\bf {\it X}}$ contains the grid masses, the re-normalisation factors for the galaxy deflection field, and the background source positions, all of which are always positive (the zero of the source positions is defined at the bottom left corner of the field of view).
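As a toy version of this constrained solve (the matrix, data and step size below are made up for illustration; the real system is far larger and is solved with the quadratic algorithm of [@Diego2005]), a non-negative least-squares problem can be handled with projected gradient descent:

```python
def solve_nonneg(gamma, theta, eta=0.05, iters=5000):
    """Projected gradient descent for min ||Gamma x - theta||^2 subject to x >= 0.
    A toy stand-in for the constrained quadratic optimisation used by WSLAP+."""
    nrow, ncol = len(gamma), len(gamma[0])
    x = [0.0] * ncol
    for _ in range(iters):
        # residual r = Gamma x - theta
        r = [sum(gamma[i][j] * x[j] for j in range(ncol)) - theta[i]
             for i in range(nrow)]
        # gradient direction Gamma^T r (the overall factor 2 is absorbed into eta)
        g = [sum(gamma[i][j] * r[i] for i in range(nrow)) for j in range(ncol)]
        # gradient step, then project back onto the constraint set x >= 0
        x = [max(0.0, x[j] - eta * g[j]) for j in range(ncol)]
    return x

# made-up 2x2 system whose unconstrained solution, (2, -3), violates positivity
gamma = [[2.0, 0.0],
         [0.0, 1.0]]
theta = [4.0, -3.0]
x = solve_nonneg(gamma, theta)    # positivity clips the second component to zero
```

In this toy system the unconstrained solution would be $(2, -3)$; the positivity constraint pins the second component at zero, analogous to how negative grid masses are excluded in the reconstruction.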
Imposing ${\bf {\it X}}>0$ helps constrain the space of meaningful solutions and regularise the solution, as it avoids large negative and positive contiguous fluctuations. The quadratic algorithm converges quickly (a few minutes), allowing multiple solutions to be explored in a relatively short time. Different solutions can be obtained after modifying the starting point in the optimization and/or the redshifts of the systems without spectroscopic redshift. A detailed discussion of the quadratic algorithm can be found in [@Diego2005]. For a discussion of its convergence and performance (based on simulated data), see [@Sendra2014]. Results {#sect_result} ======= Thanks to the new RELICS data we can revise the multiple-image identifications in this cluster and assign them a rank ranging from A (most reliable) to C (least reliable). Based on the set of images ranked A (see Table \[tab\_arcs\]) we derive the driver model, which is later used to uncover new multiple images, or to reveal issues with previous identifications. Even though the driver model is based on a relatively small subset of only 5 families of images, the spatial distribution of these 5 families allows us to derive a reliable lens model. The driver model disfavors the radial counterimage candidates 4.2 and 4.3 in [@Zitrin2013] [these images were discarded also by @Cerny2018], and instead we suggest these may be signatures of cooling flows or jets near the BCG (see Section \[Sect\_Filament\] for discussion). System 8 in [@Zitrin2013] shows a relatively good consistency with the driver model in terms of predicted vs observed positions, but the morphology of the observed images does not match well with the predicted morphology, so we do not use this system in any of our lensing reconstructions either (this system is still included in Table \[tab\_arcs\] for completeness).
Some of the counterimages postulated in earlier work as candidates (for instance 7c and 9c) are in general consistent in terms of position, but their morphology is not well reproduced by the lens model. We also unveil new image candidates, some of them independently identified in [@Cerny2018]. In addition to these, we identify additional new families as described in the appendix. System 15 in [@Cerny2018] is consistent with the driver model, but a third image is clearly predicted by the driver model and not observed. Consequently, we do not use this system in our reconstruction, although we should note that the predicted position for the third image is only a few arcseconds from the BCG. Hence, it is possible that the driver model is not accurate enough around the BCG, and that the third image lies buried behind the bright BCG, with a smaller magnification than the one predicted by the driver model. A smaller magnification is possible if the BCG has a larger mass-to-light ratio in the central region, for instance through a central spike in the mass distribution or a supermassive black hole at the centre. Based on the driver model, we expand the number of reliable systems and estimate their redshifts based on the available photometric redshift information and/or the redshift predicted by the driver model. Using the expanded set of systems (ranked A and B in Table \[tab\_arcs\]), we derive a new model, the [*full*]{} model. The mass distributions of the two models are compared in Figure \[Fig\_Mass\_Xray\]. For these plots, we have subtracted the contribution from the member galaxies to better show the diffuse component. The two models look similar to first order, but some differences can be appreciated, especially around the BCG, where the full model places the peak of the diffuse component several arcseconds from the BCG. In particular, the peak of the soft component correlates very well with the peak in the X-ray emission.
Similar correlations between the diffuse component obtained by our method and the observed X-ray emission were found in earlier work, where we discussed the possibility that the lensing data are also sensitive to the plasma mass. This possibility is discussed in more detail in Section \[Sect\_Filament\]. In terms of integrated mass, the two models also look similar, but with the full model having slightly more mass, especially in the SE group. A quantitative comparison of the integrated mass as a function of radius for each subgroup and the two models is presented in Figure \[Fig\_Profiles\]. The mass increase in the SE group is due mostly to the smaller photometric redshift of system 11 used to derive the full model compared with the larger redshift predicted by the driver model. Figure \[Fig\_System11\] shows the predicted images for 11a and 11b based on 11c for the driver model and the full model. The driver model does a good job at predicting the arc position and morphology for a source redshift of $\approx 3$. In the full model, the system is assumed to be at the photometric redshift of $z=2.2$ instead, which results in a mass increase in the SE group needed to compensate for the smaller redshift of the background source. ![Integrated mass as a function of aperture radius. The purple lines correspond to the driver model, the red lines correspond to the model derived with all arcs. For comparison we also show the corresponding integrated mass for the mass model from [@Cerny2018] as orange lines. In all cases, the solid lines are for profiles centred on the NW group while the dashed lines are for profiles centred on the SE BCG. []{data-label="Fig_Profiles"}](Figs/Mass_profile_ElGordo_V2.pdf){width="9cm"} Comparison with earlier results ------------------------------- In this section we compare our models with previous results derived from the same RELICS data and presented in [@Cerny2018].
We should note that the constraints used in this work and in [@Cerny2018] are not exactly the same, so some of the differences can be attributed to this fact. Our lensed candidates were derived independently from [@Cerny2018], although in many, but not all, cases our system candidates coincide. System 3, with photometric redshift $z=7.42$ in [@Cerny2018], is claimed to be a newly identified system. However, the positions of 3.1, 3.2 and 3.3 are very similar (within a fraction of an arcsec) to the positions of system 3 in [@Zitrin2013], but with a different photometric redshift of 4.16. For the position and redshift of system 3, we adopt the values of [@Zitrin2013] based on the color of the images and positions relative to the critical curve of preliminary models. We have not used system 8 from [@Cerny2018]. System 8 appears as a likely multiply lensed arc with two pairs of images very close to each other and possibly merging around a critical curve. System 10 in [@Cerny2018] is included as an extra knot in our system 1. Similarly, we have included the positions of system 20 of [@Cerny2018] as additional knots of system 2. For the driver model, all systems are also included in [@Cerny2018] (although, see the difference in redshift for system 3). For the alternative full model, we include all systems used in [@Cerny2018], except system 8 as mentioned above. In the full model, we also include systems not included in [@Cerny2018]. These are the new system candidates 6, 10, 13, 14 and 16 and the redefined system candidates 11, 12 and 14 listed in the table in the appendix. System 14 in [@Cerny2018] was not included in their model. Here we use a redefined version of this system as our new system 12. ![The top panel shows the proposed redefinition of the original system 5 in [@Zitrin2013] as the new system 11. The middle panel shows the merging arc predicted by the driver model assuming the background galaxy is at z=3.08.
The bottom panel shows the corresponding prediction made by the full model when the source is forced to be at the photometric redshift. The white circle in the middle and bottom panels marks the position of 11a. []{data-label="Fig_System11"}](Figs/System11_G3.pdf){width="9cm"} The critical curves of our two models and the model in [@Cerny2018] are compared in Figure \[Fig\_CCcomparison\]. The position of the critical curves is consistent between the models, although our model predicts slightly wider critical curves, suggesting a rounder distribution for the projected surface mass density in our model. In contrast, [@Cerny2018] predicts a narrower distribution of matter, with the mass being more concentrated around the line intersecting the two clumps. The models show better agreement (in terms of the positions of the critical curve) around the positions of the constraints. The figure shows the estimated observed position of the critical curve based on symmetry arguments for the giant arc of system 2 (at $z\approx 3.3$). All models agree relatively well with this position by placing the critical curve (at the redshift of system 2) very close to, or intersecting, the estimated position of the critical curve. In the South-East part of the lens, differences between models are larger, reflecting the relatively lower density of constraints in this part of the lens (see Figure \[Fig\_CC\_Arcs\_Chandra\]), but possibly also the fact that parametric methods assume explicit mass profiles that can extend the mass distribution beyond the range of distances covered by the lensing constraints. A more quantitative comparison of the magnification between the different models can be made by comparing the curves, $A(>\mu)$, of the area above a given magnification.
These curves are computed by integrating the differential area curves, i.e. $A(>\mu) = \int_{\mu}^{\mu_{\rm max}} (dA/d\mu')\, d\mu'$, where $\mu_{\rm max}$ is the maximum magnification considered (220 in this case) and $dA/d\mu$ is the area in the lens plane with magnification $\mu$ and in the interval $d\mu$, divided by the magnification $\mu$ (i.e. the corresponding area in the source plane). The curves $A(>\mu)$ follow the usual $A_o\mu^{-2}$ scaling above magnification $\mu\approx 10$. The values of the normalization for the different models at $z_s=3.3$ are (in arcmin$^2$): $A_o=4.5$ (Cerny18 model), $A_o=10$ (driver model) and $A_o=8.5$ (full model). $A(>\mu)$ can be interpreted as the probability of a galaxy being lensed by a factor larger than $\mu$. At high magnifications, the driver and full models predict about twice the probability compared with the model in [@Cerny2018]. This difference is mostly due to the shallower profiles in the driver and full models around the position of the critical curves. The values of $A_o$ put El Gordo at a level comparable to other powerful lenses, like the Hubble Frontier Fields clusters, in terms of lensing efficiency (see Vega-Ferrero et al. 2019, accepted). This means that future observations, like the planned ones with JWST, promise to reveal many additional high-redshift lensed galaxies. Due to the relatively large separation between the two groups, the critical curve on the Central-East side of the cluster is relatively unperturbed by cluster members (this can be appreciated in Figure \[Fig\_CC\_Arcs\_Chandra\], where the critical curves are very smooth in this part of the cluster). At even higher redshift, the critical curves move outwards, where the distortion by cluster members is expected to be even smaller. This has important implications for the probability of observing caustic-crossing events of distant stars, for instance Pop III stars at $z>7$ as suggested by [@Windhorst2018].
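The $A(>\mu)$ curves defined above can be sketched as a sum over lens-plane pixels, each weighted by $1/|\mu|$ to convert to source-plane area; the magnification map below is a toy fold model with $\mu \propto 1/|x|$, not the El Gordo model, and the grid units are arbitrary:

```python
import numpy as np

def area_above_mu(mu_map, mu_min, pix_area, mu_max=220.0):
    """Source-plane area magnified above mu_min: each lens-plane pixel with
    mu_min < |mu| < mu_max contributes its area divided by |mu| (its
    footprint in the source plane), following the definition in the text."""
    mu = np.abs(mu_map)
    sel = (mu > mu_min) & (mu < mu_max)
    return np.sum(pix_area / mu[sel])

# Toy magnification map with a straight fold line at x = 0 (mu ~ 1/|x|).
x = np.linspace(-2.0, 2.0, 1001)
X, Y = np.meshgrid(x, x)
mu_map = 1.0 / np.maximum(np.abs(X), 1e-4)
pix_area = (x[1] - x[0]) ** 2

A10 = area_above_mu(mu_map, 10.0, pix_area)
A20 = area_above_mu(mu_map, 20.0, pix_area)
ratio = A10 / A20  # for a fold, A(>mu) ~ mu^-2, so this should be close to 4
```

Recovering `ratio` close to 4 reproduces, on this toy map, the $A_o\mu^{-2}$ scaling quoted for the real models.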
Pristine critical curves (that is, critical curves which are not perturbed by microlensing from stars or remnants in the cluster members or in the intracluster medium) can host lensed images in their vicinity with magnification factors of order $10^6$ when the background source has the size of a Pop III star. In contrast, critical curves that are close to the cluster centre (for instance, for background objects at relatively low redshifts of $z\approx 2$ or less) are normally perturbed by such microlenses, resulting in maximum magnification factors of order $10^4$ for background sources with sizes comparable to giant stars [see for instance @Diego2019 for details]. In terms of total mass, the agreement between the models is made more evident when looking at the integrated mass as a function of aperture radius. In order to better account for the asymmetric nature of the cluster, we set the centre of the aperture at the position of the two main galaxies (or BCGs). For each centre, we compute the projected mass within a given aperture as a function of the aperture radius. The resulting profiles are shown in Figure \[Fig\_Profiles\]. All models agree well, especially between $\approx 100$–$300$ kpc, which is the range where lensing constraints are more abundant. At small radii ($r<100$ kpc), the model in [@Cerny2018] predicts slightly more mass than our free-form models, especially in the SE clump. At radii larger than $\approx 400$ kpc our free-form models fall below the prediction of the model of [@Cerny2018]. This is expected behaviour, since free-form models usually assign low masses to areas extending beyond the realm of the lensing constraints. This is simply a memory effect of the algorithm, which does not constrain distant regions in the field of view, leaving their masses close to their initial value before the minimization (these masses are originally assigned small random values).
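The aperture-mass profiles discussed above reduce to a simple pixel sum over a convergence map; here is a minimal numpy sketch, assuming a map in convergence units with a known pixel scale and critical surface density (the grid, pixel scale and values below are illustrative, not the actual El Gordo maps):

```python
import numpy as np

def aperture_mass(kappa, centre_pix, pix_kpc, sigma_crit, radii_kpc):
    """Projected (cylinder) mass inside circular apertures of a convergence
    map: M(<R) = sigma_crit * (pixel area) * sum of kappa over pixels with
    r < R. Units follow sigma_crit (here mass per kpc^2)."""
    ny, nx = kappa.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - centre_pix[0], y - centre_pix[1]) * pix_kpc
    return np.array([sigma_crit * kappa[r < R].sum() * pix_kpc**2
                     for R in radii_kpc])

# Sanity check on a uniform sheet with kappa = 0.5 (all values illustrative):
# the aperture mass should approach kappa * sigma_crit * pi * R^2.
kappa = np.full((400, 400), 0.5)
m100, m300 = aperture_mass(kappa, (200, 200), pix_kpc=5.0,
                           sigma_crit=1.0, radii_kpc=[100.0, 300.0])
```

Running the same function with two different `centre_pix` values gives the per-clump profiles of Figure \[Fig\_Profiles\].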
Discussion {#sect_disc} ========== The results from the previous section suggest that the mass in El Gordo is relatively well constrained in the inner 500 kpc region. Within this range, [@Cerny2018] find that the masses of the two clumps within a 500 kpc radius have a mass ratio of 1.19 (SE/NW). Within the same radius, we find that the SE/NW ratio is 0.98 for the driver model and 1.11 for the model with all systems. At 100 kpc, this ratio grows to 1.17 and 1.18 for the driver model and full model, respectively. This should be compared with the dynamical masses inferred in [@Menanteau2012], where for the SE/NW ratio (within the virial radius) they find a value of $0.6\pm0.4$; a ratio of $\sim 1$ is hence consistent with their measurement at the $1\sigma$ level. The weak lensing analysis in [@Jee2014] finds a more discrepant SE/NW ratio (in the virial masses) of $0.56 \pm 0.17$ (statistical), in contrast with our results. This discrepancy may be due to systematic effects in either analysis, but it is also possible that the NW group becomes more massive than the SE group beyond the 500 kpc radius studied in the strong lensing analysis. One of the more puzzling aspects of the El Gordo cluster is the position of the X-ray emission in relation to the peak in the mass distribution. [@Botteon2016] study this cluster with X-rays and infer a very high velocity for the shock (with a Mach number of 3 or above), which is spatially coincident with one of the radio relics. [@Ng2015] combine different observations of the El Gordo cluster to constrain its dynamical state. Based on the separation of the two subgroups, the morphology of the radio relics and their polarization angle, they conclude that the cluster is most likely in a return phase. This naturally explains the relative position of the X-ray peak and the main BCG, which seems to be lagging behind the X-ray peak.
[@Hallman2004] study the low-redshift cluster A168, which resembles El Gordo. As in El Gordo, in A168 the peak of the X-ray emission seems to have moved ahead of the dominant galaxies in the cluster. They conclude that the “subcluster gas slingshots past the dark matter center, becomes unbound from the subcluster and expands adiabatically”. This type of adiabatic expansion has been observed in N-body hydrodynamical simulations [@Mathis2005], and it can result in a substantial cooling of the gas as it leaves the potential well. For a cluster in a return phase after a head-on collision, the gas leaves the potential well twice; a first time as it drags behind the peak of the mass distribution due to ram pressure, and a second time when the peak of the mass distribution falls back towards the centre of mass, sprinting through the gas. The results from our driver model seem to agree better with the [@Ng2015] interpretation (returning phase), since the X-ray peak is ahead of the mass peak. On the contrary, the full model seems to agree better with the interpretation of [@Molnar2015] and [@Zhang2015], since the mass peak coincides with the X-ray peak, and not with the BCG. This would suggest that the BCG was perturbed out of the potential well of the infalling cluster, which may be explained by the merger. Based on results from full N-body/hydro simulations [@Molnar2015; @Zhang2015], the infall velocity for El Gordo is not as large as for the Bullet cluster. If the velocity is not large enough, the ram pressure does not produce noticeable offsets between the mass and gas centers of the infalling cluster. ![Comparison of the critical curves between our models and the model in [@Cerny2018]. The yellow ellipse marks the observed position of the critical curve from system 2 (at $z \approx 3.3$).
[]{data-label="Fig_CCcomparison"}](Figs/CriticalCurves.pdf){width="9cm"} Total mass ---------- We estimate the total mass within $r_{200c}$ by fitting a projected NFW profile to the projected mass along the line of sight. We compute the integrated mass as a function of aperture radius taking as the centre of the cluster the actual centre of the FOV considered in this work, a position that falls right in between the two clumps. The integrated mass is shown in Figure \[Fig\_TotalMass\] as a black solid line. Since the centre of the profile is taken at the middle point between the two subgroups, at small radii the integrated mass grows relatively slowly with increasing radius. Only at radii larger than $\approx 400$ kpc does the integrated mass shown in Figure \[Fig\_TotalMass\] include both clumps. At radii extending beyond the region where lensing constraints are available, our lensing code typically results in profiles which fall below an extrapolated NFW profile. This is a consequence of the lack of constraints in these regions, which translates into the final solution [*remembering*]{} the initial condition, typically values of the mass close to zero [see our earlier work for a detailed discussion of this memory effect @Diego2005; @Diego2007]. The memory effect can be appreciated in Figure \[Fig\_TotalMass\] beyond radii $\approx 600$ kpc. Given the asymmetry of the cluster and the extension of the lensing constraints, a fit to a symmetrical profile makes sense only in between these two radii (400–600 kpc). In Figure \[Fig\_TotalMass\] we show three NFW profiles that are fit to the aperture mass profile in this range of radii. For each NFW model, we assume a concentration value of $C=R_{\rm vir}/r_s=6$ and vary only the scale radius. This value of the concentration is expected for clusters of masses similar to El Gordo.
The dependence on the concentration is shown for the model with scale radius 250 kpc (dark blue dotted curves), where we vary the concentration between $C=4.6$ and $C=7.8$. The asymmetry of the cluster, together with the limited extension of the model in radius, does not allow us to get a good constraint on either the concentration parameter or the scale radius (with the consequent impact on the uncertainty due to the extrapolation), but the valid range of models predicts a mass in the range $M_{200c}=(1.35\pm 0.15)\times10^{15}$M$_{\odot}$ based on the extrapolation of suitable NFW profiles. This mass estimate is lower than previous estimates [by up to a factor $\approx 2.3$ compared with the results in @Jee2014], reducing the tension between the mass of this cluster and predictions from LCDM models. When compared with the simulated El Gordo cluster in [@Molnar2015] (black dashed line in Figure \[Fig\_TotalMass\]), the integrated mass profile shows a remarkably good agreement with the lens model (thick black line) below $\approx 500$ kpc. Note that the simulated mass profile is not a fit to our reconstructed mass profile. Between $\approx 500$ kpc and $\approx 700$ kpc the mass of the simulated cluster increases more rapidly than in the lens model, suggesting that the scale radius in the simulated cluster is larger than 250 kpc. By fitting this regime in radius to an NFW profile, we infer a large scale radius for the simulated cluster (light blue curve in Figure \[Fig\_TotalMass\] with a scale radius of 800 kpc and concentration parameter $C=3$). Extrapolation of the NFW profile derived from fitting the simulated profile in the radii between $\approx 500$ kpc and $\approx 700$ kpc results in an overestimation of the $M_{200c}$ mass by $\approx 60\%$, suggesting that our inferred $M_{200c}$ mass for the El Gordo cluster may also be overestimated, although the smaller scale radius of the NFW profiles used in this case should result in a smaller overestimate.
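The extrapolation step can be illustrated with the spherical NFW enclosed-mass formula (the actual fit in the text uses the projected cylinder mass, so this is a simplified sketch; the anchor mass, scale radius and critical density below are assumed illustrative values, not the paper's measurements):

```python
import numpy as np

def nfw_menc(r, rs, rho_s):
    """Spherical NFW enclosed mass: 4 pi rho_s rs^3 [ln(1+x) - x/(1+x)], x = r/rs."""
    x = np.asarray(r, dtype=float) / rs
    return 4.0 * np.pi * rho_s * rs**3 * (np.log1p(x) - x / (1.0 + x))

# Hypothetical anchor: suppose the fit gives M(<500 kpc) ~ 5e14 Msun for a
# scale radius rs = 250 kpc (illustrative numbers only).
rs = 250.0                                   # kpc
m_anchor = 5.0e14                            # Msun, assumed
rho_s = m_anchor / nfw_menc(500.0, rs, 1.0)  # enclosed mass is linear in rho_s

# Extrapolate to r200c, the radius where the mean enclosed density equals
# 200 x rho_crit(z); rho_crit at z ~ 0.87 is roughly 3.6e2 Msun/kpc^3.
rho_crit = 3.6e2
radii = np.linspace(300.0, 4000.0, 20000)
mean_rho = nfw_menc(radii, rs, rho_s) / (4.0 / 3.0 * np.pi * radii**3)
r200c = radii[np.argmax(mean_rho < 200.0 * rho_crit)]
m200c = nfw_menc(r200c, rs, rho_s)
```

Re-running the same machinery with a larger scale radius (as suggested by the simulated cluster) yields a correspondingly larger extrapolated $M_{200c}$, which is the sensitivity discussed above.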
The question of the total mass of El Gordo cannot be settled until a proper joint analysis (combining strong lensing to break the mass-sheet degeneracy and weak lensing to cover the larger scales) is performed. As shown by the simulated cluster in Figure \[Fig\_TotalMass\], extrapolations of a single analytical radial profile (where the analytical radial profile is constrained in a limited range of radii) of a non-symmetric cluster can be unreliable. The contribution from the gas and filamentary structure around the SE BCG {#Sect_Filament} ------------------------------------------------------------------------- The position of the peak in the mass distribution of the full model is coincident with the peak in the X-ray emission. This coincidence raises the possibility that the X-ray emitting gas is contributing substantially to the projected mass in this region of the lens plane. Based on X-ray data, [@Menanteau2012] constrain the electron density to values between 0.023 cm$^{-3}$ and 0.045 cm$^{-3}$ within a region of diameter $\approx 170$ kpc (or $\approx 22$”). Based on this electron density, the gas surface mass density (projected over 170 kpc along the line of sight) is then $\Sigma_{gas}\approx 100$–$200 {\rm M}_{\odot}$ pc$^{-2}$, which should be compared with the critical surface mass density of $\Sigma_{crit} = 2800 {\rm M}_{\odot}$ pc$^{-2}$ for a source at $z_s=3$. The contribution from the gas to the convergence, $\kappa$ (where $\kappa$ is defined as the ratio between the surface mass density and the critical surface mass density at the given lens and source redshifts), projected along this relatively small interval of 170 kpc is 0.035–0.07. At the peak of the X-ray emission, and projecting over larger distances, the gas can easily contribute up to 0.1 to $\kappa$. This may be sufficient to explain the correlation between the total mass peak and the X-ray emission shown in Figure \[Fig\_Mass\_Xray\].
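The arithmetic behind this gas-convergence estimate can be checked directly; a hydrogen-only gas ($\mu_e = 1$) is assumed here for simplicity, which roughly reproduces the quoted $\Sigma_{\rm gas}$ and $\kappa$ ranges (a helium correction of $\sim 15\%$ would raise them slightly):

```python
# Order-of-magnitude check of the gas contribution to the convergence, using
# the quoted electron-density range and cgs constants.
M_P = 1.6726e-24     # proton mass, g
KPC = 3.0857e21      # cm per kpc
PC = 3.0857e18       # cm per pc
MSUN = 1.989e33      # g per solar mass

def gas_sigma(n_e, depth_kpc=170.0, mu_e=1.0):
    """Gas surface density in Msun/pc^2 for electron density n_e in cm^-3,
    projected over depth_kpc; mu_e = 1 assumes a hydrogen-only gas."""
    sigma_cgs = n_e * mu_e * M_P * depth_kpc * KPC   # g / cm^2
    return sigma_cgs / (MSUN / PC**2)

sigma_crit = 2800.0  # Msun/pc^2 for a source at z_s = 3, as quoted in the text
for n_e in (0.023, 0.045):
    s = gas_sigma(n_e)
    print(f"n_e = {n_e} cm^-3: Sigma_gas ~ {s:.0f} Msun/pc^2, kappa ~ {s / sigma_crit:.3f}")
```

The two electron densities bracket $\Sigma_{\rm gas}$ between roughly 100 and 190 M$_{\odot}$ pc$^{-2}$, i.e. $\kappa \approx 0.035$–$0.07$ over the 170 kpc interval, as stated above.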
![The thick black solid line shows the integrated mass from the lens model as a function of aperture radius (i.e. mass projected in a cylinder of radius R) when the centre is in the middle of the field of view (that is, in between the two clumps). The black dashed line is the profile derived from the simulated “El Gordo” cluster in [@Molnar2015] when the centre is chosen in the middle of the two subgroups (i.e. as in the lens-model profile). The three colored solid lines are the corresponding mass in the same aperture radius (i.e. mass projected in a cylinder) for NFW profiles with concentration parameter $C=6$ and scale radius between 150 kpc and 250 kpc. The two dotted blue lines are two alternative models with scale radius 250 kpc and concentration $C=6\times1.3$ and $C=6/1.3$. The normalization is obtained after fitting the NFW profiles to the black line in the range 400–600 kpc (see text for details). The light blue curve is the mass enclosed in a sphere with radius R and density 200 times the critical density (at the redshift of the cluster). []{data-label="Fig_TotalMass"}](Figs/TotalMass_ElGordo.pdf){width="9cm"} Interestingly, as discussed earlier, the peak of the X-ray emission coincides with blue features observed in the UV-optical bands (A and B in Figure \[Fig\_CoolingFlow\]). Based on the driver model, if these two features are multiply lensed objects they need to be at a redshift above $z=1.8$. However, at this redshift, the predicted images would not form radially oriented arcs in our models, but rather tangential arcs. Radially oriented arcs at this position of the lens plane appear for redshifts $z>2.5$, but the lens model cannot reproduce arcs with a morphology similar to the observed ones. For redshifts between $z=4.5$ and $z=5.5$, counterimages for the arcs are expected at positions compatible with the three arclets of system 4.
This correspondence explains the original classification of these features as part of system 4, but the morphology of the predicted images differs significantly from the observation, making this possibility unlikely. The two features are at distances of $\approx 35$ and $\approx 50$ kpc respectively from the centre of the BCG. The tight correlation shown in Figure \[Fig\_CoolingFlow\] between these two features and the offset peak of the X-ray emission suggests that these two features may be the optical counterpart of a cooling flow (alternatively they could be associated with a jet emitting in UV-optical and X-rays). A well-studied case that could serve as a similar example is the nearby cluster Abell 2597 (z=0.0821), where an arc-like feature also correlates very well with the peak in the X-ray emission. [@Tremblay2012] study this cluster combining data from X-ray, UV/optical, NIR and radio observations. They find evidence for a cooling flow in the X-ray band and filamentary features in the FUV and optical bands that resemble the blue features shown in Figure \[Fig\_CoolingFlow\], which could be associated with precipitation of the gas [@Voit2015]. ![UV features in El Gordo. The contours show the X-ray emission observed by Chandra. Features A, B and C are visible in the bluer HST bands and are marked with yellow ellipses. The orange circle marks the position of a compact radio source in [@Lindner2014]. []{data-label="Fig_CoolingFlow"}](Figs/ElGordo_Xray_vs_Optical_ZoomBCG_CoolingFlow_Gamma.pdf){width="9cm"} Another remarkable example is Abell 1795, where the spatial correlation between the cooling flow and the FUV emission is even clearer. Bright structures visible in the UV/optical, but also in Ly-$\alpha$ (and H$\alpha$) emission, around the BCGs are believed to contain active star-forming regions with luminous and hot stars [@Oonk2011].
[@Oonk2011] suggest that contributions from stars are not sufficient to explain the FUV emission in Abell 2597, and that additional contributions from non-thermal processes should be considered. In [@Donahue2015], the authors study a larger sample of 25 BCGs in CLASH clusters spanning redshifts between $z=0.206$ and $z=0.890$. Similar filamentary features are found around several of the BCGs, some of them resembling the features seen around the BCG in El Gordo. Based on the similarity with the cold-gas structures produced in simulations of precipitation-driven active galactic nucleus feedback, in which jets uplift low-entropy gas to greater altitudes [@Li2014; @Li2015], they argue that AGN jets uplift the low-entropy gas, causing it to condense. In a similar study based on 10 cool-core BCGs, [@Mittal2015] argue that the cooling of the ICM contributes to the star formation in cool-core BCGs. [@Tremblay2015] study a sample of 16 cool-core BCGs at $z<0.3$ (also exhibiting UV filamentary structure around the BCGs). They find that “nearly half of the sample possesses kpc-scale filaments that, in projection, extend towards and around radio lobes and/or X-ray cavities”, and conclude that the “filaments may have been uplifted by the propagating jet or buoyant X-ray bubble, or may have formed in situ by cloud collapse”. In this earlier work, the presence of an AGN is needed in order to power the uplifting mechanism. In the case of El Gordo, neither radio observations nor X-ray data support the hypothesis of an AGN at the centre of the BCG powering recent radio jets. Radio data reveal relics, but at a much larger distance from the BCG. [@Lindner2014] report a compact radio source (named U7 in Figure 16 of their paper) at 75 kpc SE from the BCG, which is spatially coincident with the peak of the X-ray emission (see Figure \[Fig\_CoolingFlow\]).
The peak of the emission in the Chandra data is coincident with the position of the two blue features, and shows no evidence of point-source emission at the centre of the BCG. One could argue that an AGN (i.e. a SMBH) was ejected from the BCG, but this would imply a record-breaking offset of $\approx 75$ kpc, which is very difficult to explain with simulations, where typical offsets are $\approx 10$ kpc at most. Instead, it may be more plausible that the UV/blue emission is a consequence of the cooling of the gas into large star-forming regions, or that we are witnessing the tail of the X-ray radiation (bremsstrahlung). On the other hand, the other blue features observed around the BCG would also agree with the scenario discussed in earlier work [@Donahue2015; @Tremblay2015; @Mittal2015], except for the fact that there is no sign of an AGN at the centre of the galaxy (i.e., X-ray cavities around the BCG, an X-ray point source at the centre of the BCG, or radio emission associated with the BCG). Conclusions {#sect_concl} =========== We derive a new lens model for the El Gordo cluster using data from the RELICS program. We first derive a robust model (nicknamed the driver model) based on a reliable subsample of lensed galaxies. Using the driver model, we unveil new strongly lensed system candidates and infer their redshifts. With the full set of lens systems we derive an alternative model (or full model) for the mass distribution. Both models are similar to each other, but small differences can be identified, especially in the SE sector of the cluster. We explicitly compare our models with the one derived by [@Cerny2018] using the same RELICS data, but a different sample of lensed galaxies (although with substantial overlap between our sample and theirs). We find that our lens model predicts wider critical curves, but the integrated mass as a function of aperture is consistent with the model of [@Cerny2018].
Our new model also predicts nearly twice the lensing efficiency above a given magnification factor (at large magnifications). By fitting our full lens model to an NFW profile, and extrapolating up to $R_{200c}$, we find a mass $M_{200c}=(1.35 \pm 0.15)\times10^{15}$M$_{\odot}$, where the uncertainty comes mostly from the poorly constrained scale radius (and to a lesser degree the concentration parameter). This mass estimate is smaller than previous estimates (by factors ranging from 1.3 to 2.3), relaxing the tension with standard LCDM models, which predict that clusters with masses above $M_{200\rho}=1.7\times10^{15}$M$_{\odot}$ at this redshift should be rare. We test the accuracy of the profile extrapolation using an N-body simulation that matches most of the observed features in El Gordo, and conclude that our mass estimate may still be overestimating the real mass due to an improper extrapolation of the profile. A combination of strong and weak lensing data should allow for a better constraint on the total mass of this cluster. We find evidence for the lens model being sensitive to the gas mass. In particular, we find that the peak of the smooth component of the mass distribution in the full lens model agrees well with the peak of the X-ray emission (which is offset with respect to the nearby BCG). We discuss the possibility that two features at the location of these peaks, which are observed in the optical-UV bands and were interpreted in the past as background lensed galaxies, are instead the optical counterpart of a cooling flow, or a precipitation mechanism from the hot plasma. The new lens model will be valuable when El Gordo is observed as part of Cycle 1 of JWST. New arc systems, including several at high redshift, are expected to be discovered with the new JWST data. Our reliable lens model will be used to identify new strongly lensed system candidates, as well as to estimate their redshifts. J.M.D.
acknowledges the support of projects AYA2015-64508-P (MINECO/FEDER, UE), funded by the Ministerio de Economia y Competitividad. This work was funded by NASA JWST Interdisciplinary Scientist grants NAG5-12460, NNX14AN10G, and 80NSSC18K0200 to RAW from GSFC. We would like to thank Harald Ebeling for making the code [ASMOOTH]{} [@Ebeling2006] available. J.M.D. acknowledges the hospitality of the Physics Department at the University of Pennsylvania for hosting him during the preparation of this work. This work is based on observations made with the NASA/ESA [*Hubble Space Telescope*]{}, operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. Part of the data for this study is retrieved from the Mikulski Archive for Space Telescopes (MAST). The authors would like to thank the RELICS team for making the reduced data set available to the community. The scientific results reported in this article are based in part on data obtained from the Chandra Data Archive [^3][^4][^5]. Compilation of arc positions ============================ This appendix presents the sample of secure and likely lensed multiple images detected behind El Gordo using the updated imaging from the RELICS program. Table \[tab\_arcs\] lists the complete sample of images and their redshifts, assigning IDs to each of them. The first column shows the system ID following the original notation of [@Zitrin2013] (ID1.ID2.ID3 = System.Image.Knot) and the ranks (A, B and C). Systems 1-4 were initially presented by [@Zitrin2013]. IDs marked with bold face are new systems presented in this work. Photometric redshifts are given in column $z_{\rm phot}$. The systems having spectroscopic redshifts are marked with bold face. Redshifts predicted by the lens model are given in column $z_{\rm model}$. The redshifts used to reconstruct the lens are given in column $z_{\rm used}$. The column labeled Rank shows the quality of the system.
Systems marked with rank A are very reliable and are used to derive the [*driver*]{} model. Systems marked with B are used to derive (together with the rank A systems) the [*full*]{} model. Systems marked with C are less reliable, but still highly consistent with the driver lens model. The last column refers to the previous work where these systems were defined: Z13 stands for [@Zitrin2013] and C18 for [@Cerny2018]. The number in parenthesis next to the reference indicates the system ID in the corresponding publication.

KnotID RA DEC $z_{\rm phot}$ $z_{\rm model}$ $z_{\rm used}$ Rank Comments
---------------- ------------- -------------- ---------------- ----------------- ---------------- ------ ----------
1.1.1 1 02 53.275 -49 15 16.49 3.0 3.0 A Z13(1)
1.2.1 1 02 52.819 -49 15 18.29 2.93 A
1.3.1 1 02 55.411 -49 14 59.90 A
1.1.2 1 02 53.340 -49 15 16.36 A
1.2.2 1 02 52.763 -49 15 18.70 2.8 A
1.3.2 1 02 55.391 -49 15 00.33 2.91 A
1.1.3 1 02 53.480 -49 15 16.01 A
1.2.3 1 02 52.600 -49 15 19.68 A
1.3.3 1 02 55.320 -49 15 01.18 3.26 A
2.1.1 1 02 55.828 -49 15 52.37 3.21 3.3 3.3 A Z13(2)
2.2.1 1 02 56.749 -49 15 46.01 3.39 A
2.3.1 1 02 54.429 -49 16 04.63 3.3 A
2.1.2 1 02 55.671 -49 15 53.54 A
2.2.2 1 02 56.885 -49 15 45.17 3.27 A
2.3.2 1 02 54.456 -49 16 04.00 2.9 A
2.1.3 1 02 55.983 -49 15 51.24 A
2.2.3 1 02 56.573 -49 15 47.06 A
2.3.3 1 02 54.383 -49 16 04.61 A
3.1.1 1 02 56.257 -49 15 07.03 4.4 4.4 A Z13(3)
3.2.1 1 02 54.751 -49 15 19.54 A
3.3.1 1 02 51.536 -49 15 38.47 4.54 A
4.1.1 1 02 59.986 -49 15 49.54 3.98 3.2 4.0 A Z13(4)
4.2.1 1 02 55.362 -49 16 26.09 4.0 A
4.3.1 1 02 56.599 -49 16 08.45 A
5.1.1 1 02 54.539 -49 14 58.60 2.4 2.8 2.8 A C18(13)
5.2.1 1 02 53.230 -49 15 07.11 A
5.3.1 1 02 51.803 -49 15 17.05 2.2,2.5 A
[**6.1.1**]{} 1 02 55.484 -49 15 05.04 4.3 4.3 B
[**6.2.1**]{} 1 02 55.067 -49 15 09.84 4.3 B
[**6.3.1**]{} 1 02 51.242 -49 15 37.08 4.3,4.5 4.3 C
[**6.1.2**]{} 1 02 55.330 -49 15 05.70 B
[**6.2.2**]{} 1 02 55.134 -49 15 07.80 B
[**6.3.2**]{} 1 02 51.193 -49 15 37.08 C 7.1.1 1 02 55.477 -49 16 07.32 4.53 4.5 B Z13 7.2.1 1 02 54.927 -49 16 14.85 B 7.3.1 1 02 59.321 -49 15 44.52 4.8 C 8.1.1 1 02 55.836 -49 16 07.56 3.55 4.0 3.5 D Z13 8.2.1 1 02 55.211 -49 16 16.10 D 9.1.1 1 02 56.288 -49 16 07.90 2.72 3.0 2.9 B Z13 9.2.1 1 02 55.641 -49 16 17.54 2.26 B 9.3.1 1 02 59.043 -49 15 53.35 2.32 C [**10.1.1**]{} 1 02 55.784 -49 15 13.91 5.1 5.15 5.1 B [**10.2.1**]{} 1 02 55.558 -49 15 15.99 B [**10.3.1**]{} 1 02 51.772 -49 15 44.75 C 11.1.1 1 02 59.612 -49 16 26.61 2.19 3.1 2.2 B Z13(5) 11.2.1 1 02 59.467 -49 16 27.99 B [**11.3.1**]{} 1 02 57.774 -49 16 39.10 B 12.1.1 1 02 54.571 -49 14 54.16 3.36 3 3.0 B C18(14) [**12.2.1**]{} 1 02 53.021 -49 15 04.94 B [**12.3.1**]{} 1 02 51.782 -49 15 14.38 2.8 B [**13.1.1**]{} 1 02 59.884 -49 16 30.53 2.4 3.0 B [**13.2.1**]{} 1 02 59.719 -49 16 32.59 B [**14.1.1**]{} 1 03 00.135 -49 15 46.29 2.74 4 4.0 B [**14.2.1**]{} 1 02 55.161 -49 16 23.07 4 B [**14.3.1**]{} 1 02 56.331 -49 16 08.55 B KnotID RA DEC $z_{phot}$ $z_{\rm model}$ $z_{\rm used}$ Rank Comments ---------------- ------------- -------------- ------------ ----------------- ---------------- ------ ---------- 15.1.1 1 02 58.512 -49 16 37.00 2.7 2.65 2.7 B C18(5) 15.2.1 1 02 58.736 -49 16 35.71 2.8 B 15.3.1 1 03 00.100 -49 16 21.12 C [**16.1.1**]{} 1 02 58.017 -49 15 33.48 4.3 4.1 B [**16.2.1**]{} 1 02 55.237 -49 15 53.35 B [**16.3.1**]{} 1 02 53.719 -49 16 01.99 4.13 B [**17.1.1**]{} 1 02 55.546 -49 14 58.28 4.6 4.6 C [**17.2.1**]{} 1 02 54.693 -49 15 04.33 C [**17.3.1**]{} 1 02 50.950 -49 15 33.57 4.4 C [**18.1.1**]{} 1 02 57.018 -49 15 47.45 3.4 3.3 C [**18.2.1**]{} 1 02 55.784 -49 15 56.22 3.27 C [**18.3.1**]{} 1 02 54.575 -49 16 06.89 C [**19.1.1**]{} 1 02 52.709 -49 15 51.82 4.5 5.0 C [**19.2.1**]{} 1 02 55.275 -49 15 33.43 C [**19.3.1**]{} 1 02 56.886 -49 15 21.16 C [^1]: [email protected] [^2]: Note however, that through observations one measures the reduced shear, $\gamma_r=\gamma/(1-\kappa)$ 
where $\kappa$ is the convergence. [^3]: ivo://ADS/Sa.CXO\#obs/12258 [^4]: ivo://ADS/Sa.CXO\#obs/14022 [^5]: ivo://ADS/Sa.CXO\#obs/14023
--- abstract: 'Sum-of-norms clustering is a clustering formulation based on convex optimization that automatically induces hierarchy. Multiple algorithms have been proposed to solve the optimization problem: subgradient descent by Hocking et al. [@hocking], ADMM and AMA by Chi and Lange [@Chi], the stochastic incremental algorithm by Panahi et al. [@Panahi] and the semismooth Newton-CG augmented Lagrangian method by Yuan et al. [@dsun1]. All algorithms yield approximate solutions, even though an exact solution is demanded to determine the correct cluster assignment. The purpose of this paper is to close the gap between the output from existing algorithms and the exact solution to the optimization problem. We present a clustering test that identifies and certifies the correct cluster assignment from an approximate solution yielded by any primal-dual algorithm. The test may not succeed if the approximation is inaccurate. However, we show the correct cluster assignment is guaranteed to be found by a symmetric primal-dual path following algorithm after sufficiently many iterations, provided that the model parameter $\lambda$ avoids a finite number of bad values. Numerical experiments are conducted to support our results.' author: - 'Tao Jiang[^1]' - 'Stephen Vavasis[^2]' bibliography: - 'optimization.bib' title: 'On identifying clusters from sum-of-norms clustering computation' --- Introduction ============ Clustering is a fundamental problem in unsupervised learning. The goal of clustering is to seek a partition of $n$ points, ${\bm{a}}_1,{\bm{a}}_2, \ldots,{\bm{a}}_n \in \mathbb R^d$, such that points in the same subset are closer to each other than to points in other subsets. Clustering is usually formulated as a discrete optimization problem, which is combinatorially hard to solve and beset by nonoptimal local minimizers. Classical methods such as k-means and hierarchical clustering are prone to these issues.
Meanwhile, the hardness and suboptimality issues of many nonconvex optimization problems can be resolved by convex relaxation, which yields a good solution to the original problem at an affordable computational cost. Pelckmans et al. [@pelckmans], Hocking et al. [@hocking], and Lindsten et al. [@lindsten] proposed the following convex formulation for the clustering problem: $$\min_{{\bm{x}}_1,\ldots,{\bm{x}}_n\in{\bm{\mathrm{R}}}^d} f'({\bm{x}}) = \frac{1}{2}\sum_{i=1}^n {\left \| {\bm{x}}_i-{\bm{a}}_i \right \|}^2 +\lambda\sum_{1 \le i < j \le n}{\left \| {\bm{x}}_i-{\bm{x}}_j \right \|}. \label{eq:son-clustering}$$ where ${\bm{a}}_1, {\bm{a}}_2, \ldots, {\bm{a}}_n$ denote the given data and $\lambda$ denotes the tuning parameter. The formulation is best known as sum-of-norms (SON) clustering, convex clustering, or clusterpath clustering. The clusters are read from the optimizer of \[eq:son-clustering\]. Let ${\bm{x}}_1^*, {\bm{x}}_2^*, \ldots, {\bm{x}}_n^*$ denote the optimizer. Points $i, i'$ are assigned to the same cluster if ${\bm{x}}_i^* = {\bm{x}}_{i'}^*$, and they are assigned to different clusters otherwise. The optimizer must satisfy the following condition: $${\bm{x}}_i^*-{\bm{a}}_i+\lambda\sum_{j\ne i}{\bm{w}}_{ij}^*={\bm{0}}\qquad\forall i=1,\ldots,n, \label{eq:KKTcond}$$ where ${\bm{w}}_{ij}^*$ is a subgradient of the Euclidean norm ${\left \| {\bm{x}}_i - {\bm{x}}_j \right \|}$ satisfying $${\bm{w}}_{ij}^* = \left\{ \begin{array}{ll} \frac{{\bm{x}}_i^*-{\bm{x}}_j^*}{{\left \| {\bm{x}}_i^*-{\bm{x}}_j^* \right \|}}, & \mbox{for ${\bm{x}}_i^*\ne {\bm{x}}_j^*$}, \\ \mbox{arbitrary point in $B({\bm{0}},1)$}, & \mbox{for ${\bm{x}}_i^*={\bm{x}}_j^*$}, \end{array} \right.$$ with ${\bm{w}}_{ij}^*=-{\bm{w}}_{ji}^*$ in the second case. We use $B({\bm{c}},r)$ to denote a closed Euclidean ball centered at ${\bm{c}}$ of radius $r$.
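To make the formulation concrete, the objective $f'$ in \[eq:son-clustering\] can be evaluated directly; below is a minimal NumPy sketch (our own illustration with hypothetical names, not code from the works cited above):

```python
import numpy as np

def son_objective(x, a, lam):
    """Sum-of-norms clustering objective: fidelity plus pairwise penalty."""
    n = len(a)
    fidelity = 0.5 * sum(np.linalg.norm(x[i] - a[i]) ** 2 for i in range(n))
    penalty = lam * sum(np.linalg.norm(x[i] - x[j])
                        for i in range(n) for j in range(i + 1, n))
    return fidelity + penalty

# At lam = 0 the problem decouples and x = a is optimal with objective 0;
# for lam > 0 the penalty term charges every pairwise difference.
a = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
```

For $\lambda$ large enough, placing every ${\bm{x}}_i$ at the centroid makes the penalty vanish, which corresponds to the all-in-one-cluster regime discussed below.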
The first term of the objective function ensures ${\bm{x}}^*$ is a good approximation of the original data ${\bm{a}}$, while the second term penalizes the differences ${\bm{x}}^*_i - {\bm{x}}^*_{i'}$. As a result, the second term tends to make many of the ${\bm{x}}_i^*$ equal to each other. Furthermore, the tuning parameter $\lambda$ controls the number of clusters indirectly. When $\lambda=0$, each point is assigned to a cluster of its own. When $\lambda$ is sufficiently large, all points are assigned to the same cluster. In this paper, we only consider the $l_2$ norm. Nonetheless, the reader should be aware that many other norms such as $l_1, l_\infty$, or the general $l_p$ norms are also extensively studied in the literature of sum-of-norms clustering. Also, many researchers investigate weighted penalties in the objective function. However, our result only applies to the case of unit weights. Many algorithms, both primal-only and primal-dual methods, have been proposed to solve \[eq:son-clustering\]. Primal-only algorithms include subgradient descent by Hocking et al. [@hocking] and the stochastic incremental algorithm by Panahi et al. [@Panahi]. Primal-dual algorithms are also widely considered, such as ADMM and AMA by Chi and Lange [@ChiLange], and the semismooth Newton-CG augmented Lagrangian method by Yuan et al. [@dsun1]. All these iterative algorithms yield only approximate solutions, even though exact knowledge of the optimizer is demanded to determine the clusters. To identify the correct clusters from an approximate solution, authors in practice propose the following approximate test with an artificial tolerance, $\epsilon > 0$. If the approximate solution satisfies ${\left \| {\bm{x}}_i - {\bm{x}}_{i'} \right \|} \le \epsilon$, $i, i'$ are assigned to the same cluster. Otherwise, $i, i'$ are assigned to different clusters. Hence, the value of the artificial tolerance is critical.
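For illustration, here is a minimal sketch of that approximate test (our own code, not taken from the cited works). Since the $\epsilon$-relation alone does not define a partition, this sketch takes connected components of the threshold graph, which is one common reading:

```python
import numpy as np

def approx_clusters(x, eps):
    """Cluster by connected components of the graph with an edge
    whenever ||x_i - x_j|| <= eps (naive approximate test)."""
    n = len(x)
    parent = list(range(n))

    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(x[i] - x[j]) <= eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# A chain of points merges into one cluster even though its endpoints
# are farther than eps apart, illustrating the sensitivity to eps.
chain = [np.array([0.0]), np.array([0.9]), np.array([1.8])]
```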
Unfortunately, to the best of our knowledge, neither the value of the tolerance nor the approximate test itself has been rigorously justified. The test is not robust. Since the relation ${\left \| {\bm{x}}_i - {\bm{x}}_{i'} \right \|} \le \epsilon$ is not transitive, it is not clear how the test would cluster points $i,j,k$ if ${\left \| {\bm{x}}_i-{\bm{x}}_j \right \|} \le \epsilon, {\left \| {\bm{x}}_j-{\bm{x}}_k \right \|} \le \epsilon$, and ${\left \| {\bm{x}}_i-{\bm{x}}_k \right \|} > \epsilon$. The test may not be accurate. The clusters obtained by the approximate test could deviate from the clusters corresponding to the optimizer of \[eq:son-clustering\]. The inaccuracy may lead to the failure of known properties of sum-of-norms clustering such as the recovery of a mixture of Gaussians and the agglomeration property. It has been established by Panahi et al. [@Panahi], Sun et al. [@dsun1], and Jiang et al. [@jiangvavasiszhai] that for an appropriate choice of $\lambda$, \[eq:son-clustering\] exactly recovers a mixture of Gaussians. However, it is unknown if the recovery result still holds when the approximate test is applied. Hocking et al. [@hocking] conjectured that sum-of-norms clustering is agglomerative in the sense that as $\lambda$ increases, clusters may fuse but never break apart. The conjecture was proven by Chiquet, Gutierrez and Rigaill [@chiquet] with some techniques which may not be applicable when the approximate test is implemented. Thus the agglomeration property may no longer hold. The full agglomeration theorem is stated as follows. If there is a $C$ such that the minimizer ${\bm{x}}^*$ of \[eq:son-clustering\] at $\lambda$ satisfies ${\bm{x}}_i^*=\hat{\bm{x}}$ for $i\in C$, ${\bm{x}}_i^*\ne\hat{\bm{x}}$ for $i\notin C$ for some $\hat{\bm{x}}\in {\bm{\mathrm{R}}}^d$, then at any $\lambda' \ge \lambda$, there exists an $\hat{\bm{x}}' \in {\bm{\mathrm{R}}}^d$ such that the minimizer of \[eq:son-clustering\] at $\lambda'$, $\bar{\bm{x}}^*$, satisfies $\bar{\bm{x}}_i^*=\hat{\bm{x}}'$ for $i\in C$.
\[thm:agglomeration\] Let *fusion values* denote the values of $\lambda$ at which clusters fuse to form a larger cluster. According to Theorem \[thm:agglomeration\], there are at most $n$ fusion values. The purpose of this paper is to present our clustering test and to justify it rigorously. The clustering test takes a primal and dual feasible solution for the second-order cone formulation of sum-of-norms clustering and attempts to determine all clusters. The test may report ‘success’ or ‘failure’. If the test reports ‘success’, all clusters are correctly identified and a certificate is produced. The test and the proof of correctness are stated in Section \[sec:test\]. The proof heavily relies on two sufficient conditions for clustering and distinct clustering, which are presented in Section \[sec:suff\_cond\]. The test requires the knowledge of a primal and dual feasible solution for the conic formulation of sum-of-norms clustering, which can be constructed from the output of any primal-dual algorithm. The conic formulation and algorithms are stated in Section \[sec:feasibility\_CS\]. If a primal-dual path following algorithm is used, the test is guaranteed to report ‘success’ after a finite number of iterations, except that the test may never report ‘success’ when $\lambda$ is at a fusion value. These results are shown in Section \[sec:guarantee\]. The proof of the theoretical guarantee follows from properties of the central path, which are stated in Section \[sec:central\_path\]. In Section \[sec:exper\], we present a few computational experiments to verify our test in practice. Sufficient conditions on clustering {#sec:suff_cond} =================================== Let $C \subseteq \{1,2,...,n\}$ denote a subset. To draw meaningful conclusions about $C$, we use the two sufficient conditions in this section to develop our test. Theorem \[thm:clust\_suff\], a sufficient condition for clustering, is due to Chiquet et al. [@chiquet].
The reader may refer to the work by Jiang, Vavasis and Zhai [@jiangvavasiszhai] for an exposition of Theorem \[thm:clust\_suff\]. Theorem \[thm:noclust\_suff\] is a sufficient condition for distinct clustering; its proof is trivial. Let ${\bm{x}}^*$ denote the optimal solution of \[eq:son-clustering\], and let ${\bm{x}}$ denote the output of some primal-dual algorithm which solves \[eq:son-clustering\]. Suppose there exist ${\bm{q}}_{ij}^*$ for all $i, j \in C, i \ne j$ solving the following system \[eq:zstar1\]. Then there exists some $\hat{\bm{x}}\in {\bm{\mathrm{R}}}^d$ such that the minimizer ${\bm{x}}^*$ of \[eq:son-clustering\] satisfies ${\bm{x}}_i^*=\hat{\bm{x}}$ for $i\in C$, hence $C$ is a cluster or part of a larger cluster. $$\begin{aligned} {\bm{a}}_i-\frac{1}{|C|}\sum_{l\in C}{\bm{a}}_l&= \lambda\sum_{j\in C-\{i\}} {\bm{q}}_{ij}^*, \quad \forall i\in C,\\ {\left \| {\bm{q}}_{ij}^* \right \|} &\leq 1, \quad \forall i,j\in C, i\ne j, \\ {\bm{q}}_{ij}^* &= -{\bm{q}}_{ji}^*, \quad \forall i,j\in C, i\ne j. \end{aligned} \label{eq:zstar1}$$ \[thm:clust\_suff\] Define $\tau>0$ such that the true optimizer and the approximate solution are at a distance of at most $\tau$ (i.e. $\|{\bm{x}}- {\bm{x}}^*\| \le \tau$). If there exist $i,j \in C$ such that $\|{\bm{x}}_i - {\bm{x}}_j\| > 2 \tau$, then $C$ is not a cluster or part of a larger cluster. \[thm:noclust\_suff\] Feasibility and complementary slackness {#sec:feasibility_CS} ======================================= In this section, we consider a second-order cone programming (SOCP) formulation of \[eq:son-clustering\]. Both feasibility and complementary slackness are stated. A second-order cone program can be directly solved by a feasible interior-point method. For infeasible algorithms such as the ADMM proposed by Chi and Lange [@ChiLange], we construct a feasible solution for the SOCP from the outputs of such algorithms.
Second-order cone formulation {#sec:socp} ----------------------------- We first present the SOCP formulation equivalent to \[eq:son-clustering\]. \[eq:socp\_primal\] $$\begin{aligned} \underset{{\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t}{\text{min}} & \quad f({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t) = \sum_{i=1}^n s_i + \lambda \sum_{1 \le i < j \le n} t_{ij}\label{eq:p_obj}\\ \text{s.t.} & \quad {\bm{x}}_i - {\bm{x}}_j - {\bm{y}}_{ij} = {\bm{0}}\;, \quad \forall 1 \le i<j \le n\;, \label{eq:p_constr1}\\ & \quad {\bm{x}}_i - {\bm{z}}_i = {\bm{a}}_i\;, \quad \forall i = 1,\ldots,n\;, \label{eq:p_constr2}\\ & \quad s_i - u_i = 1 \;, \quad \forall i = 1,\ldots,n\;, \label{eq:p_constr3}\\ & \quad t_{ij} \ge {\left \| {\bm{y}}_{ij} \right \|} \;, \quad \forall 1 \le i<j \le n \;, \label{eq:p_constr4}\\ & \quad s_i \ge {\left \| \begin{pmatrix} {\bm{z}}_i\\ u_i \end{pmatrix} \right \|}\;, \quad\forall i=1,\ldots,n\;. \label{eq:p_constr5}\end{aligned}$$ The SOCP formulation of the dual problem is as follows. \[eq:socp\_dual\] $$\begin{aligned} \underset{{\bm{\delta}}, {\bm{\beta}}, \gamma}{\text{max}} & \quad h({\bm{\delta}}, {\bm{\beta}}, \gamma) = \sum_{i = 1}^n {\bm{a}}_i^T {\bm{\beta}}_i + \sum_{i = 1}^n \gamma_i \label{eq:d_obj} \\ \text{s.t.} & \quad -\sum_{j=1}^{i-1}{\bm{\delta}}_{ji}+\sum_{j=i+1}^n{\bm{\delta}}_{ij}+{\bm{\beta}}_i = {\bm{0}}\;, \quad \forall i =1,\ldots,n \;, \label{eq:d_constr1}\\ & \quad \lambda \ge {\left \| {\bm{\delta}}_{ij} \right \|}\;, \quad \forall 1 \le i<j \le n\;, \label{eq:d_constr2}\\ & \quad 1-\gamma_i \ge {\left \| \begin{pmatrix} {\bm{\beta}}_i\\ \gamma_i \end{pmatrix} \right \|} \;, \quad \forall i = 1,\ldots,n\;. \label{eq:d_constr3}\end{aligned}$$ Both the primal and dual problems are feasible, and Slater's condition holds for both.
Consider the following primal and dual feasible solution: $${\bm{x}}_i = {\bm{a}}_i, {\bm{z}}_i = {\bm{0}}, s_i = 1, u_i = 0, \; \forall i =1,\ldots,n; \quad {\bm{y}}_{ij} = {\bm{a}}_i - {\bm{a}}_j, t_{ij} = {\left \| {\bm{a}}_i - {\bm{a}}_j \right \|} + 1, \; \forall 1\le i<j \le n.$$ $${\bm{\delta}}_{ij} = {\bm{0}}, \; \forall 1 \le i<j \le n; \quad {\bm{\beta}}_i = {\bm{0}}, \gamma_i = 0, \; \forall i = 1,\ldots,n.$$ which is also a primal and dual Slater point. Hence, strong duality holds, since the problem is convex and Slater's condition is satisfied. For the clustering test, we require primal-dual feasibility for the above primal and dual SOCP. Such a primal and dual feasible solution can be obtained by applying a feasible primal-dual interior-point method to the problem above. Each iterate of the algorithm is feasible, and so is the output. Nevertheless, the output may not be feasible for our SOCP when a general primal-dual algorithm is used to solve \[eq:son-clustering\]. Luckily, given that the output is close to the feasible set, we are able to find a small perturbation of the output to attain feasibility. The rest of the section elaborates on the perturbation and validates feasibility for the perturbed solution. Now let us consider a general primal-dual algorithm which solves \[eq:son-clustering\] and yields an output that is either in or close to the feasible set. The dual of \[eq:son-clustering\] is as follows. \[eq:son\_dual\] $$\begin{aligned} \underset{{\bm{\delta}}}{\text{max}} & \quad h'({\bm{\delta}}) = - \frac{1}{2} \sum_{i = 1}^n {\left \| \sum_{j = 1}^{i - 1} {\bm{\delta}}_{ji} - \sum_{j = i+1}^n {\bm{\delta}}_{ij} \right \|}^2 - \sum_{1 \le i < j \le n} \langle {\bm{\delta}}_{ij}, {\bm{a}}_i - {\bm{a}}_j \rangle \label{eq:d_obj_admm} \\ \text{s.t.} & \quad {\left \| {\bm{\delta}}_{ij} \right \|} \le \lambda \;,\quad \forall 1 \le i<j \le n \;. \label{eq:d_constr1_admm}\end{aligned}$$ Notice this formulation is equivalent to the SOCP dual \[eq:socp\_dual\] after ${\bm{\beta}}$ and $\gamma$ are eliminated.
Let $({\bm{x}}, {\bm{\delta}})$ denote the output yielded by the primal-dual algorithm. To construct a feasible solution from $({\bm{x}}, {\bm{\delta}})$, we first update ${\bm{\delta}}$ as follows $${\bm{\delta}}_{ij} \leftarrow \left\{ \begin{array}{ll} \frac{\lambda {\bm{\delta}}_{ij}}{{\left \| {\bm{\delta}}_{ij} \right \|}}, \quad & \mbox{if } {\left \| {\bm{\delta}}_{ij} \right \|} > \lambda, \\ {\bm{\delta}}_{ij}, & \mbox{otherwise}. \end{array} \right.$$ The updated ${\bm{\delta}}_{ij}$ has norm no more than $\lambda$. Notice that the perturbation is small provided that the dual solution was already close to the feasible set. Next, define the following variables: $$\begin{aligned} {\bm{y}}_{ij} = {\bm{x}}_i - {\bm{x}}_j, \quad &\forall 1 \le i < j \le n,\\ {\bm{z}}_{i} = {\bm{x}}_i - {\bm{a}}_i, \quad &\forall i = 1, \dots, n,\\ s_i = \frac{1}{2} (1 + \|{\bm{z}}_{i}\|^2), \quad &\forall i = 1, \dots, n,\\ u_i = \frac{1}{2} (-1 + \|{\bm{z}}_{i}\|^2), \quad &\forall i= 1, \dots, n,\\ t_{ij} = \|{\bm{y}}_{ij}\|, \quad &\forall 1 \le i < j \le n,\\ {\bm{\beta}}_i = \sum_{j=1}^{i-1}{\bm{\delta}}_{ji}-\sum_{j=i+1}^n{\bm{\delta}}_{ij}, \quad &\forall i = 1, \dots, n,\\ \gamma_i = \frac{1}{2}(1 - \|{\bm{\beta}}_i\|^2), \quad &\forall i = 1, \dots, n.\end{aligned}$$ It can be easily verified that these newly defined variables, ${\bm{x}}$ from the primal-dual algorithm and the updated ${\bm{\delta}}$ form a primal and dual feasible solution for the SOCP. 
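As a sanity check, this construction is easy to carry out numerically. The sketch below is our own illustration (it assumes `delta` is a dict holding one block per pair $(i,j)$ with $i<j$); it projects the dual blocks and builds the auxiliary variables exactly as defined above, after which the cone constraints \[eq:p\_constr5\] and \[eq:d\_constr3\] in fact hold with equality:

```python
import numpy as np

def project_dual(delta, lam):
    # Rescale any dual block whose norm exceeds lam
    out = {}
    for key, d in delta.items():
        nd = np.linalg.norm(d)
        out[key] = lam * d / nd if nd > lam else d
    return out

def build_feasible(x, a, delta, lam):
    """Build the auxiliary SOCP variables from (x, delta)."""
    n = len(x)
    delta = project_dual(delta, lam)
    y = {(i, j): x[i] - x[j] for (i, j) in delta}
    z = [x[i] - a[i] for i in range(n)]
    s = [0.5 * (1 + np.dot(z[i], z[i])) for i in range(n)]
    u = [0.5 * (-1 + np.dot(z[i], z[i])) for i in range(n)]
    t = {k: np.linalg.norm(v) for k, v in y.items()}
    beta = [sum(delta[(j, i)] for j in range(i)) -
            sum(delta[(i, j)] for j in range(i + 1, n))
            for i in range(n)]
    gamma = [0.5 * (1 - np.dot(beta[i], beta[i])) for i in range(n)]
    return delta, y, z, s, u, t, beta, gamma
```

By construction the equality constraints hold exactly, and the projection leaves a dual solution that was already feasible untouched.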
The original objective function value at $({\bm{x}}, {\bm{\delta}})$ and the SOCP objective function value at the updated solution differ by a constant $\frac{n}{2}$: $$\begin{aligned} f'({\bm{x}}) &= \frac{1}{2}\sum_{i=1}^n \|{\bm{x}}_i - {\bm{a}}_i\|^2 + \lambda \sum_{1 \le i < j \le n} \|{\bm{y}}_{ij}\|\\ &= \sum_{i=1}^n s_i - \frac{n}{2} + \lambda \sum_{1 \le i < j \le n} t_{ij} \\ &= f({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t) - \frac{n}{2} \label{eq:ff'}\\ \end{aligned}$$ $$\begin{aligned} h'({\bm{\delta}}) &= - \frac{1}{2} \sum_{i=1}^n \|\sum_{j = i+1}^n {\bm{\delta}}_{ij} - \sum_{j=1}^{i-1} {\bm{\delta}}_{ji}\|^2 - \sum_{1 \le i < j \le n} \langle {\bm{\delta}}_{ij}, {\bm{a}}_i - {\bm{a}}_j \rangle \\ &= \sum_{i=1}^n \gamma_i - \frac{n}{2} + \sum_{i=1}^n \langle {\bm{a}}_i, {\bm{\beta}}_{i} \rangle \\ &= h({\bm{\delta}}, {\bm{\beta}}, \gamma) - \frac{n}{2} \label{eq:hh'} \end{aligned}$$ Complementary slackness {#sec:cs} ----------------------- Let $({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t, {\bm{\delta}}, {\bm{\beta}}, \gamma)$ be a primal and dual feasible solution for the SOCP formulation of sum-of-norms clustering.
Let us define ${\bm{\epsilon}}^{ij} = \begin{pmatrix} \epsilon_1^{ij}\\ {\bm{\epsilon}}_2^{ij} \end{pmatrix}$ for all $1 \le i < j \le n$ and ${\bm{\sigma}}^{i} = \begin{pmatrix} \sigma_1^{i}\\ {\bm{\sigma}}_2^{i}\\ \sigma_3^{i}\\ \end{pmatrix}$ for all $i= 1, \dots, n$ as follows: $$\begin{aligned} t_{ij} \lambda + {\bm{y}}_{ij}^T {\bm{\delta}}_{ij} &= \epsilon_1^{ij}, \quad \forall 1\le i<j \le n, \label{eq:adcs_a1}\\ t_{ij} {\bm{\delta}}_{ij} + \lambda {\bm{y}}_{ij} &= {\bm{\epsilon}}_2^{ij}, \quad \forall 1\le i<j \le n, \label{eq:adcs_a2}\\ s_{i} (1 - \gamma_i) + {\bm{z}}_{i}^T {\bm{\beta}}_{i} + u_i \gamma_i &= \sigma_1^{i}, \quad \forall i = 1, \dots, n\label{eq:adcs_b1},\\ s_{i} {\bm{\beta}}_{i} + (1 - \gamma_{i}) {\bm{z}}_{i} &= {\bm{\sigma}}_2^i, \quad \forall i = 1, \dots, n \label{eq:adcs_b2},\\ s_{i} \gamma_i + (1 - \gamma_{i}) u_i &= \sigma_3^i, \quad \forall i = 1, \dots, n \label{eq:adcs_b3}.\end{aligned}$$ At the optimizer, ${\bm{\epsilon}}= {\bm{0}}$ and ${\bm{\sigma}}= {\bm{0}}$ hold by the KKT conditions. The system of equalities above becomes the complementary slackness condition. At an approximate solution, the right-hand sides ${\bm{\epsilon}}, {\bm{\sigma}}$ are non-zero. If ${\bm{\epsilon}}^{ij} = \begin{pmatrix} \mu'\\ {\bm{0}}\end{pmatrix}, {\bm{\sigma}}^i = \begin{pmatrix} \mu'\\ {\bm{0}}\\ 0 \end{pmatrix}$ for all $i = 1, \dots, n$ and for all $1 \le i < j \le n$, we refer to the corresponding solution as a $\mu'$-centered solution.
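These residuals are cheap to evaluate for any feasible tuple. A small sketch follows (our own helper, with `y`, `t`, `delta` keyed by pairs $(i,j)$, $i<j$, matching the variable layout above); it computes only the first components $\epsilon_1^{ij}$ and $\sigma_1^i$, which suffice to measure the duality gap:

```python
import numpy as np

def slackness_residuals(y, t, delta, z, s, u, beta, gamma, lam):
    """First components eps_1^{ij} and sigma_1^i of the perturbed
    complementary slackness system."""
    eps1 = {k: t[k] * lam + np.dot(y[k], delta[k]) for k in y}
    sig1 = [s[i] * (1 - gamma[i]) + np.dot(z[i], beta[i]) + u[i] * gamma[i]
            for i in range(len(s))]
    return eps1, sig1
```

At the optimizer both collections vanish; for a feasible but suboptimal tuple they are nonnegative.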
Otherwise, an upper bound on general right-hand sides ${\bm{\epsilon}}, {\bm{\sigma}}$ can be derived from the duality gap: $$\begin{aligned} & f'({\bm{x}}) - h'({\bm{\delta}})\\ = &f({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t) - h({\bm{\delta}}, {\bm{\beta}}, \gamma) \quad \text{(By \eqref{eq:ff'} and \eqref{eq:hh'})}\\ = &\sum_{i=1}^n s_i - \sum_{i=1}^n \gamma_i + \sum_{i=1}^n \langle {\bm{x}}_i - {\bm{a}}_i, {\bm{\beta}}_i \rangle + \lambda \sum_{1 \le i < j \le n} t_{ij} - \sum_{i=1}^n \langle {\bm{x}}_i, {\bm{\beta}}_i \rangle \quad \text{(By adding and subtracting} \sum_{i=1}^n \langle {\bm{x}}_i, {\bm{\beta}}_i \rangle)\\ = &\sum_{i=1}^n s_i - \sum_{i=1}^n \gamma_i + \sum_{i=1}^n \langle {\bm{x}}_i - {\bm{a}}_i, {\bm{\beta}}_i \rangle + \lambda \sum_{1 \le i < j \le n} t_{ij} - \sum_{i=1}^n \langle {\bm{x}}_i, \sum_{j=1}^{i-1}{\bm{\delta}}_{ji} - \sum_{j=i+1}^n{\bm{\delta}}_{ij} \rangle \quad \text{(By \eqref{eq:d_constr1})}\\ = &\sum_{i=1}^n s_i - \sum_{i=1}^n \gamma_i + \sum_{i=1}^n \langle {\bm{x}}_i - {\bm{a}}_i, {\bm{\beta}}_i \rangle + \lambda \sum_{1 \le i < j \le n} t_{ij} - \sum_{1 \le i < j \le n} \langle {\bm{x}}_j - {\bm{x}}_i, {\bm{\delta}}_{ij} \rangle \quad \text{(By expanding the summation)}\\ = & \sum_{i=1}^n (s_i - \gamma_i + \langle {\bm{x}}_i - {\bm{a}}_i, {\bm{\beta}}_i \rangle) + \sum_{1 \le i < j \le n} ( \lambda t_{ij} + \langle {\bm{y}}_{ij}, {\bm{\delta}}_{ij} \rangle) \quad \text{(By \eqref{eq:p_constr1})}\\ = & \sum_{i=1}^n (s_i (1 - \gamma_i) + \langle {\bm{z}}_i, {\bm{\beta}}_i \rangle + u_i \gamma_i) + \sum_{1 \le i < j \le n} ( \lambda t_{ij} + \langle {\bm{y}}_{ij}, {\bm{\delta}}_{ij} \rangle) \quad \text{(By \eqref{eq:p_constr2}, \eqref{eq:p_constr3})}\\ =& \sum_{i=1}^n \sigma_1^i + \sum_{1 \le i < j \le n} \epsilon_1^{ij}\end{aligned}$$ Each term in the both summations is non-negative as shown below: $$\sigma_1^i = s_i (1 - \gamma_i) + \langle {\bm{z}}_i, {\bm{\beta}}_i \rangle + u_i \gamma_i = \frac{1}{2} ({\left \| {\bm{z}}_i 
\right \|}^2 + 2 \langle {\bm{z}}_i, {\bm{\beta}}_i \rangle + {\left \| {\bm{\beta}}_i \right \|}^2) \ge 0, \quad \forall i = 1, \dots, n,$$ $$\epsilon_1^{ij} = \lambda t_{ij} + \langle {\bm{y}}_{ij}, {\bm{\delta}}_{ij} \rangle \ge \lambda t_{ij} - {\left \| {\bm{y}}_{ij} \right \|} {\left \| {\bm{\delta}}_{ij} \right \|} \ge \lambda t_{ij} - \lambda {\left \| {\bm{y}}_{ij} \right \|} = 0, \quad \forall 1 \le i < j \le n.$$ Define $\mu:= f'({\bm{x}}) - h'({\bm{\delta}})$ to be the duality gap at the feasible solution. Combined with the non-negativity condition, $\sigma_1^i, \epsilon_1^{ij}$ satisfy $\sigma_1^i \le \mu$ for all $i = 1, \dots, n$ and $\epsilon_1^{ij} \le \mu$ for all $1\le i<j \le n$. At termination, the duality gap $\mu$ at the feasible solution is small, which implies the right-hand sides $\sigma_1^i, \epsilon_1^{ij}$ are also well bounded. We now have $\sigma_1^i, \epsilon_1^{ij}$ upper bounded in terms of $\mu$, and the remainder of this section establishes upper bounds on ${\left \| {\bm{\epsilon}}^{ij}_2 \right \|}$ and ${\left \| \begin{pmatrix} {\bm{\sigma}}_2^i\\ \sigma_3^i \end{pmatrix} \right \|}$. In fact, in \[eq:bound\_epsilon2\] and \[eq:bound\_sigma23\] below, we show that both are upper bounded by $O(\sqrt{\mu})$. Consider a general setting of second-order cone programming. Let $({\bm{x}}, {\bm{z}})$ denote a primal and dual feasible solution for a second-order cone program where ${\bm{x}}= \begin{pmatrix} x_0\\ \bar {\bm{x}}\end{pmatrix}, {\bm{z}}= \begin{pmatrix} z_0\\ \bar {\bm{z}}\end{pmatrix}$. If ${\bm{x}}^T {\bm{z}}\le \mu$, then ${\left \| z_0 \bar {\bm{x}}+ x_0 \bar {\bm{z}}\right \|} \le \sqrt{2x_0 z_0 \mu}$. \[lemma:O(sqrt(mu))\] If $x_0 = 0$, then ${\left \| \bar {\bm{x}}\right \|} \le x_0 = 0$ by the feasibility assumption. Hence, $\bar {\bm{x}}= {\bm{0}}$, which implies ${\left \| z_0 \bar {\bm{x}}+ x_0 \bar {\bm{z}}\right \|} = 0 \le \sqrt{2x_0 z_0 \mu}$.
If $z_0 = 0$, then ${\bm{z}}$ satisfies ${\left \| z_0 \bar {\bm{x}}+ x_0 \bar {\bm{z}}\right \|} = 0 \le \sqrt{2x_0 z_0 \mu}$ by the same argument. Otherwise, $x_0 > 0, z_0 > 0$, and we derive the following inequalities $$\begin{aligned} {\bm{x}}^T {\bm{z}}&= x_0 z_0 + \bar {\bm{x}}^T \bar {\bm{z}}\le \mu\\ \Rightarrow 1 + \frac{\bar {\bm{x}}^T}{x_0} \frac{\bar {\bm{z}}}{z_0} &\le \frac{\mu}{x_0 z_0} \quad \text{(Since $x_0 > 0, z_0 > 0$)}\\ \Rightarrow {\left \| \frac{\bar {\bm{x}}}{x_0} + \frac{\bar {\bm{z}}}{z_0} \right \|}^2 = {\left \| \frac{\bar {\bm{x}}}{x_0} \right \|}^2 + {\left \| \frac{\bar {\bm{z}}}{z_0} \right \|}^2 + 2 \frac{\bar {\bm{x}}^T}{x_0} \frac{\bar {\bm{z}}}{z_0} &\le 2 - 2 + \frac{2\mu}{x_0 z_0} \quad \text{(Since $x_0 \ge {\left \| \bar {\bm{x}}\right \|}, z_0 \ge {\left \| \bar {\bm{z}}\right \|}$)}\\ \Rightarrow {\left \| \frac{\bar {\bm{x}}}{x_0} + \frac{\bar {\bm{z}}}{z_0} \right \|} &\le \sqrt{\frac{2\mu}{x_0 z_0}}\\ \Rightarrow {\left \| z_0 \bar {\bm{x}}+ x_0 \bar {\bm{z}}\right \|} &\le \sqrt{2x_0 z_0 \mu}.\end{aligned}$$ Let $\bar {\bm{a}}:= \frac{1}{n} \sum_{i=1}^n {\bm{a}}_i$ denote the centroid of all data points. Let ${\bm{x}}_1' := {\bm{x}}_2' := ... := {\bm{x}}_n' := \bar {\bm{a}}$. Then the primal objective value of the original sum-of-norms formulation at ${\bm{x}}'$ is $$f'({\bm{x}}') = \frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2.$$ Let ${\bm{\delta}}_{ij}' = {\bm{0}}$ for all $1 \le i < j \le n$. 
Then ${\bm{\delta}}'$ is a feasible solution to the dual problem of the original formulation and the dual objective value at ${\bm{\delta}}'$ is $$h'({\bm{\delta}}') = 0.$$ Let $f^*$ and $h^*$ denote the primal and dual optimal values of the SOCP respectively, which must satisfy the following inequality by strong duality: $$\frac{n}{2} = h'({\bm{\delta}}') + \frac{n}{2} \le h^* = f^* \le f'({\bm{x}}') + \frac{n}{2} = \frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + \frac{n}{2},$$ where the term $\frac{n}{2}$ comes from and . At the feasible solution $({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t, {\bm{\delta}}, {\bm{\beta}}, \gamma)$, the objective value is at a distance of at most $\mu$ away from the optimal value, which implies $$\sum_{i=1}^n s_i + \lambda \sum_{1\le i<j \le n} t_{ij} \le f^* + \mu \le \frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + \frac{n}{2} + \mu,$$ which is rearranged to $$\sum_{i=1}^n \left (s_i - \frac{1}{2} \right) + \lambda \sum_{1\le i<j \le n} t_{ij} \le \frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + \mu.$$ Moreover, by feasibility, $s_i \ge \frac{1}{2}$ holds for all $i = 1, \dots, n$ and $t_{ij} \ge 0$ holds for all $1 \le i < j \le n$. Hence, $$s_i \le \frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + \frac{1}{2} + \mu, \quad t_{ij} \le \frac{1}{\lambda} \left (\frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + \mu \right).$$ As $t_{ij} \lambda + {\bm{y}}_{ij}^T {\bm{\delta}}_{ij} = \epsilon_1^{ij} \le \mu$, ${\left \| {\bm{\epsilon}}_2^{ij} \right \|}$ has the following upper bound by Lemma \[lemma:O(sqrt(mu))\] $${\left \| {\bm{\epsilon}}_2^{ij} \right \|} = {\left \| t_{ij} {\bm{\delta}}_{ij} + \lambda {\bm{y}}_{ij} \right \|} \le \sqrt{2 t_{ij} \lambda \mu} \le \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}. 
\label{eq:bound_epsilon2}$$ Similarly, at the feasible solution, the dual objective value is at a distance of at most $\mu$ from the optimal value, which implies $$\sum_{i=1}^n {\bm{a}}_i^T {\bm{\beta}}_i + \sum_{i=1}^n \gamma_i \ge h^* - \mu \ge \frac{n}{2}- \mu,$$ which is rearranged to $$\sum_{i=1}^n \left (\frac{1}{2} - \gamma_i \right) \le \sum_{i=1}^n {\bm{a}}_i^T {\bm{\beta}}_i + \mu.$$ By feasibility, $\frac{1}{2} - \gamma_i \ge 0$, which implies $$1 - \gamma_i \le \frac{1}{2} + \sum_{l=1}^n {\bm{a}}_l^T {\bm{\beta}}_l + \mu.$$ Since $\lambda \ge {\left \| {\bm{\delta}}_{ij} \right \|}$, ${\left \| {\bm{\beta}}_i \right \|}$ satisfies $${\left \| {\bm{\beta}}_i \right \|} = {\left \| \sum_{j=1}^{i-1}{\bm{\delta}}_{ji}-\sum_{j=i+1}^n{\bm{\delta}}_{ij} \right \|} \le (n-1) \lambda.$$ By the Cauchy-Schwarz inequality, $${\bm{a}}_l^T {\bm{\beta}}_l \le {\left \| {\bm{a}}_l \right \|} \cdot {\left \| {\bm{\beta}}_l \right \|} \le (n-1) \lambda {\left \| {\bm{a}}_l \right \|}.$$ Therefore, $1 - \gamma_i$ satisfies $$1 - \gamma_i \le \frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu.$$ Since $s_{i} (1 - \gamma_i) + {\bm{z}}_{i}^T {\bm{\beta}}_{i} + u_i \gamma_i = \sigma_1^{i}$, ${\left \| \begin{pmatrix} {\bm{\sigma}}_2^i\\ \sigma_3^i \end{pmatrix} \right \|}$ has the following upper bound by Lemma \[lemma:O(sqrt(mu))\] $$\begin{aligned} {\left \| \begin{pmatrix} {\bm{\sigma}}_2^i\\ \sigma_3^i \end{pmatrix} \right \|} &= {\left \| \begin{pmatrix} s_{i} {\bm{\beta}}_{i} + (1 - \gamma_{i}) {\bm{z}}_{i}\\ s_{i} \gamma_i + (1 - \gamma_{i}) u_i \end{pmatrix} \right \|} \le \sqrt{2 s_i (1 - \gamma_i) \mu} \\ &\le \sqrt{2 \cdot \left (\frac{1}{2} \sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + \frac{1}{2} + \mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right) \cdot \mu}\\ &\le \sqrt{\left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu
\right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right ) \mu}. \end{aligned} \label{eq:bound_sigma23}$$ Clustering test {#sec:test} =============== Given a primal and dual feasible solution $({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t, {\bm{\delta}}, {\bm{\beta}}, \gamma)$ with duality gap $\mu$, we find candidate clusters as follows. First, select an index $i$ from $\{1,\ldots,n\}$ arbitrarily. Construct a ball of radius $\mu^{3/4}$ about ${\bm{x}}_i$. Create a candidate cluster with all indices $k$ such that ${\bm{x}}_k$ is located in the ball about ${\bm{x}}_i$ (i.e. $\{k:{\left \| {\bm{x}}_i-{\bm{x}}_k \right \|}\le\mu^{3/4}\}$). Now find an index $j$ that is not in any candidate cluster and construct a ball about ${\bm{x}}_j$. Repeat until all data points are used up. If the output of the primal-dual algorithm is not feasible for the second-order cone program, we construct a feasible solution as described in the previous section. With the feasible solution, we define $${\bm{g}}_{ij}:= \left\{ \begin{array}{ll} -{\bm{\delta}}_{ij}, \quad & \mbox{if } i < j,\\ {\bm{\delta}}_{ji}, & \mbox{if } j < i. \end{array} \right.$$ For any candidate cluster $C$, compute ${\bm{q}}_{ij}:= {\bm{g}}_{ij} + \frac{1}{m} \cdot ({\bm{x}}_i - {\bm{x}}_j - {\bm{\omega}}_i + {\bm{\omega}}_j) + \frac{1}{m} \sum_{k \notin C} ({\bm{g}}_{ik} - {\bm{g}}_{jk})$ for all $i, j \in C, i \ne j$, denoted as *Chiquet-Gutierrez-Rigaill (CGR) subgradients*. Check if the following two conditions hold: ***CGR subgradient condition*:** All CGR subgradients ${\bm{q}}_{ij}$ satisfy the CGR inequality ${\left \| {\bm{q}}_{ij} \right \|} \le \lambda$. ***Separation condition*:** All candidate clusters are separated at a distance of at least $2\tau$, where $\tau = \sqrt{2\mu}$. If both conditions hold for all candidate clusters, then the test terminates and reports ‘success’.
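The candidate-cluster construction at the start of the test amounts to a greedy ball covering. A minimal sketch follows (our own reading: we restrict each new ball to still-unassigned indices so that the candidate clusters are disjoint):

```python
import numpy as np

def candidate_clusters(x, mu):
    """Greedy covering by balls of radius mu**(3/4) about
    still-unassigned points."""
    r = mu ** 0.75
    unassigned = set(range(len(x)))
    clusters = []
    while unassigned:
        i = min(unassigned)  # any unassigned index works
        ball = {k for k in unassigned
                if np.linalg.norm(x[i] - x[k]) <= r}
        clusters.append(sorted(ball))
        unassigned -= ball
    return clusters
```

As the duality gap $\mu$ shrinks, the radius $\mu^{3/4}$ shrinks with it, so well-separated groups of approximate centroids end up in distinct candidate clusters.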
In this case, each candidate cluster is a true cluster given by the optimal solution, and thus all clusters are correctly identified. The ${\bm{q}}_{ij}$’s serve as certificates. If either condition fails for any candidate cluster, the test reports ‘failure’. One then has to run more iterations of the algorithm to decrease the duality gap $\mu$. Repeat the process until the test reports ‘success’. Note that this test is algorithm-independent, although it does require the algorithm to be of primal-dual type. The test is validated by our two sufficient conditions presented in Section \[sec:suff\_cond\]. The CGR subgradient condition certifies that each cluster we identify is indeed a cluster or part of a larger cluster by Theorem \[thm:clust\_suff\]. This is presented in Section \[sec:cgr\_subgradient\]. The separation condition certifies that there is no super-cluster containing more than one of the clusters we identify by Theorem \[thm:noclust\_suff\], as shown in Section \[sec:CGR\_duality\_gap\]. Therefore, we determine all clusters correctly when the test succeeds. CGR subgradients and clustering corollary {#sec:cgr_subgradient} ----------------------------------------- Let $C \subseteq [n]$ denote a subset of points. Let $m:= |C|$ denote the cardinality of $C$. For all $i,j \in C, i \ne j$, define ${\bm{q}}_{ij}:= {\bm{g}}_{ij} + \frac{1}{m} \cdot ({\bm{x}}_i - {\bm{x}}_j - {\bm{\omega}}_i + {\bm{\omega}}_j) + \frac{1}{m} \sum_{k \notin C} ({\bm{g}}_{ik} - {\bm{g}}_{jk})$. Then ${\bm{q}}_{ij}$ satisfies $$\begin{aligned} {\bm{a}}_i - \bar {\bm{a}}&= \sum_{j \in C \backslash \{i\}} {\bm{q}}_{ij}, \quad \forall i \in C\\ {\bm{q}}_{ij} &= -{\bm{q}}_{ji}, \quad \forall i, j \in C, i \ne j,\end{aligned}$$ where $\bar {\bm{a}}= \frac{1}{m} \sum_{i \in C} {\bm{a}}_i$.
\[cluster\_lemma\_subgradient\] Substitute the primal constraint into the perturbed complementary slackness to obtain the following equality relating $\gamma_i$ and $s_i$ $$1 - \gamma_i = s_i - \sigma_3^i, \quad \forall i = 1, \dots, n.$$ Substitute the equality above into and divide both sides by $s_i$ to obtain the following equation for ${\bm{\beta}}_i$ in terms of ${\bm{z}}_i$ $${\bm{\beta}}_{i} = - {\bm{z}}_i + {\bm{\omega}}_i, \quad \forall i = 1, \dots, n.$$ Notice that the operation is valid because $s_i \ge \frac{1}{2}$ by the primal constraint and . Substitute the primal constraint and the equality above into the dual constraint, yielding $$-\sum_{j=1}^{i-1}{\bm{\delta}}_{ji}+\sum_{j=i+1}^n{\bm{\delta}}_{ij} - {\bm{x}}_i + {\bm{a}}_i + {\bm{\omega}}_i = {\bm{0}}, \quad \forall i = 1, \dots, n.$$ With the definition of ${\bm{g}}_{ij}$, the equality above is rewritten as $$- {\bm{x}}_i + {\bm{a}}_i + {\bm{\omega}}_i -\sum_{j \ne i}{\bm{g}}_{ij} = {\bm{0}}, \quad \forall i = 1, \dots, n. \label{eq:opt_condp1}$$ By , the following equality holds for all $i \in C$ $$- {\bm{x}}_i + {\bm{a}}_i + {\bm{\omega}}_i - \sum_{j \in C \backslash \{i\}}{\bm{g}}_{ij} - \sum_{k \notin C}{\bm{g}}_{ik} = {\bm{0}}. \label{eq:opt_condp2}$$ Sum over all $i \in C$ and divide the resulting equality by $m$ to obtain $$- \frac{1}{m} \sum_{i \in C} {\bm{x}}_i + \bar {\bm{a}}+ \frac{1}{m} \sum_{i \in C} {\bm{\omega}}_i - \frac{1}{m} \sum_{i \in C} \sum_{k \notin C} {\bm{g}}_{ik} = {\bm{0}}. \label{eq:opt_condp3}$$ Change the index in from $i$ to $j$.
Subtract from to obtain $$- {\bm{x}}_i + \frac{1}{m} \sum_{j \in C} {\bm{x}}_j + {\bm{a}}_i - \bar {\bm{a}}+ {\bm{\omega}}_i - \frac{1}{m} \sum_{j \in C} {\bm{\omega}}_j - \sum_{j \in C \backslash \{i\}} {\bm{g}}_{ij} + \frac{1}{m} \sum_{j \in C} \sum_{k \notin C} ({\bm{g}}_{jk} - {\bm{g}}_{ik}) = {\bm{0}}, \quad \forall i \in C,$$ which is rearranged to $$\begin{aligned} {\bm{a}}_i - \bar {\bm{a}}&= {\bm{x}}_i - \frac{1}{m} \sum_{j \in C} {\bm{x}}_j - {\bm{\omega}}_i + \frac{1}{m} \sum_{j \in C} {\bm{\omega}}_j + \sum_{j \in C \backslash \{i\}} {\bm{g}}_{ij} + \frac{1}{m} \sum_{j \in C} \sum_{k \notin C} ({\bm{g}}_{ik} - {\bm{g}}_{jk})\\ &= \sum_{j \in C \backslash \{i\}} \left [\frac{1}{m} ({\bm{x}}_i - {\bm{x}}_j - {\bm{\omega}}_i + {\bm{\omega}}_j) + {\bm{g}}_{ij} + \frac{1}{m} \sum_{k \notin C} ({\bm{g}}_{ik} - {\bm{g}}_{jk})\right]\\ &= \sum_{j \in C \backslash \{i\}} {\bm{q}}_{ij} \quad (\text{By definition}), \quad \forall i \in C. \label{eq:opt_cond5}\end{aligned}$$ Moreover, by the definition of ${\bm{q}}_{ij}$, we observe the following property for all $i, j \in C, i \ne j$ $${\bm{q}}_{ij}= {\bm{g}}_{ij} + \frac{1}{m} \cdot ({\bm{x}}_i - {\bm{x}}_j - {\bm{\omega}}_i + {\bm{\omega}}_j) + \frac{1}{m} \sum_{k \notin C} ({\bm{g}}_{ik} - {\bm{g}}_{jk}) = - {\bm{g}}_{ji} - \frac{1}{m} \cdot ({\bm{x}}_j - {\bm{x}}_i - {\bm{\omega}}_j + {\bm{\omega}}_i) - \frac{1}{m} \sum_{k \notin C} ({\bm{g}}_{jk} - {\bm{g}}_{ik}) = - {\bm{q}}_{ji}$$ If ${\left \| {\bm{q}}_{ij} \right \|} \le \lambda$ holds for all $i \ne j, i,j \in C$ where $C$ is a candidate cluster, then $C$ is a cluster or part of a larger cluster. The proof of the corollary follows trivially by Theorem \[thm:clust\_suff\]. 
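To illustrate the quantities just defined, the CGR subgradients and the CGR inequality check can be sketched as follows (a minimal Python illustration; the function names and the dictionary representation of ${\bm{\delta}}$ are ours, not part of any implementation in this paper):

```python
import numpy as np

def g(delta, i, j):
    # Antisymmetric extension: g_ij = -delta_ij if i < j, delta_ji if j < i
    return -delta[(i, j)] if i < j else delta[(j, i)]

def cgr_subgradients(C, n, x, omega, delta):
    """CGR subgradients q_ij for all ordered pairs i != j in the cluster C."""
    m = len(C)
    outside = [k for k in range(n) if k not in C]
    q = {}
    for i in C:
        for j in C:
            if i == j:
                continue
            corr = sum((g(delta, i, k) - g(delta, j, k) for k in outside),
                       np.zeros_like(x[0]))
            q[(i, j)] = (g(delta, i, j)
                         + (x[i] - x[j] - omega[i] + omega[j]) / m
                         + corr / m)
    return q

def cgr_condition(q, lam):
    # CGR subgradient condition: every ||q_ij|| <= lambda
    return all(np.linalg.norm(v) <= lam for v in q.values())
```

By construction the returned subgradients satisfy the antisymmetry ${\bm{q}}_{ij} = -{\bm{q}}_{ji}$ stated in the lemma above.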
Duality gap and distinct clustering corollary {#sec:CGR_duality_gap} --------------------------------------------- As derived earlier in Section 3.2, the duality gap at the feasible solution is: $$f({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t) - h({\bm{\delta}}, {\bm{\beta}}, \gamma) = \sum_{i=1}^n s_i + \lambda \sum_{1 \le i < j \le n} t_{ij} - \sum_{i=1}^n {\bm{a}}_i^T {\bm{\beta}}_i - \sum_{i=1}^n \gamma_i = \sum_{i=1}^n \sigma_1^i + \sum_{1 \le i < j \le n} \epsilon_1^{ij} =: \mu. \label{eq:duality_gap1}$$ By the strong convexity of $f'$, we have $$\frac{1}{2} \|{\bm{x}}- {\bm{x}}^*\|^2 \le f'({\bm{x}}) - f'({\bm{x}}^*) = f({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t) - f({\bm{x}}^*, {\bm{y}}^*, {\bm{z}}^*, s^*, u^*, t^*),$$ which is further bounded as follows by weak duality $$\frac{1}{2} \|{\bm{x}}- {\bm{x}}^*\|^2 \le f({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t) - h({\bm{\delta}}, {\bm{\beta}}, \gamma).$$ Then, for any primal-dual algorithm, the distance between the approximate solution and the optimizer is bounded by $$\|{\bm{x}}- {\bm{x}}^*\|\le \sqrt{2\mu} =: \tau.$$ Let $C$ denote a candidate cluster. If for all $i\in C, j \notin C$ it holds that $\|{\bm{x}}_i - {\bm{x}}_j\| > 2 \tau$ where $\tau = \sqrt{2\mu}$ for any primal-dual algorithm, then there does not exist a super-cluster which strictly contains $C$. The proof follows directly from Theorem \[thm:noclust\_suff\]. Properties of the central path {#sec:central_path} ============================== In this section, we explore the properties of the central path for a symmetric primal-dual path following algorithm. These properties play a fundamental role in the proof of our main theorem in Section 6. In the main theorem, we state that if a symmetric primal-dual path following algorithm is used, our clustering test will eventually succeed after a finite number of iterations when $\lambda$ is not at any fusion value.
The proof of the ultimate success relies on the linear convergence to the optimal primal-dual pair, which will be shown to be satisfied in the remainder of this section. Even though there are very few theorems about the central path of second-order cone programming in the literature, there are established theorems for semidefinite programming, which specialize to SOCP. The following theorem states that the $\mu'$-centered iterates converge to the analytic center at rate $O(\mu')$. Assume the semidefinite program has a strictly complementary solution and the iterates of the algorithm converge tangentially to the central path. Let $(X(\mu'), Z(\mu'))$ denote a $\mu'$-centered primal-dual pair. Let $(X^a, Z^a)$ denote the analytic centers of the primal and dual optimal sets. Let $\mu' \in (0,1)$ be the central path parameter. There holds $${\left \| X(\mu')-X^a \right \|} = O(\mu'), \quad {\left \| Z(\mu')-Z^a \right \|} = O(\mu').$$ \[thm:Luo\] Assume a primal-dual path following algorithm satisfying the assumptions of Luo et al. is applied. To employ Theorem \[thm:Luo\], we show that our SOCP has a strictly complementary optimizer. Notice that this statement is not necessarily true for all values of $\lambda$. One has to assume that $\lambda$ is not at any fusion value. When $\lambda$ is exactly at a fusion value $\lambda^*$, strict complementarity may fail. The failure is not surprising since any arbitrarily small negative perturbation $\lambda^* + \epsilon$ (with $\epsilon < 0$) yields a different clustering. In other words, complete cluster identification at these fusion values is ill-posed. Thus it is unreasonable to expect an algorithm to satisfy such a guarantee for this problem. There are at most $n$ fusion values as a result of Theorem \[thm:agglomeration\].
Strict complementarity {#sec:strict_complementarity} ---------------------- By specializing the definition of strict complementarity in SDP to SOCP [@goldfarb], a primal and dual feasible solution satisfies strict complementarity if and only if $$\begin{aligned} t_{ij} + \lambda &> \|{\bm{y}}_{ij} + {\bm{\delta}}_{ij}\|, \quad \forall 1 \le i < j \le n, \label{eq:strict_cs_a}\\ s_{i} + 1 - \gamma_i &> {\left \| \begin{pmatrix} {\bm{z}}_{i} + {\bm{\beta}}_i\\ u_i + \gamma_i \end{pmatrix} \right \|}, \quad \forall i = 1, ..., n\label{eq:strict_cs_b}\end{aligned}$$ Let $\lambda > 0$ be a parameter value at which fusion does not occur and let $\lambda_1, \lambda_2$ be the two successive fusion values such that $\lambda \in (\lambda_1, \lambda_2)$. Note that it is possible for $\lambda_1 = 0$ or $\lambda_2 = \infty$. We will show that there exists a strictly complementary primal-dual solution at $\lambda$.\ Let $({\bm{x}}', {\bm{y}}', {\bm{z}}', s', u', t', {\bm{\delta}}', {\bm{\beta}}', \gamma')$ denote optimal primal and dual solutions at $\lambda_1$. Let $C_1, C_2, ..., C_K$ denote the clusters identified by the optimal solutions above. When $\lambda_1 = 0$, there are $n$ clusters, and each cluster is a singleton set. When $\lambda_1$ is the largest fusion value, there is only one cluster containing all $n$ points. Define ${\bm{g}}'$ with ${\bm{\delta}}'$ as before $${\bm{g}}_{ij}':= \left\{ \begin{array}{ll} -{\bm{\delta}}_{ij}', \quad & \mbox{if } i < j,\\ {\bm{\delta}}_{ji}', & \mbox{if } j < i. \end{array} \right.$$ By Chiquet et al. [@chiquet], the dual solutions satisfy $${\bm{a}}_i - \bar {\bm{a}}_k = \sum_{j \in C_k - \{i\}} {\bm{g}}_{ij}', \quad \forall i \in C_k, k \in [K], \qquad \text{and} \quad {\left \| {\bm{g}}_{ij}' \right \|} = {\left \| {\bm{\delta}}_{ij}' \right \|} \le \lambda_1, \quad \forall i \ne j \label{eq:son-clustering_chiquet}$$ where $\bar {\bm{a}}_k:= \frac{1}{|C_k|}\sum_{i \in C_k} {\bm{a}}_i$.
Consider the following optimization problem $$\min_{{\bm{p}}_1, ..., {\bm{p}}_K\in{\bm{\mathrm{R}}}^d} \frac{1}{2}\sum_{k=1}^K |C_k| {\left \| {\bm{p}}_k-\bar{\bm{a}}_k \right \|}^2 +\lambda\sum_{1 \le k<k' \le K} |C_k| \cdot |C_{k'}| {\left \| {\bm{p}}_k-{\bm{p}}_{k'} \right \|}. \label{eq:son-clustering_multweight}$$ Let ${\bm{p}}$ denote the optimal solution of . Vector ${\bm{p}}$ satisfies ${\bm{p}}_k \ne {\bm{p}}_{k'}$ for all $k, k' \in [K], k \ne k'$. \[lemma\_pkpk’\] For the purpose of contradiction, we may assume there exist $\hat k \ne \hat k'$ such that ${\bm{p}}_{\hat k} = {\bm{p}}_{\hat k'}$. Let ${\bm{x}}^*_i = {\bm{p}}_k, \forall i \in C_k, k \in [K]$. By the first-order optimality condition of at ${\bm{p}}$, there exist ${\bm{g}}_{kk'} \in \partial \|{\bm{p}}_k - {\bm{p}}_{k'}\|$ for all $k \ne k'$ such that $$\begin{aligned} {\bm{0}}&= {\bm{p}}_k - \bar {\bm{a}}_k + \lambda \sum_{k' \ne k} |C_{k'}| \, {\bm{g}}_{kk'}\\ &= {\bm{p}}_k - {\bm{a}}_i + {\bm{a}}_i - \bar {\bm{a}}_k + \lambda \sum_{k' \ne k} |C_{k'}| \, {\bm{g}}_{kk'}\\ &= {\bm{p}}_k - {\bm{a}}_i + \sum_{j \in C_k - \{i\}} {\bm{g}}_{ij}' + \lambda \sum_{k' \ne k} |C_{k'}| \, {\bm{g}}_{kk'} \\ &= {\bm{x}}^*_i - {\bm{a}}_i + \sum_{j \in C_k - \{i\}} {\bm{g}}_{ij}' + \lambda \sum_{k' \ne k} |C_{k'}| \, {\bm{g}}_{kk'},\end{aligned}$$ satisfying at $i$. As $i \in C_k, k \in[K]$ are chosen arbitrarily, the equality holds for all $i$; hence ${\bm{x}}^*$ is an optimal solution to . From the assumption on ${\bm{p}}$ and the agglomerative properties of the clusterpath, clusters $C_{\hat k}$ and $C_{\hat k'}$ merge at some $\lambda' \in (\lambda_1, \lambda]$, which contradicts our choice of $\lambda_2$. This concludes the proof. By Lemma \[lemma\_pkpk’\], the objective function is differentiable at ${\bm{p}}$.
Hence, there holds $$|C_k| ({\bm{p}}_k-\bar{\bm{a}}_k) + \lambda |C_k| \sum_{k' \ne k} |C_{k'}| \frac{{\bm{p}}_k - {\bm{p}}_{k'}}{\|{\bm{p}}_k - {\bm{p}}_{k'}\|} = {\bm{0}}, \quad \forall k \in [K]. \label{eq:son-clustering_multweight_oc}$$ Define the following solutions: $$\begin{aligned} {\bm{x}}^*_i &= {\bm{p}}_k, \quad \forall i \in C_k, k \in [K]\\ {\bm{y}}_{ij}^* &= {\bm{x}}^*_i - {\bm{x}}^*_j, \quad \forall 1 \le i < j \le n\\ {\bm{z}}^*_{i} &= {\bm{x}}^*_i - {\bm{a}}_i, \quad \forall i = 1, \dots, n, \\ s_i^* &= \frac{1}{2} (1 + \|{\bm{z}}_{i}^*\|^2), \quad \forall i = 1, \dots, n\\ u_i^* &= \frac{1}{2} (-1 + \|{\bm{z}}_{i}^*\|^2), \quad \forall i = 1, \dots, n\\ t_{ij}^* &= \|{\bm{y}}_{ij}^*\|, \quad \forall 1 \le i < j \le n\\ {\bm{\delta}}_{ij}^* &= \left \{ \begin{array}{ll} {\bm{\delta}}_{ij}', \quad &\mbox{if } i<j \mbox{ and } i,j \in C_k\\ \lambda \frac{{\bm{x}}^*_{j} - {\bm{x}}_i^*}{\|{\bm{x}}_j^* - {\bm{x}}_i^*\|}, & \mbox{otherwise} \end{array} \right. \quad \forall 1 \le i < j \le n\\ {\bm{\beta}}_i^* &= - {\bm{z}}^*_i, \quad \forall i = 1, \dots, n\\ \gamma_i^* &= \frac{1}{2}(1 - \|{\bm{\beta}}^*_i\|^2), \quad \forall i = 1, \dots, n \end{aligned} \label{eq:feasible_sol}$$ The solutions defined above are optimal for the second-order cone program at $\lambda$. By construction, the primal constraints , , , , , the dual constraints , , and the complementary slackness conditions , , , and with ${\bm{\epsilon}}= {\bm{0}}, {\bm{\sigma}}= {\bm{0}}$ are automatically satisfied.
It remains to check if these solutions satisfy .\ **Verification for :** For any $i \in C_k$ with some $k \in [K]$, is rewritten as follows due to and $$\begin{aligned} &-\sum_{j=1}^{i-1}{\bm{\delta}}^*_{ji}+\sum_{j=i+1}^n{\bm{\delta}}^*_{ij}+{\bm{\beta}}_i^* \\ = &-\sum_{j < i, j \in C_k -\{i\}}{\bm{\delta}}'_{ji}+\sum_{j>i, j \in C_k - \{i\}}{\bm{\delta}}'_{ij} + \lambda \sum_{k' \ne k}|C_{k'}| \frac{{\bm{p}}_{k'} - {\bm{p}}_k}{\|{\bm{p}}_{k'} - {\bm{p}}_k\|}+{\bm{a}}_i-{\bm{x}}^*_i\\ = & - \sum_{j \in C_k -\{i\}} {\bm{g}}'_{ij} + \lambda \sum_{k' \ne k}|C_{k'}| \frac{{\bm{p}}_{k'} - {\bm{p}}_k}{\|{\bm{p}}_{k'} - {\bm{p}}_k\|}+{\bm{a}}_i-{\bm{x}}^*_i\\ =& \bar {\bm{a}}_k - {\bm{a}}_i + \lambda \sum_{k' \ne k}|C_{k'}| \frac{{\bm{p}}_{k'} - {\bm{p}}_k}{\|{\bm{p}}_{k'} - {\bm{p}}_k\|}+{\bm{a}}_i-{\bm{p}}_k\\ =& \bar {\bm{a}}_k + \lambda \sum_{k' \ne k}|C_{k'}| \frac{{\bm{p}}_{k'} - {\bm{p}}_k}{\|{\bm{p}}_{k'} - {\bm{p}}_k\|}-{\bm{p}}_k\\ =& {\bm{0}}.\end{aligned}$$ By the KKT conditions, the solutions defined above form an optimal primal-dual pair. The solutions defined above are strictly complementary. Strict complementarity is equivalent to and , which can easily be checked as shown below.\ **Verification for :** Let $1 \le i<j \le n$. If ${\bm{y}}_{ij}^* = {\bm{0}}$, then there exists some $k \in [K]$ such that $i, j \in C_k$. By definition, $t_{ij}^* = 0$ and ${\bm{\delta}}_{ij}^* = {\bm{\delta}}_{ij}'$. Notice that ${\bm{\delta}}_{ij}'$ is the optimal dual solution of at $\lambda_1$, so it satisfies $\|{\bm{\delta}}_{ij}'\| \le \lambda_1 < \lambda$ by the definition of $\lambda$. Hence, $$t_{ij}^* + \lambda = \lambda > \|{\bm{\delta}}_{ij}'\| = \|{\bm{\delta}}_{ij}^*\| = \|{\bm{y}}_{ij}^* + {\bm{\delta}}_{ij}^*\|.$$ If ${\bm{y}}_{ij}^* \ne {\bm{0}}$, then there exist $k, k' \in [K]$ such that $i \in C_k, j \in C_{k'}$ and $k \ne k'$.
By definition, $t_{ij}^* = \|{\bm{y}}_{ij}^*\| = \|{\bm{p}}_k - {\bm{p}}_{k'}\|$ and ${\bm{\delta}}_{ij}^* = \lambda \frac{{\bm{p}}_{k'} - {\bm{p}}_k}{\|{\bm{p}}_{k'} - {\bm{p}}_k\|}$. Hence, $$t_{ij}^* + \lambda = {\left \| {\bm{p}}_k - {\bm{p}}_{k'} \right \|} + \lambda > \left | {\left \| {\bm{p}}_k - {\bm{p}}_{k'} \right \|} - \lambda \right | = {\left \| {\bm{p}}_{k} - {\bm{p}}_{k'} + \lambda \frac{{\bm{p}}_{k'} - {\bm{p}}_k}{{\left \| {\bm{p}}_{k'} - {\bm{p}}_k \right \|}} \right \|} = {\left \| {\bm{y}}_{ij}^* + {\bm{\delta}}_{ij}^* \right \|}.$$ **Verification for :** Let $i \in [n]$. By construction, $$s^*_i + 1 - \gamma^*_i = \|{\bm{z}}_i^*\|^2 + 1 > 0 = {\left \| \begin{pmatrix} {\bm{z}}_{i}^* - {\bm{z}}_{i}^*\\ -\frac{1}{2} (1 - \|{\bm{z}}_i^*\|^2) + \frac{1}{2} (1 - \|{\bm{z}}_i^*\|^2) \end{pmatrix} \right \|} = {\left \| \begin{pmatrix} {\bm{z}}_{i}^* + {\bm{\beta}}_i^*\\ u_i^* + \gamma_i^* \end{pmatrix} \right \|}$$ Since the indices are chosen arbitrarily, the solutions defined above are strictly complementary. Test Guarantee {#sec:guarantee} ============== In Section 4, we validated our test theoretically in the sense that if the test succeeds, it is guaranteed that the correct clusters are found. In this section, we show that the test succeeds after a finite number of iterations of a certain interior point method, provided that $\lambda$ is not at any fusion value. Specifically, we prove that the two conditions in our test are guaranteed to hold for a primal-dual path following algorithm satisfying the assumptions of Luo et al. [@Luo] when the duality gap $\mu$ is sufficiently small. If $\lambda$ is not a fusion value, then there exists $\mu_0 > 0$ such that both the CGR subgradient and separation conditions in the test are satisfied for any duality gap $\mu \le \mu_0$ for a primal-dual path following algorithm satisfying the assumptions of Luo et al. [@Luo]. Let $({\bm{x}}, {\bm{y}}, {\bm{z}}, s, u, t, {\bm{\delta}}, {\bm{\beta}}, \gamma)$ denote a primal and dual feasible solution.
Let $C_1, C_2, ..., C_K$ denote the clusters obtained at the optimum. Let $\mu' \in (0,1)$ denote the central path parameter and let $\mu$ denote the duality gap at the feasible solution. By Theorem \[thm:Luo\], there hold $${\left \| {\bm{x}}(\mu') - {\bm{x}}^a \right \|} = O(\mu'), \quad {\left \| {\bm{\delta}}(\mu') - {\bm{\delta}}^a \right \|} = O(\mu')$$ where ${\bm{x}}(\mu'), {\bm{\delta}}(\mu')$ are $\mu'$-centered solutions and ${\bm{x}}^a, {\bm{\delta}}^a$ are the analytic centers of the primal and dual optimal sets respectively. Moreover, since the iterates converge tangentially to the central path, we may assume the size of the central path neighborhood to be as follows $${\left \| {\bm{x}}- {\bm{x}}(\mu') \right \|} = O(\mu'), \quad {\left \| {\bm{\delta}}- {\bm{\delta}}(\mu') \right \|} = O(\mu').$$ Luo et al. [@Luo] validated the assumption above for their interior point algorithm, which is a generalization of the Mizuno-Todd-Ye predictor-corrector method for linear programming. Combine the two sets of equations above and apply the triangle inequality to obtain $${\left \| {\bm{x}}- {\bm{x}}^a \right \|} = O(\mu'), \quad {\left \| {\bm{\delta}}- {\bm{\delta}}^a \right \|} = O(\mu').$$ As the duality gap $\mu$ is of linear order in the central path parameter $\mu'$, the equalities above can be rewritten as $${\left \| {\bm{x}}- {\bm{x}}^a \right \|} = O(\mu), \quad {\left \| {\bm{\delta}}- {\bm{\delta}}^a \right \|} = O(\mu).$$ Define $p, p' \ge 0$ such that ${\left \| {\bm{x}}_i - {\bm{x}}_i^a \right \|} \le p \mu$ for all $i$ and ${\left \| {\bm{\delta}}_{ij} - {\bm{\delta}}^a_{ij} \right \|} \le p' \mu$ for all distinct pairs $(i,j)$. Then, for all distinct pairs $(i, j)$ in any cluster $C_k$, there holds ${\left \| {\bm{x}}_i - {\bm{x}}_j \right \|} \le 2p \mu$.
Moreover, define $q > 0$ such that all ${\bm{x}}_i^a$’s in different clusters are at least $q$ apart, which implies that ${\bm{x}}_i$’s in different clusters are separated by a distance of at least $q - 2p \mu$. We may assume the duality gap satisfies $\mu < \frac{q}{2p}$. Notice that this assumption is guaranteed to hold after a finite number of iterations. Let $C:= C_k$ for some $k \in [K]$. By Lemma \[lemma:O(sqrt(mu))\], there hold ${\left \| {\bm{\epsilon}}_2^{ij} \right \|} \le \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}$ for all $i < j$ with $C \cap \{i, j\} \ne \emptyset$ and $([n] \backslash C) \cap \{i, j\} \ne \emptyset$, and $${\left \| \begin{pmatrix} {\bm{\sigma}}_2^i\\ \sigma_3^i \end{pmatrix} \right \|} \le \sqrt{\left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right ) \mu}$$ for all $i$. Bound ${\left \| {\bm{\delta}}_{ij} \right \|}$ {#sec:delta} ----------------------------------------------- For all $i,j \in C, i \ne j$, the following inequality holds $${\left \| {\bm{\delta}}_{ij} \right \|} \le \lambda - r + p' \mu$$ where $r:= \min_{l \ne l', l, l' \in C_k, k \in [K]} (\lambda - {\left \| {\bm{\delta}}^a_{ll'} \right \|}) > 0$. \[lemma\_bdelta\] Let $i, j \in C$ and $i \ne j$. By the definition of the analytic center and strict complementarity, $${\left \| {\bm{\delta}}^a_{ll'} \right \|} < \lambda$$ holds for all $l \ne l', l, l' \in C_k, k \in [K]$. Hence, $r >0$ by definition.
Moreover, $r$ also satisfies $${\left \| {\bm{\delta}}^a_{ij} \right \|} \le \lambda - r, \quad \forall i,j \in C, i \ne j.$$ Since ${\left \| {\bm{\delta}}_{ij} - {\bm{\delta}}^a_{ij} \right \|} \le p' \mu$, we obtain $${\left \| {\bm{\delta}}_{ij} \right \|} \le \lambda - r + p' \mu, \quad \forall i,j \in C, i \ne j.$$ Bound ${\left \| {\bm{g}}_{ik} - {\bm{g}}_{jk} \right \|}$ {#sec:g_diff} ---------------------------------------------------------- For all $i,j \in C$ and $k \notin C$, the following inequality holds $${\left \| {\bm{g}}_{ik} - {\bm{g}}_{jk} \right \|} \le \frac{4 \lambda p \mu}{q - 2p \mu} + \frac{2 \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}}{q - 2p \mu} + \frac{\mu}{q - 2p \mu}$$ \[lemma\_g\_diff\] Let $i, j \in C$ and $k \notin C$. Without loss of generality, we may assume $i<j<k$. Hence, ${\bm{g}}_{ik}=-{\bm{\delta}}_{ik}, {\bm{g}}_{jk}=-{\bm{\delta}}_{jk}$. By $\eqref{eq:adcs_a2}$, we derive $$t_{ik} {\bm{\delta}}_{ik} - t_{jk} {\bm{\delta}}_{jk} = - \lambda {\bm{y}}_{ik} + \lambda {\bm{y}}_{jk}+{\bm{\epsilon}}_2^{ik} - {\bm{\epsilon}}_2^{jk} = - \lambda ({\bm{x}}_i - {\bm{x}}_j) +{\bm{\epsilon}}_2^{ik} - {\bm{\epsilon}}_2^{jk}. \label{eq:tdelta_diff}$$ Add the term $(t_{jk} - t_{ik}) {\bm{\delta}}_{jk}$ to both sides of the equality to obtain $$t_{ik}({\bm{\delta}}_{ik} - {\bm{\delta}}_{jk}) = (t_{jk} - t_{ik}) {\bm{\delta}}_{jk} - \lambda ({\bm{x}}_i - {\bm{x}}_j) +{\bm{\epsilon}}_2^{ik} - {\bm{\epsilon}}_2^{jk}.$$ Notice that $t_{ik} \ge {\left \| {\bm{y}}_{ik} \right \|} = {\left \| {\bm{x}}_i - {\bm{x}}_k \right \|} \ge q - 2p \mu > 0$ by the primal constraint and our assumption on the duality gap.
Divide the equality above by $t_{ik}$ to obtain $${\bm{\delta}}_{ik} - {\bm{\delta}}_{jk} = \frac{t_{jk} - t_{ik}}{t_{ik}} {\bm{\delta}}_{jk} - \frac{\lambda ({\bm{x}}_i - {\bm{x}}_j)}{t_{ik}} +\frac{{\bm{\epsilon}}_2^{ik} - {\bm{\epsilon}}_2^{jk}}{t_{ik}}.$$ Substitute the definition of ${\bm{g}}$ into the equality above to obtain $${\bm{g}}_{ik} - {\bm{g}}_{jk} = \frac{t_{ik} - t_{jk}}{t_{ik}} {\bm{\delta}}_{jk} + \frac{\lambda ({\bm{x}}_i - {\bm{x}}_j)}{t_{ik}} -\frac{{\bm{\epsilon}}_2^{ik} - {\bm{\epsilon}}_2^{jk}}{t_{ik}}. \label{eq:delta_diff}$$ By the perturbed complementary slackness , the primal constraint and the Cauchy-Schwarz inequality, we derive the following inequality $$\epsilon_1^{ik} = t_{ik} \lambda + {\bm{y}}_{ik}^T {\bm{\delta}}_{ik} \ge t_{ik} \lambda - {\left \| {\bm{y}}_{ik} \right \|} \cdot {\left \| {\bm{\delta}}_{ik} \right \|} \ge t_{ik} \lambda - {\left \| {\bm{y}}_{ik} \right \|} \cdot \lambda,$$ which yields an upper bound on $t_{ik}$ $$t_{ik} \le {\left \| {\bm{y}}_{ik} \right \|} + \frac{\epsilon_1^{ik}}{\lambda}.$$ Combined with the primal constraint at $t_{jk}$ and the triangle inequality, we obtain the following $$t_{ik} - t_{jk} \le {\left \| {\bm{y}}_{ik} \right \|} + \frac{\epsilon_1^{ik}}{\lambda} - {\left \| {\bm{y}}_{jk} \right \|} \le {\left \| {\bm{y}}_{ik} - {\bm{y}}_{jk} \right \|} + \frac{\epsilon_1^{ik}}{\lambda} = {\left \| {\bm{x}}_i - {\bm{x}}_j \right \|} + \frac{\epsilon_1^{ik}}{\lambda}. \label{eq:t_diff}$$ The same inequality holds for $t_{jk} - t_{ik}$ due to the symmetry of . 
By , and triangle inequality, the norm bound of ${\bm{g}}_{ik} - {\bm{g}}_{jk}$ is as follows $$\begin{aligned} {\left \| {\bm{g}}_{ik} - {\bm{g}}_{jk} \right \|} &\le \frac{|t_{ik} - t_{jk}| \cdot {\left \| {\bm{\delta}}_{jk} \right \|}}{t_{ik}} + \frac{\lambda {\left \| {\bm{x}}_i - {\bm{x}}_j \right \|}}{t_{ik}} +\frac{{\left \| {\bm{\epsilon}}_2^{ik} \right \|} + {\left \| {\bm{\epsilon}}_2^{jk} \right \|}}{t_{ik}} \quad \text{(By triangle inequality)}\\ &\le \frac{{\left \| {\bm{x}}_i - {\bm{x}}_j \right \|} + \frac{\epsilon_1^{ik}}{\lambda}}{t_{ik}} {\left \| {\bm{\delta}}_{jk} \right \|} + \frac{\lambda {\left \| {\bm{x}}_i - {\bm{x}}_j \right \|}}{t_{ik}} +\frac{{\left \| {\bm{\epsilon}}_2^{ik} \right \|} + {\left \| {\bm{\epsilon}}_2^{jk} \right \|}}{t_{ik}} \quad \text{(By \eqref{eq:t_diff})} \\ &\le \frac{2\lambda {\left \| {\bm{x}}_i - {\bm{x}}_j \right \|}}{t_{ik}} + \frac{{\left \| {\bm{\epsilon}}_2^{ik} \right \|} + {\left \| {\bm{\epsilon}}_2^{jk} \right \|}}{t_{ik}} + \frac{\epsilon_1^{ik}}{t_{ik}} \quad \text{(By \eqref{eq:p_constr4} and \eqref{eq:tdelta_diff})}.\end{aligned}$$ Since $i,j \in C$ and $k \notin C$, there hold $t_{ik} \ge {\left \| {\bm{y}}_{ik} \right \|} = {\left \| {\bm{x}}_i - {\bm{x}}_k \right \|} \ge q - 2p\mu$ and ${\left \| {\bm{x}}_i - {\bm{x}}_j \right \|} \le 2p \mu$. Moreover, there also hold $\epsilon_1^{ik} \le \mu$, ${\left \| {\bm{\epsilon}}_2^{ik} \right \|} \le \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}$ and ${\left \| {\bm{\epsilon}}_2^{jk} \right \|} \le \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}$. 
Hence, ${\left \| {\bm{g}}_{ik} - {\bm{g}}_{jk} \right \|}$ is further upper bounded as follows $${\left \| {\bm{g}}_{ik} - {\bm{g}}_{jk} \right \|} \le \frac{4 \lambda p \mu}{q - 2p \mu} + \frac{2 \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}}{q - 2p \mu} + \frac{\mu}{q - 2p \mu} \label{eq:norm_delta_diff}$$ Bound ${\left \| {\bm{\omega}}_i \right \|}$ {#sec:omega} -------------------------------------------- For all $i\in C$, it holds $${\left \| {\bm{\omega}}_i \right \|} \le 2 \sqrt{ 2 \left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right ) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right ) \mu}.$$ \[lemma\_bomega\] Let $i \in C$. By definition, $${\bm{\omega}}_i = \frac{\sigma_3^i}{s_{i}} {\bm{z}}_{i} + \frac{1}{s_i} {\bm{\sigma}}_2^i.$$ By the primal constraint , we have $${\left \| {\bm{z}}_{i} \right \|} \le \sqrt{2s_i -1} \le \sqrt{2s_i}, \quad s_i \ge \frac{1}{2},$$ which implies $$\frac{{\left \| {\bm{z}}_i \right \|}}{s_i} \le \sqrt{\frac{2}{s_i}} \le \sqrt{4} = 2, \quad \frac{1}{s_i} \le 2.$$ Coupled with triangle inequality, these two inequalities yield $${\left \| {\bm{\omega}}_i \right \|} \le \frac{{\left \| {\bm{z}}_{i} \right \|}}{s_{i}} \sigma_3^i + \frac{1}{s_i} {\left \| {\bm{\sigma}}_2^i \right \|} \le 2 \sigma_3^i + 2 {\left \| {\bm{\sigma}}_2^i \right \|}.$$ Moreover, since ${\left \| \begin{pmatrix} {\bm{\sigma}}_2^i\\ \sigma_3^i \end{pmatrix} \right \|} \le \sqrt{\left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right ) \mu}$ holds for any $i \in [n]$ by Lemma \[lemma:O(sqrt(mu))\] and the duality gap, $$(\sigma_3^i)^2 + {\left \| {\bm{\sigma}}_2^{i} \right \|}^2 \le \left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n 
(n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right ) \mu,$$ which implies the following inequality since $(a + b)^2 \le 2 a^2 + 2b^2$ $$(\sigma_3^i + {\left \| {\bm{\sigma}}_2^{i} \right \|})^2 \le 2 \cdot \left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right ) \mu.$$ Therefore, the following holds as $i \in C$ is chosen arbitrarily: $${\left \| {\bm{\omega}}_i \right \|} \le 2 \sigma_3^i + 2 {\left \| {\bm{\sigma}}_2^i \right \|} \le 2 \sqrt{ 2 \cdot \left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right) \mu}.$$ Bound the CGR subgradient {#sec:cgr_bound} ------------------------- For all $i, j \in C$ and $i < j$, there holds $$\begin{aligned} {\left \| {\bm{q}}_{ij} \right \|} \le & \lambda - r + p' \mu + \frac{1}{m} \cdot \left (2 p \mu + 4 \sqrt{ 2 \cdot \left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right) \mu} \right )\\ & + \frac{n-m}{m} \left (\frac{4 \lambda p \mu}{q - 2p \mu} + \frac{2 \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}}{q - 2p \mu} + \frac{\mu}{q - 2p \mu} \right).\end{aligned}$$ \[lemma\_main\] Let $i, j \in C$ and $i < j$.
By the triangle inequality, $${\left \| {\bm{q}}_{ij} \right \|} \le {\left \| {\bm{\delta}}_{ij} \right \|} + \frac{1}{m} \cdot ({\left \| {\bm{x}}_i - {\bm{x}}_j \right \|} + {\left \| {\bm{\omega}}_i \right \|} + {\left \| {\bm{\omega}}_j \right \|}) + \frac{1}{m} \sum_{k \notin C} {\left \| {\bm{g}}_{ik} - {\bm{g}}_{jk} \right \|}.$$ With the assumptions on the distance between points, $${\left \| {\bm{x}}_i - {\bm{x}}_j \right \|} \le 2 p \mu.$$ By Lemmas \[lemma\_bdelta\], \[lemma\_g\_diff\], \[lemma\_bomega\] and the inequality above, we obtain $$\begin{aligned} {\left \| {\bm{q}}_{ij} \right \|} \le & \lambda - r + p' \mu + \frac{1}{m} \cdot \left (2 p \mu + 4 \sqrt{ 2 \cdot \left (\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 + 1 + 2\mu \right) \cdot \left (\frac{1}{2} + \sum_{l=1}^n (n-1) \lambda {\left \| {\bm{a}}_l \right \|} + \mu \right) \mu} \right)\\ & + \frac{n-m}{m} \left (\frac{4 \lambda p \mu}{q - 2p \mu} + \frac{2 \sqrt{\sum_{l=1}^n {\left \| \bar {\bm{a}}- {\bm{a}}_l \right \|}^2 \mu + 2\mu^2}}{q - 2p \mu} + \frac{\mu}{q - 2p \mu} \right). \end{aligned} \label{eq:cgr_subgradient_norm}$$ Proof of the main theorem {#sec:main_proof} ------------------------- We rewrite with $O(\cdot)$ notation to obtain the following inequality $${\left \| {\bm{q}}_{ij} \right \|} \le \lambda - r + O(\sqrt{\mu}), \quad \forall i, j \in C_k, i \ne j, k \in [K],$$ since $C=C_k$ is an arbitrary cluster. As $r>0$ by Lemma \[lemma\_bdelta\], there exists $\mu_1 > 0$ such that for all $\mu \le \mu_1$, ${\left \| {\bm{q}}_{ij} \right \|} \le \lambda$ holds for all $i, j \in C_k, i \ne j, k \in [K]$. This concludes the proof of the CGR subgradient condition. Since $q>0$, there exists $\mu_2>0$ such that $2\sqrt{2\mu_2} < q - 2 p \mu_2$. Hence, for all $\mu \le \mu_2$, all clusters are separated by a distance of more than $2\sqrt{2\mu} = 2\tau$. This concludes the proof of the separation condition.
Let $\mu_0 = \min \{\mu_1, \mu_2\}$; then both the CGR subgradient and separation conditions are satisfied for any $\mu \le \mu_0$. Computational experiments {#sec:exper} ========================= In this section, we conduct experiments in which a Chi-Lange ADMM solver [@ChiLange] and our clustering test for sum-of-norms clustering are applied to a simulated dataset of two normally distributed half moons. We intend to answer the following questions: (1) How does the performance of our test depend on $\lambda$? and (2) How does the recovery of the two half moons depend on $\lambda$? Our algorithm is implemented in Julia [@Julia2017]. It terminates if the clustering test succeeds, or if the maximum number of iterations is reached. In the algorithm, the code tests for clustering every $t$ iterations of the ADMM solver. The value of $t$ is taken to be 8 in our experiment. At the end of every $t$ iterations, the solver yields a primal solution and a dual solution, from which our algorithm constructs a primal and dual feasible pair for the SOCP formulation by . With the feasible solutions, the algorithm then creates candidate clusters, computes the duality gap and constructs CGR subgradients. The code checks for the CGR subgradient condition and the separation condition. If both conditions hold, the clustering test reports ‘success’. Otherwise, the code runs $t$ more iterations of the ADMM solver and repeats the clustering test. The detailed algorithm is outlined in Algorithm \[alg:find\_cluster\]; each iteration of the ADMM solver is of complexity $O(n^2d)$. \[alg:find\_cluster\] [Algorithm \[alg:find\_cluster\]: starting from $C \gets \{1,\ldots, n\}$ and $k \gets 1$, greedily construct and return the candidate clusters $\{R_1, R_2, \ldots, R_{K'}\}$; then initialize $({\bm{x}}, {\bm{\delta}})$, alternate ADMM iterations with the clustering test, and return the recovered clusters $\{R_1, R_2, \ldots, R_{K'}\}$.] To assess the performance of recovery, we employ the Rand index by Rand [@Rand]. The Rand index is a measure designed specifically to evaluate the performance of clustering.
It compares two clusterings $\{R_1,\ldots,R_{K'}\}$ and $\{V_1,\ldots,V_{K}\}$ in a pairwise manner. If a pair of data points is placed in the same cluster in both clusterings, or if a pair of data points is placed in different clusters in both clusterings, then this pair is called a similar assignment and it contributes to the measure of similarity between the two clusterings. We define the following two sets of similar assignments on all distinct pairs of instances: $$\begin{aligned} S:&= \{(i,j): 1 \le i < j \le n \text{ such that there exist } m, m' \text{ satisfying } i,j \in V_m \cap R_{m'}\},\\ D:&= \{(i,j): 1 \le i < j \le n \text{ such that } i \in V_{m_1},j \in V_{m_2}, m_1 \ne m_2, \text{ and }\\ &\qquad i \in R_{m_1'}, j \in R_{m_2'}, m_1' \ne m_2'\}.\end{aligned}$$ Then the Rand index is defined as the fraction of all distinct pairs which are similar assignments: $$R = \frac{|S| + |D|}{\binom{n}{2}},$$ where $|\cdot|$ denotes the cardinality function. The value of $R$ ranges from 0 to 1. When $R=0$, the two clusterings are completely dissimilar. When $R=1$, the two clusterings are identical. A higher Rand index indicates a higher level of similarity. In the case of $K=2$ equally sized clusters, a random assignment to clusters yields an expected Rand index of 0.5. The experiment is conducted on a simulated dataset of two normally distributed half moons with 500 instances. The angle of the two half moons follows a Gaussian distribution with a mean of 0 and a standard deviation of $\frac{\pi}{6}$. A random noise which follows a two-dimensional Gaussian distribution with a mean of 0 and a standard deviation of 0.05 displaces the instances from the moons. Fifty linearly spaced values of $\lambda$ are taken from the range $[10^{-8}, 0.00496]$. The range is determined empirically. Furthermore, the maximum number of iterations is chosen to be 50,000. Running single-threaded on an Intel Xeon processor, the experiment took approximately 15 hours in total.
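As a concrete check of the definition above, the Rand index can be computed directly by counting similar assignments over all distinct pairs. The following is a minimal sketch; the function name and the label-list interface are our own, not from the paper or its cited implementation.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index of two clusterings given as per-point label lists:
    the fraction of distinct pairs (i, j) that are 'similar assignments',
    i.e. placed in the same cluster in both clusterings (the set S) or in
    different clusters in both clusterings (the set D)."""
    n = len(labels_a)
    similar = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in combinations(range(n), 2)
    )
    return similar / (n * (n - 1) / 2)
```

For instance, comparing `[0, 0, 1, 1]` with `[0, 1, 1, 1]` yields 0.5, since three of the six distinct pairs are similar assignments; two identical clusterings (even with permuted labels) yield 1.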
![Iteration counts versus $\lambda$[]{data-label="fig:iter"}](iter_lambda.pdf) Our first objective is to evaluate the performance of our clustering test. At 32 out of 50 values of $\lambda$, the clustering test succeeds before the maximum number of iterations is reached. When $\lambda$ is in the range between $\lambda = 0.0018219$ and $\lambda = 0.0028341$, the algorithm repeatedly reaches the iteration threshold before the test succeeds, as shown in Figure \[fig:iter\]. This performance is consistent with the theory discussed earlier. The clustering test is not guaranteed to succeed when $\lambda$ is at a fusion value, and the test performs poorly near a fusion value as shown in Figure \[fig:iter\]. When $n = 500$, there are at most 500 fusion values. All fusion values are in the range between $\lambda = 0.00040$ and $\lambda = 0.00405$ as observed in the experiment. Hence, fusion occurs frequently, and many fusion values are densely packed in a small region. Thus, in our experiment, it is very likely that the $\lambda$ we pick is near or at a fusion value, which leads to poor performance of our clustering test. We anticipate that the clustering test improves with fewer data points, and this is indeed the case. The same experiment is also implemented for 200 instances generated from two normally distributed half moons with the same parameters. At 89 out of 100 values of $\lambda$, our clustering test succeeds before the maximum number of iterations is reached. The experiment also attempts to explore the relationship between the $\lambda$ value and the recovery of the half moons. To evaluate the recovery, we compute the Rand index with the recovered clustering and the generative clustering. The figure below shows the Rand index against $\lambda$ values. The value of the Rand index increases monotonically and peaks at $\lambda = 0.00395$, where the clustering test succeeds and the Rand index achieves a value of 0.949.
![Rand index versus $\lambda$[]{data-label="fig:rand_index"}](rand_index_lambda2.pdf) To illustrate the clustering at $\lambda = 0.00395$, we also plot the two half moons and color the clusters. Red instances belong to one cluster, and blue instances belong to another cluster. Yellow instances are assigned to clusters of singleton points, and they are identified as noise. ![Labeled points with clustering at $\lambda = 0.00395$[]{data-label="fig:halfmoons"}](halfmoons_pi6_2.pdf) Sum-of-norms clustering with equal weights performs poorly on standard half moons [@ChiLange] and on normally distributed half moons with large standard deviation. To resolve the issue, many authors such as Sun et al. [@dsun1] apply exponentially decaying weights to the sum-of-norms clustering. The exponentially decaying weight of pair $(i,j)$ is determined by the distance between the original data ${\bm{a}}_i$ and ${\bm{a}}_j$. The weight is set to zero if $j$ is not among $i$'s $k$-nearest neighbors. Otherwise, the weight is computed as follows $$w_{ij} = \exp(-\phi {\left \| a_i - a_j \right \|}^2)$$ where $\phi$ is a nonnegative parameter. Assigning weights in this manner implicitly imposes a prior hypothesis that the nearest-neighbor structure corresponds to the true clustering, which is certainly the case for the standard half-moon data set. Chi and Lange [@ChiLange] assess the effect of the number of nearest neighbors $k$ and the parameter $\phi$ on SON clustering with numerical experiments on a half-moon dataset of 100 points. Setting $k=10$ and $\phi = 0.5$ yields the best clustering. Choosing $k=50$ and $\phi = 0$ results in a clustering pattern similar to our experiment: clusters do not form until late, and then all points quickly coalesce into one cluster. At any value of $\lambda$, SON clustering could not identify the two half moons with high accuracy.
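A minimal sketch of this weighting scheme follows. The function name, the symmetrization convention (keeping $w_{ij}$ if either point is among the other's nearest neighbors), and the tie-breaking of neighbors are our own assumptions; implementations such as Chi and Lange's may differ in detail.

```python
import numpy as np

def knn_gaussian_weights(a, k, phi):
    """Exponentially decaying weights w_ij = exp(-phi * ||a_i - a_j||^2),
    zeroed out unless j is among i's k nearest neighbors (symmetrized)."""
    n = len(a)
    d2 = ((a[:, None, :] - a[None, :, :]) ** 2).sum(-1)   # squared distances
    w = np.exp(-phi * d2)
    # nearest-neighbor mask: the k closest points per row, excluding self
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        order = np.argsort(d2[i])
        nbrs = [j for j in order if j != i][:k]
        mask[i, nbrs] = True
    mask = mask | mask.T          # symmetrize the neighbor relation
    np.fill_diagonal(mask, False)
    return np.where(mask, w, 0.0)
```

With $\phi=0$ every surviving weight equals 1, reproducing the unweighted nearest-neighbor graph; increasing $\phi$ down-weights long edges within the graph.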
When $k=10$ and $\phi=0$, or $k=50$ and $\phi = 0.5$, SON clustering correctly identifies clusters for the easier points but fails to cluster points located at the lower tip of the right moon and the upper tip of the left moon.

Discussion
==========

We proposed a test to determine all clusters from an approximate solution yielded by any primal-dual-type method. If the test reports ‘success’, then the clusters are correctly identified. Moreover, if a primal-dual path-following method that maintains close proximity to the central path is used, the test is guaranteed to report ‘success’ after a finite number of iterations at non-fusion values of $\lambda$, where strict complementarity holds. A few natural questions concerning strict complementarity and the test itself are: (1) Is there a rigorous test that works when strict complementarity fails? (2) What is the complexity of our clustering test, given that it depends on the choice of $\lambda$ values? (3) Is the test guaranteed to work for a general primal-dual algorithm? (4) Can one identify clusters correctly from a primal-only algorithm?

[^1]: Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1, [[email protected]]{}.

[^2]: Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1, [[email protected]]{}. Research supported in part by a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada.
---
author:
- 'Subhrajit Modak[^1]'
- Priyam Das
- Challenger Mishra
- 'Prasanta K. Panigrahi'
title: Chemical Oscillation in Ultracold Chemistry
---

Introduction {#introduction .unnumbered}
============

After the experimental realization of atomic Bose–Einstein condensates (BEC)[@eo; @eo1] at nanokelvin temperatures, a major research effort over the past few years has been to extend the techniques of atom cooling and trapping[@ac; @bc] to molecular systems for realizing molecular Bose-Einstein condensates. The complex spectral structure of molecules has made it difficult to cool them to the ultracold regime by the direct laser-cooling techniques that have worked so successfully for atoms. Although significant progress has been made in capturing molecules in different opto-magnetic trapping configurations[@mot1; @mot2], such techniques have not been successful in preparing dense samples of molecules in specific quantum states. An alternative pathway has now been followed for realizing molecular condensates through the conversion of pre-cooled atomic BECs[@1; @2; @3; @4]. This approach successfully exploits the existence of scattering resonances[@fr; @fr1] for connecting ultracold atoms to transient resonant states. For example, a two-photon stimulated Raman transition in a ${}^{87}\text{Rb}$ BEC has been used to produce ${}^{87}\text{Rb}_{2}$ molecules in a single rotational-vibrational state[@tw3]. Ultracold molecules have also been formed through photoassociation (PA)[@tw4; @tw5]. An immediate advantage of atom-molecule co-trapping is that it offers interaction times orders of magnitude longer than molecular crossed-beam methods. This can facilitate the study of cold chemistry for especially slow processes. The prospect of creating superpositions of atomic and molecular condensates has initiated much theoretical work[@tw1; @tw2], although the coherence properties of these systems have not been probed in great detail.
Experimentally, PA can be used effectively to produce coherent coupling between atomic and molecular BECs for investigating the atom-molecule interaction within the lifetime of the trapped molecules.\ Chemical reactions at ultracold temperatures can be surprisingly efficient, aided by nonlinear scattering effects. At such temperatures, the disordered movement of high-momentum particles is absent. Hence, the conventional description[@7] of collision-dependent reactions, based on Maxwell-Boltzmann statistics, gets replaced with the framework of quantum statistics, where the reactants are characterized by their de Broglie wavelengths. The first phenomenological model in this direction was proposed by Heinzen et al.[@5], where a mean-field ansatz was used to describe the coherent formation of diatomic molecules in a BEC. In this approach, the reactants are represented by their corresponding fields, while the density operator replaces the reactant concentration. The mean-field description for the coupled atom-molecule BEC[@oles; @tt] (AMBEC) has been developed, which includes pair correlations, quantum fluctuations and thermal effects. The dynamics of AMBEC is described by a modified coupled non-linear Schrödinger equation (NLSE). The difference from a pure two-species BEC[@aa; @bb], described by a coupled NLSE, arises due to inter-conversion, which induces a quadratic non-linearity in addition to the cubic non-linearity arising from s-wave scattering. The simultaneous appearance of cubic and quadratic non-linearities in AMBEC provides a novel cross-phase modulation[@ca], affecting the conversion process in the presence of PA. Fig. \[po\] schematically depicts the reaction pathways between atoms and molecules at ultracold temperature, which can circumvent the conventional chemical barriers. In the mean-field approach, suitable ground-state solutions can asymptotically connect two different configurations, without reaching the top of the potential barrier.
The complex nature of the mean-field wave function enables this phenomenon, which is a manifestation of coherence, a prime example being the Lieb mode in BEC[@lie].\
\
The present work is devoted to the study of oscillatory chemical reactions[@bm1; @bm] in the atom-molecular system. The role of the various non-linear interactions and that of PA will be probed in detail. Oscillatory kinetics refers to the spontaneous progression of a reaction in both forward and backward directions, appearing to violate the second law of thermodynamics. These reactions occur far from thermodynamic equilibrium[@co] and do not last forever, dying away slowly as the mixture settles into an unchanging state. In the present case, nonlinear oscillations are found to set in for restricted values of the elliptic modulus, giving rise to both in- and out-of-phase modulations in the atom-molecule population density. PA is found to play the key role in the oscillatory reaction, controlling the speed of the reaction front as well as its amplitude. We find the exact parametric conditions separating different non-linear excitations. Exact localized solutions are found to be in-phase and gapped[@tup], differing significantly from the two-BEC[@pup] case, with one class of soliton necessarily accompanied by a background. Interestingly, nonlinear excitations in the form of *cnoidal* waves, similar to the ones in atomic BEC, for both the repulsive and attractive domains, are found as exact solutions, wherein the heavy molecular component has a plane-wave character. This is similar to the case of an optical fiber with core and cladding components, where the core allows solitonic excitations while the cladding supports plane-wave modes[@tsr].
![Schematic representation of reaction pathways in ultracold reactions, avoiding the effective barrier in conventional reactions.[]{data-label="po"}](fig.pdf)

Model {#model .unnumbered}
=====

Two atoms can be combined to form a molecule through the absorption of a photon from an applied optical field during an atomic collision. In recent years, Feshbach resonances[@fbr1; @fbr2] have come into prominence in the study of ultracold atomic gases, wherein the positions of resonances can be adjusted using applied magnetic fields. It is possible to control the interactions between atoms and molecules appropriately by tuning resonances to near-zero collision energy. This interaction close to the absolute zero of temperature has been christened *ultracold chemistry*[@uc1; @uc2; @uc3; @uc4]. Since the energy produced in this exoergic process is very low, the reaction products remain in the trap. We consider only the elementary reactions that proceed without forming any identifiable intermediate species. The most elementary second-order reaction, *diatomic molecule formation*, is represented by: $A + A \overset{k}{\rightleftarrows} A_2$. \[eq:molecule\] In this case, possible product formation leads to different possibilities for the quantum statistics: $bb\rightarrow b$, $bf\rightarrow f$ and $ff\rightarrow b$, where $b$ stands for the bosonic and $f$ for the fermionic counterpart. Interestingly, these conversions correspond to well-known field-theoretical models, the Lee-Van Hove model of meson theory and the Friedberg-Lee model of high-$T_{C}$ superconductivity[@po]. We consider a chemical system of the first type, where bosonic enhancement of the chemical dynamics is the strongest.

  Order   Reaction                                        $\hat{H}_{\text{int}}$
  ------- ----------------------------------------------- ------------------------------------------------------------------
  0\.     $\text{Bath} \overset{k}{\rightleftarrows} A$   $k(\hat{a}_A^\dag+ \text{h.c.})$
  1\.     $A \overset{k}{\rightleftarrows} B$             $k(\hat{a}^\dag_{A} \hat{a}_{B} + \text{h.c.})$
  2\.     $A + A \overset{k}{\rightleftarrows} A_{2}$     $k(\hat{a}_{A}^\dag\hat{a}_{A}^\dag\hat{a}_{A_2} + \text{h.c.})$

  : The proposed interaction Hamiltonians[@7] for low-order bosonic reactions.[]{data-label="tab:interaction"}

Table \[tab:interaction\] illustrates different orders of chemical reaction in a general scenario. The zeroth-order reaction physically represents an exchange of species with a reservoir, while the first-order reaction models a linear interaction between two quantum fields. On the other hand, atom-molecule inter-conversion requires a Hamiltonian involving a second-order reaction: $$\hat{H} =E_{A}\hat{n}_A +E_{A_2}\hat{n}_{A_2} + k(\hat{a}_A^\dag\hat{a}_A^\dag\hat{a}_{A_2} + h.c.), \label{eq:quantum_hamiltonian}$$ which can be generalized to include the effect of multiple concurrent reactions, particle loss and dissipation. For the rest of the paper, we neglect these effects and assume the reaction rate to be much larger than the ground-state energies, i.e., $k \gg |E_A| + |E_{A_2}|$, where $E_A$ and $E_{A_2}$ label the corresponding ground-state energies. Notice that, within this framework, we are restricted to the description of reversible reactions with a single reaction rate. Moreover, this model can only probe the outcome of a chemical reaction, providing no direct information regarding the actual process of bond breaking and bond making. The Hamiltonian for such a system can be written in terms of field operators for the atoms and for the molecular resonant state. At resonance, the number of molecules becomes considerable and a molecular BEC is formed.
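A useful consistency check on the second-order Hamiltonian above is that it conserves the total atom number $\hat{N}=\hat{n}_A+2\hat{n}_{A_2}$: each term changes $n_A$ by $\mp 2$ and $n_{A_2}$ by $\pm 1$. This can be verified numerically in a truncated Fock space; the sketch below (truncation sizes and the value of $k$ are illustrative, with $E_A=E_{A_2}=0$ for simplicity) is our own check, not code from the paper.

```python
import numpy as np

def lower(dim):
    """Bosonic annihilation operator truncated to `dim` Fock states."""
    return np.diag(np.sqrt(np.arange(1, dim)), 1)

dA, dM = 6, 4                            # illustrative truncation sizes
a = np.kron(lower(dA), np.eye(dM))       # atom annihilation operator
b = np.kron(np.eye(dA), lower(dM))       # molecule annihilation operator
k = 1.0
# H = k (a† a† b + h.c.); matrices are real, so transpose = dagger
H = k * (a.T @ a.T @ b + b.T @ a @ a)
N = a.T @ a + 2 * b.T @ b                # total atom number n_A + 2 n_{A2}
comm = H @ N - N @ H                     # vanishes: N is conserved
```

Every nonzero matrix element of $\hat{a}_A^\dag\hat{a}_A^\dag\hat{a}_{A_2}$ connects Fock states with equal $N$, so the commutator is exactly zero even in the truncated space.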
We take into account two-body atom-atom, molecule-molecule and atom-molecule collisions, as well as the term responsible for the transfer of pairs of atoms into molecules and vice versa, in units where $m_a=1$: $$\begin{aligned} \hat{H} &=& \large{\int} {\rm d}^3 r \Big( \hat{\psi}^{\dagger}_a \Big[ -\frac{\hbar^2}{2}\nabla^2 + V_a(\vec r) + \frac{g_a}{2} \hat{\psi}^{\dagger}_a \hat{\psi}_a \Big] \hat{\psi}_a \nonumber \\ &+& \hat{\psi}^{\dagger}_m \Big[ -\frac{\hbar^2}{4}\nabla^2 + V_m(\vec r) + \epsilon + \frac{g_m}{2} \hat{\psi}^{\dagger}_m \hat{\psi}_m \Big] \hat{\psi}_m \nonumber \\ &+& g_{am} \hat{\psi}^{\dagger}_a \hat{\psi}_a \hat{\psi}^{\dagger}_m \hat{\psi}_m + \frac{\alpha}{\sqrt{2}} \big[ \hat{\psi}^{\dagger}_m \hat{\psi}_a \hat{\psi}_a + \hat{\psi}_m \hat{\psi}^{\dagger}_a \hat{\psi}^{\dagger}_a \big] \Big) \label{HFR} \end{aligned}$$ Here $g_{a}$, $g_{m}$ and $g_{am}$ measure the strength of the atom-atom, molecule-molecule and atom-molecule interactions respectively. $V_a$ and $V_m$ stand for the atomic and molecular trapping potentials, with $\alpha$ being the strength of PA. The parameter $\epsilon$ is the energy mismatch in converting the atoms to molecules. In the following we consider a cigar-shaped geometry. The modified parameters in the case of a quasi-one-dimensional geometry[@sala; @sala1] are kept unchanged for notational convenience. The equations of motion for the atomic and molecular mean fields are given by $$\begin{aligned} i\frac{\partial\psi_{a}}{\partial t} &=& -\frac{1}{2}\frac{\partial^{2} \psi_{a}}{\partial x^{2}} + (V_a+g_{a}|\psi_{a}|^{2} + g_{am} |\psi_{m}|^{2}) \psi_{a} + \alpha \sqrt{2}\psi_{m}\psi_{a}^{*}, \label{ac} \nonumber\\ \\ i\frac{\partial\psi_{m}}{\partial t} &=& -\frac{1}{4}\frac{\partial^{2} \psi_{m}}{\partial x^{2}}+(V_m+\epsilon +g_{m}|\psi_{m}|^{2}+g_{am}|\psi_{a}|^{2})\psi_{m} + \frac{\alpha}{\sqrt{2}}\psi_{a}^{2} \label{mc}.
\nonumber\\\end{aligned}$$ Evidently, for nonzero $\alpha$, the overall particle number is conserved, $$N=\int (\vert\psi_{a}\vert^2+2\vert\psi_{m}\vert^2)\, dx=N_{a}+2N_{m}.$$ Here the $\psi_{j}$'s are taken as $\psi_{j}(x,t)=\sqrt{n_{j}(x,t)}e^{i\phi_{j}(x,t)}$ for $j=a,m$, for which the continuity equation can be written as $$\frac{\partial}{\partial {t}}(n_{a}+2n_{m})+\frac{\partial}{\partial {x}}\Big(\sum_{j=a,m}n_{j}\frac{\partial\phi_{j}}{\partial x}\Big)=0.$$ This condition is invariant under scale transformation and Galilean boost[@boo]. Under scaling, the density and phase change as $n(x,t)\rightarrow{\beta n(\beta x,\beta^2 t)}$, $\phi(x,t)\rightarrow{\phi(\beta x, \beta^2 t)}$, while for a boost by an amount $v$, the changes are $n(x,t)\rightarrow n(x-vt,t)$ and $\phi(x, t)\rightarrow\phi(x-vt, t)+v[x-vt/2]$. In the following, we first consider the static configurations, which yield the asymptotic equilibrium states for given initial configurations of atoms and molecules, and highlight some compatible excitation pairs that induce the desired reactions. One needs to consider the trapping potential for possible comparison with the experimental scenario.

Trapping Configuration {#trapping-configuration .unnumbered}
======================

Confining traps are usually approximated by harmonic potentials. The trap frequency, in general, can be time-dependent. Depending on the sign of the trap frequency $\omega(t)$, the oscillator potential can be either confining or expulsive. Interestingly, in the mean-field approach, the wave-packet dynamics in the presence of a time-dependent trap can be related to the dynamics without a trap through the similarity transformation[@gsa]: $$\psi_j(f,g)=M_j(t)\psi_j\Big[f(x,t)\Big]e^{i\phi(x,t)}$$ Here $M(t)$ represents the amplitude of the pulse and $f(x,t)$ is the similarity variable $$f(x,t)=\frac{x-x_c}{w(t)}$$ where $w(t)$ and $x_c$ are the dimensionless width and center of the self-similar wave.
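As an aside, the conservation law $N=N_a+2N_m$ stated above can be verified numerically in the spatially uniform limit of Eqs. (\[ac\])-(\[mc\]), where the kinetic and trap terms drop out and the mean fields obey coupled ODEs. The sketch below uses illustrative parameter values of our own choosing, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative couplings (assumed, not from the paper)
ga, gm, gam, alpha, eps = 0.5, 0.5, 0.25, 1.0, 0.1

def rhs(t, y):
    """Uniform limit of the coupled mean-field equations:
    i da/dt = (ga|a|^2 + gam|m|^2) a + sqrt(2) alpha m a*,
    i dm/dt = (eps + gm|m|^2 + gam|a|^2) m + (alpha/sqrt(2)) a^2."""
    pa, pm = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dpa = -1j * ((ga*abs(pa)**2 + gam*abs(pm)**2)*pa
                 + np.sqrt(2)*alpha*pm*np.conj(pa))
    dpm = -1j * ((eps + gm*abs(pm)**2 + gam*abs(pa)**2)*pm
                 + (alpha/np.sqrt(2))*pa**2)
    return [dpa.real, dpa.imag, dpm.real, dpm.imag]

y0 = [1.0, 0.0, 0.0, 0.0]                 # all atoms, no molecules initially
sol = solve_ivp(rhs, (0, 20), y0, rtol=1e-10, atol=1e-12)
N = sol.y[0]**2 + sol.y[1]**2 + 2*(sol.y[2]**2 + sol.y[3]**2)
```

Along the integration, $|\psi_a|^2+2|\psi_m|^2$ stays constant to integrator tolerance while the atomic and molecular populations oscillate into each other, the back-and-forth conversion that the rest of the paper studies.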
The quadratically chirped phase is given by $$\phi(x,t)=c(t)\frac{x^2}{2}+b(t)x+a(t)$$ where the parameters $c(t)$, $b(t)$ and $a(t)$ are to be determined. They are related to the phase-front curvature, the frequency shift, and the phase offset, respectively. In the general case of time-dependent harmonic trapping, the condensate profile gets appropriately modulated in time, subject to the conditions: $$\begin{aligned} \dot{c}-c^2(t)=\omega(t), \\ \ddot{x_c}+\omega(t)x_c=0.\end{aligned}$$ The first one involves the motion related to chirping, which can be expressed as a Schrödinger eigenvalue problem via the Cole-Hopf transformation[@cole]. Taking advantage of this connection, it can be shown that, corresponding to each solvable quantum-mechanical system, one can identify a soliton configuration. The fact that the Schrödinger equation can be exactly solved for a variety of potentials gives us the freedom to control the dynamics of the BEC in a number of analytically tractable trap configurations. This type of chirped phase regularly arises in nonlinear fiber optics as an acceleration-induced inhomogeneity, which can balance the effect of the harmonic trap. On the other hand, the latter condition is a manifestation of the $\emph{Kohn}$ mode, depending solely on the frequency of the harmonic trap. The time-dependent trap and scattering length can be used to compress and accelerate the wave, leading to the possibility of their coherent control[@nath].

Ground state configuration {#sgs .unnumbered}
==========================

The ground state is governed by the values of the densities and phases that minimize the energy per unit volume.
The energy density for the case of constant density and phase, assuming the same chemical potential for both condensate components, is given by $$\begin{aligned} \mathcal{E} = \frac{1}{2} g_{a} n^{2}_{a} + \frac{1}{2} g_{m} n^{2}_{m} &+& g_{am} n_{a} n_{m}+\epsilon n_{m} - \mu (n_{a} + 2n_{m}) \nonumber \\ &+& \frac{\alpha}{\sqrt{2}} n_{a} \sqrt{n_{m}} \cos(\phi_{am}).\end{aligned}$$ Here, $\phi_{am}$ is the phase difference between the condensate components, $\phi_{am}=\phi_{m}-2\phi_{a}$, leading to a phase correlation in the presence of PA, different from the density-density correlations arising from the inter-species interaction. As mentioned earlier, this term can arise from the two-photon (Raman) process or a direct Rabi coupling between the components. The minimum energy configuration corresponds to $\phi_{am} = \pi$, with the equilibrium configuration characterized by $$(g_{a}-\frac{g_{m}}{4}) n+p ((g_{a}+\frac{g_{m}}{4})-g_{am})-\frac{\alpha}{2\sqrt{2}}\frac{n-3p}{\sqrt{n-p}}-\epsilon=0,$$ where $p=n_{a}-2n_{m}$ is the density difference. For convenience, this state equation can be written as a cubic polynomial in $p/n$, $$\Big(\frac{p}{n}\Big)^3-\Big[1-2\Big(\frac{F}{G}\Big)-\frac{9\alpha^2}{8nG^2}\Big]\Big(\frac{p}{n}\Big)^2-\Big[2\Big(\frac{F}{G}\Big)-\frac{6\alpha^2}{8nG^2}-\Big(\frac{F}{G}\Big)^2\Big]\Big(\frac{p}{n}\Big)-\Big[\Big(\frac{F}{G}\Big)^2-\frac{\alpha^2}{8nG^2}\Big]=0,$$ where $F=(g_a-\frac{g_m}{4})$ and $G=((g_a+\frac{g_m}{4})-g_{am})$. For this configuration, we set the mismatch term to zero. Cubic equations of state are widely used in thermodynamical systems; they arise from adding to the ideal-gas equation a co-volume parameter $b$ and an attractive pressure term inversely proportional to $V^2$ (molar volume). Equations of this form are capable of representing different phases at temperatures below and above the critical point.
In such cases, one has either one or three real roots, depending on whether the phases coexist or are separated. Since we are dealing with physical quantities, only real roots are of interest. More specifically, we look for real, positive roots of $p/n$. We set up our case such that $p/n$ can vary only between $-1$ and $+1$. At these endpoints, the density is occupied either by atoms ($p/n=1$) or by molecules ($p/n=-1$). For any value of $p/n$ between $-1$ and $+1$, the density is shared by the mixture of atoms and molecules. The structure of the ground state is better understood if the interactions are repulsive. For example, we take $g_a=g_m=1/2$ and $g_{am}=1/4$. Fig. (\[fig2\]) shows numerically computed roots of Eq. \[14\] for different inter-conversion strengths. The solid-black curve (GS-1) corresponds to the case $\alpha=0$, implying the condensate has only an atomic part. On the other hand, the case (GS-2), shown by the dashed-black curve, corresponds to a finite inter-conversion to start with. As $\alpha$ starts to increase, the ratio of the density difference to the total density decreases. In other words, the atomic density rises in comparison to the molecular proportion. In this process, when $\alpha$ reaches a certain critical strength, $\alpha_{c}=2\sqrt{2}[(g_{a}-g_{m}/4)\sqrt{n}]$, the ratio of the density difference to the total density becomes one half. A further increase in $\alpha$ takes the system into a state where both atoms and molecules co-exist, accompanied by a constant density difference. As will be evident later, the condensate configuration will no longer be stable up to this strength of inter-conversion. Here, the critical strength does not mark a phase transition; it merely quantifies a particular ratio $p/n$.
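The roots discussed here can be reproduced in a few lines. The sketch below assembles the cubic state equation above for the couplings quoted in the text and $\alpha=0$; it is our own illustrative check, recovering the purely atomic root $p/n=1$ of the GS-1 branch.

```python
import numpy as np

# couplings used in the text: g_a = g_m = 1/2, g_am = 1/4; here alpha = 0
ga, gm, gam, n, alpha = 0.5, 0.5, 0.25, 1.0, 0.0
F = ga - gm / 4
G = (ga + gm / 4) - gam
r, A = F / G, alpha**2 / (8 * n * G**2)

# cubic in x = p/n:
#   x^3 - [1 - 2r - 9A] x^2 - [2r - 6A - r^2] x - [r^2 - A] = 0
coeffs = [1.0, -(1 - 2*r - 9*A), -(2*r - 6*A - r**2), -(r**2 - A)]
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
```

For these values the cubic factors as $(x-1)(x+1)^2$, so the physical root is $p/n=1$: with no photoassociation, the condensate is purely atomic. Turning $\alpha$ on and re-running traces out the GS-2 branch of Fig. (\[fig2\]).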
The solid-red and dashed-red curves correspond to the presence of a finite (positive) mismatch term, for which the critical strength is reduced by a factor of $2\sqrt{2}\epsilon/\sqrt{n}$. ![Two different ground states (GS-1 and GS-2) are shown for the initial conditions corresponding to the presence and absence of photoassociation for two different energy mismatch values.[]{data-label="fig2"}](gs-state-1.pdf)

Stability {#stability .unnumbered}
=========

We now examine the effect of a small variation of the condensate density due to the presence of phonons, which can potentially destabilize the condensate. In the case of a single-species condensate, stability depends on the sign of the interaction. For repulsive interaction, the condensate is found to be stable, whereas for the attractive case it becomes unstable beyond a critical value of the interaction. In the case of a two-species system, it has been observed that instability may occur even if the intra- and inter-species interactions are repulsive[@stbl1]. To examine stability, we insert a weak perturbation of the form $\delta\tilde\phi_{j}\sim\delta\phi_j e^{i(qx-\omega t)}$ of frequency $\omega$ and wave vector $q$ around the steady-state solution[@stbl], $$\begin{aligned} \phi_{a}(x,t) &=& (\phi_{a0}+ \delta\tilde\phi_{a}(x,t))e^{-i\mu t}, \\ \quad \phi_{m}(x,t) &=& (\phi_{m0} + \delta\tilde\phi_{m}(x,t))e^{-2i\mu t},\end{aligned}$$ where $\phi_{j0}$ are the real steady-state amplitudes and $\delta\tilde\phi_j$ satisfies $\delta\tilde\phi_{j}\ll\phi_{j0}$. Linearizing in $\delta\phi_j$, one obtains the dispersion relation.
The corresponding gain/loss spectrum can be put into a matrix form $\mathcal{M}$, with the diagonal elements $\mathcal{M}_{11} = \frac{q^2}{2} + 2 g_{a} \vert\phi_{a0}\vert^2 + g_{am}\vert\phi_{m0}\vert^2 - (\mu+\omega)$ and $\mathcal{M}_{22} = \frac{q^2}{4} + 2 g_{m}\vert\phi_{m0}\vert^2 + g_{am}\vert\phi_{a0}\vert^2 + (\epsilon-2\mu-\omega)$, and the off-diagonal elements $\mathcal{M}_{12} = g_{am} \psi^{*}_{m0} \psi_{a0} + \sqrt{2} \alpha \psi^{*}_{a0}$, $\mathcal{M}_{13} = g_{a}\phi_{a0}^2 + \alpha\sqrt{2}\phi_{m0}$, $\mathcal{M}_{34}=g_{am}\phi_{m0}\phi_{a0}^{*}+\alpha\sqrt{2}\phi_{a0}$ and $\mathcal{M}_{24}=g_{m}\phi_{m0}^2$. The atom-molecular system is stable if the imaginary parts of the eigenvalues of the matrix $\mathcal{M}$ are positive, and it becomes unstable if the lowest eigenvalue of $\mathcal{M}$ is negative. Fig. (\[fig3\]) delineates the stability/instability regions. This is physically justified, since fluctuations in the form of plane waves must not decrease the energy of the system for a stable condensate. Semi-positive eigenvalues of $\mathcal{M}$ lead to real normal-mode frequencies (collective excitations), which in turn ensures the stability of the system. We investigate the behavior of the eigenvalues of the matrix $\mathcal{M}$ as a function of the quasi-momentum $q$ for different values of the atom-molecular inter-conversion term. It can be seen from Fig. (\[fig3\]a) that for small values of $\alpha$ the system stays in a stable domain. The red (dotted) and black (dash-dotted) curves correspond to $\alpha = 0.1$ and $1$, where the eigenvalues lie in the positive region, indicating that the system is stable. Further increasing the value of $\alpha$, e.g., to $\alpha = 2$ and $5$, respectively shown as the blue (dashed) and black (solid) curves, makes the system unstable, as the eigenvalues lie in the negative regime.
Fig. (\[fig3\]b) depicts the eigenvalues as a function of the inter-species interaction ($g_{am}$) for the same values of $\alpha$, showing the minimal effect of $g_{am}$ on stability. As will be evident later, the condensate wave-packets and localized solutions are primarily controlled by PA. ![Depiction of the spatial modulational instability. (3a) shows the domain of negative eigenvalues, representing the region of instability; (3b) shows the weak dependence of the eigenvalues on the atom-molecule scattering effect.[]{data-label="fig3"}](st.pdf "fig:") ![Depiction of the spatial modulational instability. (3a) shows the domain of negative eigenvalues, representing the region of instability; (3b) shows the weak dependence of the eigenvalues on the atom-molecule scattering effect.[]{data-label="fig3"}](sb.pdf "fig:")

Oscillatory Excitations {#oscillatory-excitations .unnumbered}
=======================

We now investigate the dynamics of AMBEC, concentrating on the possibility of chemical oscillations. Oscillatory excitations manifest in several chemical reactions, the most well-known being the Belousov-Zhabotinsky reaction[@bz; @bz1], where the products exhibit periodic changes either in space or in time and give rise to remarkable spatio-temporal patterns[@bz3]. In the present case, both linear and quadratic $\emph{cnoidal}$ waves are found as exact solutions. The linear case is analogous to the one in atomic BEC, whereas the quadratic one is novel to the atom-molecular system. Its presence crucially depends on photoassociation. We start with the quadratic excitation and, without loss of generality, consider the following pair of $\emph{cnoidal}$ waves for both condensate components: $$\begin{aligned} \nonumber \phi_{a}(x, t) &=& \left(A + B~ \textrm{cn}^{2}(\xi, m) \right) e^{i k x}, \\ \phi_{m}(x, t) &=& \left(C + D~ \textrm{cn}^{2}(\xi, m) \right) e^{2 i k x},\end{aligned}$$ where $\xi = \beta (x - u t)$, with $u$ being the velocity.
The cnoidal wave excitations exist only in the presence of a fast-varying plane-wave component $e^{i k x}$. A lengthy calculation leads to the amplitudes of the periodic pair, $$\begin{aligned} % \nonumber % Remove numbering (before each equation) B &=& \pm\frac{D}{2}; \qquad D = - \frac{3 \sqrt{2} \hbar^{2} \beta^{2} m}{\alpha} \\ A &=& \frac{C}{2} - \frac{ \alpha}{\sqrt{2} g_{a}}; \qquad C = \frac{3 \alpha}{2 \sqrt{2} g_{a}} - \frac{m (m - 1)}{2 (2 m - 1)} \pm \frac{1}{2} \sqrt{\frac{ \alpha^{2}}{2 g_{a}^{2}} + \left(\frac{m (m - 1)}{2 m - 1}\right)^{2}}\end{aligned}$$ along with $\beta^{2}=\frac{2m_{a}}{3 \hbar^{2}(1-2m)} \left( \frac{-3 \alpha}{2 \sqrt{2}}C +\frac{ 9 \alpha^{2}}{8 g_{a}}\right)$ and the wave vector $k^{2} =-\frac{2\alpha^2}{g_{a}}$. It is seen that the nature of the atom-atom interaction decides the sign of the energy mismatch, $\epsilon=\frac{15\alpha^2}{8g_{a}}$, with $g_{a}=16g_{m}$ and $g^{2}_{am} = g_{a} g_{m}$. We consider $\epsilon<0$[@oles] and obtain explicit solutions for general values of the couplings. Accessible density parameters are ensured for $g_{a},g_{m}<0$. PA plays a crucial role in determining the front velocity[@cs] and can be used to control how quickly the reactants are used up. This is in sharp contrast to the prediction of usual chemical kinetics, where rates do not depend on the number of participating particles and tend to zero at low temperature. The parameter controlling PA leads to two different physical situations in the cases $\alpha=0$ and $\alpha\rightarrow 0$. In the absence of $\alpha$, the velocity of the excitation remains a free parameter, whereas for $\alpha\rightarrow 0$ the velocity tends to zero. Reaction rates at these temperatures can be made comparable to, or even larger than, their room-temperature values. It is evident that $\emph{cnoidal}$ oscillations for both components exist only if $\frac{1}{2}< m \leq 1$.
This pair of excitations leads to a unique elevation-depression or depression-depression density profile. On the other hand, no trigonometric counterpart can be found, since in the limit $m = 0$ the amplitude vanishes, leaving only a constant background. This is different from the case of two-component BEC, where a sinusoidal excitation[@pup] appears as an exact solution. Densities of the atomic and molecular BECs are shown in Fig.(\[fig4\]). Both densities are characterized by two-frequency modulations. Cnoidal chemical waves in a background {#cnoidal-chemical-waves-in-a-background .unnumbered} -------------------------------------- Unlike the quadratic oscillatory excitations for both the components, the AMBEC system also exhibits periodic atomic density waves in a constant molecular background, $$\begin{aligned} \nonumber % \nonumber % Remove numbering (before each equation) \phi_{a}(x,t) &=& \phi_{a0}~\textrm{cn}(\xi,m)e^{ikx},\\ \phi_{m}(x,t) &=& \phi_{m0}~e^{2ikx}\nonumber \\ \label{an}\end{aligned}$$ with $\xi=\beta(x-vt)$. Amplitudes of the atomic and molecular densities are found to be of the form $\phi_{m0}=\frac{\alpha}{\sqrt{2} g_{am}}$, $\phi_{a0}^2=-\frac{m\beta^2}{2 g_{am}}$ with $\beta^4=\frac{8\alpha^2(\alpha^2+\epsilon g_{am})}{(9-17m)g_{am}}$, $k^{2} =v^2=-(\epsilon+\frac{g_{m}\alpha^2}{2g_{am}^2})$ and $g_{a}=2g_{am}$. Physical solutions are assured for $g_{a}, g_{m}, g_{am}<0$, implying that both intra- and inter-species interactions must be attractive. It is interesting to note that this class of solutions exhibits two disjoint domains: for $\frac{1}{2}<m<1$, the energy mismatch satisfies $\epsilon < \frac{\alpha^2}{\vert g_{am}\vert}$ with $\vert g_{m}\vert>\frac{2}{\kappa}\vert g_{am}\vert$, while for the remaining half, $0<m\leq\frac{1}{2}$, $\epsilon > \frac{\alpha^2}{\vert g_{am}\vert}$ and $\vert g_{m}\vert>2\kappa\vert g_{am}\vert$, where $\kappa$ is a positive number.
If one considers the atomic condensate in terms of the elliptic $\textrm{sn}$ function, keeping the molecular density constant, one obtains $\beta^2=\frac{(\frac{\alpha^2}{g_{am}}-v^2)}{(m+1)}$ and $\frac{\alpha^2}{g_{am}} > v^{2}$, with all other parameters remaining the same. It is evident that both intra- and inter-species interactions must then be repulsive, $g_{a}, g_{m}, g_{am}>0$, which corresponds to a completely opposite interaction landscape. ![image](atom1.pdf) ![image](molecule1.pdf) \[\] ![Elevation and depression in density for static configuration. (left) shows elevation-elevation pair for atomic and molecular components. (right) shows elevation-depression pair for both the components.[]{data-label="fig4"}](b1.pdf "fig:") ![Elevation and depression in density for static configuration. (left) shows elevation-elevation pair for atomic and molecular components. (right) shows elevation-depression pair for both the components.[]{data-label="fig4"}](b2.pdf "fig:") ![image](molecule2.pdf) ![image](atom2.pdf) \[\] ![Density profiles of localized solitons. (left bottom) shows density distribution for $\sigma=1$. (right bottom) shows W-type density distribution for $\sigma=3$ with a non-vanishing background.[]{data-label="fig5"}](sigma-1.pdf "fig:") ![Density profiles of localized solitons. (left bottom) shows density distribution for $\sigma=1$.
(right bottom) shows W-type density distribution for $\sigma=3$ with a non-vanishing background.[]{data-label="fig5"}](sigma-3.pdf "fig:") Homo-density Gapped Solitons {#homo-density-gapped-solitons .unnumbered} ---------------------------- In addition to the *cnoidal* waves, bright localized solitons for both the atomic and molecular components are found as exact solutions: $$\begin{aligned} \phi_{a}(x,t) &=& \sigma_{0}\left(1 - \sigma\tanh^{2}\left[\beta(x-ut)\right]\right) e^{i (k x-\Omega t)} \label{sac} \\ \textrm{and\,\,\,\,\,} \phi_{m}(x,t) &=& \sigma_{0}\left(1 - \sigma\tanh^{2}\left[\beta(x-ut)\right]\right) e^{2 i (k x-\Omega t)} \label{smc}\end{aligned}$$ The mean field equations yield two distinct configurations $$\begin{aligned} \sigma_{0} &=&-\frac{\epsilon}{3\sqrt{2}\alpha};\beta^2=\frac{\epsilon}{3}, \\ \textrm{or\,\,\,\,\,} \sigma_{0} &=& \frac{\epsilon}{\sqrt{2}\alpha}; ~\beta^2=-\frac{\epsilon}{3}\end{aligned}$$ with $\Omega-k^{2}/2 = 2\epsilon/3$. The consistency conditions allow only two discrete values of the parameter $\sigma$: $\sigma = 1, 3$. The above solutions exist only for $g_{a} = g_{m} = -g_{am}$, i.e., when the self-interactions (atom-atom and molecule-molecule) are repulsive and the cross-interaction (atom-molecule) is attractive, or vice versa. For $\epsilon>0$, $k^2=2\Omega-\frac{4}{3}\epsilon$, which is consistent with $\Omega>2\epsilon/3$, indicating that a finite wave number is needed to excite the solution in the positive-frequency case, while the negative half of the mismatch sets a lower limit on the frequency, $\Omega>-2\vert\epsilon\vert/3$. Density profiles of the localized solitons are depicted in Fig. \[fig5\], showing distinct behaviour for the cases with and without background. Interestingly, for $\sigma=3$, we find a W-type soliton profile with background. On the other hand, for $\sigma=1$, one finds an asymptotically vanishing solitary excitation.
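The qualitative difference between the two allowed values of $\sigma$ can be seen by simply evaluating the density $|\phi|^2$ of the $\tanh^2$ profile. A minimal numerical sketch (the values of $\sigma_0$ and $\beta$ are illustrative, not the ones fixed by $\epsilon$ and $\alpha$ above):

```python
import numpy as np

# Illustrative amplitude and inverse width; not values fixed by the text.
sigma0, beta = 1.0, 1.0
x = np.linspace(-10, 10, 2001)

def profile(sigma):
    """Common soliton envelope sigma0 * (1 - sigma * tanh^2(beta x))."""
    return sigma0 * (1.0 - sigma * np.tanh(beta * x) ** 2)

rho1 = profile(1) ** 2   # sigma = 1: density vanishes asymptotically
rho3 = profile(3) ** 2   # sigma = 3: W-shaped density on a finite background
```

For $\sigma=1$ the density decays to zero at large $|x|$, while for $\sigma=3$ it approaches the background value $4\sigma_0^2$ with a W-shaped dip through zero near the center, matching the two panels of Fig. \[fig5\].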
Conclusion {#conclusion .unnumbered} ========== To summarize, we have investigated the reaction kinetics associated with a distinct set of collective chemical waves with different density distributions. Photoassociation is found to dominantly regulate the rate of reactions and product formation. This can be used to selectively produce dense mixtures of atoms and molecules. The possibility of forming multidimensional spatio-temporal solitons in pure cubic media has been theoretically demonstrated previously; here we extend this prediction to matter-wave interactions in BEC systems, where the quadratic nonlinear contribution due to atom-molecule conversion is unavoidable. The results obtained yield precise conditions under which two distinct pairs of atomic-molecular BEC solitons can form, in terms of the parameters originating from the atom-molecule coupling, the atom-atom $s$-wave scattering, and the energy detuning between the atomic and molecular fields. The unique properties of the ultracold energy regime lead to an effective quantization of the scattering phase shift, enabling interference between reaction pathways that contribute to the total reaction rate. Using this mechanism of controlling interference, one can switch the reaction on or off by varying external fields. This new mechanism is a general property of ultracold reactions and will play a crucial role in their technological applications. [9]{} M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, *Science* 269, 198 (1995). K. B. Davis, M. O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, *Phys. Rev. Lett.* 75, 3969 (1995). C. E. Wieman, D. E. Pritchard, and D. J. Wineland, *Rev. Mod. Phys.* 71, 2 (1999). A. Griffin, D. W. Snoke, and S. Stringari, *Bose-Einstein Condensation*, Cambridge University Press (1995). J. F. Barry, D. J. McCarron, E. B. Norrgard, M. H. Steinecker, and D. DeMille, *Nature* 512, 286 (2014). L. Anderegg, B. L. Augenbraun, E. Chae, B.
Hemmerling, N. R. Hutzler, A. Ravi, A. Collopy, J. Ye, W. Ketterle, and J. M. Doyle, *Phys. Rev. Lett.* 119, 103201 (2017). K. K. Ni, S. Ospelkaus, M. H. G. de Miranda, A. Pe’er, B. Neyenhuis, J. J. Zirbel, S. Kotochigova, P. S. Julienne, D. S. Jin, and J. Ye, *Science* 322, 231 (2008). J. G. Danzl, E. Haller, M. Gustavsson, M. J. Mark, R. Hart, N. Bouloufa, O. Dulieu, H. Ritsch, and H.-C. Nägerl, *Science* 321, 1062 (2008). M. Guo, B. Zhu, B. Lu, X. Ye, F. Wang, R. Vexiau, N. Bouloufa-Maafa, G. Quéméner, O. Dulieu, and D. Wang, *Phys. Rev. Lett.* 116, 205303 (2016). P. K. Molony, P. D. Gregory, Z. Ji, B. Lu, M. P. Köppinger, C. R. Le Sueur, C. L. Blackley, J. M. Hutson, and S. L. Cornish, *Phys. Rev. Lett.* 113, 255301 (2014). R. S. Tasgal, G. Menabde, and Y. B. Band, *Phys. Rev. A* 74, 053613 (2006). V. A. Yurovsky, A. Ben-Reuven, P. S. Julienne, and C. J. Williams, *Phys. Rev. A* 60, R765 (1999). R. H. Wynar, R. S. Freeland, D. J. Han, C. Ryu, and D. J. Heinzen, *Science* 287, 1016 (2000). C. McKenzie et al., *Phys. Rev. Lett.* 88, 120403 (2001). J. Ulmanis, J. Deiglmayr, M. Repp, R. Wester, and M. Weidemüller, *Chem. Rev.* 112, 4890 (2012). J. R. Anglin and A. Vardi, *Phys. Rev. A* 64, 013605 (2001). B. J. Cusack, T. J. Alexander, E. A. Ostrovskaya, and Y. S. Kivshar, *Phys. Rev. A* 65, 013609 (2001). F. Richter, D. Becker, C. Beny, T. A. Schulze, S. Ospelkaus, and T. J. Osborne, *New J. Phys.* 17, 055005 (2015). D. J. Heinzen, R. Wynar, P. D. Drummond, and K. V. Kheruntsyan, *Phys. Rev. Lett.* 84, 5029 (2000). B. Oles and K. Sacha, *J. Phys. B: At. Mol. Opt. Phys.* 40, 1103 (2007). A. Vardi, V. A. Yurovsky, and J. R. Anglin, *Phys. Rev. A* 64, 063611 (2001). T. L. Ho and V. B. Shenoy, *Phys. Rev. Lett.* 77, 3276 (1996). H. Pu and N. P. Bigelow, *Phys. Rev. Lett.* 80, 1130 (1998). G. P. Agrawal, P. L. Baldeck, and R. R. Alfano, *Phys. Rev. A* 40, 5063 (1989). E. H. Lieb, *Phys. Rev.* 130, 1616 (1963). S. K.
Scott, *Oscillations, waves and chaos in chemical kinetics*, Oxford University Press (1994). B. M. Deb, M. Sadhukhan, S. S. Sinha, S. Sengupta, and R. Biswas, *Resonance* 13, 54 (2008). M. Sharma and P. Kumar, *Resonance* 11, 43 (2006). U. Roy, B. Shah, K. Abhinav, and P. K. Panigrahi, *J. Phys. B: At. Mol. Opt. Phys.* 44, 035302 (2011). P. Das, T. S. Raju, U. Roy, and P. K. Panigrahi, *Phys. Rev. A* 79, 015601 (2009). T. S. Raju, P. K. Panigrahi, and K. Porsezian, *Phys. Rev. E* 71, 026608 (2005). C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, *Rev. Mod. Phys.* 82, 1225 (2010). J. M. Hutson, *New J. Phys.* 9, 152 (2007). J. F. E. Croft, C. Makrides, M. Li, A. Petrov, B. K. Kendrick, N. Balakrishnan, and S. Kotochigova, *Nat. Comm.* 8, 15897 (2017). R. V. Krems, *Phys. Chem. Chem. Phys.* 10, 4079 (2008). R. V. Krems, W. C. Stwalley, and B. Friedrich, *Cold molecules: theory, experiment, applications*, CRC Press, Boca Raton, Florida (2009). N. Balakrishnan, *J. Chem. Phys.* 145, 150901 (2016). R. Friedberg and T. D. Lee, *Phys. Rev. B* 40, 6745 (1989). L. Salasnich, A. Parola, and L. Reatto, *Phys. Rev. A* 65, 043614 (2002). A. M. Kamchatnov and V. S. Shchesnovich, *Phys. Rev. A* 70, 023604 (2004). C. Pethick and H. Smith, *Bose-Einstein condensation in dilute gases*, Cambridge University Press (2002). J. Ulmanis, J. Deiglmayr, M. Repp, R. Wester, and M. Weidemüller, *Chem. Rev.* 112, 4890 (2012). S. A. Moses, J. P. Covey, M. T. Miecnikowski, D. S. Jin, and J. Ye, *Nature Phys.* 13, 13 (2017). U. Al Khawaja and H. Stoof, *New J. Phys.* 13, 085003 (2011). P. D. Drummond, K. V. Kheruntsyan, and H. He, *Phys. Rev. Lett.* 81, 3055 (1998). K. V. Kheruntsyan and P. D. Drummond, *Phys. Rev. A* 58, R2676 (1998). R. K. Bhaduri, S. Ghosh, M. V. N. Murthy, and D. Sen, *J. Phys. A: Math. Gen.* 34, 6553 (2001). F. Lederer, *Phys. Rev. A* 85, 013828 (2012). R. Atre, P. K. Panigrahi, and G. S. Agarwal, *Phys. Rev. E* 73, 056611 (2006). J. D. Cole, *Quart. Appl. Math.* 9, 225 (1951). A.
Nath and U. Roy, *J. Phys. A: Math. Theor.* 47, 415301 (2014). E. V. Goldstein and P. Meystre, *Phys. Rev. A* 55, 2935 (1997). C. K. Law, *Phys. Rev. Lett.* 79, 3105 (1997). R. M. Noyes, *J. Phys. Chem.* 94, 4404 (1990). K. Gizynski and J. Gorecki, *Phys. Chem. Chem. Phys.* 19, 6519 (2017). V. K. Vanag, A. M. Zhabotinsky, and I. R. Epstein, *J. Phys. Chem. A* 104, 11566 (2000). S. Modak, P. Das, and P. K. Panigrahi, *arXiv:1708.04286* (2017). C. Haimberger, J. Kleinert, M. Bhattacharya, and N. P. Bigelow, *Phys. Rev. A* 70, 021402 (2004). [^1]: corresponding author
--- address: | China Center of Advanced Science and Technology (World Laboratory),\ P.O.Box 8730, Beijing 100080, People’s Republic of China [^1]\ and\ Department of Physics, Zhongshan University, Guangzhou 510275,\ People’s Republic of China author: - 'Qiong-gui Lin[^2]' title: 'Geometric phases for neutral and charged particles in a time-dependent magnetic field[^3]' --- Introduction ============ The motion of spin (especially spin $1/2$) in a rotating magnetic field is a rather classical problem in quantum mechanics, which was discussed in the textbook [@1]. Nevertheless, the problem has received much attention in recent years [@2; @3; @4; @5; @6; @7]. The reason may be that the Schrödinger equation for the problem can be solved analytically, and thus it serves as a good example for manifesting the notions of adiabatic geometric phase, nonadiabatic geometric phase for cyclic and noncyclic motions [@8; @9; @10; @11; @12; @13]. Moreover, it is relevant to some problems in condensed matter physics [@7]. Cyclic solutions with special initial conditions were widely discussed in the above cited papers, for both spin $1/2$ and higher ones. It is well known that the nonadiabatic geometric phase for such solutions is always proportional to the solid angle subtended by the trace of the spin (more exactly, the mean value of the spin). Because the nonadiabatic geometric phase is a geometric object, and because the result holds for any cyclic solution of spin $1/2$ in an arbitrarily varying magnetic field [@13; @14], one may become confident that it is also true for higher spin. It is indeed true for special cyclic solutions in a rotating magnetic field as just mentioned. For more general cyclic solutions and more general magnetic fields, however, the result was neither proved nor refuted. In fact, the nonadiabatic geometric phase for solutions with more general initial conditions was calculated by some authors only for spin $1/2$ [@7; @14].
The corresponding result for higher spin has, however, not been studied to our knowledge. In this paper we will consider both neutral and charged particles. In the next section we consider neutral particles with general spin and with magnetic moment moving in a rotating magnetic field. The Schrödinger equation for the problem can be solved exactly by making use of a time-dependent unitary transformation. Solutions with special initial conditions are cyclic and have been studied in detail [@2]. When the parameters of the system are appropriately chosen, all solutions are cyclic. These solutions were not discussed in detail previously. We calculate the nonadiabatic geometric phase for such solutions. The solid angle subtended by the trace of the spin is also calculated explicitly. It turns out that the nonadiabatic geometric phase contains an extra term in addition to the ordinary one proportional to the solid angle. For spin $1/2$ the extra term vanishes automatically. This may be the reason why it was not found previously. For higher spin, however, it depends on the initial condition and does not vanish in general. At this stage one may wonder when this extra term does not appear for an arbitrarily varying magnetic field. This is investigated in Sec. III. We prove that a sufficient condition is that the initial state is an eigenstate of ${\bf s}\cdot {\bf e}_0$ where ${\bf s}$ is the spin operator and ${\bf e}_0$ is some unit vector. Though this conclusion is known in the literature [@layton; @gao], our proof seems more straightforward and simpler. In a recent work, we have studied a charged particle moving in a central potential plus a strong rotating magnetic field [@15]. It can describe the valence electron of an alkaline atom or that of the hydrogen atom under the influence of the external magnetic field. The Schrödinger equation may be reduced to a Schrödinger-like one with a time-independent effective Hamiltonian by using an explicit time-dependent unitary transformation.
Thus the evolution operator for the original Schrödinger equation was explicitly obtained, which involves no chronological product. Cyclic solutions are obtained if one takes the eigenstates of the effective Hamiltonian as initial states. These eigenstates and the nonadiabatic geometric phases of the corresponding cyclic solutions were all worked out explicitly. The nonadiabatic geometric phase turns out to be a linear combination of the two solid angles subtended by the traces of the orbit and spin angular momenta. We also studied the case without a central potential [@15] and generalized it to the relativistic case [@16]. Here we are interested in the more general cyclic solutions of the alkaline atomic electron in the strong rotating magnetic field. As pointed out in Ref. [@15], these are available if the parameters of the system are appropriately chosen. However, the nonadiabatic geometric phases for such solutions were not calculated there. These are now calculated in Sec. IV. The two solid angles subtended by the traces of the orbit and spin angular momenta are also calculated explicitly. It turns out that the nonadiabatic geometric phase in this case also contains extra terms in addition to the linear combination of the two solid angles. In Sec. V we consider the alkaline atomic electron moving in an arbitrarily varying strong magnetic field. We prove that the nonadiabatic geometric phase for cyclic solutions with special initial conditions is a linear combination of the two solid angles. In other words, no extra term appears. A brief summary is given in Sec. VI. A formula used in the text is proved in the appendix. Neutral particles in a rotating magnetic field ============================================== Before the calculations begin, let us remark on some differences between spin $1/2$ and higher spins.
First, for any state of spin $1/2$, say, an initial state $\Psi_0$, one can always find a unit vector ${\bf e}_0$ such that ${\bf s}\cdot{\bf e}_0\Psi_0=(1/2)\Psi_0$. In fact, ${\bf e}_0=2(\Psi_0,{\bf s}\Psi_0)$ is the unit vector to be found. From this fact and the result of Sec. III, the previous conclusion for spin $1/2$ that the nonadiabatic geometric phase is always proportional to the solid angle follows immediately. For higher spin, on the other hand, the situation is rather different. For a given state $\Psi_0$, in general one cannot find a unit vector ${\bf e}_0$ such that ${\bf s}\cdot{\bf e}_0\Psi_0=m_s\Psi_0$ ($m_s=s, s-1,\ldots, -s$). Let us give a simple example for spin $3/2$. We denote the eigenstate of $s_z$ as $\chi^0_{m_s}$, with eigenvalue $m_s$. Now consider the state $\Psi_0 = a \chi^0_{3/2} + b \chi^0_{-3/2}$, where $|a|^2 + |b|^2= 1$ for normalization. The mean value of the spin in this state is $(\Psi_0, {\bf s}\Psi_0)= (3 |a|^2 -3/2){\bf e}_z$ where ${\bf e}_z$ is the unit vector in the $z$ direction. By varying $a$, the absolute value of the above mean value may take any real number in the interval $[0, 3/2]$. Suppose that one could find a unit vector ${\bf e}_0$ such that ${\bf s}\cdot{\bf e}_0\Psi_0=m_s\Psi_0$ ($m_s = \pm3/2, \pm 1/2$), then the mean value of the spin would be $(\Psi_0, {\bf s}\Psi_0)= m_s{\bf e}_0$, and the absolute value is $|m_s|$, which is obviously in contradiction with the above one. Second, even if the mean value of ${\bf s}$ in $\Psi_0$ is specified, say, $(\Psi_0, {\bf s}\Psi_0)=m_s{\bf e}_z$, one cannot assert that $s_z\Psi_0=m_s\Psi_0$ (the converse is of course true) unless $m_s=\pm s$ (this is automatically true for spin $1/2$).
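The spin-$3/2$ counterexample above is easy to verify numerically. The sketch below (an illustrative check, not part of the original text) builds the spin matrices from the standard ladder-operator matrix elements and shows that $\Psi_0=a\chi^0_{3/2}+b\chi^0_{-3/2}$ with $|a|^2=0.7$ has $|(\Psi_0,{\bf s}\Psi_0)|=0.6$, which matches no admissible $|m_s|$:

```python
import numpy as np

def spin_matrices(s):
    """Return (sx, sy, sz) for spin s (hbar = 1), basis ordered m = s..-s."""
    m = np.arange(s, -s - 1, -1)
    sz = np.diag(m)
    # raising operator: <m+1| s_+ |m> = sqrt(s(s+1) - m(m+1))
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), k=1)
    sx = (sp + sp.T) / 2
    sy = (sp - sp.T) / (2j)
    return sx, sy, sz

sx, sy, sz = spin_matrices(1.5)
a, b = np.sqrt(0.7), np.sqrt(0.3)
psi = np.array([a, 0.0, 0.0, b])          # a*chi_{3/2} + b*chi_{-3/2}
v = np.real([psi.conj() @ S @ psi for S in (sx, sy, sz)])
# |v| = |3|a|^2 - 3/2| = 0.6, which is neither 3/2 nor 1/2
```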
For example, for spin $3/2$, we have infinitely many states $\chi=\chi^0_{1/2}$ and $\chi' =e^{i\delta_1}\sqrt{2/3}\chi^0_{3/2}+e^{i\delta_2}(1/\sqrt 3)\chi^0_{-3/2}$ that lead to the mean value $\langle{\bf s}\rangle={\bf e}_z/2$, where $\delta_1$ and $\delta_2$ are arbitrary real numbers. Consider a uniform magnetic field ${\bf B}(t)$ that has a constant magnitude $B$ and rotates around some fixed axis at a constant angle $\theta_B$ and with a constant frequency $\omega$. The rotating axis is chosen as the $z$ axis of the coordinate system, so the magnetic field is $$\label{1} {\bf B}(t)=B{\bf n}(t), \quad {\bf n}(t)=(\sin\theta_B\cos\omega t,\;\sin\theta_B\sin\omega t,\; \cos\theta_B)$$ where $B$ and $\omega$ are taken to be positive without loss of generality. Then consider a neutral particle with spin $s$ ($s=1/2,1,3/2,\ldots$) and magnetic moment ${\bbox\mu}=\mu{\bf s}/s$, where ${\bf s}$ is the spin operator in the unit of $\hbar$, satisfying $[s_i,s_j] =i\epsilon_{ijk}s_k$. In the above magnetic field, it has the time-dependent Hamiltonian $$\label{2} H(t)=-{\bbox\mu}\cdot{\bf B}(t)=-\epsilon(\mu)\hbar\omega_B{\bf s\cdot n}(t),$$ where $\omega_B=|\mu|B/s\hbar$ is positive and $\epsilon(\mu)$ is the sign function. The motion is governed by the Schrödinger equation $$\label{3} i\hbar\partial_t\Psi=H(t)\Psi.$$ To solve this equation, we make a unitary transformation [@2] $$\label{4} \Psi(t)=W(t)\psi(t), \quad W(t)=\exp(-i\omega t s_z),$$ then $\psi(t)$ satisfies a Schrödinger-like equation $$\label{5} i\hbar\partial_t\psi=H_{\text{eff}}\psi,$$ where the effective Hamiltonian reads $$\label{6} H_{\text{eff}}=H(0)-\hbar\omega s_z =-\epsilon(\mu)\hbar\omega_B{\bf s\cdot n}(0)-\hbar\omega s_z.$$ This effective Hamiltonian is time independent, so that the Schrödinger-like equation (\[5\]) is readily integrable. 
For later convenience, we define the new quantities \[7\] $$\label{7a} \omega_S=[\omega_B^2+\omega^2+2\epsilon(\mu)\omega_B\omega \cos\theta_B]^{1/2},$$ $$\label{7b} \sin\theta_S={\omega_B\sin\theta_B\over\omega_S},\quad \cos\theta_S={\omega_B\cos\theta_B+\epsilon(\mu)\omega\over\omega_S},$$ $$\label{7c} {\bf n}_S=(\sin\theta_S,\; 0,\; \cos\theta_S).$$ In terms of these new quantities, we have $$\label{8} H_{\text{eff}}=-\epsilon(\mu)\hbar\omega_S{\bf s}\cdot{\bf n_S}.$$ Therefore the Schrödinger-like equation (\[5\]) is solved as $$\label{9} \psi(t)=U_{\text{eff}}(t)\psi(0),\quad U_{\text{eff}}(t)=\exp[i\epsilon(\mu)\omega_S t\;{\bf s}\cdot{\bf n}_S].$$ With the obvious relation $\Psi(0)=\psi(0)$, the Schrödinger equation (\[3\]) is solved as \[10\] $$\label{10a} \Psi(t)=U(t)\Psi(0),$$ where $$\label{10b} U(t)=W(t)U_{\text{eff}}(t)=\exp(-i\omega t s_z)\exp[i\epsilon(\mu)\omega_S t\; {\bf s}\cdot{\bf n}_S].$$ If one begins with an initial state $\Psi(t_0)$ at the time $t_0$ \[but note that the time dependence of the magnetic field is still given by Eq. (\[1\])\], then the solution reads $$\label{11} \Psi(t)=W(t)U_{\text{eff}}(t-t_0)W^\dagger(t_0)\Psi(t_0).$$ Since the evolution operator involves no chronological product, it is convenient for practical calculations. In the following discussions we will take the initial time to be $t_0=0$ for convenience. First of all let us calculate the mean value of ${\bf s}$ in an arbitrary state. We define $$\label{12} {\bf v}(t)={\bbox(}\Psi(t), {\bf s}\Psi(t){\bbox )},$$ and denote ${\bf v}_0={\bf v}(0)$. Using Eq. (\[10\]) we have $${\bf v}(t)={\bbox(}\Psi(0), U^\dagger_{\text{eff}}(t)W^\dagger(t) {\bf s}W(t)U_{\text{eff}}(t)\Psi(0){\bbox )}.$$ It is not difficult to show that $$\label{13} W^\dagger(t){\bf s}W(t)=\exp(i\omega t s_z){\bf s}\exp(-i\omega t s_z)=(s_x\cos\omega t-s_y\sin\omega t,\; s_x\sin\omega t+s_y\cos\omega t,\; s_z).$$ The following formula is proved in the appendix.
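The absence of a chronological product in $U(t)$ makes Eq. (\[10b\]) easy to check against a brute-force time-ordered product of small-step propagators. The spin-$1/2$ sketch below (illustrative parameters, $\hbar=1$, $\mu>0$ so $\epsilon(\mu)=+1$) uses the spin-$1/2$ identity $e^{i\theta\,{\bf s}\cdot{\bf n}}=\cos(\theta/2)+2i\sin(\theta/2)\,{\bf s}\cdot{\bf n}$ for unit ${\bf n}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

w_B, w, th_B = 1.3, 0.7, 0.5                 # illustrative omega_B, omega, theta_B
w_S = np.sqrt(w_B**2 + w**2 + 2 * w_B * w * np.cos(th_B))       # Eq. (7a)
th_S = np.arctan2(w_B * np.sin(th_B), w_B * np.cos(th_B) + w)   # Eq. (7b)
s_nS = np.sin(th_S) * sx + np.cos(th_S) * sz                    # s . n_S

def exp_i(theta, s_n):
    # exp(i theta s.n) = cos(theta/2) + 2i sin(theta/2) s.n  (spin 1/2, |n| = 1)
    return np.cos(theta / 2) * I2 + 2j * np.sin(theta / 2) * s_n

t = 2.0
U_closed = exp_i(-w * t, sz) @ exp_i(w_S * t, s_nS)             # Eq. (10b)

# time-ordered product of midpoint-rule steps exp(-i H(t) dt), H = -w_B s.n(t)
N = 5000
dt = t / N
U_num = I2.copy()
for j in range(N):
    tm = (j + 0.5) * dt
    n = np.array([np.sin(th_B) * np.cos(w * tm),
                  np.sin(th_B) * np.sin(w * tm), np.cos(th_B)])  # Eq. (1)
    U_num = exp_i(w_B * dt, n[0] * sx + n[1] * sy + n[2] * sz) @ U_num
```

The two operators agree to the discretization accuracy of the time-ordered product, and both are unitary.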
$$\begin{aligned} \label{14} U^\dagger_{\text{eff}}(t){\bf s}U_{\text{eff}}(t) &=&\exp[-i\epsilon(\mu)\omega_S t\;{\bf s}\cdot{\bf n}_S]{\bf s} \exp[i\epsilon(\mu)\omega_S t\;{\bf s} \cdot{\bf n}_S] \nonumber\\ &=& [{\bf s}-({\bf s} \cdot{\bf n}_S){\bf n}_S] \cos[\epsilon(\mu)\omega_S t]-({\bf n}_S \times{\bf s}) \sin[\epsilon(\mu)\omega_S t]+({\bf s} \cdot{\bf n}_S){\bf n}_S.\end{aligned}$$ Using these two formulas we can calculate ${\bf v}(t)$ once ${\bf v}_0$ is given. Let us define $$\label{15} {\bf g}(t)=[{\bf v}_0-({\bf v}_0 \cdot{\bf n}_S){\bf n}_S] \cos[\epsilon(\mu)\omega_S t] -({\bf n}_S \times{\bf v}_0) \sin[\epsilon(\mu)\omega_S t]+({\bf v}_0 \cdot{\bf n}_S){\bf n}_S,$$ which is an ordinary vector, not an operator. Then the result reads $$\label{16} {\bf v}(t)={\bbox(}g_x(t)\cos\omega t-g_y(t)\sin\omega t,\; g_x(t)\sin\omega t+g_y(t)\cos\omega t,\; g_z(t){\bbox )}.$$ This is a rather complicated result, but the physical picture is clear. We observe that the three terms in ${\bf g}(t)$ are perpendicular to one another, and $$\label{17} |{\bf v}_0-({\bf v}_0 \cdot{\bf n}_S){\bf n}_S|=|{\bf n}_S \times {\bf v}_0|=\sqrt{|{\bf v}_0|^2-({\bf v}_0 \cdot{\bf n}_S)^2} \equiv v_{0\perp},$$ so we define three unit vectors orthogonal to one another: $$\label{18} {\bf e}^S_x=[{\bf v}_0-({\bf v}_0 \cdot{\bf n}_S){\bf n}_S] /v_{0\perp},\quad {\bf e}^S_y={\bf n}_S \times{\bf v}_0 /v_{0\perp}, \quad {\bf e}^S_z={\bf n}_S,$$ which constitute a right-handed frame. In this frame ${\bf g}(t)$ takes the form $$\label{19} {\bf g}(t)=v_{0\perp}\cos[\epsilon(\mu)\omega_S t]{\bf e}^S_x -v_{0\perp} \sin[\epsilon(\mu)\omega_S t]{\bf e}^S_y+({\bf v}_0 \cdot{\bf n}_S){\bf e}^S_z.$$ This is just Eq. (\[15\]) rewritten. However, in this form it helps us recognize the physical picture of the motion, and it obviously yields $|{\bf v}(t)|=|{\bf g}(t)|=|{\bf v}_0|$ as expected.
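A direct numerical evaluation of Eqs. (\[15\]) and (\[16\]) confirms the norm conservation $|{\bf v}(t)|=|{\bf v}_0|$; in the sketch below, ${\bf v}_0$, $\theta_S$ and the two frequencies are illustrative values (with $\epsilon(\mu)=+1$):

```python
import numpy as np

# Illustrative parameters; eps(mu) = +1 is assumed.
w, w_S, th_S = 0.7, 1.5, 0.4
nS = np.array([np.sin(th_S), 0.0, np.cos(th_S)])
v0 = np.array([0.3, -0.2, 0.4])

def v_of_t(t):
    par = (v0 @ nS) * nS                              # component along n_S
    g = ((v0 - par) * np.cos(w_S * t)
         - np.cross(nS, v0) * np.sin(w_S * t) + par)  # Eq. (15)
    cw, sw = np.cos(w * t), np.sin(w * t)             # rotation by w t about z
    return np.array([g[0] * cw - g[1] * sw,
                     g[0] * sw + g[1] * cw, g[2]])    # Eq. (16)

norms = [np.linalg.norm(v_of_t(t)) for t in np.linspace(0.0, 10.0, 200)]
```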
Now to get the vector ${\bf v}(t)$, one just rotates ${\bf v}_0$ around ${\bf e}^S_z={\bf n}_S$ through an angle $-\epsilon(\mu)\omega_S t$ (positive angle corresponds to anti-clockwise rotation) to get ${\bf g}(t)$, and then rotates ${\bf g}(t)$ around ${\bf e}_z$ through an angle $\omega t$. The resulting motion involves nutation as well as rotation, and is not periodic in general. Note that a cyclic state leads to a periodic ${\bf v}(t)$, but the converse is not necessarily true. Therefore to obtain a cyclic solution, one should first find a periodic ${\bf v}(t)$. Two cases with periodic ${\bf v}(t)$ are available. First, if the parameters of the system are such that $\omega_S/\omega$ is a rational number, then both $\omega t$ and $\omega_S t$ may simultaneously become integral multiples of $2\pi$ at some later time $T$, and we have ${\bf v}(T)={\bf v}_0$, independent of the initial condition. Second, if the initial condition is such that $$\label{20} {\bf v}_0=m_s{\bf n}_S, \quad m_s=s,s-1,\ldots,-s,$$ we have ${\bf g}(t)=m_s{\bf n}_S$, and $$\label{21} {\bf v}(t)=m_s {\bbox (} \sin\theta_S\cos\omega t,\; \sin\theta_S\sin\omega t,\;\cos\theta_S{\bbox)}.$$ In this case ${\bf v}(t)$ only rotates. It is obviously periodic. In the following we will see that both cases indeed correspond to cyclic solutions. It seems that no other cyclic solution can be found. Cyclic solutions of the second kind (with special initial condition) have been previously discussed in detail [@2]. For comparison we briefly review the result. First we give the eigenstates of ${\bf s}\cdot{\bf n}_S$. It is not difficult to show that $$\label{22} {\bf s}\cdot{\bf n}_S=\exp(-i\theta_S s_y)s_z\exp(i\theta_S s_y).$$ We denote the eigenstates of $s_z$ with eigenvalues $m_s$ ($m_s=s,s-1,\ldots,-s$) as $\chi^0_{m_s}$.
Then the eigenstates of ${\bf s}\cdot{\bf n}_S$ with eigenvalues $m_s$ are obviously $$\label{23} \chi_{m_s}=\exp(-i\theta_S s_y)\chi_{m_s}^0=\sum_{m'_s}D^{s}_{m'_s m_s}(0,\theta_S,0)\chi_{m'_s}^0,$$ where the $D$’s are Wigner functions. We take the initial state to be $$\label{24} \Psi_{m_s}(0)=\chi_{m_s}.$$ This leads to Eqs. (\[20\]) and (\[21\]). The solution is indeed cyclic, and the nonadiabatic geometric phase in a period $\tau=2\pi/\omega$ was found to be [@2] $$\label{29} \gamma_{m_s}=-m_s\Omega_S, \quad \text{mod $2\pi$},$$ where $\Omega_S=2\pi(1-\cos\theta_S)$. We denote the solid angle subtended by the trace of ${\bf v}(t)$ by $\Omega_{\bf v}$. Because ${\bf v}(t)$ is given by Eq. (\[21\]) in the present case, we have $$\Omega_{\bf v}=\epsilon(m_s)\Omega_S,\quad \text{mod $4\pi$},$$ and consequently $$\label{30} \gamma_{m_s}=-|m_s|\Omega_{\bf v}, \quad \text{mod $2\pi$}.$$ Therefore the geometric nature of the result is quite obvious. It should be remarked that the spin angular momentum precesses synchronously with the magnetic field, but at a different angle from the rotation axis. Now we consider cyclic solutions of more general forms. If the parameters of the system are such that $\omega_S/\omega$ is a rational number, then all solutions are cyclic, as shown below. We denote $\omega_S/\omega=K_S/K$, where $K_S$ and $K$ are natural numbers, prime to each other. Let $T=K\tau$, then $\omega T=2\pi K$ and $\omega_S T=2\pi K_S$. In this case ${\bf v}(t)$ is periodic with period $T$. An arbitrary initial condition can be written as $$\label{31} \Psi(0)=\sum_{m_s} c_{m_s}\chi_{m_s},$$ where the coefficients $c_{m_s}$ are arbitrary except for the normalization condition $\sum_{m_s} |c_{m_s}|^2=1$, such that $\Psi(0)$ is normalized.
Noting that $\Psi(0)$ can also be expanded in terms of the complete set $\{\chi^0_{m_s}\}$, it is easy to find that \[32\] $$\label{32a} \Psi(T)=\exp(i\delta)\Psi(0),$$ where $$\label{32b} \delta=s[\epsilon(\mu)2\pi K_S-2\pi K],\quad \text{mod $2\pi$}.$$ Therefore the state is indeed cyclic, and $\delta$ is the total phase change, which is independent of the initial condition. Using the relation $$\label{26} W^\dagger(t)H(t)W(t)=H(0)=H_{\text{eff}}+\hbar\omega s_z,$$ and noting that $H_{\text{eff}}$ commutes with $U_{\text{eff}}(t)$ and $s_z$ commutes with $W(t)$, we have $$\langle H(t)\rangle={\bbox(}\Psi(0), H_{\text{eff}}\Psi(0){\bbox )} + \hbar\omega v_z(t) =-\epsilon(\mu)\hbar\omega_S{\bf v}_0\cdot{\bf n}_S +\hbar\omega v_z(t).$$ Thus the dynamic phase is $$\label{34} \beta=-\hbar^{-1}\int_0^T \langle H(t)\rangle\;dt=\epsilon(\mu)2\pi K_S{\bf v}_0\cdot{\bf n}_S-2\pi K\cos\theta_S {\bf v}_0\cdot{\bf n}_S.$$ This depends on the initial condition as expected. Finally we obtain the nonadiabatic geometric phase $$\label{35} \gamma=\delta-\beta=\epsilon(\mu)2\pi K_S(s-{\bf v}_0\cdot{\bf n}_S) -2\pi K(s-\cos\theta_S {\bf v}_0\cdot{\bf n}_S),\quad \text{mod $2\pi$}.$$ In the special case when ${\bf v}_0=m_s {\bf n}_S$ this is consistent with the previous result (note that $T=K\tau$). The next task is to calculate geometrically the solid angle $\Omega_{\bf v}$ subtended by the trace of ${\bf v}(t)$, and compare it with $\gamma$. It is easy to show that $$\label{36} \Omega_{\bf v}={1 \over |{\bf v}_0|}\int_0^T {v_x(t)\dot v_y(t)-v_y(t)\dot v_x(t) \over |{\bf v}_0|+v_z(t)}\; dt.$$ Because of the complicated results (\[15\]) and (\[16\]), it would be difficult to calculate this straightforwardly. Let us try to get around the difficulty.
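As an aside, the stated consistency with the special-case result (\[29\]) is easy to confirm numerically: on the initial condition ${\bf v}_0=m_s{\bf n}_S$, Eq. (\[35\]) should differ from $K$ copies of the single-period phase (\[29\]) by an integer multiple of $2\pi$. A sketch with illustrative values (spin $s=1$, $\epsilon(\mu)=+1$):

```python
import numpy as np

# Illustrative parameters; eps(mu) = +1 is assumed.
s, K_S, K, th_S = 1.0, 3, 2, 0.6
Omega_S = 2 * np.pi * (1 - np.cos(th_S))

def gamma_eq35(v0_dot_nS):
    """Eq. (35) with eps(mu) = +1, evaluated for v0 . n_S = m_s."""
    return (2 * np.pi * K_S * (s - v0_dot_nS)
            - 2 * np.pi * K * (s - np.cos(th_S) * v0_dot_nS))

# over T = K tau the phase (29) accumulates K times; residues must be integers
residues = [(gamma_eq35(ms) - K * (-ms * Omega_S)) / (2 * np.pi)
            for ms in (1.0, 0.0, -1.0)]
```

The integer residues reflect that the two expressions agree mod $2\pi$, since $s-m_s$ is always an integer.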
It is easy to show that $$\label{37} v_x(t)\dot v_y(t)-v_y(t)\dot v_x(t)=g_x(t)\dot g_y(t)-g_y(t)\dot g_x(t)+\omega[g_x^2(t)+g_y^2(t)].$$ On account of the relations $|{\bf v}(t)|=|{\bf g}(t)|=|{\bf v}_0|$ and $g_z(t)=v_z(t)$, we have $g_x^2(t)+g_y^2(t)=|{\bf v}_0|^2-v_z^2(t)$. Therefore $$\label{38} \Omega_{\bf v}={1 \over |{\bf g}_0|}\int_0^T {g_x(t)\dot g_y(t)-g_y(t)\dot g_x(t) \over |{\bf g}_0|+g_z(t)}\; dt +{1 \over |{\bf v}_0|}\int_0^T \omega[|{\bf v}_0|-v_z(t)]\; dt,$$ where ${\bf g}_0\equiv {\bf g}(0)={\bf v}_0$. The second integral can be calculated easily, and the first is recognized as the solid angle subtended by the trace of ${\bf g}(t)$, which is very easy to calculate in the coordinate frame spanned by ${\bf e}^S_x$, ${\bf e}^S_y$ and ${\bf e}^S_z$ \[cf. Eq. (\[19\])\]. The final result is $$\label{39} \Omega_{\bf v}=-\epsilon(\mu)2\pi K_S\left(1-{{\bf v}_0\cdot{\bf n}_S\over |{\bf v}_0|}\right)+2\pi K\left(1-\cos\theta_S{{\bf v}_0\cdot{\bf n}_S\over |{\bf v}_0|}\right).$$ Here the first term is due to the rotation around ${\bf e}^S_z={\bf n}_S$, and the second is due to the further rotation around ${\bf e}_z$. Compared with Eq. (\[35\]), we find the relation $$\label{40} \gamma=-|{\bf v}_0|\Omega_{\bf v}+(s-|{\bf v}_0|) [\epsilon(\mu)2\pi K_S-2\pi K],\quad \text{mod $2\pi$}.$$ Therefore $\gamma$ contains two terms. The first is the familiar one proportional to $\Omega_{\bf v}$. The second is an extra term. If $s=1/2$, it is easy to show that $|{\bf v}_0|=1/2$ for any initial state, so the extra term vanishes automatically, and the above relation reduces to $\gamma=-(1/2) \Omega_{\bf v}$, which is known to be valid in an arbitrary magnetic field [@13; @14]. For higher spin, $s-|{\bf v}_0|$ is in general not an integer, and the extra term cannot be dropped. For the special initial condition (\[24\]), the above relation reduces to the result (\[30\]). We will show in the next section that Eq.
(\[30\]) holds in an arbitrarily varying magnetic field as long as the initial state is an eigenstate of ${\bf s}\cdot{\bf e}_0$ with eigenvalue $m_s$, where ${\bf e}_0$ is some unit vector. For the rotating magnetic field at hand, Eq. (\[30\]) holds as long as $|{\bf v}_0|=m_s$. This is a looser restriction on the initial condition. We do not know whether this is true in a more general magnetic field. To conclude this section we remark that the relation (\[40\]) holds when $|{\bf v}_0|=0$. This can be easily verified by comparing Eq. (\[40\]) with Eq. (\[35\]) in this special case. Moreover, from Eq. (\[34\]) we see that the dynamic phase vanishes. Therefore one may regard the total phase in this case as pure geometric, though $\Omega_{\bf v}$ is not well defined. Neutral particles in an arbitrarily varying magnetic field ========================================================== As seen in the last section, the relation $\gamma\propto\Omega_{\bf v}$ does not always hold for spin higher than $1/2$. Thus it may be of interest to ask when it would be valid in an arbitrarily varying magnetic field. In this section we will show that a sufficient condition is that the initial state is an eigenstate of ${\bf s}\cdot {\bf e}_0$ where ${\bf e}_0$ is some unit vector. As discussed at the beginning of Sec. II, given an arbitrary state $\Psi(t)$ of spin $1/2$, one can always find a unit vector ${\bf e}(t)$ such that ${\bf s}\cdot{\bf e}(t)\Psi(t)=(1/2)\Psi(t)$. This holds at all times. For higher spin, however, no similar conclusion is available. Nevertheless, we will show that if an eigenvalue equation ${\bf s}\cdot{\bf e}_0\Psi(0)=m_s\Psi(0)$ holds for the initial state $\Psi(0)$, a similar one with some appropriate unit vector ${\bf e}(t)$ would hold at all later times. The latter equation is of crucial importance since it enables us to explicitly determine the state $\Psi(t)$ in terms of ${\bf e}(t)$ up to a phase factor. 
If ${\bf e}(t)$ returns to ${\bf e}(0)$ at some later time $T$, we obtain a cyclic solution. We write down the Schrödinger equation in an arbitrarily varying magnetic field ${\bf B}(t)=B(t){\bf n}(t)$: $$\label{41} i\hbar\partial_t\Psi=H(t)\Psi=-\hbar\omega_B(t){\bf s}\cdot {\bf n}(t)\Psi.$$ There are two differences from the one in Sec. II. First, here $\omega_B(t)=\mu B(t)/s\hbar$ is time dependent, and its sign may change with time \[so we do not use $|\mu|$ in defining $\omega_B(t)$\]. Second, the unit vector ${\bf n}(t)$ is not given by Eq. (\[1\]), but varies arbitrarily. We assume that the magnetic field varies continuously. We take the initial state $\Psi(0)$ of the system to be an eigenstate of ${\bf s} \cdot{\bf e}_0$ with eigenvalue $m_s$ where ${\bf e}_0$ is some unit vector, that is $$\label{42} {\bf s}\cdot{\bf e}_0\Psi(0)=m_s\Psi(0),\quad m_s=s,s-1,\ldots,-s.$$ Let us define a vector ${\bf e}(t)$ by the following differential equation and initial condition: $$\label{43} \dot {\bf e}(t)=-\omega_B(t){\bf n}(t)\times {\bf e}(t),\quad {\bf e}(0)={\bf e}_0.$$ Obviously, $|{\bf e}(t)|$ is time independent, so ${\bf e}(t)$ is a unit vector at any time. We are going to prove that $$\label{44} {\bf s}\cdot{\bf e}(t)\Psi(t)=m_s\Psi(t)$$ holds at all later times. This can be easily done by induction. By definition, Eq. (\[44\]) is valid at $t=0$. Assuming that it is valid at time $t$, we need only show that it is also true at time $t+\Delta t$, where $\Delta t$ is an infinitesimal increment of time. In fact, using Eqs. (\[41\]) and (\[43\]) we have \[45\] $$\label{45a} \Psi(t+\Delta t)=\Psi(t)+i\omega_B(t){\bf s}\cdot{\bf n}(t) \Psi(t)\Delta t,$$ $$\label{45b} {\bf e}(t+\Delta t)={\bf e}(t)-\omega_B(t){\bf n}(t)\times{\bf e}(t) \Delta t.$$ After some simple algebra, the conclusion follows.
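The induction argument can also be checked numerically. The sketch below (illustrative only, not part of the paper; it uses spin $s=1$ and an arbitrarily invented $\omega_B(t)$ and ${\bf n}(t)$) integrates Eq. (\[41\]) for $\Psi(t)$ and Eq. (\[43\]) for ${\bf e}(t)$ in small exact-exponential steps and confirms that the eigenvalue relation (\[44\]) is preserved:

```python
# Illustrative check (not from the paper) that the eigenvalue relation
# s·e(t) Psi(t) = m_s Psi(t), Eq. (44), is preserved in time when Psi
# evolves under Eq. (41) and e under Eq. (43).  Spin s = 1; the field
# omega_B(t), n(t) below is an arbitrary invented example.
import numpy as np
from scipy.linalg import expm

s2 = 1 / np.sqrt(2)
sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex)
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def rotate(e, n, phi):
    """Rodrigues rotation of the vector e by angle phi about the unit axis n."""
    return (e * np.cos(phi) + np.cross(n, e) * np.sin(phi)
            + n * np.dot(n, e) * (1 - np.cos(phi)))

m_s = 1.0
psi = np.array([1.0, 0.0, 0.0], complex)   # eigenstate of s_z with m_s = 1
e = np.array([0.0, 0.0, 1.0])              # e_0 along the z axis
dt, steps = 1e-3, 3000
for k in range(steps):
    t = k * dt
    n = np.array([np.sin(t), 0.0, np.cos(t)])   # unit n(t), chosen arbitrarily
    wB = 1.0 + 0.5 * np.sin(3 * t)              # omega_B(t), chosen arbitrarily
    # H = -hbar*omega_B s·n, so the short-time propagator is exp(i*wB*dt*s·n);
    # correspondingly e is rotated about n by -wB*dt, i.e. de/dt = -wB n x e.
    psi = expm(1j * wB * dt * (sx * n[0] + sy * n[1] + sz * n[2])) @ psi
    e = rotate(e, n, -wB * dt)

S_e = sx * e[0] + sy * e[1] + sz * e[2]
print(np.linalg.norm(S_e @ psi - m_s * psi))   # remains at numerical-noise level
```

Each step freezes $\omega_B$ and ${\bf n}$, for which both updates are exact, so the residual stays at rounding level regardless of the step size.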
Because ${\bf e}(t)$ is a unit vector, we can write $$\label{46} {\bf e}(t)={\bbox (}\sin\theta_e(t)\cos\phi_e(t),\; \sin\theta_e(t)\sin\phi_e(t),\; \cos\theta_e(t){\bbox )}.$$ It is not difficult to show that $$\label{47} {\bf s}\cdot{\bf e}(t)=\exp[-i\phi_e(t)s_z]\exp[-i\theta_e(t)s_y] s_z \exp[i\theta_e(t)s_y]\exp[i\phi_e(t)s_z].$$ Therefore the eigenstate of ${\bf s}\cdot{\bf e}(t)$ with eigenvalue $m_s$ is $$\label{48} \Psi(t)=\exp[i\alpha(t)]\exp[-i\phi_e(t)s_z]\exp[-i\theta_e(t)s_y] \chi^0_{m_s},$$ where $\alpha(t)$ is a phase that cannot be determined by the eigenvalue equation. However, $\alpha(t)$ is not arbitrary. To satisfy the Schrödinger equation, it should be determined by the other variables $\theta_e(t)$ and $\phi_e(t)$. In fact, the above equation yields $$\label{49} {\bbox(}\Psi(t),\Psi(t+\Delta t){\bbox)}=1+i\dot\alpha(t)\Delta t -im_s\cos[\theta_e(t)]\dot\phi_e(t)\Delta t.$$ On the other hand, from Eq. (\[45a\]) we have $$\label{50} {\bbox(}\Psi(t),\Psi(t+\Delta t){\bbox)}=1+i\omega_B(t){\bf v}(t) \cdot{\bf n}(t)\Delta t,$$ where ${\bf v}(t)$ is defined by Eq. (\[12\]). Comparing the two results we obtain $$\label{51} \dot\alpha(t)=m_s\cos[\theta_e(t)]\dot\phi_e(t)+\omega_B(t){\bf v}(t) \cdot{\bf n}(t).$$ The motion of ${\bf e}(t)$ is determined by the magnetic field. If the magnetic field is such that ${\bf e}(t)$ returns to its initial value at the time $T$, that is $$\label{52} \theta_e(T)=\theta_e(0),\quad \phi_e(T)=\phi_e(0)+2\pi K,$$ where $K$ is an integer, then we get a cyclic solution. In fact, it is easy to show that \[53\] $$\label{53a} \Psi(T)=\exp(i\delta)\Psi(0),$$ where $$\label{53b} \delta=\alpha(T)-\alpha(0)-2\pi m_s K, \quad \text{mod $2\pi$}$$ is the total phase change. Using Eqs. (\[51\]) and (\[52\]), it can be recast in the form $$\label{54} \delta=-m_s\int_0^T[1-\cos\theta_e(t)]\dot\phi_e(t)\;dt+\int_0^T \omega_B(t){\bf v}(t)\cdot{\bf n}(t)\;dt.$$ The second term is obviously the dynamic phase $\beta$. 
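The operator identity of Eq. (\[47\]) used above can be verified directly with explicit spin matrices. The following check (spin $1$, arbitrary angles, $\hbar=1$ units; purely illustrative) compares the two sides numerically:

```python
# Check of Eq. (47):
#   s·e = exp(-i*phi*s_z) exp(-i*theta*s_y) s_z exp(i*theta*s_y) exp(i*phi*s_z)
# for spin-1 matrices and arbitrary angles (hbar = 1 units).
import numpy as np
from scipy.linalg import expm

s2 = 1 / np.sqrt(2)
sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex)
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

theta, phi = 0.9, 2.3                      # arbitrary sample angles
e = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
lhs = sx * e[0] + sy * e[1] + sz * e[2]
rhs = (expm(-1j * phi * sz) @ expm(-1j * theta * sy) @ sz
       @ expm(1j * theta * sy) @ expm(1j * phi * sz))
print(np.max(np.abs(lhs - rhs)))           # ~ 0
```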
Therefore the nonadiabatic geometric phase is $$\label{55} \gamma=-m_s\Omega_{\bf e},\quad \text{mod $2\pi$}$$ where $$\label{56} \Omega_{\bf e}=\int_0^T[1-\cos\theta_e(t)]\dot\phi_e(t)\;dt$$ is the solid angle subtended by the trace of ${\bf e}(t)$. Finally, noticing that ${\bf v}(t)$ satisfies the same equation as ${\bf e}(t)$ and that ${\bf v}_0=m_s{\bf e}_0$ (which can be easily verified), we have ${\bf v}(t)=m_s{\bf e}(t)$. Consequently, Eq. (\[55\]) can be recast in the form $$\label{58} \gamma=-|m_s|\Omega_{\bf v},\quad \text{mod $2\pi$}.$$ This is the final result of this section. Though $\gamma$ and $\Omega_{\bf v}$ cannot be explicitly calculated, the above relation holds regardless of the form of the magnetic field. For $m_s=\pm s$, this result was previously obtained in Ref. [@layton], and for general $m_s$ in Ref. [@gao], both by methods different from ours; our method seems more straightforward and simpler. It should be noted that ${\bf v}(t)=0$ when $m_s=0$. In this case $\Omega_{\bf v}$ is not well defined. However, the final result (\[58\]) remains correct because it gives the same result $\gamma=0$ as given by Eq. (\[55\]). This remark also applies to the result (\[30\]) in Sec. II and similar ones in the following sections. To conclude this section, we remark that for any cyclic motion one can always appropriately choose the coordinate axes such that $\theta_e(t)$ does not take on the values $0$ or $\pi$ during the cycle under consideration. This avoids any discontinuous jump or ill definition of $\phi_e(t)$, and renders the above demonstration sound. Alkaline atomic electron in a strong rotating magnetic field ============================================================ In this section we consider the valence electron of the alkaline atom or that of the hydrogen atom moving in a strong rotating magnetic field given by Eq. (\[1\]).
This is described by the Schrödinger equation \[59\] $$\label{59a} i\hbar\partial_t\Psi=H(t)\Psi,$$ where $$\label{59b} H(t)=H_0+\mu_{\rm B}B({\bf l}+2{\bf s})\cdot{\bf n}(t) =H_0+\hbar\omega_B({\bf l}+2{\bf s})\cdot{\bf n}(t),$$ in which $$\label{59c} H_0={{\bf p}^2\over 2M}+V(r)$$ is the Hamiltonian of the electron in the central potential of the nucleus (and the other electrons in the inner shells for alkaline atoms), $M$ and $\mu_{\rm B}$ are respectively the reduced mass and the Bohr magneton of the electron, $\omega_B=\mu_{\rm B}B/\hbar>0$, ${\bf l}={\bf r}\times{\bf p}/\hbar$ is the orbit angular momentum operator in units of $\hbar$, and ${\bf s}$ the spin as before (here $s=1/2$). The applicability of this equation was discussed in Ref. [@15]. The above Schrödinger equation can be solved in a way similar to that in Sec. II. This was discussed in detail in Ref. [@15]. The solution is $$\label{60} \Psi(t)=U(t)\Psi(0),\quad U(t)=W(t)U_{\text{eff}}(t),$$ where $$\label{61} W(t)=\exp(-i\omega t j_z),\quad U_{\rm eff}(t)=\exp(-iH_{\text{eff}}t/\hbar),$$ where $j_z$ is the $z$-component of the total angular momentum (in units of $\hbar$) ${\bf j=l+s}$, and $$\label{62} H_{\text{eff}}=H_0+\hbar\omega_L{\bf l}\cdot{\bf n}_L +\hbar\omega_S {\bf s}\cdot{\bf n}_S$$ is the effective Hamiltonian.
The parameters in the effective Hamiltonian are defined as \[63\] $$\label{63a} \omega_L=(\omega_B^2+\omega^2-2\omega_B\omega\cos\theta_B)^{1/2},$$ $$\label{63b} \omega_S=(4\omega_B^2+\omega^2-4\omega_B\omega\cos\theta_B)^{1/2};$$ $$\label{64} {\bf n}_L=(\sin\theta_L,\; 0,\; \cos\theta_L),\quad {\bf n}_S =(\sin\theta_S,\; 0,\; \cos\theta_S),$$ where \[65\] $$\label{65a} \sin\theta_L={\omega_B\sin\theta_B\over\omega_L},\quad \cos\theta_L={\omega_B\cos\theta_B-\omega\over\omega_L},$$ $$\label{65b} \sin\theta_S={2\omega_B\sin\theta_B\over\omega_S},\quad \cos\theta_S={2\omega_B\cos\theta_B-\omega\over\omega_S}.$$ Using the above solution we can calculate the mean values of ${\bf l}$ and ${\bf s}$ in an arbitrary state. We define $$\label{66} {\bf u}(t)={\bbox(}\Psi(t), {\bf l}\Psi(t){\bbox )},\quad {\bf v}(t)={\bbox(}\Psi(t), {\bf s}\Psi(t){\bbox )},$$ and denote ${\bf u}_0={\bf u}(0)$ and ${\bf v}_0={\bf v}(0)$. On account of the fact that ${\bf l}$, ${\bf s}$ and $H_0$ commute with one another, the results can be obtained by calculations similar to those performed in Sec. II. We define \[67\] $$\label{67a} {\bf f}(t)=[{\bf u}_0-({\bf u}_0 \cdot{\bf n}_L){\bf n}_L] \cos\omega_L t +({\bf n}_L \times{\bf u}_0) \sin\omega_L t+({\bf u}_0 \cdot{\bf n}_L){\bf n}_L,$$ $$\label{67b} {\bf g}(t)=[{\bf v}_0-({\bf v}_0 \cdot{\bf n}_S){\bf n}_S] \cos\omega_S t +({\bf n}_S \times{\bf v}_0) \sin\omega_S t+({\bf v}_0 \cdot{\bf n}_S){\bf n}_S,$$ then the results read \[68\] $$\label{68a} {\bf u}(t)={\bbox(}f_x(t)\cos\omega t-f_y(t)\sin\omega t,\; f_x(t)\sin\omega t+f_y(t)\cos\omega t,\; f_z(t){\bbox )},$$ $$\label{68b} {\bf v}(t)={\bbox(}g_x(t)\cos\omega t-g_y(t)\sin\omega t,\; g_x(t)\sin\omega t+g_y(t)\cos\omega t,\; g_z(t){\bbox )}.$$ Note that the second term in ${\bf g}(t)$ has a different sign from the one in Sec. II. The physical picture is rather similar to the previous one. 
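The definitions (\[63\])-(\[65\]) can be checked for internal consistency: the pairs $(\sin\theta_L,\cos\theta_L)$ and $(\sin\theta_S,\cos\theta_S)$ must lie on the unit circle for any parameter values, and the particular choice $\omega_B/\omega=\sqrt{3/2}$, $\cos\theta_B=\sqrt{3}/(2\sqrt{2})$ quoted later in this section indeed gives $\omega_L=\omega$ and $\omega_S=2\omega$. A quick numerical confirmation (sample values are arbitrary):

```python
# Consistency of Eqs. (63)-(65): for any parameters, the definitions of
# sin and cos of theta_L, theta_S must satisfy sin^2 + cos^2 = 1; and
# the choice omega_B/omega = sqrt(3/2), cos(theta_B) = sqrt(3)/(2*sqrt(2))
# gives omega_L/omega = 1, omega_S/omega = 2.
import math

def freqs(wB, w, cB):
    sB = math.sqrt(1 - cB**2)          # sin(theta_B), taking 0 <= theta_B <= pi
    wL = math.sqrt(wB**2 + w**2 - 2 * wB * w * cB)
    wS = math.sqrt(4 * wB**2 + w**2 - 4 * wB * w * cB)
    sL, cL = wB * sB / wL, (wB * cB - w) / wL
    sS, cS = 2 * wB * sB / wS, (2 * wB * cB - w) / wS
    return wL, wS, sL**2 + cL**2, sS**2 + cS**2

for wB, w, thB in [(1.0, 0.3, 0.7), (0.2, 1.5, 2.0)]:
    wL, wS, uL, uS = freqs(wB, w, math.cos(thB))
    print(round(uL, 12), round(uS, 12))          # both 1.0

wL, wS, _, _ = freqs(math.sqrt(1.5), 1.0, math.sqrt(3) / (2 * math.sqrt(2)))
print(wL, wS)                                    # 1 and 2, up to rounding
```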
We denote the common eigenstates of $\{H_0,\;{\bf l}^2,\;l_z\}$ as $\zeta_{nlm}^0$, with eigenvalues $\{\epsilon_{nl},\; l(l+1),\; m\}$, where the $\epsilon_{nl}$ are the energy levels of the electron in the absence of the magnetic field. Then the common eigenstates of the operators $\{H_0,\;{\bf l}^2,\;{\bf l}\cdot{\bf n}_L,\;{\bf s}\cdot {\bf n}_S\}$ with eigenvalues $\{\epsilon_{nl},\; l(l+1),\; m,\; m_s\}$ (now $m_s=\pm 1/2$) are $$\label{69} \varphi_{nlmm_s}=\zeta_{nlm}\chi_{m_s},$$ where $\chi_{m_s}$ is given by Eq. (\[23\]), and $$\label{70} \zeta_{nlm}=\exp(-i\theta_L l_y)\zeta_{nlm}^0 =\sum_{m'}D^l_{m'm}(0,\theta_L,0)\zeta_{nlm'}^0.$$ The $\varphi_{nlmm_s}$ are also eigenstates of $H_{\text{eff}}$ with the eigenvalues $$\label{71} E_{nlmm_s}=\epsilon_{nl}+m\hbar\omega_L+m_s\hbar\omega_S.$$ However, these eigenvalues are not observable since $H_{\text{eff}}$ is not a physical quantity. The above states are complete. At a given time, any state of the system can be expressed as a linear combination of them. As shown in Ref. [@15], a solution with the initial condition $$\label{72} \Psi_i(0)=\varphi_i=\varphi_{nlmm_s}$$ is a cyclic one. Here for convenience we use one subscript $i$ to represent all the quantum numbers $nlmm_s$. The nonadiabatic geometric phase was shown to be $$\label{73} \gamma_i=-m\Omega_L-m_s\Omega_S, \quad \text{mod $2\pi$}.$$ Here $\Omega_L=2\pi(1-\cos\theta_L)$ and $\Omega_S=2\pi(1-\cos\theta_S)$. We denote the solid angles subtended by the traces of the orbit and spin angular momenta by $\Omega_{\bf u}$ and $\Omega_{\bf v}$, respectively. For the present case ${\bf u}(t)$ and ${\bf v}(t)$ have forms similar to Eq. (\[21\]), so we have $$\label{75} \gamma_{i}=-|m|\Omega_{\bf u}-|m_s|\Omega_{\bf v}, \quad \text{mod $2\pi$},$$ and the geometric nature of the result is quite obvious. For the alkaline atomic electron at hand, $|m_s|$ can be replaced by $1/2$ since $m_s=\pm 1/2$.
However, the above result is valid for a charged particle of general spin $s$ moving in the central potential plus the strong rotating magnetic field. In Sec. V we will show that this result holds in an arbitrarily varying strong magnetic field as long as the initial state is appropriately chosen. However, we will show in the following that for more general cyclic solutions the nonadiabatic geometric phase contains extra terms in addition to the linear combination of the two solid angles. In the above we see that the initial condition $\Psi_{i}(0)=\varphi_i$ leads to a cyclic solution in the general case. If the parameters of the system satisfy some appropriate conditions, more cyclic solutions are available. In fact, if $\omega_B$, $\omega$ and $\theta_B$ are such that both $\omega_L/\omega$ and $\omega_S/\omega$ are rational numbers, we will see that any solution with the initial condition $$\label{76} \Psi(0)=\sum_{mm_s}c_{mm_s}\varphi_{nlmm_s}$$ is a cyclic one, where the coefficients $c_{mm_s}$ are arbitrary except for the normalization condition $\sum_{mm_s}|c_{mm_s}|^2=1$. Suppose that $\omega_L/\omega=K_2/K_1$, $\omega_S/\omega=K_4/K_3$, where all the $K$’s are natural numbers, with $K_2$ and $K_1$ relatively prime, and the same for $K_4$ and $K_3$. We denote the least common multiple of $K_1$ and $K_3$ as $K$, and write $$\label{77} \omega_L/\omega=K_L/K, \quad \omega_S/\omega=K_S/K,$$ where $K_L=KK_2/K_1$ and $K_S=KK_4/K_3$ are both natural numbers. In this case, $\omega T$, $\omega_L T$ and $\omega_S T$ are all integral multiples of $2\pi$, where $T=K\tau$ is now the period of ${\bf u}(t)$ and ${\bf v}(t)$. It is not difficult to show that \[78\] $$\label{78a} \Psi(T)=\exp(i\delta)\Psi(0),$$ where $$\label{78b} \delta=-\epsilon_{nl}T/\hbar-l(2\pi K+2\pi K_L) -s(2\pi K+2\pi K_S),\quad \text{mod $2\pi$}.$$ Thus the solution is actually cyclic and $\delta$ is the total phase change, which is independent of the initial condition.
Here we keep the general value of $s$ so that the result may be applied to a charged particle with more general spin. The dynamic phase can be calculated in a way similar to that in Sec. II; the result is $$\label{79} \beta=-\epsilon_{nl}T/\hbar-2\pi K_L{\bf u}_0\cdot{\bf n}_L-2\pi K_S{\bf v}_0\cdot{\bf n}_S-2\pi K(\cos\theta_L {\bf u}_0\cdot{\bf n}_L+\cos\theta_S {\bf v}_0\cdot{\bf n}_S).$$ This depends on the initial condition, as expected. The nonadiabatic geometric phase is $$\begin{aligned} \label{80} \gamma=\delta-\beta=&&-2\pi K(l-\cos\theta_L{\bf u}_0\cdot{\bf n}_L) -2\pi K(s-\cos\theta_S{\bf v}_0\cdot{\bf n}_S)\nonumber\\ &&-(l-{\bf u}_0\cdot{\bf n}_L)2\pi K_L-(s-{\bf v}_0\cdot{\bf n}_S)2\pi K_S,\quad \text{mod $2\pi$}.\end{aligned}$$ On the other hand, the solid angles subtended by the traces of the orbit and spin angular momenta are \[81\] $$\label{81a} \Omega_{\bf u}=2\pi K_L\left(1-{{\bf u}_0\cdot{\bf n}_L\over |{\bf u}_0|}\right)+2\pi K\left(1-\cos\theta_L{{\bf u}_0\cdot{\bf n}_L\over |{\bf u}_0|}\right),$$ $$\label{81b} \Omega_{\bf v}=2\pi K_S\left(1-{{\bf v}_0\cdot{\bf n}_S\over |{\bf v}_0|}\right)+2\pi K\left(1-\cos\theta_S{{\bf v}_0\cdot{\bf n}_S\over |{\bf v}_0|}\right).$$ The calculations that lead to these results are similar to those in Sec. II. From the above results and Eq. (\[80\]) it is easy to find that $$\label{82} \gamma=-|{\bf u}_0|\Omega_{\bf u}-|{\bf v}_0|\Omega_{\bf v}+ (|{\bf u}_0|-l)(2\pi K+2\pi K_L)+ (|{\bf v}_0|-s)(2\pi K+2\pi K_S),\quad \text{mod $2\pi$}.$$ Therefore, in addition to a linear combination of $\Omega_{\bf u}$ and $\Omega_{\bf v}$, we get two extra terms in $\gamma$. This result holds for a charged particle with spin $s$ moving in the central potential plus the magnetic field. The first extra term can be dropped only when $|{\bf u}_0|$ is an integer and the second when $|{\bf v}_0|-s$ is an integer. For the alkaline atomic electron, $|{\bf v}_0|=s=1/2$, so the second extra term vanishes.
Because $l$ is an integer, we have for the alkaline atomic electron the final result $$\label{83} \gamma=-|{\bf u}_0|\Omega_{\bf u}-{\textstyle\frac 12}\Omega_{\bf v}+ |{\bf u}_0|(2\pi K+2\pi K_L),\quad \text{mod $2\pi$}.$$ For the special initial condition (\[72\]), the above results reduce to Eq. (\[75\]). In the next section we will show that the result (\[75\]) holds in an arbitrarily varying magnetic field as long as the initial state is a common eigenstate of $\{ H_0,\; {\bf l}^2,\; {\bf l}\cdot{\bf d}_0,\; {\bf s}\cdot{\bf e}_0\}$ with eigenvalues $\{\epsilon_{nl},\; l(l+1),\; m,\; m_s\}$, where ${\bf d}_0$ and ${\bf e}_0$ are some unit vectors. For the rotating magnetic field at hand, it holds as long as $|{\bf u}_0|=|m|$ and $|{\bf v}_0|=|m_s|$ (the second is automatically satisfied for $s=1/2$), which is a looser restriction on the initial state. We do not know whether this is true for a more general magnetic field. To conclude this section we point out that the condition (\[77\]) can be realized with $\omega_L/\omega=1$ and $\omega_S/\omega=2$, if one chooses $\omega_B/\omega=\sqrt{3/2}$ and $\cos\theta_B=\sqrt{3}/(2\sqrt{2})$ [@15]. Alkaline atomic electron in an arbitrarily varying strong magnetic field ======================================================================== Now we consider the alkaline atomic electron moving in an arbitrarily varying strong magnetic field. We write down the Schrödinger equation: \[84\] $$\label{84a} i\hbar\partial_t\Psi=H(t)\Psi,$$ where $$\label{84b} H(t)=H_0+\hbar\omega_B(t)({\bf l}+2{\bf s})\cdot{\bf n}(t).$$ Compared with that in Sec. IV, there are two differences. First, $\omega_B(t)=\mu_{\rm B}B(t)/\hbar$ is time dependent and may be either positive or negative. Second, ${\bf n}(t)$ is an arbitrarily varying unit vector. It is difficult to obtain any specific solution of the above equation.
However, for solutions of special initial conditions, we can establish a relation between $\gamma$ and the solid angles $\Omega_{\bf u}$ and $\Omega_{\bf v}$. Obviously, the operators in the set $\{ H_0,\; {\bf l}^2,\; {\bf l}\cdot{\bf d}_0,\; {\bf s}\cdot{\bf e}_0\}$ commute with one another, where ${\bf d}_0$ and ${\bf e}_0$ are some unit vectors, thus they can have a complete set of common eigenstates. We take the initial state $\Psi(0)$ of the system to be such a common eigenstate, that is \[85\] $$\label{85a} H_0\Psi(0)=\epsilon_{nl}\Psi(0),\quad {\bf l}^2\Psi(0)=l(l+1)\Psi(0),$$ $$\label{85b} {\bf l}\cdot{\bf d}_0\Psi(0)=m\Psi(0),\quad {\bf s}\cdot {\bf e}_0\Psi(0)=m_s\Psi(0).$$ For the alkaline atomic electron, the last condition need not be assumed, since one can always find a unit vector ${\bf e}_0$ such that it holds with $m_s=1/2$ or $-1/2$. However, we prefer to assume it so that the result may be valid for charged particles of more general spin. We define two vectors ${\bf d}(t)$ and ${\bf e}(t)$ by the following differential equations and initial conditions \[86\] $$\label{86a} \dot {\bf d}(t)=\omega_B(t){\bf n}(t)\times {\bf d}(t),\quad {\bf d}(0)={\bf d}_0,$$ $$\label{86b} \dot {\bf e}(t)=2\omega_B(t){\bf n}(t)\times {\bf e}(t),\quad {\bf e}(0)={\bf e}_0.$$ Obviously, both $|{\bf d}(t)|$ and $|{\bf e}(t)|$ are time independent, so ${\bf d}(t)$ and ${\bf e}(t)$ are unit vectors at any time. One can prove that the following eigenvalue equations hold at all times. 
\[87\] $$\label{87a} H_0\Psi(t)=\epsilon_{nl}\Psi(t),\quad {\bf l}^2\Psi(t)=l(l+1)\Psi(t),$$ $$\label{87b} {\bf l}\cdot{\bf d}(t)\Psi(t)=m\Psi(t),\quad {\bf s}\cdot {\bf e}(t)\Psi(t)=m_s\Psi(t).$$ Because ${\bf d}(t)$ and ${\bf e}(t)$ are unit vectors, we can write \[88\] $$\label{88a} {\bf d}(t)={\bbox (}\sin\theta_d(t)\cos\phi_d(t),\; \sin\theta_d(t)\sin\phi_d(t),\; \cos\theta_d(t){\bbox )},$$ $$\label{88b} {\bf e}(t)={\bbox (}\sin\theta_e(t)\cos\phi_e(t),\; \sin\theta_e(t)\sin\phi_e(t),\; \cos\theta_e(t){\bbox )}.$$ As in Sec. III, $\Psi(t)$ can be written as $$\label{89} \Psi(t)=\exp[i\alpha(t)]\exp[-i\phi_d(t)l_z]\exp[-i\theta_d(t)l_y] \zeta^0_{nlm} \exp[-i\phi_e(t)s_z]\exp[-i\theta_e(t)s_y] \chi^0_{m_s},$$ where $\alpha(t)$ is determined by the equation $$\label{90} \dot\alpha(t)=m\cos[\theta_d(t)]\dot\phi_d(t)+ m_s\cos[\theta_e(t)]\dot\phi_e(t)-\hbar^{-1}\langle H(t)\rangle,$$ where the expectation value $\langle H(t)\rangle$ is calculated in the state $\Psi(t)$. The motions of ${\bf d}(t)$ and ${\bf e}(t)$ are determined by the magnetic field. If the latter is such that both ${\bf d}(t)$ and ${\bf e}(t)$ return to their initial values at time $T$, then we get a cyclic solution. The nonadiabatic geometric phase can be shown to be $$\label{94} \gamma=-m\Omega_{\bf d}-m_s\Omega_{\bf e},\quad \text{mod $2\pi$}$$ where $$\label{95} \Omega_{\bf d}=\int_0^T[1-\cos\theta_d(t)]\dot\phi_d(t)\;dt, \quad \Omega_{\bf e}=\int_0^T[1-\cos\theta_e(t)]\dot\phi_e(t)\;dt$$ are the solid angles subtended by the traces of ${\bf d}(t)$ and ${\bf e}(t)$. By similar reasoning to that in Sec. III, we have ${\bf u}(t)=m{\bf d}(t)$ and ${\bf v}(t)=m_s{\bf e}(t)$, and Eq. (\[94\]) can be written as $$\label{97} \gamma=-|m|\Omega_{\bf u}-|m_s|\Omega_{\bf v},\quad \text{mod $2\pi$}.$$ This is the final result of this section. It holds for a charged particle with spin $s$ moving in the central potential plus the arbitrarily varying strong magnetic field. 
For the alkaline atomic electron, $|m_s|$ may be replaced by $1/2$. Summary ======= In this paper we have studied the nonadiabatic geometric phases of neutral or charged particles moving in a time-dependent magnetic field. In Sec. II we consider a neutral particle with general spin and a magnetic moment moving in a rotating magnetic field. The nonadiabatic geometric phase for special cyclic solutions is proportional to the solid angle subtended by the trace of the spin \[cf Eq. (\[30\])\]. This is well known. However, for more general cyclic solutions, we find that the nonadiabatic geometric phase contains an extra term. The main result of this section is Eq. (\[40\]). The extra term vanishes automatically for spin $1/2$, consistent with the known conclusion for spin $1/2$ that Eq. (\[30\]) is valid in an arbitrary magnetic field. For higher spin, however, the extra term depends on the initial condition. In Sec. III we show that the result (\[30\]) is valid for special cyclic solutions of higher spin in an arbitrarily varying magnetic field. In Sec. IV we consider a charged particle moving in a central potential plus a strong rotating magnetic field. This may describe the valence electron in an alkaline atom or that in a hydrogen atom. For special cyclic solutions, the nonadiabatic geometric phase is a linear combination of the two solid angles subtended by the traces of the orbit and spin angular momenta \[cf Eq. (\[75\])\]. This is also a previously known result. For the more general cyclic solutions, however, extra terms are also involved in the geometric phase. The main results of this section are Eqs. (\[82\]) and (\[83\]). In Sec. V we prove that the result (\[75\]) is valid for special cyclic solutions of charged particles moving in a central potential plus an arbitrarily varying strong magnetic field.
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the National Natural Science Foundation of the People’s Republic of China. Appendix {#appendix .unnumbered} ======== Here we prove Eq. (\[14\]) in a very simple way. We define $$\eqnum{A1}\label{A1} {\bf F}(\phi)=\exp(i\phi\;{\bf s}\cdot{\bf n}_S){\bf s} \exp(-i\phi\;{\bf s} \cdot{\bf n}_S).$$ Differentiation of this equation with respect to $\phi$ yields $$\eqnum{A2}\label{A2} {\bf F}'(\phi)={\bf n}_S\times {\bf F}(\phi),$$ and $$\eqnum{A3}\label{A3} {\bf F}''(\phi)={\bf n}_S\times [{\bf n}_S\times{\bf F}(\phi)] =[{\bf F}(\phi)\cdot{\bf n}_S]{\bf n}_S-{\bf F}(\phi).$$ From Eq. (\[A2\]) we have $[{\bf F}(\phi)\cdot{\bf n}_S]'={\bf F}'(\phi)\cdot{\bf n}_S=0$, so that ${\bf F}(\phi)\cdot{\bf n}_S= {\bf F}(0)\cdot{\bf n}_S={\bf s}\cdot{\bf n}_S$. Then Eq. (\[A3\]) becomes $$\eqnum{A4}\label{A4} {\bf F}''(\phi)+{\bf F}(\phi)=({\bf s}\cdot {\bf n}_S){\bf n}_S.$$ The solution of this equation is obviously $$\eqnum{A5}\label{A5} {\bf F}(\phi)={\bf a}\cos\phi+{\bf b}\sin\phi+({\bf s}\cdot {\bf n}_S){\bf n}_S,$$ where ${\bf a}$ and ${\bf b}$ are constant vectors. From Eqs. (\[A1\]) and (\[A2\]) we have $$\eqnum{A6}\label{A6} {\bf F}(0)={\bf s}, \quad {\bf F}'(0)={\bf n}_S\times{\bf s}.$$ This determines ${\bf a}$ and ${\bf b}$, so we arrive at $$\eqnum{A7}\label{A7} {\bf F}(\phi)=[{\bf s}-({\bf s}\cdot {\bf n}_S){\bf n}_S] \cos\phi+({\bf n}_S\times{\bf s})\sin\phi+({\bf s}\cdot {\bf n}_S){\bf n}_S.$$ Eq. (\[14\]) can be obtained by substituting $\phi=-\epsilon(\mu) \omega_S t$ into the above result. [99]{} L. D. Landau and E. M. Lifshitz, [*Quantum Mechanics*]{}, 3rd ed. (Pergamon, Oxford, 1977). S.-J. Wang, Phys. Rev. A [**42**]{}, 5107 (1990). A. G. Wagh and V. C. Rakhecha, Phys. Lett. A [**170**]{}, 71 (1992). G.-J. Ni, S.-Q. Chen, and Y.-L. Shen, Phys. Lett. A [**197**]{}, 100 (1995). G.-J. Ni and S.-Q. Chen, [*Advanced Quantum Mechanics*]{} (Fudan Univ. 
Press, Shanghai, 2000) (in Chinese). A. G. Wagh and V. C. Rakhecha, Phys. Rev. A [**48**]{}, R1729 (1993). S.-L. Zhu, Z. D. Wang and Y.-D. Zhang, Phys. Rev. B [**61**]{}, 1142 (2000). M. V. Berry, Proc. R. Soc. Lond. A [**392**]{}, 45 (1984); J. Anandan and L. Stodolsky, Phys. Rev. D [**35**]{}, 2597 (1987). Y. Aharonov and J. Anandan, Phys. Rev. Lett. [**58**]{}, 1593 (1987). J. Samuel and R. Bhandari, Phys. Rev. Lett. [**60**]{}, 2339 (1988). T. F. Jordan, Phys. Rev. A [**38**]{}, 1590 (1988). Y.-S. Wu and H.-Z. Li, Phys. Rev. B [**38**]{}, 11907 (1988). H.-Z. Li, [*Global Properties of Simple Physical Systems–Berry’s Phase and Others*]{} (Shanghai Scientific & Technical, Shanghai, 1998) (in Chinese). D. J. Fernández C, L. M. Nieto, M. A. del Olmo and M. Santander, J. Phys. A [**25**]{}, 5151 (1992). E. Layton, Y. Huang and S-I Chu, Phys. Rev. A [**41**]{}, 42 (1990). X.-C. Gao, J.-B. Xu and T.-Z. Qian, Phys. Lett. A [**152**]{}, 449 (1991). Q.-G. Lin, Phys. Rev. A [**63**]{}, 012108 (2001). Q.-G. Lin, J. Phys. A [**34**]{}, 1903 (2001). [^1]: not for correspondence [^2]: E-mail: qg\[email protected], qg\[email protected] [^3]: published in J. Phys. A [**35**]{} (2002) 377-391.
--- abstract: 'Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for [*simple question answering*]{}; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks [@weston2014memory] because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.' author: - | Antoine Bordes\ Facebook AI Research\ 770 Broadway\ New York, NY. USA\ [[email protected]]{}\ Nicolas Usunier\ Facebook AI Research\ 112, avenue de Wagram\ 75017 Paris, France\ [[email protected]]{}\ Sumit Chopra, Jason Weston\ Facebook AI Research\ 770 Broadway\ New York, NY. USA\ [{spchopra, jase}@fb.com]{}\ bibliography: - 'qawemb.bib' title: 'Large-scale Simple Question Answering with Memory Networks' --- Introduction ============ Open-domain Question Answering (QA) systems aim at providing the exact answer(s) to questions formulated in natural language, without restriction of domain. While there is a long history of QA systems that search for textual documents or on the Web and extract answers from them (see e.g. [@voorhees2000overview; @dumais2002web]), recent progress has been made with the release of large Knowledge Bases (KBs) such as [[Freebase]{}]{}, which contain consolidated knowledge stored as atomic facts, and extracted from different sources, such as free text, tables in webpages or collaborative input. 
Existing approaches for QA from KBs use learnable components to either transform the question into a structured KB query [@berant-EtAl:2013:EMNLP] or learn to embed questions and facts in a low dimensional vector space and retrieve the answer by computing similarities in this embedding space [@bordes-chopra-weston:2014:EMNLP2014]. However, while most recent efforts have focused on designing systems with higher reasoning capabilities that could jointly retrieve and use multiple facts to answer, the simpler problem of answering questions that refer to a single fact of the KB, which we call [*Simple Question Answering*]{} in this paper, is still far from solved. Hence, existing benchmarks are small; they mostly cover the head of the distribution of facts, and are restricted in their question types and their syntactic and lexical variations. As such, it is still unknown how well the existing systems perform outside the range of the specific question templates of a few, small benchmark datasets, and it is also unknown whether learning on a single dataset transfers well to other ones, and whether such systems can learn from different training sources, which we believe is necessary to capture the whole range of possible questions. Besides, the actual need for reasoning, i.e. constructing the answer from more than a single fact from the KB, depends on the actual structure of the KB. As we shall see, for instance, a simple preprocessing of [[Freebase]{}]{} tremendously increases the coverage of simple QA in terms of possible questions that can be answered with a single fact, including list questions that expect more than a single answer. In fact, the task of simple QA itself might already cover a wide range of practical usages, if the KB is properly organized. This paper presents two contributions.
First, as an effort to study the coverage of existing systems and the possibility to train jointly on different data sources via multitasking, we collected the first large-scale dataset of questions and answers based on a KB, called [[SimpleQuestions]{}]{}. This dataset, which is presented in Section \[sec:fbq\], contains more than $100$k questions written by human annotators and associated with [[Freebase]{}]{} facts, while the largest existing benchmark, [[WebQuestions]{}]{}, contains less than $6$k questions created automatically using the Google suggest API. Second, in sections \[sec:memnn\] and \[sec:training\], we present an embedding-based QA system developed under the framework of Memory Networks (MemNNs) [@weston2014memory; @sukhbaatar2015weakly]. Memory Networks are learning systems centered around a memory component that can be read and written to, with a particular focus on cases where the mapping between the input and response languages (here natural language) and the storage language (here, the facts from KBs) is performed by embedding all of them in the same vector space. The setting of simple QA corresponds to the elementary operation of performing a single lookup in the memory. While our model bears similarity with previous embedding models for QA [@bordes2014open; @bordes-chopra-weston:2014:EMNLP2014], using the framework of MemNNs opens the perspective to more involved inference schemes in future work, since MemNNs were shown to perform well on complex reasoning toy QA tasks [@weston2014memory]. We discuss related work in Section \[sec:related\]. We report experimental results in Section \[sec:expes\], where we show that our model achieves excellent results on the benchmark [[WebQuestions]{}]{}. We also show that it can learn from two different QA datasets to improve its performance on both. We also present the first successful application of transfer learning for QA.
Using the [[Reverb]{}]{}KB and QA datasets, we show that [[Reverb]{}]{}facts can be added to the memory and used to answer [*without retraining*]{}, and that MemNNs achieve better results than some systems designed on this dataset.
Simple Question Answering {#sec:fbq} ========================= Knowledge Bases contain facts expressed as triples (subject, relationship, object), where subject and object are entities and relationship describes the type of (directed) link between these entities. The simple QA problem we address here consists of finding the answer to questions that can be rephrased as queries of the form (subject, relationship, ?), asking for all objects linked to subject by relationship.
The question [*What do Jamaican people speak ?*]{}, for instance, could be rephrased as the [[Freebase]{}]{}query . In other words, fetching a single fact from a KB is sufficient to answer correctly. The term [*simple QA*]{} refers to the simplicity of the reasoning process needed to answer questions, since it involves a single fact. However, this does not mean that the QA problem is easy per se, since retrieving this single supporting fact can be very challenging as it involves searching over millions of alternatives given a query expressed in natural language. Table \[tab:ex\] shows that, with a KB with many types of relationships like [[Freebase]{}]{}, the range of questions that can be answered with a single fact is already very broad. Besides, as we shall see, slightly modifying the structure of the KB can [*make some QA problems simpler*]{} by adding direct connections between entities, hence bypassing the need for more complex reasoning. Knowledge Bases --------------- We use the KB [[Freebase]{}]{}[^1] as the basis of our QA system, our source of facts and answers. All [[Freebase]{}]{}entities and relationships are typed and the lexicon for types and relationships is closed. [[Freebase]{}]{}data is collaboratively collected and curated, to ensure a high reliability of the facts. Each entity has an internal identifier and a set of strings that are usually used to refer to that entity in text, termed [*aliases*]{}. We consider two extracts of [[Freebase]{}]{}, whose statistics are given in Table \[tab:kbs\]. [[FB2M]{}]{}, which was used in [@bordes-chopra-weston:2014:EMNLP2014], contains about $2$M entities and $5$k relationships. [[FB5M]{}]{} is much larger, with about $5$M entities and more than $7.5$k relationships.
We also use the KB [[Reverb]{}]{}as a secondary source of facts to study how well a model trained to answer questions using [[Freebase]{}]{}facts could be used to answer using [[Reverb]{}]{}’s as well, without being trained on [[Reverb]{}]{}data. This is a pure setting of [*transfer learning*]{}. [[Reverb]{}]{}is interesting for this experiment because it differs a lot from [[Freebase]{}]{}. Its data was extracted automatically from text with minimal human intervention and is highly unstructured: entities are unique strings and the lexicon for relationships is open. This leads to many more relationships, but entities with multiple references are not deduplicated, ambiguous referents are not resolved, and the reliability of the stored facts is much lower than in [[Freebase]{}]{}. We used the full extraction from [@ReVerb2011], which contains $2$M entities and $600$k relationships. The SimpleQuestions dataset --------------------------- Existing resources for QA such as [[WebQuestions]{}]{}[@berant-EtAl:2013:EMNLP] are rather small (a few thousand questions) and hence do not provide a very thorough coverage of the variety of questions that could be answered using a KB like [[Freebase]{}]{}, even in the context of simple QA. Hence, in this paper, we introduce a new, much larger dataset for the task of simple QA called [[SimpleQuestions]{}]{}.[^2] This dataset consists of a total of 108,442 questions written in natural language by human English-speaking annotators, each paired with a corresponding fact from [[FB2M]{}]{}that provides the answer and explains it. We randomly shuffle these questions and use 70% of them (75,910) as the training set, 10% (10,845) as the validation set, and the remaining 20% as the test set. Examples of questions and facts are given in Table \[tab:ex\]. We collected [[SimpleQuestions]{}]{}in two phases. The first phase consisted of shortlisting the set of facts from [[Freebase]{}]{}to be annotated with questions.
We used [[FB2M]{}]{}as the background KB and removed all facts with an undefined relationship type, i.e. containing the word . We also removed all facts for which the (subject, relationship) pair had more than a threshold number of objects. This filtering step is crucial to remove facts which would result in trivial, uninformative questions, such as [*Name a person who is an actor?*]{}. The threshold was set to 10. In the second phase, these selected facts were sampled and delivered to human annotators to generate questions from them. For the sampling, each fact was associated with a probability defined as a function of its relationship frequency in the KB: to favor variability, facts whose relationship appears more frequently were given lower probabilities. For each sampled fact, annotators were shown the fact along with hyperlinks to [freebase.com](freebase.com) to provide some context while framing the question. Given this information, annotators were asked to phrase a question involving the subject and the relationship of the fact, with the answer being the object. The annotators were explicitly instructed to phrase their questions as differently as possible when they encountered multiple facts with similar relationships. They were also given the option of skipping a fact; this was important to prevent annotators from writing boilerplate questions when they had no background knowledge about some facts. Memory Networks for Simple QA {#sec:memnn} ============================= A Memory Network consists of a memory (an indexed array of objects) and a neural network that is trained to query it given some inputs (usually questions). It has four components: [*Input map*]{} ($I$), [*Generalization*]{} ($G$), [*Output map*]{} ($O$) and [*Response*]{} ($R$), which we detail below: - an input feature map step that converts inputs into the internal feature representation.
In our case, $I$ preprocesses KB facts to add them to the memory and questions to prepare them for answering. - a generalization step that updates the memory given a new input. We use this step when new facts are added to the memory after the training phase so that they can be readily used to answer new questions, without having to retrain the whole model. - an output feature map step that produces new outputs given an input and the memory. In our case, this is the main answering stage that fetches the relevant fact from the memory given a question. - a response step that converts the output into the desired response format; it actually returns the answer(s) for us. But first, we describe the MemNNs workflow used to set up a model for simple QA. This proceeds in three steps: #### 1. Storing Freebase: this first phase parses [[Freebase]{}]{}(either [[FB2M]{}]{} or [[FB5M]{}]{}depending on the setting) and stores it in memory. It uses the [*Input*]{} module to preprocess the data. #### 2. Training: this second phase trains the MemNN to answer questions. It uses the [*Input*]{}, [*Output*]{} and [*Response*]{} modules; the training mainly concerns the parameters of the embedding model at the core of the [*Output*]{} module. #### 3. Connecting Reverb: this third phase adds new facts coming from [[Reverb]{}]{}to the memory. This is done after training to test the ability of MemNNs to handle new facts without having to be re-trained. It uses the [*Input*]{} module to preprocess [[Reverb]{}]{}facts and the [*Generalization*]{} module to connect them to the facts already stored. After these three stages, the MemNN is ready to answer any question by running the $I$, $O$ and $R$ modules in turn. We now detail the implementation of the four modules.
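To make this workflow concrete, the following toy sketch wires the four modules together (all class, function and entity names are our own invention, and the word-overlap scorer is only a stand-in for the embedding model of the [*Output*]{} module):

```python
# Toy sketch of the MemNN workflow for simple QA; names are illustrative,
# not taken from any released implementation.

class SimpleQAMemNN:
    def __init__(self):
        self.memory = {}  # subject entity -> list of grouped facts

    # I (and G): preprocess a KB fact and store it in the memory
    def input_fact(self, subject, relationship, objects):
        fact = (subject, relationship, frozenset(objects))
        self.memory.setdefault(subject, []).append(fact)

    # O: fetch the supporting fact that scores highest against the question
    def output(self, question, score):
        candidates = [f for facts in self.memory.values() for f in facts]
        return max(candidates, key=lambda f: score(question, f))

    # R: turn the supporting fact into the returned answer set
    def respond(self, fact):
        return set(fact[2])

# A trivial stand-in scorer: count question words appearing in the fact.
def overlap_score(question, fact):
    symbols = {fact[0], fact[1]} | set(fact[2])
    return len(set(question.split()) & symbols)

memnn = SimpleQAMemNN()
memnn.input_fact("jamaica", "language_spoken", ["english", "jamaican_english"])
memnn.input_fact("france", "language_spoken", ["french"])
best = memnn.output("what do people in jamaica speak", overlap_score)
answers = memnn.respond(best)
```

With this toy memory, the question matches the `jamaica` fact and `answers` contains both grouped objects, mirroring how grouped facts let a single lookup answer list questions.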
Input module {#sec:MemNNinput} ------------ This module preprocesses the three types of data that are input to the network: [[Freebase]{}]{}facts that are used to populate the memory, questions that the system needs to answer, and [[Reverb]{}]{}facts that we use, in a second phase, to extend the memory. #### Preprocessing Freebase The [[Freebase]{}]{}data is initially stored as atomic facts involving single entities as subject and object, plus a relationship between them. However, this storage needs to be adapted to the QA task in two respects. First, in order to answer list questions, which expect more than one answer, we redefine a fact as being a triple containing a subject, a relationship, and the set of all objects linked to the subject by the relationship. This [*grouping*]{} process transforms atomic facts into grouped facts, which we simply refer to as [*facts*]{} in the following. Table \[tab:kbs\] shows the impact of this grouping: on [[FB2M]{}]{}, it decreases the number of facts from $14$M to $11$M and, on [[FB5M]{}]{}, from $22$M to $12$M. Second, the underlying structure of [[Freebase]{}]{}is a hypergraph, in which more than two entities can be linked. For instance, dates can be linked together with two entities to specify the time period over which the link was valid. The underlying triple storage involves [*mediator nodes*]{} for each such fact, effectively making entities linked through paths of length 2 instead of 1. To obtain direct links between entities in such cases, we created a single condensed fact by removing the mediator node and using the second relationship as the relationship of the new fact. This step reduces the need for searching the answer outside the immediate neighborhood of the subject referred to in the question, greatly widening the scope of the simple QA task on [[Freebase]{}]{}.
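A minimal sketch of these two preprocessing steps, under our reading of the text and with invented entity names (in practice, identifying mediator nodes would rely on the KB schema):

```python
from collections import defaultdict

def remove_mediators(triples, mediators):
    """Collapse paths of length 2 that pass through a mediator node into a
    single direct triple, keeping the second relationship."""
    out = [t for t in triples if t[0] not in mediators and t[2] not in mediators]
    by_subject = defaultdict(list)
    for s, r, o in triples:
        by_subject[s].append((r, o))
    for s, _r1, m in triples:
        if m in mediators:
            for r2, o in by_subject.get(m, []):
                if o not in mediators:
                    out.append((s, r2, o))
    return out

def group_facts(triples):
    """Merge atomic triples sharing (subject, relationship) into one grouped
    fact whose object is the set of all linked objects."""
    grouped = defaultdict(set)
    for s, r, o in triples:
        grouped[(s, r)].add(o)
    return {(s, r, frozenset(objs)) for (s, r), objs in grouped.items()}

triples = [
    ("obama", "marriage", "m1"),   # m1 stands for a mediator node
    ("m1", "spouse", "michelle"),
    ("usa", "language", "english"),
    ("usa", "language", "spanish"),
]
direct = remove_mediators(triples, mediators={"m1"})
facts = group_facts(direct)
```

On this toy input, the marriage path of length 2 becomes a direct `spouse` fact, and the two `language` triples collapse into one grouped fact with two objects.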
On [[WebQuestions]{}]{}, a benchmark not primarily designed for simple QA, removing mediator nodes raises the fraction of questions that can be answered with a single fact from around $65$% to $86$%. #### Preprocessing Freebase facts A fact with $k$ objects ${y}= ({s}, {r}, \{{o}_1, ..., {o}_k\})$ is represented by a bag-of-symbols vector ${f}({y})$ in ${\mathbb{R}}^{N_S}$, where $N_S$ is the number of entities and relationships. Each dimension of ${f}({y})$ corresponds to a relationship or an entity (independent of whether it appears as subject or object). The entries of the subject and of the relationship have value $1$, and the entries of the objects are set to $1/k$. All other entries are $0$. #### Preprocessing questions A question ${q}$ is mapped to a bag-of-ngrams representation ${g}({q})$ in ${\mathbb{R}}^{N_V}$, where $N_V$ is the size of the vocabulary. The vocabulary contains all individual words that appear in the questions of our datasets, together with the aliases of [[Freebase]{}]{}entities, each alias being a single n-gram. The entries of ${g}({q})$ that correspond to words and n-grams of ${q}$ are equal to $1$; all other ones are set to $0$. #### Preprocessing Reverb facts In our experiments with [[Reverb]{}]{}, each fact ${y}= ({s}, {r}, {o})$ is represented as a vector $h({y})\in{\mathbb{R}}^{N_S+N_V}$. This vector is a bag-of-symbols for the subject ${s}$ and the object ${o}$, and a bag-of-words for the relationship ${r}$. The exact composition of $h$ is provided by the [*Generalization*]{} module, which we describe now. Generalization module --------------------- This module is responsible for adding new elements to the memory. In our case, the memory has a multigraph structure where each node is a [[Freebase]{}]{}entity and labeled arcs in the multigraph are [[Freebase]{}]{}relationships: after their preprocessing, all [[Freebase]{}]{}facts are stored using this structure.
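Looking back at the Input module, the encodings ${f}({y})$ and ${g}({q})$ can be sketched as sparse dictionaries instead of explicit vectors in ${\mathbb{R}}^{N_S}$ and ${\mathbb{R}}^{N_V}$ (all symbols and aliases below are invented for illustration):

```python
# Toy sketch of the sparse encodings f(y) and g(q), stored as dicts mapping
# symbol or n-gram -> value; a real system would map these keys to vector
# dimensions.

def encode_fact(subject, relationship, objects):
    """Bag-of-symbols f(y): subject and relationship get weight 1, each of
    the k objects gets weight 1/k."""
    k = len(objects)
    vec = {subject: 1.0, relationship: 1.0}
    for o in objects:
        vec[o] = 1.0 / k
    return vec

def encode_question(question, entity_aliases):
    """Bag-of-ngrams g(q): every word gets weight 1, and every entity alias
    found in the question is one additional n-gram entry (a crude substring
    check stands in for real alias matching)."""
    vec = {w: 1.0 for w in question.split()}
    for alias in entity_aliases:
        if alias in question:
            vec[alias] = 1.0
    return vec

f = encode_fact("jamaica", "language_spoken", ["english", "jamaican_english"])
g = encode_question("what do jamaican people speak",
                    entity_aliases=["jamaican people"])
```

Here the two objects of the grouped fact each receive weight $1/2$, while the multi-word alias contributes a single n-gram entry to the question encoding.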
We also consider the case where new facts, with a different structure (i.e. new kinds of relationships), are provided to the MemNNs by using [[Reverb]{}]{}. In this case, the generalization module is used to connect [[Reverb]{}]{}facts to the [[Freebase]{}]{}-based memory structure, in order to make them usable and searchable by the MemNN. To link the subject and the object of a [[Reverb]{}]{}fact to [[Freebase]{}]{}entities, we use precomputed entity links [@lin2012entity]. If such links do not give any result for an entity, we search for [[Freebase]{}]{}entities with at least one alias that matches the [[Reverb]{}]{}entity string. These two processes allowed us to match $17$% of [[Reverb]{}]{}entities to [[Freebase]{}]{}ones. The remaining entities were encoded using a bag-of-words representation of their strings, since we had no other way of matching them to [[Freebase]{}]{}entities. All [[Reverb]{}]{}relationships were encoded using a bag-of-words representation of their strings. Using this approximate process, we are able to store each [[Reverb]{}]{}fact as a bag of symbols (words or [[Freebase]{}]{}entities) all already seen by the MemNN during its training phase based on [[Freebase]{}]{}. We can then hope that what has been learned there can also be successfully used to query [[Reverb]{}]{}facts. Output module {#sec:outputModule} ------------- The output module performs the memory lookups given the input, returning the [*supporting facts*]{} from which the answer to the question is eventually computed. In our case of simple QA, this module only returns a single supporting fact. To avoid scoring all the stored facts, we first perform an approximate entity linking step to generate a small set of candidate facts. The supporting fact is the candidate fact that is most similar to the question according to an embedding model.
#### Candidate generation To generate candidate facts, we match $n$-grams of words of the question to aliases of [[Freebase]{}]{}entities and select a few matching entities. All facts having one of these entities as subject are scored in a second step. We first generate all possible $n$-grams from the question, removing those that contain an interrogative pronoun or $1$-grams that belong to a list of stopwords. We only keep the $n$-grams which are an alias of an entity, and then discard all $n$-grams that are a subsequence of another $n$-gram, except if the longer $n$-gram only differs by [*in*]{}, [*of*]{}, [*for*]{} or [*the*]{} at the beginning. We finally keep the two entities with the most links in [[Freebase]{}]{}retrieved for each of the five longest matched $n$-grams. #### Scoring Scoring is performed using an embedding model. Given two embedding matrices ${\bf W}_V \in {\mathbb{R}}^{d\times N_V}$ and ${\bf W}_S \in {\mathbb{R}}^{d\times N_S}$, which respectively contain, in columns, the $d$-dimensional embeddings of the words/$n$-grams of the vocabulary and the embeddings of the [[Freebase]{}]{}entities and relationships, the similarity between question ${q}$ and a [[Freebase]{}]{}candidate fact ${y}$ is computed as: $${S}_{QA}({q}, {y}) = \cos({\bf W}_V {g}({q}), {\bf W}_S{f}({y}))\,,$$ with $ \cos()$ the cosine similarity. When scoring a fact ${y}$ from [[Reverb]{}]{}, we use the same embeddings and build the matrix ${\bf W}_{VS} \in {\mathbb{R}}^{d\times (N_V+N_S)}$, which contains the concatenation in columns of ${\bf W}_V$ and ${\bf W}_S$, and also compute the cosine similarity: $${S}_{RVB}({q}, {y}) = \cos({\bf W}_V {g}({q}), {\bf W}_{VS}{h}({y}))\,.$$ The dimension $d$ is a hyperparameter, and the embedding matrices ${\bf W}_V$ and ${\bf W}_S$ are the parameters learned with the training algorithm of Section \[sec:training\]. 
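The candidate generation heuristic might be sketched as follows; we keep only the core n-gram matching and maximal-match pruning, omitting the [*in*]{}/[*of*]{}/[*for*]{}/[*the*]{} exception and the top-two-entities selection, and the stopword list, alias table and facts are toy stand-ins:

```python
def generate_candidates(question, alias_to_entities, facts_by_subject,
                        stopwords=frozenset({"what", "who", "where", "do", "the"})):
    """Match question n-grams against entity aliases, keep only maximal
    matches, and return all facts whose subject is a matched entity."""
    words = question.lower().split()
    ngrams = [" ".join(words[i:j])
              for i in range(len(words))
              for j in range(i + 1, len(words) + 1)]
    matches = [g for g in ngrams
               if g in alias_to_entities and g not in stopwords]
    # discard n-grams contained in a longer matched n-gram
    maximal = [g for g in matches
               if not any(g != h and g in h for h in matches)]
    candidates = []
    for g in maximal:
        for entity in alias_to_entities[g]:
            candidates.extend(facts_by_subject.get(entity, []))
    return candidates

aliases = {"jamaican people": ["/m/jamaica"], "jamaica": ["/m/jamaica"]}
facts = {"/m/jamaica": [("/m/jamaica", "language_spoken", ("english",))]}
cands = generate_candidates("what do jamaican people speak", aliases, facts)
```

On this toy question, only the alias [*jamaican people*]{} matches an n-gram, so the candidate set contains the single fact stored under the corresponding entity.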
Response module --------------- In Memory Networks, the [*Response*]{} module post-processes the result of the [*Output*]{} module to compute the intended answer. In our case, it returns the set of objects of the selected supporting fact. Training {#sec:training} ======== This section details how we trained the scoring function of the [ *Output*]{} module using a multitask training process on four different sources of data. First, in addition to the new [[SimpleQuestions]{}]{}dataset described in Section \[sec:fbq\], we also used [[WebQuestions]{}]{}, a benchmark for QA introduced in [@berant-EtAl:2013:EMNLP]: questions are labeled with answer strings from aliases of [[Freebase]{}]{}entities, and many questions expect multiple answers. Table \[tab:data\] details the statistics of both datasets. We also train on automatic questions generated from the KB, that is [[FB2M]{}]{}or [[FB5M]{}]{}depending on the setting, which are essential to learn embeddings for the entities not appearing in either [[WebQuestions]{}]{}or [[SimpleQuestions]{}]{}. Statistics of [[FB2M]{}]{}or [[FB5M]{}]{}are given in Table \[tab:kbs\]; we generated one training question per fact following the same process as that used in [@bordes-chopra-weston:2014:EMNLP2014]. Following previous work such as [@paralex], we also use the indirect supervision signal of pairs of question paraphrases. We used a subset of the large set of paraphrases extracted from [[WikiAnswers]{}]{}and introduced in [@fader2014open]. Our [[Paraphrases]{}]{}dataset is made of $15$M clusters containing 2 or more paraphrases each. Multitask training ------------------ As in previous work on embedding models and Memory Networks [@bordes-chopra-weston:2014:EMNLP2014; @bordes2014open; @weston2014memory], the embeddings are trained with a ranking criterion. For QA datasets the goal is that in the embedding space, a supporting fact is more similar to the question than any other [*non-supporting*]{} fact. 
For the paraphrase dataset, a question should be more similar to one of its paraphrases than to any other question. The multitask learning of the embedding matrices ${\bf W}_V$ and ${\bf W}_S$ is performed by alternating stochastic gradient descent (SGD) steps over the loss function on the different datasets. For the QA datasets, given a question/supporting fact pair $({q}, {y})$ and a non-supporting fact ${y}'$, we perform a step to minimize the loss function $$\ell_{QA}({q}, {y}, {y}') = \big[\gamma - {S}_{QA}({q}, {y}) + {S}_{QA}({q}, {y}') \big]_+\,,$$ where $[.]_+$ is the positive part and $\gamma$ is a margin hyperparameter. For the paraphrase dataset, the similarity score between two questions ${q}$ and ${q}'$ is also the cosine between their embeddings, i.e. ${S}_{QQ}({q}, {q}') = \cos({\bf W}_V {g}({q}), {\bf W}_{V}{g}({q}'))$, and given a paraphrase pair $({q}, {q}')$ and another question ${q}''$, the loss is: $$\ell_{QQ}({q}, {q}', {q}'') = \big[\gamma - {S}_{QQ}({q}, {q}') + {S}_{QQ}({q}, {q}'') \big]_+\,.$$ The embeddings (i.e. the columns of ${\bf W}_V$ and ${\bf W}_S$) are projected onto the $L_2$ unit ball after each update. At each time step, a sample from the paraphrase dataset is drawn with probability $0.2$ (this probability is arbitrary). Otherwise, a sample from one of the three QA datasets, chosen uniformly at random, is taken. We use the WARP loss [@wsabie] to speed up training, and Adagrad [@duchi2011adaptive] as the SGD algorithm, multi-threaded with [HogWild!]{} [@recht2011hogwild]. Training takes 2-3 hours on 20 threads.
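As a sanity check on $\ell_{QA}$, the following sketch evaluates the loss on toy embedding vectors (plain Python in place of the learned matrices ${\bf W}_V$ and ${\bf W}_S$; WARP sampling, Adagrad and the $L_2$ projection are omitted):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def qa_ranking_loss(q_emb, pos_fact_emb, neg_fact_emb, margin=0.1):
    """l_QA(q, y, y') = [gamma - S_QA(q, y) + S_QA(q, y')]_+ with cosine
    similarities as scores."""
    return max(0.0, margin - cosine(q_emb, pos_fact_emb)
                          + cosine(q_emb, neg_fact_emb))

q = [1.0, 0.0]
y_pos = [1.0, 0.1]   # close to the question: high score
y_neg = [0.0, 1.0]   # orthogonal to the question: zero score
loss = qa_ranking_loss(q, y_pos, y_neg)
```

With these toy vectors the supporting fact already beats the negative by more than the margin, so the loss is zero and no gradient step would be taken; swapping the two facts yields a large positive loss.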
Distant supervision ------------------- Unlike for [[SimpleQuestions]{}]{}or the synthetic QA data generated from [[Freebase]{}]{}, for [[WebQuestions]{}]{}only answer strings are provided for questions: the supporting facts are unknown. In order to generate the supervision, we use the candidate fact generation algorithm of Section \[sec:outputModule\].
For each candidate fact, the aliases of its objects are compared to the set of provided answer strings. The fact(s) which can generate the maximum number of answer strings from their objects’ aliases are then kept. If multiple facts are obtained for the same question, the ones with the minimal number of objects are considered as supervision facts. This last selection avoids favoring irrelevant relationships that would be kept only because they point to many objects but would not be specific enough. If no answer string could be found from the objects of the initial candidates, the question is discarded from the training set. Future work should investigate the weakly supervised training of MemNNs recently introduced in [@sukhbaatar2015weakly], which allows training them without any supervision coming from the supporting facts. Generating negative examples ---------------------------- As in [@bordes-chopra-weston:2014:EMNLP2014; @bordes2014open], learning is performed with gradient descent, so that negative examples (non-supporting facts or non-paraphrases) are generated according to a randomized policy during training. For paraphrases, given a pair $(q, q')$, a non-paraphrase pair is generated as $(q, q'')$, where $q''$ is a random question of the dataset not belonging to the cluster of $q$. For question/supporting fact pairs, we use two policies. The default policy to obtain a non-supporting fact is to corrupt the answer fact by exchanging its subject, its relationship or its object(s) with that of another fact chosen uniformly at random from the KB. In this policy, the element of the fact to corrupt is chosen randomly, with a small probability (0.3) of corrupting more than one element of the answer fact. The second policy we propose, called [*candidates as negatives*]{}, is to take as the non-supporting fact a randomly chosen fact from the set of candidate facts.
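The two negative-sampling policies could be sketched as follows (toy KB and candidate set, invented for illustration):

```python
import random

def corrupt_fact(fact, kb, rng, p_multi=0.3):
    """Default policy: replace the subject, relationship or object of the
    supporting fact with the corresponding part of a random KB fact; with a
    small probability, corrupt more than one element."""
    parts = list(fact)
    n_corrupt = 2 if rng.random() < p_multi else 1
    for i in rng.sample(range(3), n_corrupt):
        parts[i] = rng.choice(kb)[i]
    return tuple(parts)

def candidate_as_negative(fact, candidates, rng):
    """'Candidates as negatives' policy: pick a non-supporting fact among
    the candidate facts generated for the question."""
    negatives = [c for c in candidates if c != fact]
    return rng.choice(negatives)

rng = random.Random(0)
kb = [("a", "r1", "x"), ("b", "r2", "y"), ("c", "r3", "z")]
neg = corrupt_fact(("a", "r1", "x"), kb, rng)
neg2 = candidate_as_negative(("a", "r1", "x"),
                             [("a", "r1", "x"), ("b", "r2", "y")], rng)
```

The first policy explores the whole KB, while the second concentrates the ranking effort on the facts the model will actually have to discriminate between at test time.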
While the first policy is standard in learning embeddings, the second one is more original and, as we see in the experiments, gives slightly better performance. Related Work {#sec:related} ============ The first approaches to open-domain QA were search engine-based systems, where keywords extracted from the question are sent to a search engine, and the answer is extracted from the top results [@yahya2012natural; @unger2012template]. This method has been adapted to KB-based QA [@yahya2012natural; @unger2012template], and obtained competitive results with respect to semantic parsing and embedding-based approaches. Semantic parsing approaches [@cai-yates:2013:ACL2013; @berant-EtAl:2013:EMNLP; @kwiatkowski-EtAl:2013:EMNLP; @berant2014semantic; @fader2014open] perform a functional parse of the sentence that can be interpreted as a KB query. Even though these approaches are difficult to train at scale because of the complexity of their inference, their advantage is to provide a deep interpretation of the question. Some of these approaches require little to no question-answer pairs [@paralex; @reddy2014large], relying on simple rules to transform the semantic interpretation into a KB query. Like our work, embedding-based methods for QA can be seen as simple MemNNs. The algorithms of [@bordes2014open; @weston2014memory] use an approach similar to ours but are based on [[Reverb]{}]{}rather than [[Freebase]{}]{}, and rely purely on bag-of-words representations for both questions and facts. The approach of [@yang2014joint] uses a different representation of questions, in which recognized entities are replaced by an [*entity*]{} token, and different training data based on entity mentions from [Wikipedia]{}. Our model is closest to the one presented in [@bordes-chopra-weston:2014:EMNLP2014], which is discussed in more detail in the experiments.
Experiments {#sec:expes} =========== This section provides an extensive evaluation of our MemNNs implementation against state-of-the-art QA methods, as well as an empirical study of the impact of using multiple training sources on the prediction performance. Evaluation and baselines ------------------------ Table \[tab:data\] details the dimensions of the test sets of [[WebQuestions]{}]{}, [[SimpleQuestions]{}]{} and [[Reverb]{}]{}which we used for evaluation. On [[WebQuestions]{}]{}, we evaluate against previous results on this benchmark [@berant-EtAl:2013:EMNLP; @yao2014information; @berant2014semantic; @bordes-chopra-weston:2014:EMNLP2014; @yang2014joint] in terms of F1-score as defined in [@berant2014semantic], which is the average, over all test questions, of the F1-score of the sets of predicted answers. Since no previous result was published on [[SimpleQuestions]{}]{}, we only compare different versions of MemNNs. [[SimpleQuestions]{}]{}questions are labeled with their entire [[Freebase]{}]{}fact, so we evaluate in terms of path-level accuracy, in which a prediction is correct if the subject and the relationship were correctly retrieved by the system. The [[Reverb]{}]{}test set, based on the KB of the same name and introduced in [@paralex], is used for evaluation only. It contains $691$ questions. We consider the task of re-ranking a small set of candidate answers, which are [[Reverb]{}]{}facts and are labeled as correct or incorrect. We compare our approach to the original system [@paralex], to [@bordes2014open] and to the original MemNNs [@weston2014memory], in terms of accuracy, which is the percentage of questions for which the top-ranked candidate fact is correct. Experimental setup ------------------ All models were trained with at least the dataset made of synthetic questions created from the KB. The hyperparameters were chosen to maximize the F1-score on the [[WebQuestions]{}]{}validation set, independently of the testing dataset.
The embedding dimension and the learning rate were chosen among $\{64, 128, 256\}$ and $\{1, 0.1, \ldots, 10^{-4}\}$ respectively, and the margin $\gamma$ was set to $0.1$. For each configuration of hyperparameters, the F1-score on the validation set was computed regularly during learning to perform early stopping. We tested additional configurations of our algorithm. First, in the [*Candidates as Negatives*]{} setting (negative facts are sampled from the candidate set, see Section \[sec:training\]), abbreviated [Cands As Negs]{}, the experimental protocol is the same as in the default setting, but the embeddings are initialized with the best configuration of the default setup. Second, our model shares some similarities with an approach studied in [@bordes-chopra-weston:2014:EMNLP2014], in which the authors noticed important gains using a subgraph representation of answers. For completeness, we also added such a subgraph representation of objects. In that setting, called [*Subgraph*]{}, each object ${o}$ of a fact is itself represented as a bag-of-entities that encodes the immediate neighborhood of $o$. This [*Subgraph*]{} model is trained similarly to our main approach, and only the results of a post-hoc ensemble combination of the two models (where the scores are added) are presented. We also report the results obtained by an ensemble of the 5 best models on validation (subgraph excepted); this is denoted [*5 models*]{}.
[|c|c|c|c|c|c|c|c|c|]{} & [[WebQuestions]{}]{}& [[SimpleQuestions]{}]{}& [[Reverb]{}]{}\ & [F1-score (%)]{} & [Accuracy (%)]{} & [ Accuracy (%)]{}\ \ & 1.9 & 4.9 & 35\ & 31.3 & n/a & n/a\ & n/a & n/a & 54\ & 29.7 & n/a & [**73**]{}\ & 35.3 & n/a & n/a\ & 39.2 & n/a & n/a\ & 39.9 & n/a & n/a\ & 41.3 & n/a & n/a\ & n/a & n/a & 72\ \ & & [Cands]{} & [Ensemble]{} &\ & [WQ]{} & [SIQ]{} & [PRP]{} & [As Negs]{} & &\ [[FB2M]{}]{}& yes & yes & yes & – & – & 36.2 & 62.7 & n/a\ [[FB5M]{}]{}& – & – & – & – & – & 18.7 & 44.5 & 52\ [[FB5M]{}]{}& – & – & yes & – & – & 22.0 & 48.1 & 62\ [[FB5M]{}]{}& – & yes & – & – & – & 22.7 & 61.6 & 52\ [[FB5M]{}]{}& – & yes & yes & – & – & 28.2 & 61.2 & 64\ [[FB5M]{}]{}& yes & – & – & – & – & 40.1 & 46.6 & 58\ [[FB5M]{}]{}& yes & – & yes & – & – & 40.4 & 47.4 & 61\ [[FB5M]{}]{}& yes & yes & – & – & – & 41.0 & 61.7 & 52\ [[FB5M]{}]{}& yes & yes & yes & – & – & 41.0 & 62.1 & 67\ [[FB5M]{}]{}& yes & yes & yes & yes & – & 41.2 & 62.2 & 65\ [[FB5M]{}]{}& yes & yes & yes & yes & 5 models & 41.9 & [**63.9**]{} & [*68*]{}\ [[FB5M]{}]{}& yes & yes & yes & yes & Subgraph & [**42.2**]{} & 62.9 & 62\ 0 Comparative Results –1 ----------------------- Table \[tab:res\] presents our experimental results: we display the performance of many previous work along with those of our MemNNs architecture trained in various conditions. MemNNs can reach state-of-the-art on [[WebQuestions]{}]{}by either using a subgraph model or an ensemble of 5 models. This is 3 more points that the previous best performing embedding model on the same benchmark [@bordes-chopra-weston:2014:EMNLP2014]. Each evolution we propose here compared to [@bordes-chopra-weston:2014:EMNLP2014] contributes to this increase: 1. Removing [*mediator*]{} nodes in [[Freebase]{}]{}and relaxing the string matching allow to reach more answers within 1-hop. No need for 2-hops anymore and many more questions become “simple”. This grants a much faster inference. 2. 
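The core scoring step behind the results in Table \[tab:res\] can be made concrete: a question and each candidate fact are embedded as bags of features and ranked by cosine similarity (rather than a plain dot product). The sketch below uses a made-up two-dimensional feature table; `embed`, `cosine` and `rank` are illustrative names, not the actual implementation.

```python
# Minimal candidate-ranking sketch: bag-of-features embeddings,
# scored against the question embedding by cosine similarity.
import math

def embed(features, table):
    """Sum the embeddings of a bag of features."""
    dim = len(next(iter(table.values())))
    v = [0.0] * dim
    for f in features:
        for i, x in enumerate(table.get(f, [0.0] * dim)):
            v[i] += x
    return v

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(question_feats, candidates, q_table, kb_table):
    """Return the highest-scoring candidate fact."""
    q = embed(question_feats, q_table)
    scored = [(cosine(q, embed(c, kb_table)), c) for c in candidates]
    return max(scored)[1]
```

In the real system the candidate set is produced by string matching against [[Freebase]{}]{}aliases, and the embeddings are learned with the margin ranking loss.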
Results
-------

#### Comparative results

The results of the comparative experiments are given in Table \[tab:res\]. On the main benchmark [[WebQuestions]{}]{}, our best results use all data sources, the bigger extract from [[Freebase]{}]{}and the [Cands As Negs]{} setting. The two ensembles achieve excellent results, with F1-scores of $41.9\%$ and $42.2\%$ respectively. The best published competing approach [@yang2014joint] has an F1-score of $41.3\%$, which is comparable to a single run of our model ($41.2\%$). On the new [[SimpleQuestions]{}]{}dataset, the best models achieve $62$–$63\%$ accuracy, while the supporting fact is in the candidate set for about $86\%$ of [[SimpleQuestions]{}]{}questions. This shows that MemNNs are effective at re-ranking the candidates, but also that simple QA is still not solved. Our approach bears similarity to the [*using path*]{} variant of [@bordes-chopra-weston:2014:EMNLP2014]. They use [[FB2M]{}]{}, so their result ($35.3\%$ F1-score on [[WebQuestions]{}]{}) should be compared to our $36.2\%$. The models differ slightly in that they replace the entity string with the subject entity in the question representation, and in that we use the cosine similarity instead of the dot product, which gave consistent improvements. Still, the major differences come from how we use [[Freebase]{}]{}.
First, the removal of the mediator nodes allows us to restrict ourselves to single supporting facts, while they search in paths of length 2 with a heuristic to select the paths to follow (otherwise, inference is too costly), which makes our inference simpler and more efficient. Second, using grouped facts, we integrate multiple answers during learning (through the distant supervision), while they use a grouping heuristic at test time. Grouping facts also allows us to scale much better and to train on [[FB5M]{}]{}. On [[WebQuestions]{}]{}, not specifically designed as a simple QA dataset, $86\%$ of the questions can now be answered with a single supporting fact, and performance increases significantly (from $36.2\%$ to $41.0\%$ F1-score). Using the bigger [[FB5M]{}]{}as KB does not change performance on [[SimpleQuestions]{}]{}because it was based on [[FB2M]{}]{}, but the results show that our model is robust to the addition of more entities than necessary.

#### Transfer learning on Reverb

In this set of experiments, all [[Reverb]{}]{}facts are added to the memory, without any retraining, and we test our ability to rerank answers on the companion QA set. Thus, Table \[tab:res\] (last column) presents the results of our model [*without training*]{} on [[Reverb]{}]{} [*against methods specifically developed on that dataset*]{}. Our best results are $67\%$ accuracy (and $68\%$ for the ensemble of $5$ models), which are better than the $54\%$ of the original paper and close to the state-of-the-art $73\%$ of [@bordes2014open]. These results show that the Memory Network approach can integrate and use new entities and links.

#### Importance of data sources

The bottom half of Table \[tab:res\] presents the results on the three datasets when our model is trained with different data sources. We first notice that models trained on a single QA dataset perform poorly on the other datasets (e.g.
$46.6\%$ accuracy on [[SimpleQuestions]{}]{}for the model trained on [[WebQuestions]{}]{}only), which shows that the performance on [[WebQuestions]{}]{}does not necessarily guarantee high coverage for simple QA. On the other hand, training on both datasets only improves performance; in particular, the model is able to capture all question patterns of the two datasets; there is no “negative interaction”. While paraphrases do not seem to help much on [[WebQuestions]{}]{}and [[SimpleQuestions]{}]{}, except when training only with synthetic questions, they have a dramatic impact on the performance on [[Reverb]{}]{}. This is because [[WebQuestions]{}]{}and [[SimpleQuestions]{}]{}questions follow simple patterns and are well formed, while [[Reverb]{}]{}questions have more syntactic and lexical variability. Thus, paraphrases are important to avoid overfitting on the specific question patterns of the training sets.

Conclusion
==========

This paper presents an implementation of MemNNs for the task of large-scale simple QA. Our results demonstrate that, if properly trained, MemNNs are able to handle natural language and a very large memory (millions of entries), and hence can reach state-of-the-art performance on the popular benchmark [[WebQuestions]{}]{}. We want to emphasize that many of our findings, especially those regarding how to format the KB, do not only concern MemNNs but potentially any QA system. This paper also introduced the new dataset [[SimpleQuestions]{}]{}, which, with $100$k examples, is one order of magnitude bigger than [[WebQuestions]{}]{}: we hope that it will foster interesting new research in QA, simple or not.

[^1]: [www.freebase.com](www.freebase.com)

[^2]: The dataset is available from <http://fb.ai/babi>.
---
abstract: 'We report on a study of the inelastic scattering properties of (001) and (111) Ho$_2$Ti$_2$O$_7$ single crystals at room temperature. Structural and compositional analysis, along with absorption measurements, confirms the single-crystalline phase of all samples. Room temperature polarized Raman measurements were performed on the crystals in non-resonant and two different resonant conditions, using six different laser excitation lines. A Lorentzian model fitting analysis is performed on all measured spectra in order to identify the difference in the Raman scattering cross-section between the resonant and non-resonant conditions. Variations in the fitting parameters for the different polarization configurations and crystallographic orientations have helped in identifying the symmetry of the modes, where present. Several possible scattering pathways are discussed in order to qualitatively explain the anomalous scattering results in Ho$_2$Ti$_2$O$_7$.'
author:
- Naween Anand
- 'L. J. van de Burgt'
- 'Q. Huang'
- Jade Holleman
- Haidong Zhou
- Stephen A McGill
- Christianne Beekman
bibliography:
- 'HTORaman.bib'
title: |
    Probing Anomalous Inelastic Scattering in Spin-ice Ho$_2$Ti$_2$O$_7$ through\
    Resonant Raman Spectroscopy
---

INTRODUCTION
============

Rare-earth metal titanates, RE$_2$Ti$_2$O$_7$, belong to the family of geometrically frustrated magnetic insulators crystallizing in the pyrochlore structure.[@Ramirez; @GREEDAN] Due to the existence of numerous exotic ground states resulting from competing interactions among spin and orbital degrees of freedom, pyrochlores have drawn enormous interest from the scientific community in recent years.
In particular, Ho$_2$Ti$_2$O$_7$ (HTO), a prototype spin-ice system, has been established as a weakly ferromagnetically frustrated pyrochlore with a dominating dipolar spin interaction and a competing antiferromagnetic superexchange interaction between neighboring Ho$^{3+}$ ions.[@Bramhartog; @BramField] The magnetic nature of the ground state and the essence of the competing interactions in HTO have been studied extensively at low temperatures through neutron scattering, specific heat capacity, and muon-spin resonance measurements, indicating the existence of a magnetic phase with short-range spin-ice correlations due to the incompatibility of local and global symmetries.[@Bramhartog; @Harrisbramwell; @Ramirez] Raman spectroscopy provides another technique to investigate disorder, phonon anharmonicity, and phonon-spin and phonon-crystal-field interactions in pyrochlore-type materials. The inelastic scattering of incident photons of fixed polarization by spins and by crystal-field-coupled phonons allows us to deduce information about the crystal anisotropy, the nature of the coupling interactions, and the dynamical characteristics of the ground eigenstates. Several temperature dependent Raman spectroscopic studies have been performed, underlining intriguing vibrational band features in rare-earth pyrochlores. However, there have been inconsistencies about their origins in the existing literature.
The spin-ice pyrochlore Dy$_2$Ti$_2$O$_7$, in addition to the typical six Raman modes of the pyrochlore family, has shown several weak modes at the lower and higher frequency ends.[@Bi; @Lummen; @Mkaczka] Temperature dependent infrared studies suggested strong spin-phonon coupling, and the intrinsic charge localization was proposed to result from the nearest-neighbour ferromagnetic interaction in the geometrically frustrated configuration of Dy$_2$Ti$_2$O$_7$ spin ice.[@Bi] However, the intensity profiles of polarization and temperature dependent Raman measurements on Dy$_2$Ti$_2$O$_7$, when combined with the absorption and luminescence spectra, indicate crystal-field transitions between Stark-split levels.[@Lummen; @Mkaczka] The non-magnetic pyrochlore Lu$_2$Ti$_2$O$_7$ also shows similar characteristics in terms of phonon lineshapes and locations in temperature dependent Raman studies. This suggests that the origin of such features could be non-magnetic and beyond any particular crystal-field effects. Therefore, instead of spin-phonon coupling or crystal-field transitions, it was proposed that the additional features could result either from second-order Raman scattering or from infrared and silent modes rendered Raman active by a lowering of the local symmetry.[@Saha; @Vandenborre] Spectroscopic results for the spin-liquid pyrochlore Tb$_2$Ti$_2$O$_7$ imply unusually strong crystal field-phonon coupling, along with phonon-phonon anharmonic interactions and a small spin-phonon coupling, as the possible origin of the atypical features in the spectra.[@Sanjuan] Another geometrically frustrated, spin-glass-like pyrochlore, Y$_2$Ru$_2$O$_7$, has been reported to possess strong spin-phonon coupling through temperature dependent infrared and Raman measurements.[@BAE; @Lee]
While all such pyrochlore studies have shown many common and a few unusual spectral features, none of them have reported resonant Raman modes in their temperature dependent inelastic or quasi-elastic scattering experiments. By tuning the incident photon energy near resonance with the localized RE$^{3+}$ atomic levels, one can probe the system in different intermediate eigenstates and track changes in the scattering phenomena, allowing us to examine the nature of the coupling between phonons and other degrees of freedom. In this article, room temperature polarized Raman spectroscopy is performed on HTO single crystals (SC) with (111) and (001) orientations, probed by multiple incident laser excitations near and far from the resonant energies of the localized Ho$^{3+}$ atomic levels. It aims to investigate the remarkably different scattering cross-sections in the resonant and non-resonant conditions and attempts to describe them qualitatively in terms of the key roles played by other coupling interactions such as spin exchange, magnetic excitations and the crystal field.

EXPERIMENTAL DETAILS
====================

The single-crystal samples of HTO were grown using the optical floating-zone method. The starting materials, Ho$_{2}$O$_{3}$ and TiO$_{2}$ powders, were mixed in a stoichiometric ratio and then annealed in air at 1450$^{\circ}$C for 40 h before growth in an image furnace. The growth was achieved with a pulling speed of 6 mm/h under 5 atm oxygen pressure. The crystals were oriented by Laue back diffraction. The structural and compositional analyses of the samples were performed with an Oxford Diffraction Xcalibur2 KMW150CCD and a JEOL 7401 FE-SEM with EDAX Genesis XM4 spectroscopy, respectively, in order to verify the growth integrity and inspect for possible stoichiometric imbalance. The X-ray diffraction spectra confirm the cubic symmetry of the crystals ($a=b=c=10.1$ Å; $\alpha=\beta=\gamma=90^{\circ}$), and no indication of impurity phases has been found.
EDS compositional analysis likewise confirms the 1:1 stoichiometric ratio between Ho and Ti atoms. The room temperature polarized and unpolarized Raman spectra were measured using a Horiba JY LabRam HR800 Raman spectrograph in the back-scattered geometry, coupled to three lasers supplying excitation wavelengths of 785 nm, 633 nm, 514 nm, 488 nm, 458 nm and 364 nm. It uses appropriate bandpass and edge filters to couple the laser beam into the optical axis of an Olympus BX30M microscope, equipped with a 50x objective, and eventually filters out the scattered laser light before the Raman signal enters the spectrograph. The LabRam HR800 was equipped with 600 and 1800 lines/mm gratings, providing a resolution of about 2–3 cm$^{-1}$ in the measurement region. The grating-stabilized diode laser providing the 785 nm excitation was operated at 80 mW (15 mW at the sample), whereas the Melles-Griot 633 nm helium-neon laser was operated at 17 mW output power (6 mW at the sample). The Coherent I-308 argon-ion laser system providing the 514 nm, 488 nm, 458 nm and 364 nm excitation lines was operated at about 20–30 mW of average power output. The room temperature absorption measurements on polished HTO SC were performed using an Ocean Optics USB2000 spectrometer in the range of 10,000–29,000 cm$^{-1}$ (345–1000 nm). The spectrum was collected using a 600 lines/mm grating and a 25 $\mu$m entrance slit, giving a spectral resolution of about 1.5 nm FWHM (full width at half maximum) in the measurement region.

MEASUREMENTS AND RESULTS
========================

HTO has a cubic structure (lattice parameter 10.1 Å), crystallizing in the Fd$\bar{3}$m space group with eight formula units per unit cell. The eight-coordinated Ho$^{3+}$ ions are located at *16c* sites whereas the six-coordinated Ti$^{4+}$ ions are located at *16d* sites, as shown in panel A) of Fig. \[HTO\], both forming separate networks of corner-sharing tetrahedra.
The oxygen anions of one kind occupy *48f* sites, coordinating with two Ho$^{3+}$ and two Ti$^{4+}$ ions, whereas the oxygen anions of the other kind occupy *8a* sites, being tetrahedrally coordinated with four Ho$^{3+}$ ions, as also shown in panel A) of Fig. \[HTO\].[@GardnerJ] Based on the lattice parameters, atomic Wyckoff positions and lattice symmetry shown in panel B) of Fig. \[HTO\], the entire set of degrees of freedom is expressed in terms of the following irreducible point-group representation at the center of the Brillouin zone. $$\begin{aligned} \Gamma_{3N}= {} & \textcolor{red}{1A_{1g}+2E_{g}+12T_{2g}} \textcolor{blue}{+ 24T_{1u}}\\ & \textcolor{green}{+ 6T_{1g}+3A_{2u}+6E_{u}+12T_{2u}} \end{aligned}$$ Here *N* denotes the total number of atoms in the primitive cell, which is 22 (4 Ho, 4 Ti, 12 *f*-type and 2 *a*-type O), as also shown in panel B) of Fig. \[HTO\]. All terms in red represent Raman-active modes (15 in total) while all in blue are infrared-active modes (24 in total, including the 3 acoustic modes). The remaining 27 modes in green are optically inactive. The *E$_{g}$* and *T$_{2g}$* modes are doubly and triply degenerate, respectively, so one should be able to locate a total of 6 distinct first-order Raman-active modes in any unpolarized Raman measurement. In addition, all Raman-active vibrational modes involve oxygen-atom dynamics only (1 *A$_{1g}$*, 1 *E$_{g}$* and 3 *T$_{2g}$* modes from the oxygen anions at the *48f* sites, and 1 *T$_{2g}$* mode from the oxygen anions at the *8a* sites). As both cation sites possess inversion symmetry, any cation vibrational mode is Raman inactive.

Non-resonant unpolarized Raman scattering
-----------------------------------------

Unpolarized Raman measurements were performed on HTO SC with (111) and (001) orientations using the 785 nm, 514 nm, 488 nm and 364 nm laser excitation lines, and the results are shown in Fig. \[unpol\].
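The bookkeeping in the decomposition of $\Gamma_{3N}$ can be checked mechanically: the coefficients count degrees of freedom (degeneracy included), so they must sum to $3N = 66$, and dividing each coefficient by the dimension of its irreducible representation recovers the 6 distinct first-order Raman-active modes. A short sanity check:

```python
# Consistency check of the zone-center decomposition: coefficients
# count degrees of freedom, so they must add up to 3N = 66 for the
# N = 22 atoms of the primitive cell, and the Raman-active part
# (A1g + Eg + T2g) must yield 6 distinct first-order modes.
DIM = {"A1g": 1, "Eg": 2, "T2g": 3, "T1u": 3,
       "T1g": 3, "A2u": 1, "Eu": 2, "T2u": 3}
GAMMA = {"A1g": 1, "Eg": 2, "T2g": 12, "T1u": 24,
         "T1g": 6, "A2u": 3, "Eu": 6, "T2u": 12}  # degrees of freedom

assert sum(GAMMA.values()) == 3 * 22  # all 66 degrees of freedom

RAMAN = ["A1g", "Eg", "T2g"]
distinct_raman_modes = sum(GAMMA[s] // DIM[s] for s in RAMAN)
assert distinct_raman_modes == 6  # 1 A1g + 1 Eg + 4 T2g
```

The same division applied to the blue (infrared) part gives 8 triply degenerate $T_{1u}$ branches, one of which is the acoustic mode.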
There have been several experimental and first-principles studies of the vibrational properties of HTO SC showing inconsistencies with one another and variations depending on the sample quality.[@Lummen; @Mkaczka; @KUMAR; @Ruminy; @Kushwaha] Although the Raman spectra for the (001) and (111) crystals were very similar, using multiple laser lines on two differently oriented crystals has helped us clearly identify all six first-order Raman-active modes. More details on their symmetries are discussed later in the polarized Raman measurements section. All Raman spectra of (111) HTO SC in Fig. \[unpol\] show consistency in terms of mode locations and relative intensities except for the 364 nm laser line. The spectrum for 364 nm excitation highlights two additional Raman modes clearly resolved above the background at 450 cm$^{-1}$ and 570 cm$^{-1}$, which are not obvious in the other spectra and have not been experimentally observed previously.[@Lummen; @Mkaczka] The inset plot shows the 364 nm Raman spectrum for (001) HTO SC, where the modes at 310 cm$^{-1}$ and 330 cm$^{-1}$ are partially resolved; they are not obvious in the other spectra due to thermal broadening of the phonons at room temperature. Based on prior studies, the modes at 330 cm$^{-1}$ and 520 cm$^{-1}$ have been assigned *E$_{g}$* and *A$_{1g}$* symmetries respectively, whereas the modes with *T$_{2g}$* symmetry are located at 220 cm$^{-1}$, 310 cm$^{-1}$, 450 cm$^{-1}$ and 570 cm$^{-1}$.[@Lummen; @Mkaczka] The mode at 720 cm$^{-1}$ has *A$_{1g}$* symmetry and results from a higher-order scattering process common in rare-earth pyrochlores; the details of such higher-order processes remain to be established.

Absorption spectra and resonant polarized Raman scattering
----------------------------------------------------------

Room temperature absorption measurements were performed on (111) and (001) HTO SC to locate the Ho$^{3+}$ atomic levels. This was essential to find appropriate Raman laser lines to probe the system into resonance.
Such intra-atomic resonance alters the nature of the spin-orbit coupling and the exchange interaction between localized spins, which helps us to understand the mechanism of phonon scattering mediated by magnons and spin disorder.[@Merlin; @Guntherodt; @Zeyher; @Sugai; @Chubukov; @Rubhausen; @Blumberg] Fig. \[Absorption\] shows the absorption spectrum for (111) HTO SC, where transitions between several excited states and their crystal-field (CF) split levels are displayed. The spectrum is in good agreement with previously reported absorption results.[@MACALIK] Based on the proposed atomic energy level scheme,[@Martin; @Carnall; @dieke] several relevant transitions have been identified from the $^{5}I_{8}$ ground state (S=2, L=6, J=8) to other excited states. The 633 nm laser line excites the system into the $^5F_5$ state (S=2, L=3, J=5) whereas the 458 nm laser line excites it into the $^3K_8$ spin-orbit manifold (S=1, L=7, J=8). These spin-orbit coupled states consist of a finite number of closely spaced crystal-field energy levels, seen in the absorption spectrum as sharp spikes. Since the excited states with different orbital and total angular momenta affect the overlap integral with the neighboring oxygen anions and the exchange interaction between Ho$^{3+}$ ions in the tetrahedral network, significant changes in the resonant Raman scattering cross-section are expected. ![image](Absorption.jpg){width="\textwidth"} Polarized resonant Raman measurements have been performed on (001) and (111) HTO SC in back-scattered geometry for several polarizer-analyzer configurations using the 633 nm laser line, as shown in Fig. \[633\]. Panel A) shows spectra for the (001) SC with $\vec{E}_{in}$ parallel to the \[010\] axis while panel B) shows spectra for the (111) SC with $\vec{E}_{in}$ parallel to the \[1$\bar{1}$0\] axis.
The analyzer transmission axis is rotated in 30$^{\circ}$ steps for both measurements, where the 0$^{\circ}$ spectrum represents $\vec{E}_{in}$ being parallel to the analyzer transmission axis. All spectra have been fitted with a Lorentzian model using HORIBA Scientific’s LabSpec 6 software platform, and the 0$^{\circ}$ fitted curve is included for both crystals. Similar spectra and their fits, with $\vec{E}_{in}$ parallel to the other perpendicular axis \[11$\bar{2}$\] and at 45$^{\circ}$ away from \[11$\bar{2}$\] for the (111) SC, and with $\vec{E}_{in}$ parallel to \[100\] and \[110\] for the (001) SC, were also collected and are shown in the supplementary section in Fig. \[111633f\] and Fig. \[001633f\], respectively. In addition to a few weak ones, all collected spectra for every polarizer-analyzer configuration show twelve distinct anomalous modes, at 180 cm$^{-1}$, 300 cm$^{-1}$, 390 cm$^{-1}$, 420 cm$^{-1}$, 520 cm$^{-1}$, 620 cm$^{-1}$, 680 cm$^{-1}$, 710 cm$^{-1}$, 750 cm$^{-1}$, 820 cm$^{-1}$, 900 cm$^{-1}$ and 945 cm$^{-1}$, with extremely relaxed symmetry. All modes above 300 cm$^{-1}$ show a monotonic decrease in oscillator strength with varying depolarization ratio. When comparing the 0$^{\circ}$ (parallel polarized) with the 90$^{\circ}$ (perpendicular polarized) spectrum, none of the modes, including the weak ones, completely disappear or appear for either the (001) or the (111) HTO SC for any of the six crystallographic measurement directions of the incident polarization $\vec{E}_{in}$. A detailed analysis of their behavior is given in the next section. In order to further investigate this anomalous scattering, another resonant laser line, 458 nm, was used on the same samples under identical experimental conditions. The measurement results are shown in Fig. \[458\_1\] for the low frequency region (100–900 cm$^{-1}$) and in Fig. \[458\_2\] for the high frequency region (1000–2000 cm$^{-1}$), along with the 0$^{\circ}$ spectrum fitted using a Lorentzian model.
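The fits referred to above were done in LabSpec 6; the same sum-of-Lorentzians model (plus a flat background) can be reproduced with standard least-squares tools. A minimal sketch, with synthetic peak positions and amplitudes standing in for measured values:

```python
# Sum-of-Lorentzians spectral model fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    """Single Lorentzian line of amplitude amp, center x0, HWHM gamma."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def model(x, *p):
    """p = [background, amp1, x01, gamma1, amp2, x02, gamma2, ...]"""
    y = np.full_like(x, p[0], dtype=float)
    for i in range(1, len(p), 3):
        y += lorentzian(x, *p[i:i + 3])
    return y

def fit_spectrum(x, y, guesses, background=0.0):
    """guesses: list of (amp, center, width) triples for each peak."""
    p0 = [background] + [v for g in guesses for v in g]
    popt, _ = curve_fit(model, x, y, p0=p0)
    return popt
```

The fitted amplitudes and widths are what enter the oscillator-strength comparisons across polarizer-analyzer configurations discussed below.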
The low frequency spectra in Fig. \[458\_1\] look very similar to the non-resonant Raman spectra obtained with the other laser lines in Fig. \[unpol\]. The polarization selection rules for Raman scattering in the Fd$\bar{3}$m space group suggest that, in the back-scattered configuration for the (001) SC, the perpendicular polarized output intensity should contain only *T$_{2g}$* modes, while the parallel polarized output intensity profile should contain both *A$_{1g}$* and *E$_{g}$* modes. Comparing with panel A) of Fig. \[458\_1\], the peak at 310 cm$^{-1}$ in the 90$^{\circ}$ spectrum must therefore have *T$_{2g}$* symmetry. Similarly, the 520 cm$^{-1}$ and 330 cm$^{-1}$ modes have *A$_{1g}$* and *E$_{g}$* symmetry, respectively. Panel B) of Fig. \[458\_1\] does not resolve the symmetry of the modes as clearly, since the Raman tensor is transformed with respect to another set of basis vectors and has multiple non-zero off-diagonal elements; hence the intensity profiles are mixed. However, one can still see first-order Raman modes with some selectivity in this frequency range. While the low frequency Raman spectra show the expected selectivity among the fundamental Raman modes, the spectra in the high frequency region (Fig. \[458\_2\]) show multiple new modes with extremely relaxed selectivity. The fitting results identify at least twelve anomalous modes with negligible selectivity with respect to the incident polarization $\vec{E}_{in}$ or the relative orientation of the analyzer transmission axis. These modes are located around 1150 cm$^{-1}$, 1200 cm$^{-1}$, 1260 cm$^{-1}$, 1310 cm$^{-1}$, 1360 cm$^{-1}$, 1460 cm$^{-1}$, 1540 cm$^{-1}$, 1590 cm$^{-1}$, 1650 cm$^{-1}$, 1710 cm$^{-1}$, 1825 cm$^{-1}$ and 1870 cm$^{-1}$. Many of these modes are better resolved and show more sensitivity to the analyzer orientation in panel B), for the (111) HTO SC measurements. The measurements were repeated for six different crystallographic directions of the incident polarization $\vec{E}_{in}$ in the (001) and (111) HTO SC.
Those spectra are included in the supplementary section in Fig. \[111458\], Fig. \[111458f\], Fig. \[001458\] and Fig. \[001458f\]. They are quite similar in terms of mode locations and relative intensities as the analyzer is rotated. Other similarly grown single crystals from the pyrochlore family were also measured under identical experimental conditions in order to exclude the possibility of set-up-induced artifacts such as an unstable or non-monochromatic laser source, filter leakage, a damaged polarizer-analyzer optic axis, or optical-grating-induced errors. The measurement results are shown in Fig. \[all\] for two different laser lines. The non-resonant Raman results for 514 nm are included in the supplementary section in Fig. \[514nonpol\]; they look very similar in terms of mode locations, line shapes and relative intensity profiles for all the rare-earth pyrochlores. In fact, the 633 nm spectra in panel A) and the 458 nm spectra in panel B) look very similar to the 514 nm spectra for all pyrochlores except Ho$_2$Ti$_2$O$_7$. This observation unambiguously indicates a significant change in the Raman scattering cross-section under resonance conditions for the HTO pyrochlore. The measurements have been repeated on several batches of pyrochlore samples with different sizes and roughnesses. All spectra appear to be independent of sample variation. The Raman spectra using the 633 nm line were collected at several filter-attenuation settings for (111) HTO SC, as shown in the supplemental section in Fig. \[Intensity\]. The output signal at the two major peaks (820 cm$^{-1}$ and 945 cm$^{-1}$) has been analyzed and is shown in the inset graph. A linear trend between the input intensity and the corresponding output intensity suggests the absence of any nonlinear processes such as multiphoton absorption, stimulated Raman scattering, thermal effects in the sample itself, or nonlinear optical effects from the optical components in the set-up.
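As a reference point for the selection rules invoked above, the ideal first-order behavior for back-scattering along \[001\] follows from the standard Raman tensors of the cubic point group: $A_{1g}$ and $E_{g}$ contribute only to the parallel channel and $T_{2g}$ only to the crossed channel. A short numerical check (tensor amplitudes $a$, $b$, $d$ are arbitrary placeholders):

```python
# Standard cubic (Oh) Raman tensors; intensity = sum over degenerate
# components of |e_out . R . e_in|^2 for a given symmetry species.
import numpy as np

a, b, d = 1.0, 1.0, 1.0
TENSORS = {
    "A1g": [np.diag([a, a, a])],
    "Eg":  [np.diag([b, b, -2 * b]),
            np.sqrt(3) * np.diag([b, -b, 0.0])],
    "T2g": [np.array([[0, d, 0], [d, 0, 0], [0, 0, 0.0]]),
            np.array([[0, 0, d], [0, 0, 0], [d, 0, 0.0]]),
            np.array([[0, 0, 0], [0, 0, d], [0, d, 0.0]])],
}

def intensity(sym, e_in, e_out):
    return sum(abs(e_out @ R @ e_in) ** 2 for R in TENSORS[sym])

# Back-scattering along [001]: incident polarization x, analyzer x or y.
x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
assert intensity("T2g", x, x) == 0 and intensity("T2g", x, y) > 0
assert intensity("A1g", x, y) == 0 and intensity("A1g", x, x) > 0
assert intensity("Eg", x, y) == 0 and intensity("Eg", x, x) > 0
```

The anomalous modes discussed below clearly do not follow these ideal rules, which is what motivates the term "extremely relaxed selectivity."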
Prolonged exposure of the samples to the laser excitation does not result in any obvious spectral or background changes, ruling out the possibility of sample surface damage or the well-documented fluorescence background issues of the resonant Raman technique.[@Yaney; @Kostamovaara; @McCain; @Mazilu; @smithz; @Matousek] These intriguing modes resulting from the inelastic scattering of the incident photons could be an outcome of crystal vibrations interacting with spin disorder, non-trivial spin correlations in the excited state, or the crystal field.[@Merlin; @Guntherodt; @Zeyher; @GRUNBERG; @SCHLEGEL; @TEKIPPE] As discussed in the next section, several rare-earth magnetic compounds exhibit such scattering phenomena and, when the resonance conditions are met, do receive resonance enhancement of several orders of magnitude. Building on similar previous studies, the next section provides a qualitative discussion of such possibilities and of the research problems that need to be explored further. Moreover, the rare-earth atomic level databases suggest that resonance conditions could not be achieved with any accessible laser line for any of the other available pyrochlores.[@Weber; @CAVALLI; @Martin; @Carnall; @dieke]

ANALYSIS AND DISCUSSION
=======================

Several Raman studies on rare-earth pyrochlores have reported spectral features originating from transitions between the crystal-field split states of the $^{5}I_{8}$ ground state manifold in the non-resonant condition.[@Mkaczka; @Lummen; @Sanjuan] Any electronic transition within the same $\left|L,J\right\rangle$ states is generally dipole forbidden and, unless the CF states are coupled with other degrees of freedom such as phonons or spin disorder, such transitions do not satisfy the momentum conservation rule.
This explains their extremely weak intensity in comparison to the Raman modes.[@Mkaczka; @Lummen; @Sanjuan] In HTO, the overall CF splitting of the $^{5}I_{8}$ ground state is about 630 cm$^{-1}$,[@Rosenkranz] and the 633 nm (15,800 cm$^{-1}$) laser line excites electrons from the ground state to the $^{5}F_{5}$ states. If the excited electrons relax back to any of the $^{5}I_{8}$ CF states, the energy of such fluorescent output photons should lie in the range of 15,170–15,800 cm$^{-1}$, as shown in panel A) of Fig. \[electronicbandstructure\]. Note that the strong Raman Stokes lines around 710 cm$^{-1}$, 820 cm$^{-1}$ and 945 cm$^{-1}$ shown in Fig. \[633\] lie outside this range. Similarly, in the case of the 458 nm (21,835 cm$^{-1}$) laser line, electrons are excited into the $^{3}K_{8}$ states, and relaxation back to any of the $^{5}I_{8}$ ground states should generate photons in the energy range of 21,205–21,835 cm$^{-1}$, as shown in panel B) of Fig. \[electronicbandstructure\]. All strong Stokes modes between 1150 cm$^{-1}$ and 1870 cm$^{-1}$, shown in Fig. \[458\_2\], lie outside the aforementioned energy range. This exercise convinces us that the spectra observed in the resonant Raman scattering cases, as shown in Fig. \[633\] and Fig. \[458\_2\], cannot be explained through a hot-luminescence relaxation mechanism. ![image](ana_633.pdf){width="\textwidth"} ![image](ana_458.pdf){width="\textwidth"} All spectra of HTO under resonance conditions were fitted using a Lorentzian model to fully characterize the dynamical parameters of the anomalous modes. The results are summarized in Fig. \[ana\_633\] for the 633 nm spectra and in Fig. \[ana\_458\] for the 458 nm spectra, showing the polarization dependence of all strong and well resolved modes for the crystals of (001) and (111) orientations. The incident polarization $\vec{E}_{in}$ is parallel to \[010\] for the (001) SC whereas for the (111) SC, $\vec{E}_{in}$ is parallel to \[1$\bar{1}$0\].
The *x*-axis denotes the angle between $\vec{E}_{in}$ and the analyzer transmission axis. For both analyses, the strength of each mode is normalized with respect to the sum of all oscillator strengths. In the case of the 633 nm Raman spectra, except for a few high frequency modes, the majority of the modes lie in the range where the first-order fundamental modes of HTO are theoretically predicted[@KUMAR; @Ruminy; @Kushwaha] and, where optically active, experimentally found.[@Lummen; @Mkaczka] However, it must be emphasized that although some of these modes may have similar vibrational frequencies, none of them behave as first-order Raman modes when compared to the spectra shown in Fig. \[unpol\]. All modes below 300 cm$^{-1}$ show no change in oscillator strength, but all higher modes decrease in strength monotonically as the analyzer rotates from the parallel to the perpendicular polarization configuration. When the oscillator strengths are normalized as shown in Fig. \[ana\_633\], the modes at 180 cm$^{-1}$, 300 cm$^{-1}$, 390 cm$^{-1}$, 420 cm$^{-1}$, 520 cm$^{-1}$ and 945 cm$^{-1}$ seem to prefer the perpendicular polarization configuration, whereas the others, at 620 cm$^{-1}$, 680 cm$^{-1}$, 710 cm$^{-1}$, 750 cm$^{-1}$, 820 cm$^{-1}$ and 900 cm$^{-1}$, prefer the parallel configuration for both crystals. The 458 nm spectral analysis has been conducted separately in two obvious frequency ranges. The low frequency spectra in Fig. \[458\_1\] show the expected trends in terms of symmetry, lineshape and relative intensity profile, just like the first-order Raman modes shown in Fig. \[unpol\]. However, the general trends of all modes in the higher frequency range, as shown in Fig. \[458\_2\], are remarkably different. Since these resonance-induced anomalous modes have not been reported in any previous study, we have performed a detailed spectral analysis for the high frequency region; the results are shown in Fig. \[ana\_458\].
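The earlier hot-luminescence exclusion (Fig. \[electronicbandstructure\]) reduces to simple arithmetic: a Stokes photon at shift $\Delta$ has energy $E_{laser}-\Delta$, which must fall inside $[E_{laser}-630\ \mathrm{cm^{-1}},\, E_{laser}]$ to be explainable as relaxation into the CF-split $^{5}I_{8}$ manifold. A minimal numerical version, with the energies quoted in the text:

```python
# Check that the strong anomalous Stokes lines fall below the
# hot-luminescence window [E_laser - 630 cm^-1, E_laser].
CF_SPLITTING = 630.0  # overall CF splitting of the 5I8 ground state, cm^-1

def outside_luminescence_window(laser_cm1, stokes_shift_cm1):
    scattered = laser_cm1 - stokes_shift_cm1
    return not (laser_cm1 - CF_SPLITTING <= scattered <= laser_cm1)

E_633 = 15800.0  # 633 nm line, cm^-1
E_458 = 21835.0  # 458 nm line, cm^-1

# Strong Stokes modes from Fig. [633] and Fig. [458_2]:
assert all(outside_luminescence_window(E_633, s) for s in (710, 820, 945))
assert all(outside_luminescence_window(E_458, s) for s in (1150, 1460, 1870))
```

Any shift larger than 630 cm$^{-1}$ necessarily lands below the window, which is why all twelve anomalous modes at either excitation fail the luminescence interpretation.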
While the strength of all modes decreases monotonically as the analyzer rotates towards the perpendicular polarization configuration, the normalized strengths of many modes show very weak polarization dependence. Modes at 1150 cm$^{-1}$, 1200 cm$^{-1}$, 1825 cm$^{-1}$ and 1870 cm$^{-1}$ prefer the parallel configuration whereas modes at 1360 cm$^{-1}$ and 1460 cm$^{-1}$ prefer the perpendicular polarization configuration for both crystals. While modes at 1540 cm$^{-1}$, 1590 cm$^{-1}$ and 1710 cm$^{-1}$ show no apparent polarization dependence within the error bars for either crystal, modes at 1260 cm$^{-1}$, 1310 cm$^{-1}$ and 1650 cm$^{-1}$ show different behavior for the (001) and (111) crystals. The overall behavior of these phonon modes suggests that while they all show higher strength in the parallel configuration, there is a certain selectivity or polarization preference among them. Its specifics possibly lie within the details of the polarizability tensor and the chemical bonds involved in the excited states when the resonance condition is met. In systems such as cuprates with strong electron-phonon coupling, and in rare-earth chalcogenides, phosphates or vanadates in which phonons are known to be coupled with spins and crystal fields, the phonon symmetry gets modified and revised selection rules for phonon excitations come into effect.[@SugaiShunji; @Merlin; @Guntherodt; @Zeyher] In addition, if one also considers the possibility of higher-order Raman processes under the condition of resonance enhancement, a significant change in the single-phonon Raman scattering cross-section could be expected. This would essentially allow the spectrum to exhibit additional features, potentially coming from the entire Brillouin zone (BZ), as long as momentum conservation is obeyed in the process. 
Two-phonon peaks could exhibit larger intensity than single-phonon peaks due to the forbidden selection rule of Raman-inactive modes for scattering.[@SugaiShunji] Given that Ho$_2$Ti$_2$O$_7$ contains 26, 44, 33 and 33 distinct modes of vibration around the $\Gamma$, *L*, *X* and *W* symmetry points, respectively, as shown in the supplemental section in Fig. \[BZ\], one cannot rule out the possibility of higher-order scattering processes being in effect. However, in the absence of details on the phonon density of states and the vibrational band dispersion inside the Brillouin zone, any prediction on higher-order Raman scattering is highly speculative. Using O-18 or Ho-163 isotopes for growth could help identify the *k*-point origin of these modes and their associated symmetry inside the Brillouin zone. Nevertheless, higher-order phonon modes in many materials have been reported to possess extremely non-Lorentzian lineshapes,[@Gillet; @Slawomir; @Tenne] while all our spectra for any of six different crystallographic directions have been fitted using well-defined Lorentzian modes. In addition, the anomalous modes in Fig. \[633\] and Fig. \[458\_2\] do not seem to obey the crystal-symmetry-imposed selection rule, and none of the overtones of the first-order Raman modes appear in the spectra. Although certain rare-earth compounds such as ytterbium chalcogenides prefer the parallel polarized configuration for higher-order scattering, irrespective of the crystal orientation, as also observed in our measurements,[@VITINS; @Vitinsj; @MerlinHumphreys] all other spectral features suggest that higher-order scattering phenomena may not be the leading factor causing the strikingly different Raman cross-section. In rare-earth chalcogenides such as EuX (X = O, S, Se, Te), optical phonons from the zone boundary have been reported as the dominant signal in the first-order Raman scattering cross-section. 
The momentum conservation in such scattering events is accomplished through the flipping of electron spins in the spin-disordered paramagnetic phase, resulting in a magnon excitation.[@GRUNBERG; @SCHLEGEL; @TEKIPPE] In principle, this mechanism still allows all the modes to maintain a Lorentzian lineshape, as happens in our case, with just another spin-dependent scaling prefactor affecting the overall intensity profile. The fact that the measured scattering intensity in such chalcogenides overlaps with the weighted one-phonon and one-magnon density of states reflects the simultaneous excitation of phonon-magnon quasi-particles.[@Merlin; @Guntherodt; @Zeyher] A similar spin-flipping mechanism was recently reported in both spin-ice titanate pyrochlores, where the monopole dynamics at high temperature is quantitatively described through a possible coupling between crystal-field and optical-phonon excitations followed by phonon-mediated spin-flipping, as observed through quasielastic neutron scattering.[@RuminyFennell] More details about the spin density of states in pyrochlore structures could pave the way toward concurrent excitation of phonon-magnon quasi-particle pairs. A systematic temperature-dependent resonant Raman scattering experiment could help further investigate these anomalous modes, since the lineshapes of modes evolve very differently with temperature if the phonon is coupled with the crystal field, or some spin-like degree of freedom, as compared to simply undergoing anharmonic phonon-phonon interaction.[@Mkaczka] On the other hand, magnetic-field-dependent resonant Raman studies would allow the tuning of the phonon-magnon coupling, resulting in a modified field-dependent spin-flipping mechanism in the paramagnetic disordered phase. Cooling below the critical temperature of some of the ferromagnetic chalcogenides results in a strong quenching of the scattering intensity along with a nonlinear peak shift near the critical temperature. 
Moreover, the symmetry of the spin system dictates the symmetry of the scattered phonon, as observed through the zone folding of phonon branches over the magnetic Brillouin zone.[@Snow; @Nawrocki; @RAY] Finding such key signatures in Ho$_2$Ti$_2$O$_7$ through temperature- and field-dependent Raman studies will further strengthen the concept of inelastic scattering through concurrent phonon-magnon excitation. In addition, more rare-earth pyrochlores, spin-ice Dy$_2$Ti$_2$O$_7$ in particular, should be tested under resonant conditions to verify whether anomalous phonon scattering is correlated with the high-temperature monopole dynamics of the frustrated 2-in/2-out spin-ice paramagnetic disordered phase. CONCLUSIONS =========== Room-temperature polarized Raman measurements were performed on (001) and (111) HTO SC, along with a few other rare-earth pyrochlores, near and away from resonance conditions. In addition to the previously well-documented first-order Raman modes, several new phonon modes have been identified. These anomalous phonon modes have a strikingly different Raman cross-section from the non-resonant condition. Measurements under identical experimental conditions but at different resonant excited states, accessed through separate laser excitations, also resulted in very different Raman cross-sections, strongly indicating the role played by coupling interactions with other degrees of freedom such as spin and crystal field. A systematic Lorentzian fitting routine across different polarization configurations and crystallographic orientations has helped in identifying some selectivity among the anomalous phonon modes. ACKNOWLEDGEMENT =============== The authors wish to thank the NSF for support under grant No. DMR-1350002 (Q.H. and H.D.Z.). We also acknowledge the Material Characterization Laboratory in the Department of Chemistry & Biochemistry, Florida State University for providing instrumentation. 
***Supplemental Material:* Probing Anomalous Inelastic Scattering in Spin-ice Ho$_2$Ti$_2$O$_7$ through Resonant Raman Spectroscopy** ![image](111633f.pdf){width="80.00000%"} ![image](001633f.pdf){width="80.00000%"} ![image](111458.pdf){width="80.00000%"} ![image](111458f.pdf){width="80.00000%"} ![image](001458.pdf){width="80.00000%"} ![image](001458f.pdf){width="80.00000%"} ![image](514nonpol.pdf){width="\textwidth"} Non-resonant Raman scattering is very similar for all pyrochlores with the 514 nm line, all showing the characteristic phonon modes predicted by factor group analysis. Several experimental and theoretical studies on rare-earth pyrochlores have shown similar results.[@Lummen; @Mkaczka; @KUMAR; @Ruminy; @Kushwaha; @Bi; @Saha; @Sanjuan; @BAE; @Lee] However, when resonance is met using the 633 nm and 458 nm laser lines for Ho$_2$Ti$_2$O$_7$, a strikingly different Raman cross-section is observed, as explained in the main text. ![image](Intensity.pdf){width="\textwidth"} Resonant Raman scattering measurements on the Ho$_2$Ti$_2$O$_7$ (111) SC were performed using the 633 nm line at several filter-attenuation settings. The output signal at two major peaks was analyzed and is shown in the inset graph. A linear trend between the varying input intensity and the corresponding output intensity suggests the absence of any nonlinear processes such as multiphoton absorption, stimulated Raman scattering, thermal effects on the sample itself or nonlinear optical effects from the optical components in the set-up. ![image](BZ.jpg){width="\textwidth"} $$\Gamma_{3N}= 1A_{1g}^{1}+2E_{g}^{2}+12T_{2g}^{3}+ 24T_{1u}^{3}+ 6T_{1g}^{3}+3A_{2u}^{1}+6E_{u}^{2}+12T_{2u}^{3}$$ $$\textit{L}= 32L_{1+}^{4}+12L_{2+}^{4}+88L_{3+}^{8}+12L_{1-}^{4}+32L_{2-}^{4}+88L_{3-}^{8}$$ $$\textit{W}= 192W_{1}^{12}+204W_{2}^{12}$$ $$\textit{X}= 60X_{1}^{6}+30X_{2}^{6}+48X_{3}^{6}+60X_{4}^{6}$$
--- abstract: 'We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several familiar, explicit game interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. The simplicity of the assumptions underlying the macroscopic equations suggests that these behaviors should be expected broadly in collective adaptation. We also analyze the adaptation dynamics from an information-theoretic viewpoint and discuss self-organization induced by information flux between agents, giving a novel view of collective adaptation.' author: - Yuzuru Sato - Eizo Akiyama - 'James P. Crutchfield' bibliography: - 'SDCA\_pre.bib' title: Stability and Diversity in Collective Adaptation --- Introduction ============ Collective behavior in groups of adaptive systems is an important and cross-cutting topic that appears under various guises in many fields, including biology, neurosciences, computer science, and social science. In all these adaptive systems, individual agents interact with one another and modify their behaviors according to the information they receive through those interactions. Often, though, collective behaviors emerge that are beyond the individual agent’s perceptual capabilities and that sometimes frustrate the satisfaction of local goals. With competitive interactions, dynamic adaptation can produce rich and unexpected behaviors. 
This kind of mutual adaptation has been discussed, for example, in studies of biological group interaction [@Win80; @Hof88; @Cama01a], interactive learning [@Bat87; @Ros87; @Tai99], large-scale adaptive systems [@Simo96a; @Broo95a; @Holl92a], and learning in games [@Bor97; @Fud98]. Here we develop a class of coupled differential equations for mutual adaptation in agent collectives—systems in which agents learn how to act in their environment and with other agents through reinforcement of their actions. We show that the adaptive behavior in agent collectives, in special cases, reduces to a generalized form of multipopulation replicator equations and, generally, can be viewed as a kind of information-theoretic self-organization in a collective adaptive system. Suppose that many agents interact with an environment and each independently attempts to adjust its behavior to the environment based on its sensory stimuli. The environment consists of other agents and other exogenous influences. The agents could be humans, animals, or machines, but we make no assumptions about their detailed internal structures. That is, the central hypothesis in the following is that collective adaptation is a dynamical behavior driven by agents’ environment-mediated interactions. By separating the time scales of change in the environment, of agents’ adaptation, and of agent-agent interactions, our models describe, not the deterministic decision-making itself, but the temporal change in the probability distribution of choices. Related Work ------------ This approach should be compared and contrasted with the game-theoretic view [@Neu44]. First, classical game theory often assumes that players have knowledge of the entire environmental structure and of other players’ decision-making processes. Our adaptive agents, however, have no knowledge of a game in which they might be playing. 
Thus, unlike classical game theory, in our setting there is no bird’s eye view for the entire collective that is available to the agents. Agents have only a myopic model of the environment, since any information external to them is given implicitly via the reinforcements for their action choices. Second, although we employ game-theoretic concepts such as Nash equilibria, we focus almost exclusively on *dynamics*—transients, attractors, and so on—of collective adaptation, while, naturally, making contact with the *statics* familiar from game theory. Finally, despite the differences, game structures can be introduced as a set of parameters corresponding to approximated static environments. While replicator dynamics were introduced originally for evolutionary game theory [@Tay78; @Tay79; @Weib95a], the relationship between learning with reinforcement and replicator equations has been discussed only recently [@Bor97; @Fud98]. Briefly stated, in our model the state space represents an individual agent’s probability distribution to choose actions and the adaptation equations describe the temporal evolution of choice probabilities as the agents interact. Here, we extend these considerations to collective adaptation, introducing the theory behind a previously reported model [@Sat02; @Sat03]. The overall approach, though, establishes a general framework for dynamical-systems modeling and analysis of adaptive behavior in collectives. It is important to emphasize that our framework goes beyond the multipopulation replicator equations and asymmetric game dynamics since it does not require a static environment (cf. Ref. [@Akiy00a; @Akiy02a] for dynamic environments) and it includes the key element of the temporal loss of memory. We model adaptation in terms of the distribution of agents’ choices, developing a set of differential equations that are a continuous-time limit of a discrete-time stochastic process; cf. Ref. [@Ito79]. 
We spend some time discussing the origin of action probabilities, since this is necessary to understand the model variables and also to clarify the limits that we invoke to arrive at our model. One is tempted to give a game-theoretic interpretation of the model and its development. For example, the mixed strategies in game play are often interpreted as weights over all (complete plans of) actions. However, the game-theoretic view is inappropriate for analyzing local, myopic adaptation and the time evolution of collective behavior. Another interpretation of our use of action probabilities comes from regarding them as frequencies of action choices. In this view, one needs long-time trials so that the frequencies take on statistical validity for an agent. Short of this, they would be dominated by fluctuations, due to undersampling. In particular, one requires that stable limit distributions exist. Moreover, the underlying deterministic dynamics of adaptation should be ergodic and have strong mixing properties. Finally, considering agent-agent interactions, one needs to assume that their adaptation is very slow compared to interaction dynamics. For rapid, say, real-time adaptation, these assumptions would be invalid. Nonetheless, they are appropriate for long-term reinforcement, as found in learning motion through iterated exercise and learning customs through social interaction. Synopsis -------- The approach we take is ultimately phenomenological. We are reminded of the reaction-diffusion models of biological morphogenesis introduced originally in Ref. [@Tur52]. There, the detailed processes of biological development and pattern formation were abstracted, since their biochemical basis was (and still is) largely unknown, and a behavioral phenomenology was developed on this basis. 
Similarly, we abstract the detailed and unknown perceptual processes that underlie agent adaptation and construct a phenomenology that captures adaptive behavior at a larger scale, in agent collectives. The phenomenology that we develop for this is one based on communications systems. Agents in a collective are confronted with the same three problems of communication posed by Weaver in the founding work of information theory—*The Mathematical Theory of Communication* [@Sha49]: (a) “How accurately can the symbols of communication be transmitted?”, (b) “How precisely do the transmitted symbols convey the desired meaning?” and (c) “How effectively does the received meaning affect conduct in the desired way?”. Shannon solved the first problem by developing his theory of error-free transmission [@Sha49]. In their vocabulary, adaptive agents are *information sources*. Each (a) receives information transmitted from the external environment, which includes other agents, (b) interprets the received information and modifies its internal model accordingly, and then, (c) making decisions based on the internal model, generates future behavior. We will show that this information-theoretic view provides useful tools for analyzing collective adaptation and also an appropriate description for our assumed frequency dynamics. Using these we derive a new state space based on the self-informations of agents’ actions, and this allows one to investigate the dynamics of uncertainty in collective adaptation. It will become clear, though, that the assumption of global information maximization has limited relevance here, even for simple mutual adaptation in a static environment. Instead, self-organization that derives from the information flux between agents gives us a new view of collective adaptation. To illustrate collective adaptation, we present several simulations of example environments; in particular, those having frustrated agent-agent interactions [@McC45]. 
Interestingly, for two agents with perfect memory interacting via zero-sum rock-scissors-paper interactions, the dynamics exhibits Hamiltonian chaos [@Sat02]. With memory loss, though, the dynamics becomes dissipative and displays the full range of nonlinear dynamical behaviors, including limit cycles, intermittency, and deterministic chaos [@Sat03]. The examples illustrate that Nash equilibria often play little or no role in collective adaptation. They are fixed points determined by the intersections of nullclines of the adaptation dynamics, and sometimes the dynamics is explicitly excluded from reaching Nash equilibria, even asymptotically. Rather, it turns out that the network describing the switching between deterministic actions is a dominant factor in structuring the state-space flows. From it, much of the dynamics, including the origins of chaos, becomes intuitively clear. In the next section (Sec. \[Sec:Dynamics\]), we develop a dynamical system that models adaptive behavior in collectives. In Sec. \[Sec:InfoSpace\] we introduce an information-theoretic view and coordinate transformation for adaptation dynamics and discuss self-organization induced by information flux. To illustrate the rich range of behaviors, in Sec. \[Sec:Examples\] we give several examples of adaptive dynamics based on non-transitive interactions. Finally, in Sec. \[Sec:TheEnd\] we interpret our results and suggest future directions. Dynamics for Collective Adaptation {#Sec:Dynamics} ================================== Before developing the full equations for a collective of adaptive agents, it is helpful to first describe the dynamics of how an individual agent adapts to the constraints imposed by its environment using the memory of its past behaviors. We then build up a description of how multiple agents interact, focusing only on the additional features that come from interaction. 
The result is a set of coupled differential equations that determine the behavior of adaptive agent collectives and are amenable to various kinds of geometric, statistical, and information-theoretic analyses. Individual Agent Adaptation --------------------------- Here we develop a continuous-time model for adaptation in an environment with a single adaptive agent. Although the behavior in this case is relatively simple, the single-agent case allows us to explain several basic points about dynamic adaptation, without the complications of a collective and agent-agent interactions. In particular, we discuss how and why we go from a discrete-time stochastic process to a *continuous-time* limit. We also describe an agent’s effective internal model of the environment and how we model its adaptation process via a *probability distribution* of action choices. An agent takes one of $N$ possible *actions*: $i = 1, 2, \ldots, N$ at each time step $\tau$. Let the probability for the agent to choose action $i$ be $x_i(\tau)$, where $\tau$ is the number of steps from the initial state $x_i(0)$. The agent’s state vector—its *choice distribution*—at time $\tau$ is ${\bf x}(\tau)=(x_1(\tau),x_2(\tau),\ldots,x_N(\tau))$, where $\Sigma_{n=1}^N x_n(\tau) = 1$. In the following we call the temporal behavior of ${\bf x} (\tau)$ the *dynamics of adaptation*. Let $r_i(\tau)$ denote the reinforcement the agent receives for taking action $i$ at step $\tau$. Denote the collection of these by the vector ${\bf r}(\tau) = (r_1(\tau), \ldots, r_N(\tau))$. The agent’s memories—denoted ${\bf Q}(\tau)=(Q_1(\tau),\ldots,Q_N(\tau))$ —of past rewards from its actions are updated according to $$Q_i(\tau+1)-Q_i(\tau) = \frac 1T\left[\delta_{i}(\tau) r_{i}(\tau) - \alpha Q_i (\tau)\right] ~, \label{SingleMemoryUpdate}$$ where $$\delta_{i}(\tau) = \left\{ \begin{array}{l} 1, ~\mbox{action $i$ chosen at step $\tau$}\\ 0, ~\mbox{otherwise}\\ \end{array} \right.$$ with $i = 1, \ldots, N$ and $Q_i(0)= 0$. 
$T$ is a constant that sets the agent-environment interaction time scale. $\alpha \in [0,1)$ controls the agent’s memory loss rate. For $\alpha=0$, the agent has a perfect memory as the sum of the past reinforcements; for $\alpha>0$ the memory is attenuated in that older reinforcements have less effect on the current $Q_i$s and more recent reinforcements are given larger weight. One imagines that the agent constructs a histogram of past reinforcements and this serves as a simple internal memory of its environment. An agent chooses its next action according to its choice distribution which is updated from the reinforcement memory according to: $$x_i (\tau) = \frac{e^{\beta Q_i (\tau)}} {\sum_{n=1}^{N} e^{\beta Q_n (\tau)}} ~, \label{SingleVector}$$ where $i=1, 2, \ldots, N$. $\beta\in [0,\infty]$ controls the adaptation rate: how much the choice distribution is changed by the memory of past reinforcements. For example, if $\beta = 0$, the choice distribution is unaffected by past reinforcements. Specifically, it becomes independent of $\bf Q$ and one has $x_i (\tau) = 1/N$. In this case, the agent chooses actions with uniform probability and so behaves completely randomly. In a complementary fashion, in the limit $\beta \rightarrow \infty$, an agent chooses that action $i$ with the maximum $Q_i (\tau)$ and $x_i(\tau) \rightarrow 1$. Given Eq. (\[SingleVector\]) the time evolution of agent’s choice distribution is: $$x_i (\tau+1) = \frac{x_i (\tau) e^{\beta (Q_i(\tau+1)-Q_i(\tau))}} {\sum_{n=1}^{N} x_n (\tau) e^{\beta (Q_n(\tau+1)-Q_n (\tau))}} ~, \label{SingleVectorUpdate}$$ where $i = 1, 2, \ldots, N$. This determines how the agent adapts its choice distribution using reinforcements it has received from the environment for its past actions. This simple kind of adaptation was introduced as a principle of behavioral learning [@Skin38; @Hebb49a] and as a model of stochastic learning [@Nor72], and is sometimes referred to as reinforcement learning [@Sam67; @Sutt98a]. 
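A minimal sketch of this discrete-time loop, Eqs. (\[SingleMemoryUpdate\]) and (\[SingleVector\]), may make the update concrete; the reward vector and parameter settings below are illustrative, not taken from the text:

```python
import numpy as np

def softmax(q, beta):
    """Choice distribution: x_i = exp(beta*Q_i) / sum_n exp(beta*Q_n)."""
    z = np.exp(beta * (q - q.max()))   # shift by max(Q) for numerical stability
    return z / z.sum()

def step(q, rewards, rng, T=1.0, alpha=0.1, beta=2.0):
    """One memory update: Q_i += (delta_i * r_i - alpha * Q_i) / T."""
    x = softmax(q, beta)
    i = rng.choice(len(q), p=x)        # action i chosen with probability x_i
    delta_r = np.zeros_like(q)
    delta_r[i] = rewards[i]            # delta_i(tau) * r_i(tau)
    return q + (delta_r - alpha * q) / T

rng = np.random.default_rng(0)
rewards = np.array([0.0, -1.0, 1.0])   # static environment (illustrative)
q = np.zeros(3)                        # Q_i(0) = 0
for _ in range(5000):
    q = step(q, rewards, rng)
x = softmax(q, beta=2.0)
# With alpha > 0 the choice distribution concentrates on, but never fully
# commits to, the highest-reward action.
```

Note that with $\alpha > 0$ each $Q_i$ settles near a bounded discounted sum rather than growing without limit, which is what keeps older reinforcements from dominating.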
Arguably, it is the simplest form of adaptation in which an agent develops relationships or behavior patterns through reinforcements from external stimuli. Starting with the discrete-time model above, one can develop a continuous-time model that corresponds to the agent performing a large number of actions, iterates of Eq. (\[SingleMemoryUpdate\]), for each choice distribution update, iterate of Eq. (\[SingleVector\]). Thus, we recognize two different time scales: one for agent-environment interactions and one for adaptation of the agent’s internal model based on its internal memory. We assume that the adaptation dynamics is very slow compared to interactions and so $\bf x$ is essentially constant during interactions. (See Fig. \[fig:Timescales\].) ![The time scale ($t$) of a single agent interacting with its environment and the time scale ($\tau$) of the agent’s adaptation: $\tau \ll t$. []{data-label="fig:Timescales"}](figures/AdaptInteractTimeScales.eps) Starting from Eq. (\[SingleMemoryUpdate\]), one can show that the continuous-time dynamics of memory updates is given by the differential equations $$\dot{Q}_i(t) = R_i(t) - \alpha Q_i(t) ~, \label{SingleMemoryUpdate-Continuous}$$ with $i = 1, 2, \ldots, N$ and $Q_i (0) = 0$. (see App. \[ContinuousTimeLimits\].) Here $R_i$ is the reward the environment gives to the agent choosing action $i$: the average of $r_i (\tau)$ during the time interval between updates of $\mathbf{x}$ at $t$ and $t + dt$. From Eq. (\[SingleVector\]) one sees that the map from ${\bf Q}(t)$ to $\mathbf{x}(t)$ at time $t$ is given by $$x_i (t) = \frac{e^{\beta Q_i (t)}} {\sum_{n=1}^N e^{\beta Q_n (t)}} ~, \label{SingleVector-Continuous}$$ where $i = 1, 2, \ldots, N$. Differentiating Eq. (\[SingleVector-Continuous\]) gives the continuous-time dynamics $$\dot{x}_i(t) = \beta x_i(t) (\dot{Q}_i(t) - \sum_{n=1}^N \dot{Q}_n(t) x_n(t)) ~, \label{SingleVectorUpdate-Continuous}$$ with $i = 1, 2, \ldots, N$. Assembling Eqs. 
(\[SingleMemoryUpdate-Continuous\]), (\[SingleVector-Continuous\]), and (\[SingleVectorUpdate-Continuous\]), one finds the basic dynamic that governs agent behavior on the adaptation time-scale: $$\frac{\dot{x_i}}{x_i} = \beta ( R_i - R ) + \alpha ( H_i - H ) ~, \label{SingleLearningDynamics-Continuous}$$ where $i = 1, 2, \ldots, N$. Here $$R=\sum_{n=1}^N x_n R_n$$ is the net reinforcement averaged over the agent’s possible actions. And, $$H_i = -\log x_i$$ where $i = 1, 2, \ldots, N$, is the *self-information* or degree of surprise when the agent takes action $i$ [@Sha49]. The average self-information, or *Shannon entropy* of the choice distribution, also appears as $$H = \sum_{n=1}^N x_n H_n = -\sum_{n=1}^N x_n \log x_n ~.$$ These are the entropies of the agent’s choice distribution measured, not in *bits* (binary digits), but in *nats* (natural digits), since the natural logarithm is used. The entropy measures the choice distribution’s flatness, being maximized when the choices all have equal probability. Fortunately, the basic dynamic captured by Eq. (\[SingleLearningDynamics-Continuous\]) is quite intuitive, being the balance of two terms on the right-hand side. The first term describes an adaptation dynamic, whose time scale is controlled by $\beta$. The second describes the loss of memory with a time scale controlled by $\alpha$. That is, the adaptation in choice probabilities is driven by a balance between two forces: [the tendency to concentrate the choice probability based on the reinforcement ${\bf R}=(R_1, R_2, \ldots, R_N)$ and the tendency to make choices equally likely.]{} Finally, on the lefthand side, one has the logarithmic derivative of the choice probabilities: $\dot{x}_i/x_i = d/dt ~(\log x_i)$. Note that each of the terms on the righthand side is a difference between a function of a particular choice and that function’s average. 
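Written out directly, those differences are straightforward to compute; in this sketch the reinforcement vector is illustrative:

```python
import numpy as np

def adaptation_rhs(x, R, alpha, beta):
    """dx_i/dt = x_i * [beta*(R_i - <R>) + alpha*(H_i - <H>)], H_i = -log x_i."""
    H = -np.log(x)                 # self-information of each action (in nats)
    dR = R - np.dot(x, R)          # relative benefit of action i
    dH = H - np.dot(x, H)          # relative informativeness of action i
    return x * (beta * dR + alpha * dH)

R = np.array([0.5, -1.0, 0.5])     # illustrative reinforcements
x_uniform = np.full(3, 1.0 / 3.0)
v = adaptation_rhs(x_uniform, R, alpha=0.3, beta=0.1)
# At the uniform distribution every H_i equals <H>, so the memory-loss term
# vanishes and only the reinforcement term acts; the flow stays on the
# simplex, since the components of dx/dt sum to zero.
```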
Specifically, the first term $\Delta R_i \equiv R_i - R$ is the relative benefit in choosing action $i$ compared to the mean reinforcement across all choices. Other things being held constant, if this term is positive, then action $i$ is the better choice compared to the mean and $x_i$ will increase. The second term $\Delta H_i\equiv H_i - H$ is the relative informativeness of taking action $i$ compared to the average $H$, that is, the Shannon entropy. Thus, $x_i$ decreases in proportion to the entropy at time $t$, and so this term works to increase the uncertainty of the agent’s actions, flattening the choice distribution by increasing the probability of unlikely actions. When $x_i = N^{-1}$, the distribution is flat (purely random choices), $\Delta H_i = 0$, and memory loss effects disappear. Mathematically, the adaptation equations have quite a bit of structure and this has important consequences, as we will see. Summarizing, the adaptation equations describe a dynamic that balances the tendency to concentrate on choices associated with the best action against the tendency to make the choices equally likely. The net result is to increase the choice uncertainty, subject to the constraints imposed by the environment via the reinforcements. Thus, the choice distribution is the least biased distribution consistent with environmental constraints and individual memory loss. We will return to discuss this mechanism in detail using information theory in Sec. \[Sec:InfoSpace\]. ![A dynamic balance of adaptation and memory loss: Adaptation concentrates the probability distribution on the best action. Memory loss of past history leads to a distribution that is flatter and has higher entropy. 
[]{data-label="fig:BalanceAdapMeM"}](figures/SingleAdaptation.eps) Since the reinforcement determines the agent’s interactions with the environment, there are, in fact, three different time scales operating: that for agent-environment interactions, that for each agent’s adaptation, and that for changes to the environment. However, if the environment changes very slowly compared to the agent’s internal adaptation, the environment $r_{i}(t)$ can be regarded as effectively constant, as shown in Fig. \[fig:ThreeTimescales\]. ![The time scales of dynamic adaptation: Agent adaptation is slow compared to agent-environment interaction and environmental change is slower still compared to adaptation. []{data-label="fig:ThreeTimescales"}](figures/Timescales.eps) In this case $r_i(t)$ can be approximated as a static relationship between an agent’s actions and the reinforcements given by the environment. Let $r_i(t) = a_i$, where $\mathbf{a} = ( a_1, \ldots, a_N )$ are constants that are normalized: $\Sigma_{n=1}^N a_n =0$. Given this, the agent’s time-average reinforcements are $a_i$ ($R_i = a_i$) and the continuous-time dynamic simplifies to: $$\frac{\dot{x}_i}{x_i} = \beta (a_i-\sum_{n=1}^N a_n x_n) + \alpha (-\log x_i + \sum_{n=1}^N x_n\log x_n) ~, \label{SingleLearningDynamics-Continuous-Constant}$$ where $i = 1, 2, \ldots, N$. The behavior of single-agent adaptation given by Eq. (\[SingleLearningDynamics-Continuous-Constant\]) is very simple. When $\alpha$ is small, so that adaptation is dominant, $x_i \rightarrow 1$, where $i$ is the action with the highest reward $a_i$, and $x_j\rightarrow 0$ for $j \neq i$. The agent receives this information from the fixed environment and its behavior is simply to choose the action with the maximum reward and the choice distribution moves to the associated simplex vertex ${\bf x}^*=(0, \ldots, 1^{\stackrel{i}{\vee}}, \ldots,0)$. In the special case when $\alpha=0$, it is known that for arbitrary $\bf a$ Eq. 
(\[SingleLearningDynamics-Continuous-Constant\]) moves $\mathbf{x}$ to the vertex corresponding to the maximum $a_i$ [@Hof88]. In a complementary way, when $\alpha$ is large enough to overcome the relative differences in reinforcements (that is, when $\beta/\alpha\rightarrow 0$), memory loss dominates, the agent’s state goes to a uniform choice distribution ($x_i = N^{-1}$), and the system converges to the simplex center. Note that in machine learning this balance between local optimization and randomized behavior, which selects non-optimal actions, is referred to as the *exploitation-exploration trade-off* [@Sutt98a]. For instance, consider an agent that takes $N = 3$ actions, $\{1,2,3\}$, in an environment described by ${\bf a}=(\frac23\epsilon, -1-\frac13\epsilon,1-\frac13\epsilon)$, with $\epsilon \in [-1, 1]$. In the perfect memory case ($\alpha=0$), the choice distribution converges to a stable fixed point $(0,0,1)$. The point ${\bf x}^*=(\frac13, \frac13, \frac13)$ is an unstable hyperbolic fixed point. In the memory loss case ($\alpha>0$), the dynamics converges to a stable fixed point inside the simplex. (These cases are illustrated in Fig. \[fig:SingleLearningTrajectories\].) ![Dynamics of single-agent adaptation: Here there are three actions, labeled $1$, $2$, and $3$, and the environment gives reinforcements according to ${\bf a}=(\frac23\epsilon, -1-\frac13\epsilon, 1-\frac13\epsilon)$. The figure shows two trajectories from simulations with $\epsilon = 0.5$ and $\beta=0.1$ and with $\alpha = 0.0$ (right) and $\alpha = 0.3$ (left). []{data-label="fig:SingleLearningTrajectories"}](figures/SingleLearningTrajectories.eps) Even when the environment is time-dependent, the agent’s behavior can track the highest-reward action as long as the time scale of environment change is slow compared to the agent’s adaptation. However, the situation is more interesting when environment change occurs at a rate near the time-scale set by adaptation. 
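The three-action example just described is easy to reproduce numerically. The following is a minimal Euler-integration sketch of Eq. (\[SingleLearningDynamics-Continuous-Constant\]); the step size, iteration count, and initial condition are illustrative choices, not prescribed by the text:

```python
import numpy as np

def single_agent_flow(x, a, beta, alpha):
    """Right-hand side of the single-agent adaptation equation."""
    dR = a - x @ a                    # relative reinforcement a_i - <a>
    dH = -np.log(x) + x @ np.log(x)   # relative self-information H_i - H
    return x * (beta * dR + alpha * dH)

def integrate(x0, a, beta, alpha, dt=0.01, steps=20000):
    """Euler integration on the simplex (step size is an illustrative choice)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * single_agent_flow(x, a, beta, alpha)
        x = np.clip(x, 1e-12, None)   # guard against log(0)
        x = x / x.sum()               # re-project onto the simplex
    return x

eps = 0.5
a = np.array([2 * eps / 3, -1 - eps / 3, 1 - eps / 3])  # normalized: sum(a) = 0

x_perfect = integrate([0.2, 0.5, 0.3], a, beta=0.1, alpha=0.0)  # toward vertex (0,0,1)
x_lossy = integrate([0.2, 0.5, 0.3], a, beta=0.1, alpha=0.3)    # interior fixed point
```

With perfect memory the state approaches the vertex of the best action; with $\alpha > 0$ it settles on an interior fixed point, which for this flow satisfies $x_i \propto e^{(\beta/\alpha) a_i}$.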
Mutual adaptation in agent collectives, the subject of the following sections, corresponds to just this situation. Other agents provide, through their own adaptation, a dynamic environment to any given agent and if their time scales of adaptation are close, the dynamics can be quite rich and difficult to predict and analyze. Two Agent Adaptation -------------------- To develop equations of motion for adaptation in an agent collective we initially assume, for simplicity, that there are only two agents. The agents, denoted $X$ and $Y$, at each moment take one of $N$ or $M$ actions, respectively. The agents’ states at time $t$ are ${\bf x}=(x_1, \ldots,x_N)$ and ${\bf y} = (y_1, \ldots, y_M)$, with $\Sigma_{n=1}^N x_n = \Sigma_{m=1}^M y_m = 1$. ${\bf x}(0)$ and ${\bf y}(0)$ are the initial conditions. We view the time evolution of each agent’s state vector in the simplices ${\bf x}\in \Delta_X$ and ${\bf y} \in \Delta_Y$ and the group dynamics in the *collective* state space $\Delta$ which is the product of the agent simplices: $${\bf X}=({\bf x},{\bf y}) \in \Delta = \Delta_X \times \Delta_Y ~.$$ There are again three different time scales to consider: one for agent-agent interaction, one for each agent’s internal adaptation, and one for the environment which now mediates agent interactions via the reinforcements given to the agents. Here we distinguish between the *global environment* experienced by the agents and the *external environment*, which is the global environment with the agent states removed. The external environment controls, for example, the degree of coupling between the agents. In contrast with the single-agent case, in the many agent setting each agent’s behavior produces a dynamic global environment for the other. This environment dynamics is particularly important when the adaptation time scales of each agent are close. 
Following the single-agent case, though, we assume that the adaptation dynamic is very slow compared to that of agent-agent interactions and that the dynamics of the external environment changes very slowly compared to that of agents’ mutual adaptation. Under these assumptions the agent state vectors $\mathbf{x}$ and $\mathbf{y}$ are effectively constant during the agent-agent interactions that occur between adaptation updates. The immediate consequence is that one can describe the collective state space in terms of the frequencies of actions (the choice distributions). Additionally, the environment is essentially constant relative to changes in the states $\mathbf{x}$ and $\mathbf{y}$. Denote the agents’ memories by ${\bf Q}^X=(Q_1^X,\ldots, Q_N^X)$ for $X$ and ${\bf Q}^Y=(Q_1^Y,\ldots,Q_M^Y)$ for $Y$ and set $Q_i^X(0)=0$ and $Q_j^Y(0)=0$, for $i=1,\ldots, N$ and $j=1,\ldots,M$. For the dynamic governing memory updates we have $$\begin{aligned} Q_i^X(\tau+1)-Q_i^X(\tau) &=& \frac 1T\left[\delta_{ij}(\tau) r^X_{ij}(\tau) - \alpha_X Q_i^X (\tau)\right] ~, \nonumber\\ Q_j^Y(\tau+1)-Q_j^Y(\tau) &=& \frac 1T\left[\delta_{ij}(\tau) r^Y_{ji}(\tau) - \alpha_Y Q_j^Y (\tau)\right] ~, \nonumber\\ \label{MultiMemoryUpdate}\end{aligned}$$ where $$\delta_{ij}(\tau) = \left\{ \begin{array}{l} 1, ~\mbox{pair of actions $(i, j)$ chosen at step $\tau$}\\ 0, ~\mbox{otherwise}\\ \end{array} \right.$$ with $i = 1, \ldots, N$ and $j = 1, \ldots, M$. $T$ is a time constant. Then the continuous-time dynamics of memory updates for $X$ and $Y$ are given by the differential equations $$\begin{aligned} \dot{Q}_i^X &=& R_i^X - \alpha_X Q_i^X ~,\nonumber\\ \dot{Q}_j^Y &=& R_j^Y - \alpha_Y Q_j^Y ~, \label{MultiMemoryUpdate-Continuous}\end{aligned}$$ for $i=1, 2, \ldots, N$ and $j=1, 2, \ldots, M$. $R_i^X$ is the reward for agent $X$ choosing action $i$, averaged over agent $Y$’s actions between adaptive updates; and $R_j^Y$ is the analogous reward for agent $Y$. 
The parameters $\alpha_X, \alpha_Y \in [0,1)$ control each agent’s memory loss rate, respectively. The map from $\mathbf{Q}^X (t)$ to $\mathbf{x} (t)$ and from ${\bf Q}^Y (t)$ to $\mathbf{y} (t)$ at time $t$ is $$\begin{aligned} x_i (t) &=& \frac{e^{\beta_X Q_i^X (t)}} {\sum_{n=1}^N e^{\beta_X Q_n^X(t)}} ~,\nonumber\\ y_j (t) &=& \frac{e^{\beta_Y Q_j^Y (t)}} {\sum_{m=1}^M e^{\beta_Y Q_m^Y(t)}} ~, \label{Vector-Continuous}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$. Here $\beta_X, \beta_Y\in [0,\infty]$ control the agents’ adaptation rates, respectively. Differentiating Eq. (\[Vector-Continuous\]) with respect to $t$, the continuous-time adaptation for two agents is governed by $$\begin{aligned} \dot{x}_i &=& \beta_X x_i (\dot{Q}_i^X - \sum_{n=1}^N \dot{Q}^X_n x_n) ~,\nonumber\\ \dot{y}_j &=& \beta_Y y_j (\dot{Q}_j^Y - \sum_{m=1}^M \dot{Q}^Y_m y_m) ~, \label{VectorUpdate-Continuous}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$. Putting together Eqs. (\[MultiMemoryUpdate-Continuous\]), (\[Vector-Continuous\]), and (\[VectorUpdate-Continuous\]), one finds the coupled adaptation equations for two agents: $$\begin{aligned} \frac{\dot{x_i}}{x_i} & = & \beta_X (R_i^X - R^X) + \alpha_X (H_i^X - H^X) ~,\nonumber\\ \frac{\dot{y_j}}{y_j} & = & \beta_Y (R_j^Y - R^Y) + \alpha_Y (H_j^Y - H^Y) ~, \nonumber\\ \label{LearningEquations}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$ and where $$\begin{aligned} R^X= \sum_{n=1}^N x_n R_n^X,&& R^Y= \sum_{m=1}^M y_m R_m^Y ~, \nonumber\\ H^X=\sum_{n=1}^N x_n H_n^X,&& H^Y=\sum_{m=1}^M y_m H_m^Y ~. \end{aligned}$$ The interpretations of the $\Delta R = R_i - R$ and $\Delta H = H_i - H$ terms are not essentially different from those introduced to describe the single-agent case. That is, the behavior of each agent is a dynamic balance between (i) adaptation: concentrating the choice probability on the best action at $t$ and (ii) memory loss: increasing the choice uncertainty. 
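Before specializing to a static environment, it may help to see the discrete-time updates, Eqs. (\[MultiMemoryUpdate\]) and (\[Vector-Continuous\]), spelled out directly. This sketch uses hypothetical $2 \times 2$ reinforcements and arbitrary parameter values; it illustrates the update scheme, not a prescribed implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def softmax(Q, beta):
    z = np.exp(beta * (Q - Q.max()))  # shift by the max for numerical stability
    return z / z.sum()

def memory_update(Qx, Qy, A, B, alpha, beta, T):
    """One step of Eqs. (MultiMemoryUpdate): reinforce the played pair,
    decay every memory uniformly."""
    x, y = softmax(Qx, beta), softmax(Qy, beta)
    i = rng.choice(len(x), p=x)       # X samples action i from its choice distribution
    j = rng.choice(len(y), p=y)       # Y samples action j likewise
    dQx = -alpha * Qx                 # memory decay acts on all components
    dQx[i] += A[i, j]                 # delta_ij singles out the chosen pair
    dQy = -alpha * Qy
    dQy[j] += B[j, i]
    return Qx + dQx / T, Qy + dQy / T

# Hypothetical zero-mean reinforcements (each column sums to zero).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = np.array([[-1.0, 1.0], [1.0, -1.0]])

Qx, Qy = np.zeros(2), np.zeros(2)
for _ in range(2000):
    Qx, Qy = memory_update(Qx, Qy, A, B, alpha=0.1, beta=2.0, T=10)
x, y = softmax(Qx, 2.0), softmax(Qy, 2.0)
```

Note that the memory decay bounds each $|Q_i|$ by $\max |r| / \alpha$, so the choice distributions never freeze at a vertex.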
What is new here is that there are two (and eventually more) agents attempting to achieve this balance together using information that comes from their interactions with the global environment. As given, the adaptation equations include the possibility of a time-dependent environment, which would be implemented, say, using a time-dependent reinforcement scheme. However, as with the single-agent case, it is helpful to simplify the model by assuming a static external environment and, in particular, static relationships between the agents. Assume that the external environment changes slowly compared to the dynamics of mutual adaptation, as illustrated in Fig. \[fig:ThreeTimescales\]. This implies a nearly static relationship between pairs of action choices $(i,j)$ and reinforcements $r^X_{ij}$ and $r^Y_{ji}$ for both agents. Since the environmental dynamics is very slow compared to each agent’s adaptation, $r_{ij}^X(t)$ and $r_{ji}^Y(t)$ are essentially constant during adaptation. The $r$s can then be approximated as constant: $$\begin{aligned} r_{ij}^X(t) &=& a_{ij} ~,\nonumber \\ r_{ji}^Y(t) &=& b_{ji} ~, \label{NormalRewards}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$. $a_{ij}$ and $b_{ji}$ are normalized over $i$ and $j$, respectively, so that when summing over all actions the reinforcements vanish: $$\begin{aligned} \sum_{n=1}^N a_{nj}=0 ~,\nonumber \\ \sum_{m=1}^M b_{mi}=0 ~. \end{aligned}$$ Given the form of $\Delta R$ in the adaptation equations, this normalization does not affect the dynamics. Assume further that $\bf x$ and $\bf y$ are independently distributed. This is equivalent to agents never having a global view of the collective or their interactions with the environment (other agents). Each agent’s knowledge of the environment is uncorrelated, at each moment, with the state of the other agents. 
The time-average rewards for $X$ and $Y$ now become $$\begin{aligned} R_i^X &=& \sum_{m=1}^M a_{im} y_m=(A{\bf y})_i ~,\nonumber\\ R_j^Y &=&\sum_{n=1}^N b_{jn} x_n=(B{\bf x})_j ~, \label{ConstantRewards}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$. In this restricted case, the continuous-time dynamic is given by the coupled adaptation equations $$\begin{aligned} \frac{\dot{x}_i}{x_i} & = & \beta_X [ (A{\bf y})_i-{\bf x}\cdot A{\bf y}] \nonumber\\ &+& \alpha_X [-\log x_i+\sum_{n=1}^N x_n \log x_n] ~, \nonumber\\ \frac{\dot{y}_j}{y_j} & = & \beta_Y [ (B{\bf x})_j-{\bf y}\cdot B{\bf x}] \nonumber\\ &+& \alpha_Y [-\log y_j+\sum_{m=1}^M y_m \log y_m] ~. \label{LearningEquations-Constant}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$. $A$ is an $N \times M$ matrix and $B$ is an $M \times N$ matrix with $(A)_{ij} = a_{ij}$ and $(B)_{ji} = b_{ji}$, respectively. ${\bf x}\cdot A{\bf y}$ is the inner product between ${\bf x}$ and $A{\bf y}$ and similarly for ${\bf y}\cdot B{\bf x}$: $$\begin{aligned} {\bf x}\cdot A{\bf y}&=&\sum_{n=1}^N\sum_{m=1}^M a_{nm} x_n y_m ~,\nonumber\\ {\bf y}\cdot B{\bf x}&=&\sum_{m=1}^M\sum_{n=1}^N b_{mn} y_m x_n ~. \end{aligned}$$ Collective Adaptation --------------------- Generalizing to an arbitrary number of agents at this point should appear straightforward. It simply requires extending Eqs. (\[LearningEquations\]) to a collection of adaptive agents. Suppose there are $S$ agents labeled $s=1,2,\ldots,S$ and each agent can take one of $N^s$ actions. One describes the time evolution of the agents’ state vectors in the simplices ${\bf x}^1 \in \Delta_1$, ${\bf x}^2 \in \Delta_2$, ..., and ${\bf x}^S\in \Delta_S$. 
The adaptation dynamics in the higher-dimensional [*collective*]{} state space occurs within $${\bf X}=({\bf x}^1,{\bf x}^2, \ldots, {\bf x}^S) \in \Delta = \Delta_1 \times \Delta_2 \times \ldots \Delta_S ~.$$ Then we have the dynamics for collective adaptation as $$\frac{\dot{x_{i^s}^s}}{x_{i^s}^s} = \beta_s ( R_{i^s}^s - R^s) +\alpha_s (H_{i^s}^s - H^s) ~. \label{MultiLearningEquations}$$ for $i^s = 1, \ldots, N^s$ and $s = 1, \ldots, S$. $R_{i^s}^s$ and $H_{i^s}^s$ are the reinforcement and the self-information for $s$ to choose action $i^s$, respectively. Equations (\[MultiLearningEquations\]) constitute our general model for adaptation in agent collectives. With three agents $X$, $Y$, and $Z$, with collective state space $${\bf X}=({\bf x}, {\bf y}, {\bf z}) \in \Delta = \Delta_X \times \Delta_Y \times \Delta_Z ~.$$ one obtains: $$\begin{aligned} \frac{\dot{x_i}}{x_i} & = & \beta_X (R_i^X - R^X) + \alpha_X [H_i^X-H^X] ~, \nonumber\\ \frac{\dot{y_j}}{y_j} & = & \beta_Y (R_j^Y - R^Y) + \alpha_Y [H_j^Y-H^Y] ~, \nonumber\\ \frac{\dot{z_k}}{z_k} & = & \beta_Z (R_k^Z - R^Z) + \alpha_Z [H_k^Z-H^Z] ~, \label{MultiLearningDynamics-Continuous-For3}\end{aligned}$$ for $i = 1, \ldots, N$, $j = 1, \ldots, M$, and $k = 1, \ldots, L$. The static environment version reduces to $$\begin{aligned} \frac{\dot{x}_i}{x_i} &=& \beta_X [(A{\bf y}{\bf z})_i - {\bf x}\cdot A{\bf y}{\bf z}] \nonumber\\ &+& \alpha_X [-\log x_i+\sum_{n=1}^N x_n \log x_n] ~, \nonumber\\ \frac{\dot{y}_j}{y_j} &=& \beta_Y [(B{\bf z}{\bf x})_j - {\bf y}\cdot B{\bf z}{\bf x}] \nonumber\\ &+& \alpha_Y [-\log y_j+\sum_{m=1}^M y_m \log y_m] ~, \nonumber\\ \frac{\dot{z}_k}{z_k} &=& \beta_Z [(C{\bf x}{\bf y})_k -{\bf z}\cdot C{\bf x}{\bf y}] \nonumber\\ &+& \alpha_Z [-\log z_k+\sum_{l=1}^L z_l \log z_l] ~, \end{aligned}$$ for $i = 1, \ldots, N$, $j = 1, \ldots, M$, and $k = 1, \ldots, L$, and with tensors $(A)_{ijk} = a_{ijk}$, $(B)_{jki} = b_{jki}$, $(C)_{kij} = c_{kij}$. 
Here $$(A{\bf yz})_i=\sum_{m=1}^M \sum_{l=1}^L a_{iml}y_mz_l$$ and $${\bf x}\cdot A{\bf yz}=\sum_{n=1}^N\sum_{m=1}^M \sum_{l=1}^L a_{nml} x_n y_m z_l$$ and similarly for $Y$ and $Z$. Note that the general model includes heterogeneous network settings with local interactions besides global interactions; see App. \[NetworkInteractions\]. Evolutionary Dynamics and Game Theory ------------------------------------- We now interrupt the development to discuss the connections between the model developed thus far and models from population dynamics and game theory. There are interesting connections and also some important distinctions that need to be kept in mind before we move forward. The special case that allows us to make contact with evolutionary dynamics and game theory is the restriction to agents with perfect memory interacting in a static environment. (For further details see App. \[NashEquilibria\].) In the two-agent, static-external-environment case we set $\alpha_X = \alpha_Y = 0$ and equal adaptation rates, $\beta_X = \beta_Y$. Under these assumptions our model, Eqs. (\[LearningEquations-Constant\]), reduces to what is called either *multipopulation replicator equations* [@Tay79] or *asymmetric game dynamics* [@Tay79; @Bor97; @Fud98]. The equations are: $$\begin{aligned} \frac{\dot{x}_i}{x_i} &=& (A{\bf y})_i-{\bf x}\cdot A{\bf y} ~,\nonumber\\ \frac{\dot{y}_j}{y_j} &=& (B{\bf x})_j-{\bf y}\cdot B{\bf x} ~. \label{MultiPopulationReplicator}\end{aligned}$$ From the perspective of game theory, one regards the interactions determined by $A$ and $B$, respectively, as $X$’s and $Y$’s *payoff matrices* for a linear game in which $X$ plays action $i$ against $Y$’s action $j$. Additionally, ${\bf x}$ and ${\bf y}$, the agent state vectors, are interpreted as the *mixed strategies*. In fact, ${\bf x} \cdot A{\bf y}$ and ${\bf y}\cdot B{\bf x}$ in Eqs. (\[MultiPopulationReplicator\]) formally satisfy von Neumann-Morgenstern utilities [@Neu44]. 
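A quick numerical check of Eqs. (\[MultiPopulationReplicator\]) using an illustrative zero-sum payoff pair (a Matching-Pennies-like choice, not taken from the text): the interior Nash equilibrium is a rest point of the flow, while off-equilibrium states circulate around it.

```python
import numpy as np

def replicator_rhs(x, y, A, B):
    """Right-hand side of the multipopulation replicator equations."""
    Ay, Bx = A @ y, B @ x
    return x * (Ay - x @ Ay), y * (Bx - y @ Bx)

# Illustrative zero-sum payoffs (not from the text): B = -A^T.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A.T

# The interior Nash equilibrium (1/2, 1/2) is a rest point of the flow.
x_star = np.array([0.5, 0.5])
y_star = np.array([0.5, 0.5])
dx_star, dy_star = replicator_rhs(x_star, y_star, A, B)  # both vanish

# Off-equilibrium states circulate around the Nash equilibrium.
x, y = np.array([0.8, 0.2]), np.array([0.6, 0.4])
for _ in range(5000):
    dx, dy = replicator_rhs(x, y, A, B)
    x, y = x + 0.01 * dx, y + 0.01 * dy
```

The flow preserves the simplex constraints exactly, since the components of each right-hand side sum to zero.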
If they exist in the interior of the collective simplices $\Delta_X$ and $\Delta_Y$, interior Nash equilibria of the game $(A, B)$ are the fixed points determined by the intersections of the $x$- and $y$-nullclines of Eqs. (\[MultiPopulationReplicator\]). One must be careful, though, in drawing parallels between our general dynamic setting and classical game theory. Classical game theory often assumes that agents have knowledge of the entire game structure and of other agents’ decision-making processes, and its central methodology derives how these *rational players* should act. Our adaptive agents, in contrast, have no knowledge of a game in which they might be playing, only a myopic model of the environment and, even then, this is given only implicitly via the reinforcements the agents receive from the environment. In particular, the agents do not know whether they are playing a game or not, how many agents there are beyond themselves, or even whether other agents exist or not. Our model of dynamic adaptation under such constraints is appropriate nonetheless for many real-world adaptive systems, whether animal, human, or economic agent collectives [@Kah00]. The bi-matrix game $(A, B)$ appears above as a description of the collective’s global dynamic only under the assumption that the external environment changes very slowly. The connection with evolutionary dynamics is formal and comes from the fact that Eqs. (\[MultiPopulationReplicator\]) are the well-known replicator equations of population dynamics [@Hof88]. However, the interpretation of the variables is rather different. Population dynamics views $\bf x$ and $\bf y$ as two separate, but interacting (infinite-size) groups. These two populations are described as distributions of various organismal phenotypes. The equations of motion determine the evolution of these populations over generations and through interaction. 
In our model, in contrast, $\bf x$ and $\bf y$ represent each agent’s probabilities of choosing actions. The equations of motion describe their dynamic adaptation to each other through interaction. Despite the similarities that one can draw in this special case, it is important to emphasize that our framework goes beyond the multipopulation replicator equations and asymmetric game dynamics. First, the reinforcement scheme $\bf R$ need not lead to linear interactions. Second, the model does not require a static environment described by a constant bi-matrix $(A, B)$. Finally, the occurrence of the memory loss term is entirely new and not found in game theory or evolutionary dynamics. Information, Uncertainty, and Dynamic Adaptation {#Sec:InfoSpace} ================================================ We now shift away from a dynamical systems view and, as promised earlier, begin to think of the agent collective as a communication network. Although this will initially appear unrelated, we will show that there is a close connection between the dynamical and information-theoretic perspectives—connections that have both mathematical and pragmatic consequences. We consider the adaptive agents in the collective to be information sources. Each agent receives information from its environment, which includes other agents. Each agent interprets the received information and modifies its behavior accordingly, changing from ${\bf x}(t)$ to ${\bf x}(t+dt)$. Each agent generates a series of messages (actions) based on its updated internal model and introduces this new behavior back into the environment. This is a different interpretation of the interaction process in the collective, which up to now we have motivated only as a dynamical process. Now we discuss the adaptive dynamics from an information-theoretic viewpoint. Dynamics in Information Space ----------------------------- In this section we introduce a new state space that directly represents the uncertainties of agent actions. 
First, as before, for clarity we focus on the two-agent static-environment case, Eqs. (\[LearningEquations-Constant\]). Since the components of the agents’ states are probabilities, the quantities $$\begin{aligned} \xi_i &=& -\log x_i ~,\nonumber\\ \eta_j &=& -\log y_j ~, \label{CLR1}\end{aligned}$$ are the *self-informations* of agents $X$ and $Y$ choosing actions $i$ and $j$, respectively. When $x_i$ is small, for example, the self-information $\xi_i$ is large since action $i$ is rarely chosen by agent $X$. Consider the resulting change in coordinates in ${\bf R}_+^{N}\times {\bf R}_+^{M}$: $${\bf \Xi}=(\mbox{\boldmath $\xi$}, \mbox{\boldmath $\eta$}) = (\xi_1,\ldots,\xi_N) \times (\eta_1,\ldots,\eta_M) ~.$$ The normalization conditions—$\Sigma_{n=1}^N x_n=\Sigma_{m=1}^M y_m=1$—that restrict the agent states to lie in simplices become $\Sigma_{n=1}^N e^{-\xi_n}=\Sigma_{m=1}^M e^{-\eta_m}=1$ in ${\bf \Xi}$. In this space the equations of motion become: $$\begin{aligned} \dot{\xi}_i & = & -\beta_X [(Ae^{-\mbox{\boldmath $\eta$}})_i -e^{-\mbox{\boldmath $\xi$}}\cdot Ae^{-\mbox{\boldmath $\eta$}}] - \alpha_X [\xi_i-e^{-\mbox{\boldmath $\xi$}}\cdot \mbox{\boldmath $\xi$}] ~, \nonumber\\ \dot{\eta}_j & = & -\beta_Y [ (Be^{-\mbox{\boldmath $\xi$}})_j -e^{-\mbox{\boldmath $\eta$}}\cdot Be^{-\mbox{\boldmath $\xi$}}] - \alpha_Y [\eta_j-e^{-\mbox{\boldmath $\eta$}}\cdot \mbox{{\boldmath $\eta$}}] ~, \nonumber\\ \label{InformationDynamics-Constant}\end{aligned}$$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$ and where $e^{-\mbox{\boldmath $\xi$}}=(e^{-\xi_1},\ldots,e^{-\xi_N})$ and $e^{-\mbox{\boldmath $\eta$}}=(e^{-\eta_1},\ldots,e^{-\eta_M})$. Recall that both the $\Delta R$ interaction term and the $\Delta H$ memory loss term are differences from means. 
This suggests yet another transformation to remove these comparisons to the mean: $$\begin{aligned} u_i & = & \xi_i - N^{-1} \sum_{n=1}^N \xi_n ~, \nonumber\\ v_j & = & \eta_j - M^{-1} \sum_{m=1}^M \eta_m ~, \label{CLR2}\end{aligned}$$ with $i = 1, \ldots, N$ and $j = 1, \ldots, M$. This leads to the normalized space in ${\bf R}^{N} \times {\bf R}^{M}$: $${\bf U}=({\bf u}, {\bf v}) = (u_1, \ldots, u_{N}) \times (v_1, \ldots, v_{M}) ~,$$ with the constraints $\sum_{n=1}^N u_n = \sum_{m=1}^M v_m = 0$. ${\bf u}$ and ${\bf v}$ are the normalized self-informations relative to their means. We refer to this space as *information space*. The combined coordinate transformation, Eq. (\[CLR2\]) composed with Eq. (\[CLR1\]), gives the well-known *centered log-ratio* coordinates [@Aitc86a]. The inverse transformation is: $$\begin{aligned} x_i & = & \frac{e^{-u_i}} {\sum_{n=1}^Ne^{-u_n}} ~, \nonumber\\ y_j & = & \frac{e^{-v_j}} {\sum_{m=1}^Me^{-v_m}} ~.\end{aligned}$$ The resulting transformed adaptation equations directly model the dynamics of uncertainties of agents’ behavior: $$\begin{aligned} \dot{\bf u} & = & -\beta_X \left[ A {\bf y} - N^{-1} \sum_{n=1}^N (A \mathbf{y})_n \right] - \alpha_X {\bf u} ~, \nonumber \\ \dot{\bf v} & = & -\beta_Y \left[ B {\bf x} - M^{-1} \sum_{m=1}^M (B \mathbf{x})_m \right] - \alpha_Y {\bf v} ~. \label{EntropyEquations-Constant}\end{aligned}$$ When the interaction matrices are normalized to zero mean, $\sum_{n=1}^N a_{nj}=\sum_{m=1}^{M} b_{mi}=0$, the equations simplify even further to $$\begin{aligned} \dot{\bf u} & = & -\beta_X A {\bf y} - \alpha_X {\bf u} ~, \nonumber\\ \dot{\bf v} & = & -\beta_Y B {\bf x} - \alpha_Y {\bf v} ~. \label{EntropyEquations-Constant-Normalized}\end{aligned}$$ The origin ${\bf O}=(0, 0, \ldots, 0)$ of the normalized information space ${\bf U}$ corresponds to random behavior: $({\bf x}, {\bf y})=(1/N, \ldots, 1/N, 1/M, \ldots, 1/M)$. The Shannon entropy of the choice distribution is maximized at this point. 
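The coordinate changes of Eqs. (\[CLR1\]) and (\[CLR2\]) and their inverse are a few lines of code; this sketch verifies the round trip and that the simplex center maps to the origin of $\mathbf{U}$ (the example point is an arbitrary illustrative choice):

```python
import numpy as np

def to_information_space(x):
    """Eqs. (CLR1)+(CLR2): u_i = xi_i - mean(xi), with xi_i = -log x_i."""
    xi = -np.log(x)
    return xi - xi.mean()

def from_information_space(u):
    """Inverse transform: back onto the simplex."""
    z = np.exp(-u)
    return z / z.sum()

x = np.array([0.7, 0.2, 0.1])
u = to_information_space(x)        # sums to zero by construction
x_back = from_information_space(u)
```

This is the centered log-ratio transform of compositional data analysis; the zero-sum constraint on $\mathbf{u}$ replaces the unit-sum constraint on $\mathbf{x}$.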
In contrast, when agents choose an action with probability $1$ the entropy vanishes and the agent state is located in $\Delta$ at the simplex vertices and in ${\bf U}$ at infinity. In Eqs. (\[EntropyEquations-Constant-Normalized\]) the first term is related to information influx to an agent from outside; i.e., from other agents and the environment. The second term is related to the information dissipation due to internal memory loss. Eqs. (\[EntropyEquations-Constant-Normalized\]) are useful for theory, for analysis in certain limits, as we will shortly demonstrate, and for numerical stability during simulation, which we will illustrate when considering example collectives below. Note that Eqs. (\[LearningEquations-Constant\]), Eqs. (\[InformationDynamics-Constant\]), and Eqs. (\[EntropyEquations-Constant\]) are topologically orbit equivalent. Self-organization Induced by Dynamics of Uncertainty ---------------------------------------------------- Equations (\[EntropyEquations-Constant\]) describe a dynamics of uncertainty between deterministic and random behavior. Information influx occurs when the agents adapt to environmental constraints and accordingly change their choice distribution. Information dissipation occurs when memory loss dominates and the agents increase their uncertainty to behave more randomly with less regard to the environmental constraints. The dissipation rate $\gamma$ of the dynamics in ${\bf U}$ is controlled entirely by the memory loss rate $\alpha$: $$\gamma = \sum_{n=1}^N \frac{\partial \dot{u}_n}{\partial u_n} + \sum_{m=1}^M \frac{\partial \dot{v}_m}{\partial v_m} = -N \alpha_X - M \alpha_Y ~. \label{eq:diss}$$ Therefore, Eqs. (\[EntropyEquations-Constant-Normalized\]) are volume preserving in ${\bf U}$ when $\alpha_X = \alpha_Y = 0$. 
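The dissipation-rate formula, Eq. (\[eq:diss\]), can be confirmed numerically by taking central differences of the flow of Eqs. (\[EntropyEquations-Constant-Normalized\]); the interaction matrices and evaluation point below are arbitrary illustrative choices:

```python
import numpy as np

def flow(u, v, A, B, bX, bY, aX, aY):
    """Eqs. (EntropyEquations-Constant-Normalized) in information space U."""
    x = np.exp(-u) / np.exp(-u).sum()
    y = np.exp(-v) / np.exp(-v).sum()
    return -bX * (A @ y) - aX * u, -bY * (B @ x) - aY * v

def divergence(u, v, A, B, bX, bY, aX, aY, h=1e-6):
    """Numerical divergence of the flow via central differences."""
    div = 0.0
    for n in range(len(u)):
        e = np.zeros(len(u)); e[n] = h
        plus = flow(u + e, v, A, B, bX, bY, aX, aY)[0][n]
        minus = flow(u - e, v, A, B, bX, bY, aX, aY)[0][n]
        div += (plus - minus) / (2 * h)
    for m in range(len(v)):
        e = np.zeros(len(v)); e[m] = h
        plus = flow(u, v + e, A, B, bX, bY, aX, aY)[1][m]
        minus = flow(u, v - e, A, B, bX, bY, aX, aY)[1][m]
        div += (plus - minus) / (2 * h)
    return div

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # illustrative interaction matrices
B = -A.T
u = np.array([0.3, -0.3]); v = np.array([-0.1, 0.1])
gamma = divergence(u, v, A, B, bX=1.0, bY=1.0, aX=0.2, aY=0.3)
# gamma should match -N*aX - M*aY = -2*0.2 - 2*0.3 = -1.0
```

The interaction terms contribute nothing to the divergence because $\dot{\mathbf{u}}$ depends on $\mathbf{u}$ only through the memory loss term (and likewise for $\dot{\mathbf{v}}$), which is exactly why $\gamma$ is set by the $\alpha$s alone.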
In the case that agents behave without memory loss ($\alpha_X=\alpha_Y=0$), if the interaction specified by $(A, B)$ is zero-sum, $B=-A^T$, and if, in addition, it determines an interior Nash equilibrium $({\bf x}^*, {\bf y}^*)$ (see App. \[NashEquilibria\]), then the collective has a constant of motion: $$E = \beta_X^{-1} D({\bf x}^{\ast}\parallel{\bf x}) + \beta_Y^{-1} D({\bf y}^{\ast}\parallel{\bf y}) ~, \label{ConstantofMotion}$$ where $D({\bf p} \parallel {\bf q})=\Sigma_k p_k\log (p_k/q_k)$ is the *relative entropy* or *information gain*, which measures the dissimilarity between probability distributions ${\bf p}$ and ${\bf q}$ [@Cove91]. (App. \[HamiltonianFormInformationSpace\] gives the derivation of Eq. (\[ConstantofMotion\]).) ![Dynamics of zero-sum interaction without memory loss: The constant of motion $E = \beta_X^{-1} D({\bf x}^{\ast}\parallel{\bf x}) + \beta_Y^{-1} D({\bf y}^{\ast}\parallel{\bf y})$ fixes the sum of the information-theoretic distances between the interior Nash equilibrium and each agent’s state. []{data-label="fig:RelativeEntropies"}](figures/RelativeEntropy.eps) Since the constant of motion $E$ is a linear sum of relative entropies, the collective maintains the information-theoretic distance between the interior Nash equilibrium and each agent’s state. Thus, in the perfect memory case ($\alpha = 0$), by the inequality $D({\bf p}\parallel{\bf q})\ge 0$, the interior Nash equilibrium cannot be reached unless the initial condition itself starts on it (Fig. \[fig:RelativeEntropies\]). This is an information-theoretic interpretation of the constant of motion noted in Ref. [@Hof96]. Moreover, when $N=M$ the dynamics has a symplectic structure in ${\bf U}$ with the Hamiltonian $E$ given in Eq. (\[ConstantofMotion\]) [@Hof96]. In this case, Eqs. 
(\[EntropyEquations-Constant\]) are described quite simply, $$\dot{\bf U}=J \nabla_{\bf U} E ~,$$ with the Poisson structure $J$: $$J = \left(\begin{array}{cc} O&P\\ -P^T&O\\ \end{array} \right) ~~\mbox{with}~~ P = -\beta_X \beta_Y A ~. \label{HamiltonianDynamics}$$ Again, see App. \[HamiltonianFormInformationSpace\]. When the bi-matrix interaction $(A, B)$ satisfies $B=A^T$, $E$ is a Lyapunov function of the dynamics and decreases to $0$ over time [@Hof88]. In this case, each agent can adapt to the environment independently and the collective adaptation dynamics reaches one of the stable states. The Nash equilibria $({\bf x}^*, {\bf y}^*)$ may not be in the interior of the collective simplices $\Delta$. Note that symmetric neural networks have similar properties [@Hop82]. In some cases when neither $B=-A^T$ nor $B=A^T$, $E$ increases non-monotonically, the dynamics in ${\bf U}$ diverges, and the Shannon entropies of agents’ choice distributions asymptotically decrease. (See Figs. \[fig:hEntropy\] and \[fig:dEntropy\] below.) Note that in single-agent adaptation with state $\bf x$ and normalizing the environment’s reinforcements to a probability distribution ${\bf p}_e$, $D({\bf p}_e\parallel {\bf x})$ is always a Lyapunov function of the dynamics and decreases monotonically. In mutual adaptation, however, agents adapt to a dynamic environment that includes the other agents. As a result, in some cases, $E$, a linear sum of agent relative entropies, will itself exhibit nontrivial dynamics and, in addition, the uncertainties of agents’ choices will asymptotically decrease. When agents adapt with memory loss ($\alpha > 0$), the dynamics is dissipative. Since the memory loss terms induce information dissipation, the dynamics varies between random and deterministic behavior in the information space. 
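The conservation of $E$ in the zero-sum, perfect-memory case, Eq. (\[ConstantofMotion\]), can also be checked by direct integration; the payoff matrices, rates, and initial conditions below are illustrative choices, not from the text:

```python
import numpy as np

def relative_entropy(p, q):
    """D(p || q) = sum_k p_k log(p_k / q_k)."""
    return float(np.sum(p * np.log(p / q)))

def rhs(x, y, A, B, bX, bY):
    """Eqs. (LearningEquations-Constant) with alpha_X = alpha_Y = 0."""
    Ay, Bx = A @ y, B @ x
    return bX * x * (Ay - x @ Ay), bY * y * (Bx - y @ Bx)

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # illustrative zero-sum game
B = -A.T
x_star = y_star = np.array([0.5, 0.5])    # interior Nash equilibrium
bX = bY = 1.0

x, y = np.array([0.7, 0.3]), np.array([0.4, 0.6])
E0 = relative_entropy(x_star, x) / bX + relative_entropy(y_star, y) / bY
dt = 0.001
for _ in range(5000):
    dx, dy = rhs(x, y, A, B, bX, bY)
    x, y = x + dt * dx, y + dt * dy
E1 = relative_entropy(x_star, x) / bX + relative_entropy(y_star, y) / bY
```

Under Euler integration $E$ drifts slightly outward, so the check uses a loose tolerance; a symplectic or higher-order integrator would track the conserved value more closely.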
Notably, when the agents attempt to achieve this balance together by interacting and, in particular, when the interaction has *nontransitive* structure, the dynamics can persistently wander in a bounded area in information space. Since, in some cases, mutual adaptation and memory loss produce successive stretching and folding, deterministic chaos can occur over a significant range of $\alpha$, even with only two agents. A schematic view of the flow in mutual adaptation is given in Fig. \[fig:horseshoe\]. In the case that the agents are completely decoupled (or, in the case that $B=A^T$ and $\alpha_X=\alpha_Y=0$ for two agents), information space locally splits into subspaces governed by effects of mutual adaptation (information influx) and memory loss (information dissipation). They correspond to unstable and stable flow directions as in single-agent adaptation. However, in the case that agents are coupled via nontransitive interaction, mutual adaptation and memory loss interact with each other and a horseshoe can be produced. Flow of information is multidimensional since each agent obtains information from its environment, organizes its behavior based on that information, and this local adaptation is then fed back into the environment, affecting the other agents. In this case, “weak” uncertainty of behavior plays an important role in organizing the collective’s behavior. Small fluctuations in decision making can be amplified through repeated mutual adaptation with competitive interactions, and dynamic memory stored in the collective can exist, as indicated by a positive metric entropy. ![Schematic view of mutual adaptation: The effects of mutual adaptation and memory loss produce unstable and stable directions. The nontransitive structure of interactions leads to state-space folding. []{data-label="fig:horseshoe"}](figures/Horseshoe.eps) Now consider many agents interacting. 
In the perfect memory case, when the game is zero-sum and has an interior Nash equilibrium $({\bf x}^{1*},{\bf x}^{2*}, \ldots, {\bf x}^{S*})$, following Eq. (\[ConstantofMotion\]), the following constant of motion exists: $$E = \sum_{s=1}^S \frac1\beta_s D({\bf x}^{s*}\parallel{\bf x}^s) = \sum_{s=1}^S \frac1\beta_s \left(\sum_{n^s=1}^{N^s} x_{n^s}^{s*} \log \frac{x_{n^s}^{s*}}{x_{n^s}^{s}} \right) ~. \label{MultiConstantOfMotion}$$ Although, strictly speaking, Hamiltonian dynamics and the associated symplectic structure of information space occur only for two agents, one can describe multiple-agent dynamics as a generalized Hamiltonian system [@Per90]. In the general case with $\alpha >0$, dissipative dynamics and high-dimensional chaotic flows can give rise to several unstable directions, since information influx has a network structure relative to the other agents. At least $S$ stable directions are expected since memory loss comes from each individual’s internal dynamics. Summarizing, in single-agent adaptation, information flows unidirectionally from the environment to the agent and the agent adapts its behavior to the environmental constraints. Adaptation leads to $D({\bf p}_e \parallel {\bf x})\rightarrow 0$. For mutual adaptation in an agent collective, however, information flow is multidimensional since each agent obtains information from its environment that includes the other agents. In this situation, $E$ need not be a Lyapunov function for the dynamics. As we will see, when the dynamics is chaotic, global information maximization is of doubtful utility and the dynamic view of adaptation shown in Fig. \[fig:horseshoe\] is more appropriate. When dynamic memory in collectives emerges, collective adaptation becomes a non-trivial problem. A detailed dynamical and information-theoretic analysis along these lines will be reported elsewhere. In the next section, we will give several phenomenological examples that capture collective adaptation. 
Examples {#Sec:Examples} ======== To illustrate collective adaptation, we now give several examples of the dynamics in a static environment with two and three agents interacting via versions of Matching Pennies and Rock-Scissors-Paper, games with non-transitive structures. App. \[ReinforcementSchemesInteractionMatrices\] gives the details of the reinforcement schemes for these cases. The agents will have equal adaptation rates ($\beta_X=\beta_Y=\cdots$) and the same number of actions ($N = M = L =\cdots$). In these simplified cases, the equations of motion for two agents are given by $$\begin{aligned} \frac{\dot{x}_i}{x_i} & = & [ (A{\bf y})_i-{\bf x}\cdot A{\bf y}] + \alpha_X [-\log x_i + \sum_{n=1}^N x_n\log x_n ] ~, \nonumber\\ \frac{\dot{y}_j}{y_j} & = & [ (B{\bf x})_j-{\bf y}\cdot B{\bf x}] + \alpha_Y [-\log y_j + \sum_{m=1}^M y_m\log y_m] ~, \nonumber\\ \label{LearningEquations-Constant-Example}\end{aligned}$$ for $i, j = 1, \ldots, N$. A detailed analysis of this case with zero memory loss ($\alpha = 0$) is given in Ref. [@Hof88] in terms of asymmetric game dynamics. We will present results for zero and positive memory loss rates. We then consider three agents, for which the adaptation equations are $$\begin{aligned} \frac{\dot{x}_i}{x_i} &=& [(A{\bf y}{\bf z})_i - {\bf x}\cdot A{\bf y}{\bf z}] + \alpha_X [-\log x_i + \sum_{n=1}^N x_n\log x_n ] ~, \nonumber\\ \frac{\dot{y}_j}{y_j} &=& [(B{\bf z}{\bf x})_j - {\bf y}\cdot B{\bf z}{\bf x}] + \alpha_Y [-\log y_j + \sum_{m=1}^M y_m\log y_m] ~, \nonumber\\ \frac{\dot{z}_k}{z_k} &=& [(C{\bf x}{\bf y})_k - {\bf z}\cdot C{\bf x}{\bf y}] + \alpha_Z [-\log z_k + \sum_{l=1}^L ~z_l \log z_l] ~, \nonumber\\ \label{LearningEquations-Constant-3Agents}\end{aligned}$$ for $i, j, k = 1, \ldots, N$. We again will describe cases with and without memory loss. Computer simulations are executed in the information space $\bf U$ and the results are shown in the state space $X$. 
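As a sketch of how Eqs. (\[LearningEquations-Constant-Example\]) can be simulated, the right-hand sides translate directly into code; the Python rendering below is illustrative (variable names are ours). Note that the components of $\dot{\bf x}$ sum to zero, so the probability normalization is preserved by the flow.

```python
import numpy as np

def two_agent_flow(x, y, A, B, alpha_x=0.0, alpha_y=0.0):
    """Right-hand sides of the two-agent adaptation equations:
    dx_i/dt = x_i * ([(A y)_i - x.Ay] + alpha_X * [-log x_i + sum_n x_n log x_n]),
    and symmetrically for y with B and alpha_Y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Ay, Bx = A @ y, B @ x
    dx = x * ((Ay - x @ Ay) + alpha_x * (-np.log(x) + np.sum(x * np.log(x))))
    dy = y * ((Bx - y @ Bx) + alpha_y * (-np.log(y) + np.sum(y * np.log(y))))
    return dx, dy
```

A trajectory can then be generated with any ODE integrator; the text uses a fourth-order symplectic integrator in the Hamiltonian case, while a plain Runge-Kutta step suffices for a rough sketch.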
We ignore the dynamics on the boundary of the simplex and concentrate on the case in which all variables are greater than $0$ and less than $1$. Two Agents Adapting under Matching Pennies Interaction ------------------------------------------------------ In the matching pennies game, agents play one of two actions: heads ($H$) or tails ($T$). Agent $X$ wins when the plays do not agree; agent $Y$ wins when they do. Agent $X$’s state space is $\Delta_X = (x_1,x_2)$ with $x_i \in (0,1)$ and $x_1 + x_2 = 1$. That is, $x_1$ is the probability that agent $X$ plays heads; $x_2$, tails. Agent $Y$ is described similarly. Thus, each agent’s state space is effectively one dimensional and the collective state space $\Delta = \Delta_X \times \Delta_Y$, two dimensional. The environment for two agents interacting via the matching pennies game leads to the following matrices for Eqs. (\[LearningEquations-Constant-Example\]): $$A=\left[ \begin{array}{cc} -\epsilon_X&\epsilon_X\\ \epsilon_X &-\epsilon_X\\ \end{array} \right] ~{\rm and}~ B=\left[ \begin{array}{cc} -\epsilon_Y&\epsilon_Y \\ \epsilon_Y &-\epsilon_Y\\ \end{array} \right] ~, \label{MPGame}$$ where $\epsilon_X \in (0.0,1.0]$ and $-\epsilon_Y \in (0.0,1.0]$. Figure \[fig:2MPInteraction\] shows the heteroclinic cycle of the adaptation dynamics on the boundary of $\Delta$ when the $\alpha$s vanish. Flows on the border occur only when agents completely ignore an action at the initial state; that is, when $x_i(0)=0$ or $y_j(0)=0$ for at least one $i$ or $j$. Each vertex of the simplex is a saddle since the interaction is non-transitive. ![Flows on the boundary in the Matching Pennies interaction: Actions $H$ and $T$ correspond to “heads” and “tails”, respectively. Arrows indicate the direction of adaptation dynamics on the boundary of the state space $\Delta$.
[]{data-label="fig:2MPInteraction"}](figures/2MPInteraction.eps) The Nash equilibrium $({\bf x}^*, {\bf y}^*)$ of the Matching Pennies game is in the center of $\Delta$: $({\bf x}^*, {\bf y}^*)=(\frac12, \frac12, \frac12, \frac12)$ and this is also a fixed point of the adaptation dynamics. The Jacobian at $({\bf x}^*, {\bf y}^*)$ is $$J=\left( \begin{array}{cc} -\frac{\alpha_X}{2}(1+\log 2) &-\frac{\epsilon_X}{2}\\ -\frac{\epsilon_Y}{2}&-\frac{\alpha_Y}{2}(1+\log 2)\\ \end{array} \right)$$ and its eigenvalues are $$\begin{aligned} \frac{4\lambda_i}{1+\log 2} &=& -(\alpha_X+\alpha_Y) \nonumber\\ &\pm& \sqrt{(\alpha_X-\alpha_Y)^2+4\epsilon_X\epsilon_Y/(1+\log 2)^2} ~. \end{aligned}$$ In the perfect memory case ($\alpha_X=\alpha_Y=0$), trajectories near $({\bf x}^*, {\bf y}^*)$ are neutrally stable periodic orbits, since $\lambda_i = \pm \frac12\sqrt{\epsilon_X\epsilon_Y}$ are pure imaginary. In the memory loss case ($\alpha_X > 0$ and $\alpha_Y>0$), $({\bf x}^*, {\bf y}^*)$ is globally asymptotically stable, since Re($\lambda_1$) and Re($\lambda_2$) are strictly negative. Examples of the trajectories in these two cases are given in Figure \[fig:2MPTrajectories\]. ![Adaptation dynamics in Matching Pennies interaction: Here $\epsilon_X = 0.5$ and $\epsilon_Y = -0.3$ with (left) $\alpha_X = \alpha_Y = 0$ and (right) $\alpha_X = 0.02$ and $\alpha_Y = 0.01$. []{data-label="fig:2MPTrajectories"}](figures/2MPTrajectories.eps) Three Agents Adapting under Even-Odd Interaction ------------------------------------------------ Now consider extending Matching Pennies for two agents so that it determines the interactions between three. Here we introduce the *Even-Odd* interaction in which there are again two actions, $H$ and $T$, but agents win according to whether or not the number of heads in the group of three plays by the agents is even or odd. 
The environment now is given, for agent $X$, by $$a_{ijk}=\left\{ \begin{array}{ll} \epsilon_X, & \mbox{number of $H$s is even}\\ -\epsilon_X, & \mbox{otherwise}\\ \end{array} \right.$$ with actions for agents $X$, $Y$, and $Z$ given by $i, j, k = \{ H, T \}$ and $\epsilon_X \in (0.0, 1.0]$. The interaction matrices $b_{jki}$ and $c_{kij}$ for agents $Y$ and $Z$, respectively, are given similarly, but with $\epsilon_Y \in (0.0, 1.0]$ and $\epsilon_Z \in [-1.0,0.0)$. App. \[ReinforcementSchemesInteractionMatrices\] gives the details of the reinforcement scheme. Following the reasoning used in Matching Pennies, the collective state space $\Delta = \Delta_X \times \Delta_Y \times \Delta_Z$ is now a solid three-dimensional cube. Figure \[fig:3MPInteraction\] shows the flow on $\Delta$’s boundary when the $\alpha$s vanish: a heteroclinic network of the adaptation dynamics. $\Delta$ is partitioned into four prism-shaped subspaces. Each prism subspace has a heteroclinic cycle on the face that is also a face of $\Delta$. ![Flows on the state space boundary under the Even-Odd interactions: $H$ and $T$ correspond to “heads” and “tails”, respectively. Arrows indicate the direction of adaptation dynamics on $\Delta$’s boundary when the $\alpha$s vanish. []{data-label="fig:3MPInteraction"}](figures/3MPInteraction.eps) The Nash equilibrium of the Even-Odd interaction is $({\bf x}^*, {\bf y}^*, {\bf z}^*)= (\frac12, \frac12, \frac12, \frac12, \frac12, \frac12)$ at the center of $\Delta$ and is also a fixed point of the adaptation dynamics. The Jacobian there is $$J=\left( \begin{array}{ccc} -\alpha_X & 0 &0\\ 0&-\alpha_Y&0\\ 0&0&-\alpha_Z\\ \end{array} \right) ~.$$ Its eigenvalues are $\lambda=-\alpha_X,-\alpha_Y,-\alpha_Z$. Thus, in the complete memory case ($\alpha_X=\alpha_Y=\alpha_Z=0$), trajectories near $({\bf x}^*, {\bf y}^*, {\bf z}^*)$ are neutrally stable periodic orbits.
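The Even-Odd payoff tensor $a_{ijk}$ defined above can be built explicitly. The Python sketch below is illustrative (the convention index $0$ for $H$, index $1$ for $T$ is ours):

```python
import numpy as np

def even_odd_tensor(eps):
    """Agent X's payoff a_{ijk}: +eps if the number of heads among the
    three plays (i, j, k) is even, -eps otherwise. Index 0 = H, 1 = T."""
    a = np.empty((2, 2, 2))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                heads = (i == 0) + (j == 0) + (k == 0)
                a[i, j, k] = eps if heads % 2 == 0 else -eps
    return a
```

The tensors $b_{jki}$ and $c_{kij}$ follow by permuting the indices and substituting $\epsilon_Y$ or $\epsilon_Z$.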
With memory decay ($\alpha_X, \alpha_Y,\alpha_Z >0$), $({\bf x}^*, {\bf y}^*, {\bf z}^*)$ is globally asymptotically stable. The hyperbolic fixed points in the top and bottom faces are unstable in all cases. Examples of the trajectories are given in Figure \[fig:3MPTrajectories\]. Notably, when a single agent (say, $Z$) has memory loss and the others have perfect memory, the crossed lines given by $\{z=x=0.5$, $z=y=0.5\}$ become an invariant subspace and trajectories are attracted to points in this subspace. Thus, there are infinitely many neutrally stable points. With $\alpha_X = \alpha_Y = 0$ and $\alpha_Z=0.01$, for example, the adaptive dynamics alternates between a Matching Pennies interaction between agents $X$ and $Z$ and one between agents $Y$ and $Z$ during the transient relaxation to a point on the invariant subspace. ![Dynamics of adaptation in the Even-Odd interaction: $\epsilon_X = 0.5$, $\epsilon_Y = 0.2$, and $\epsilon_Z=-0.3$ with $\alpha_X = \alpha_Y = \alpha_Z=0$ (left) and with $\alpha_X = \alpha_Y = 0$ and $\alpha_Z=0.01$ (right). Trajectories from several initial conditions are shown in the left panel. In the right panel, the neutral subspace is shown as the horizontal cross and the chosen trajectory illustrates the attraction to a point in this subspace. []{data-label="fig:3MPTrajectories"}](figures/3MPTrajectoriesgif.eps) Two Agents Adapting under Rock-Scissors-Paper Interaction --------------------------------------------------------- In this subsection, we give an example of an environment in which agents have three actions. One of the most commonly studied games with three actions is the Rock-Scissors-Paper (RSP) game, in which an agent playing Rock beats one playing Scissors, which in turn beats an agent playing Paper, which finally beats Rock. First we examine two agents, a straightforward implementation of the RSP game, and then extend the RSP interaction to three agents and analyze the higher-dimensional behavior.
The interaction matrices for these cases are given in App. \[ReinforcementSchemesInteractionMatrices\]. Under the RSP interaction each agent has the option of playing one of three actions: “rock” (R), “scissors” (S), and “paper” (P). Agent $X$’s probabilities of playing these are denoted $x_1$, $x_2$, and $x_3$ and $x_1+x_2+x_3=1$. Agent $Y$’s probabilities are given similarly. Thus, the agent state spaces, $\Delta_X$ and $\Delta_Y$, are each two-dimensional simplices, and the collective state space $\Delta = \Delta_X \times \Delta_Y$ is four dimensional. For two agents the environment is given by the interaction matrices $$A = \left[ \begin{array}{ccc} \epsilon_X & 1 & -1\\ -1 & \epsilon_X & 1\\ 1 & -1 & \epsilon_X\\ \end{array} \right] ~{\rm and}~ B = \left[ \begin{array}{ccc} \epsilon_Y & 1 & -1\\ -1 & \epsilon_Y & 1\\ 1 & -1 & \epsilon_Y\\ \end{array} \right] ~, \label{RSPGame}$$ where $\epsilon_X, \epsilon_Y \in[-1.0,1.0]$ are the rewards for ties. These matrices are normalized to $$A' = \left[ \begin{array}{ccc} \frac23\epsilon_X & 1-\frac13\epsilon_X & -1-\frac13\epsilon_X\\ -1-\frac13\epsilon_X & \frac23\epsilon_X & 1-\frac13\epsilon_X\\ 1-\frac13\epsilon_X & -1-\frac13\epsilon_X & \frac23\epsilon_X\\ \end{array} \right]$$ and $$B' = \left[ \begin{array}{ccc} \frac23\epsilon_Y & 1-\frac13\epsilon_Y & -1-\frac13\epsilon_Y\\ -1-\frac13\epsilon_Y & \frac23\epsilon_Y & 1-\frac13\epsilon_Y\\ 1-\frac13\epsilon_Y & -1-\frac13\epsilon_Y & \frac23\epsilon_Y\\ \end{array} \right] ~. \label{RSPGame_Normalized}$$ Note that the reinforcements are normalized to zero mean and that this does not affect the dynamics. The flow on $\Delta$’s boundary is shown in Fig. \[fig:2RSPInteraction\]. This represents the heteroclinic network of adaptation dynamics on $\Delta$’s edges when the $\alpha$s vanish. Each vertex is a saddle since the interaction has non-transitive structure.
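The normalization above simply subtracts each row's mean payoff; for the RSP matrices every row mean equals $\epsilon/3$, so the subtraction shifts all of an action's payoffs equally and leaves the dynamics unchanged. A quick Python check (our own sketch, with function names chosen here):

```python
import numpy as np

def normalize_payoff(A):
    """Subtract each row's mean so the reinforcements have zero mean per row."""
    A = np.asarray(A, float)
    return A - A.mean(axis=1, keepdims=True)

eps_x = 0.5
A = np.array([[eps_x, 1, -1],
              [-1, eps_x, 1],
              [1, -1, eps_x]])
A_prime = normalize_payoff(A)   # reproduces A' from the text
```

The same call with $\epsilon_Y$ reproduces $B'$.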
![Flows on the boundary of the simplex in the Rock-Scissors-Paper interaction for two agents: $R$, $S$, and $P$ denote “rock”, “scissors”, and “paper”, respectively. The arrows indicate the direction of the adaptation dynamics on the boundary of the collective state space $\Delta$ when the $\alpha$s vanish. []{data-label="fig:2RSPInteraction"}](figures/2RSPInteraction.eps) The Nash equilibrium $({\bf x}^*,{\bf y}^*)$ is given by the centers of the simplices: $$({\bf x}^*,{\bf y}^*) = (\frac{1}{3},\frac{1}{3},\frac{1}{3}, \frac{1}{3},\frac{1}{3},\frac{1}{3}) ~.$$ This is also a fixed point of the adaptation dynamics. The Jacobian there is $$J=\left( \begin{array}{cccc} -\alpha_X & 0 &\frac{1+\epsilon_X}3&\frac23\\ 0&-\alpha_X&-\frac23&\frac{-1+\epsilon_X}{3}\\ \frac{1+\epsilon_Y}{3}&\frac23&-\alpha_Y&0\\ -\frac23&\frac{-1+\epsilon_Y}{3}&0&-\alpha_Y\\ \end{array} \right) ~.$$ Its eigenvalues are $$2\lambda_i=-(\alpha_X+\alpha_Y) \pm\sqrt{(\alpha_X-\alpha_Y)^2+\frac{4\left(\epsilon_X\epsilon_Y-3 \pm\sqrt{-3(\epsilon_X+\epsilon_Y)^2}\right)}{9}} ~.$$ Thus, when $(A, B)$ is zero-sum ($\epsilon_X+\epsilon_Y=0$) and agents have complete memory ($\alpha_X=\alpha_Y=0$), trajectories near $({\bf x}^*, {\bf y}^*)$ are neutrally stable periodic orbits since all $\lambda$’s are pure imaginary. The dynamics is Hamiltonian in this case. With memory decay ($\alpha_X, \alpha_Y >0$), and $|\alpha_X-\alpha_Y|<\frac23(\epsilon_X^2+3)$, $({\bf x}^*, {\bf y}^*)$ is globally asymptotically stable. For the nonzero-sum case, we will give examples of dynamics with $\epsilon_X=0.5$, $\epsilon_Y=-0.3$, $\alpha_Y=0.01$. In this case, when $\alpha_X>\alpha_c$, $({\bf x}^*, {\bf y}^*)$ is globally asymptotically stable. At the point $\alpha_c\approx 0.055008938$, a period-doubling bifurcation occurs. The example of two agents adapting under the Rock-Scissors-Paper interaction illustrates various types of low-dimensional chaos. We now explore several cases.
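The eigenvalue formula can be verified numerically by diagonalizing the Jacobian above. The Python sketch below (our own check) confirms that for a zero-sum game with perfect memory all eigenvalues are purely imaginary, so the interior Nash equilibrium is a neutrally stable center:

```python
import numpy as np

def rsp_jacobian(eps_x, eps_y, alpha_x, alpha_y):
    """Jacobian of the two-agent RSP dynamics at the interior Nash equilibrium."""
    return np.array([
        [-alpha_x, 0.0, (1 + eps_x) / 3, 2 / 3],
        [0.0, -alpha_x, -2 / 3, (-1 + eps_x) / 3],
        [(1 + eps_y) / 3, 2 / 3, -alpha_y, 0.0],
        [-2 / 3, (-1 + eps_y) / 3, 0.0, -alpha_y],
    ])

# Zero-sum (eps_X = -eps_Y), perfect memory: eigenvalues +-i sqrt(13)/6 (twice).
lam = np.linalg.eigvals(rsp_jacobian(0.5, -0.5, 0.0, 0.0))
```

For the nonzero-sum parameters of the text ($\epsilon_X=0.5$, $\epsilon_Y=-0.3$, $\alpha_Y=0.01$), the leading real part changes sign between $\alpha_X=0.05$ and $\alpha_X=0.06$, consistent with $\alpha_c \approx 0.055$.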
### Hamiltonian Limit When the agent memories are perfect ($\alpha_X=\alpha_Y=0$) and the game is zero-sum ($\epsilon_X=-\epsilon_Y$), the dynamics in the information space $\bf U$ is Hamiltonian, with the Hamiltonian given by the relative entropies $E=D({\bf x}^*\parallel {\bf x})+D({\bf y}^*\parallel{\bf y})$. The left columns of Figs. \[fig:HamilIntegrable\] and \[fig:HamilChaos\] give trajectories in the collective state space $\Delta$, while the plots given in the middle and right columns are these trajectories projected onto the individual agent simplices, $\Delta_X$ and $\Delta_Y$. The trajectories were generated using a $4$th-order symplectic integrator [@Yos90] in $\bf U$. When $\epsilon_X = -\epsilon_Y = 0.0$, the dynamics appears to be integrable, since only quasiperiodic tori appear for almost all initial conditions in our computer simulations. For some initial conditions, the torus is knotted, forming a trefoil. In contrast, when $\epsilon_X = -\epsilon_Y > 0.0$, Hamiltonian chaos occurs with positive-negative pairs of Lyapunov exponents. (See Table \[Table:HamiLyap\].) The game-theoretic behavior of this example was investigated briefly in Ref. [@Sat02]. The dynamics is very rich. For example, there are infinitely many distinct behaviors near the fixed point at the center—the interior Nash equilibrium—and a periodic orbit arbitrarily close to any chaotic one. ![Quasiperiodic tori: Collective dynamics in $\Delta$ (left column) and individual dynamics projected onto $\Delta_X$ and $\Delta_Y$ respectively (right two columns). Here $\epsilon_X = - \epsilon_Y = 0.0$ and $\alpha_X = \alpha_Y = 0$. The initial condition is (A): $({\bf x},{\bf y}) = (0.26, 0.113333, 0.626667, 0.165, 0.772549, 0.062451)$ for the top and (B): $({\bf x},{\bf y}) = (0.05, 0.35, 0.6, 0.1, 0.2, 0.7)$ for the bottom. The constant of motion (Hamiltonian) is $E = 0.74446808 \equiv E_0 $. The Poincaré section used for Fig.
\[fig:HamilPSection\] is given by $x_1=x_2$ and $y_1<y_2$ and is indicated here as the straight diagonal line in agent $X$’s simplex $\Delta_X$. []{data-label="fig:HamilIntegrable"}](figures/2RSPHamilTorusAgif.eps "fig:") ![Quasiperiodic tori: Collective dynamics in $\Delta$ (left column) and individual dynamics projected onto $\Delta_X$ and $\Delta_Y$ respectively (right two columns). Here $\epsilon_X = - \epsilon_Y = 0.0$ and $\alpha_X = \alpha_Y = 0$. The initial condition is (A): $({\bf x},{\bf y}) = (0.26, 0.113333, 0.626667, 0.165, 0.772549, 0.062451)$ for the top and (B): $({\bf x},{\bf y}) = (0.05, 0.35, 0.6, 0.1, 0.2, 0.7)$ for the bottom. The constant of motion (Hamiltonian) is $E = 0.74446808 \equiv E_0 $. The Poincaré section used for Fig. \[fig:HamilPSection\] is given by $x_1=x_2$ and $y_1<y_2$ and is indicated here as the straight diagonal line in agent $X$’s simplex $\Delta_X$. []{data-label="fig:HamilIntegrable"}](figures/2RSPHamilTorusBgif.eps "fig:") ![Quasiperiodic tori and chaos: Collective dynamics in $\Delta$ (left column) and individual dynamics projected onto $\Delta_X$ and $\Delta_Y$, respectively (right two columns). Here $\epsilon_X = - \epsilon_Y = 0.5$ and $\alpha_X = \alpha_Y = 0$. The initial conditions are the same as in Fig. \[fig:HamilIntegrable\], (A) for the top row and (B) for the bottom row, respectively. Also, the constant of motion is the same: $E = E_0$. The Poincaré section is given by $3x_1-x_2-2/3=0$ and $y_1-3y_2+2/3<0$ and this is indicated as a straight line in $\Delta_X$. []{data-label="fig:HamilChaos"}](figures/2RSPHamilChaosAgif.eps "fig:") ![Quasiperiodic tori and chaos: Collective dynamics in $\Delta$ (left column) and individual dynamics projected onto $\Delta_X$ and $\Delta_Y$, respectively (right two columns). Here $\epsilon_X = - \epsilon_Y = 0.5$ and $\alpha_X = \alpha_Y = 0$. The initial conditions are the same as in Fig. \[fig:HamilIntegrable\], (A) for the top row and (B) for the bottom row, respectively.
Also, the constant of motion is the same: $E = E_0$. The Poincaré section is given by $3x_1-x_2-2/3=0$ and $y_1-3y_2+2/3<0$ and this is indicated as a straight line in $\Delta_X$. []{data-label="fig:HamilChaos"}](figures/2RSPHamilChaosBgif.eps "fig:") A more detailed view of the complex dynamics is given in Figure \[fig:HamilPSection\] which shows Poincaré sections of Eqs. (\[LearningEquations-Constant-Example\])’s trajectories. The Poincaré section is given by $\dot{u}_3 > 0$ and $\dot{v}_3 = 0$. In $({\bf x},{\bf y})$ space the section is determined by the constraints: $$\begin{aligned} (1 - \epsilon_X) y_1 & - & (1+\epsilon_X)y_2 +\frac23\epsilon_X< 0 ~, \nonumber\\ (1 - \epsilon_Y) x_1 & - & (1 + \epsilon_Y) x_2 + \frac23\epsilon_Y = 0 ~.\end{aligned}$$ These sections are indicated as the straight lines drawn in the $\Delta_X$ simplices of Figs. \[fig:HamilIntegrable\] and \[fig:HamilChaos\]. In Figure \[fig:HamilPSection\], when $\epsilon_X=-\epsilon_Y=0.0$, closed loops, which depend on the initial condition, indicate tori in the Poincaré section. When $\epsilon_X=-\epsilon_Y=0.5$, some tori collapse and become chaotic. The scatter of dots among the remaining closed loops is characteristic of Hamiltonian chaos. Table \[Table:HamiLyap\] shows Lyapunov spectra in ${\bf U}$ for dynamics with $\epsilon_X=-\epsilon_Y=0.0$ and $\epsilon_X=-\epsilon_Y=0.5$ with initial condition $({\bf x}(0), {\bf y}(0))=(x_1, 0.35, 0.65-x_1, 0.1, y_2, 0.9-y_2)$ with $E=E_0=0.74446808$ fixed. $(x_1, y_2)$ satisfies $$\frac{e^{-3(E_0+2\log3)}}{0.035} = x_1(0.65-x_1)y_2(0.9-y_2).$$ When $x_1(0)=0.05$, the initial condition is (B): $({\bf x}, {\bf y})=(0.05, 0.35, 0.6, 0.1, 0.2, 0.7)$, which we gave in the preceding examples. When $\epsilon_X=0.5$, the Lyapunov exponents indicate positive-negative pairs for $x_1(0)=0.05, 0.06$ and $0.08$, clearly showing Hamiltonian chaos. Note that $\lambda_2\simeq 0.0$, $\lambda_3\simeq 0.0$, and $\lambda_4\simeq -\lambda_1$, as expected.
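The Poincaré-section points can be extracted from a sampled trajectory by detecting sign changes of the section function and interpolating between samples. The Python sketch below is our own illustration (function and variable names are assumptions); the increasing-direction condition stands in for the $\dot{u}_3 > 0$ side condition, and one would additionally keep only points satisfying the inequality on the $y$ coordinates:

```python
import numpy as np

def section_points(traj, eps_y):
    """Collect crossings of the section
    g = (1 - eps_y) x_1 - (1 + eps_y) x_2 + (2/3) eps_y = 0
    in the increasing direction, with linear interpolation between samples.
    traj is a (T, 6) array of states (x1, x2, x3, y1, y2, y3)."""
    g = (1 - eps_y) * traj[:, 0] - (1 + eps_y) * traj[:, 1] + (2 / 3) * eps_y
    points = []
    for t in range(len(g) - 1):
        if g[t] < 0 <= g[t + 1]:
            s = -g[t] / (g[t + 1] - g[t])          # interpolation fraction
            points.append((1 - s) * traj[t] + s * traj[t + 1])
    return np.array(points)
```

Plotting the collected points in suitable coordinates reproduces the closed loops and chaotic scatter described below.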
![Poincaré sections of the behavior in the preceding two figures. That is, $\epsilon_X = - \epsilon_Y = 0.0$ (left) and $\epsilon_X = - \epsilon_Y = 0.5$ (right). The Poincaré section is given by $x_1=x_2$ and $y_1<y_2$ (left) and $3x_1-x_2-2/3=0$ and $y_1-3y_2+2/3<0$ (right). There are 25 randomly selected initial conditions, including the two, (A) and (B), used in Figs. \[fig:HamilIntegrable\] and \[fig:HamilChaos\]. The constant of motion ($E =E_0$) forms the outer border of the Poincaré sections. []{data-label="fig:HamilPSection"}](figures/PoincareSectiongif.eps)

  $\epsilon_X$   $\lambda$     $x_1(0)$=0.05   0.06            0.07       0.08            0.09       0.10
  -------------- ------------- --------------- --------------- ---------- --------------- ---------- ----------
  $0.0$          $\lambda_1$   $+0.881$        $+0.551$        $+0.563$   $+0.573$        $+0.575$   $+0.589$
                 $\lambda_2$   $+0.436$        $+0.447$        $+0.464$   $+0.467$        $+0.460$   $+0.461$
                 $\lambda_3$   $-0.436$        $-0.447$        $-0.464$   $-0.467$        $-0.460$   $-0.461$
                 $\lambda_4$   $-0.881$        $-0.551$        $-0.563$   $-0.573$        $-0.575$   $-0.589$
  $0.5$          $\lambda_1$   ${\bf +36.4}$   ${\bf +41.5}$   $+0.487$   ${\bf +26.3}$   $+0.575$   $+0.487$
                 $\lambda_2$   $+0.543$        $+0.666$        $+0.204$   $+0.350$        $+0.460$   $+0.460$
                 $\lambda_3$   $-0.637$        $-0.666$        $-0.197$   $-0.338$        $-0.460$   $-0.467$
                 $\lambda_4$   ${\bf -36.3}$   ${\bf -41.5}$   $-0.494$   ${\bf -26.3}$   $-0.575$   $-0.480$

  : Lyapunov spectra for different initial conditions (columns) and different values of the tie-breaking parameter $\epsilon_X$. The initial conditions are $(x_1, x_2, x_3, y_1, y_2, y_3) =(x_1, 0.35, 0.65-x_1, 0.1, y_2, 0.9-y_2)$ with $E=E_0=0.74446808$ fixed. We choose the initial conditions $(x_1, y_2)$ = $(0.05, 0.2)$, $(0.06, 0.160421)$, $(0.07, 0.135275)$, $(0.08, 0.117743)$, $(0.09, 0.104795)$, $(0.10, 0.0948432)$. The Lyapunov exponents are multiplied by $10^3$. Note that $\lambda_2\simeq 0.0$, $\lambda_3\simeq 0.0$ and $\lambda_4\simeq -\lambda_1$ as expected. The Lyapunov exponents indicating chaos are shown in boldface.
[]{data-label="Table:HamiLyap"} \ ### Conservative Dynamics With perfect memory ($\alpha_X=\alpha_Y=0$) and a game that is not zero-sum ($\epsilon_X \neq -\epsilon_Y$), the dynamics is conservative in $\bf U$ and one observes transients that are attracted to heteroclinic networks in the state space $X$. (See Fig. \[fig:HeteroClinic\].) ![Heteroclinic cycle with $\epsilon_X=-0.1$ and $\epsilon_Y = 0.05$ (top row). Chaotic transient to a heteroclinic network (bottom row) with $\epsilon_X=0.1$ and $\epsilon_Y = -0.05$. For both $\alpha_X = \alpha_Y = 0$. []{data-label="fig:HeteroClinic"}](figures/2RSPHeteroClinicgif.eps "fig:") ![Heteroclinic cycle with $\epsilon_X=-0.1$ and $\epsilon_Y = 0.05$ (top row). Chaotic transient to a heteroclinic network (bottom row) with $\epsilon_X=0.1$ and $\epsilon_Y = -0.05$. For both $\alpha_X = \alpha_Y = 0$. []{data-label="fig:HeteroClinic"}](figures/2RSPChaoticHeteroClinicgif.eps "fig:") ![Time series of action probabilities during the heteroclinic cycles of Fig. \[fig:HeteroClinic\]. $\epsilon_X=-0.1$ and $\epsilon_Y = 0.05$ for the left column. The right column shows the chaotic transient to a possible heteroclinic cycle when $\epsilon_X=0.1$ and $\epsilon_Y = -0.05$. For both $\alpha_X = \alpha_Y = 0$. []{data-label="fig:xHeteroClinic"}](figures/xHeteroClinicgif.eps) ![Dynamics of $H^X$, $H^Y$ and $E$ in conservative adaptive dynamics: $\epsilon_X=-0.1$ and $\epsilon_Y = 0.05$ for the left plot and $\epsilon_X=0.1$ and $\epsilon_Y = -0.05$ for the right. For both $\alpha_X = \alpha_Y = 0$. Note that $E$ increases asymptotically and $H^X$ and $H^Y$ tend to decrease. []{data-label="fig:hEntropy"}](figures/hEntropiesgif.eps) When $\epsilon_X+\epsilon_Y<0$, the behavior is intermittent and orbits are guided by the flow on $\Delta$’s edges, which describes a network of possible heteroclinic cycles. Since action ties are not rewarded, there is only one such cycle. It is shown in the top row of Fig.
\[fig:HeteroClinic\]: $(R,P) \rightarrow (S,P) \rightarrow (S,R) \rightarrow (P,R) \rightarrow (P,S) \rightarrow (R,S) \rightarrow (R,P)$. Note that during the cycle each agent switches between almost deterministic actions in the order $R \rightarrow S \rightarrow P$. The agents are out of phase with respect to each other and they alternate winning each turn. With $\epsilon_X+\epsilon_Y>0$, however, the orbit is an infinitely persistent chaotic transient [@Cha95]. Since, in this case, agent $X$ can choose a tie, the cycles are not closed. For example, with $\epsilon_X > 0$, at $(R,P)$, $X$ has the option of moving to $(P,P)$ instead of $(S,P)$ with a positive probability. This embeds an instability along the heteroclinic cycle and so orbits are chaotic. (See Fig. \[fig:HeteroClinic\], bottom row.) Figure \[fig:xHeteroClinic\] shows the time series for these behaviors. Usually, in the transient relaxation to a heteroclinic cycle, the duration over which orbits stay near saddle vertices increases exponentially. For our case, however, it appears to increase subexponentially, because the exponent is very small: $(1+\delta)^n\sim 1+n\delta+\ldots$ for $\delta\ll 1$. In the second, chaotic transient case, it still increases subexponentially, but the visited vertices change irregularly. Figure \[fig:hEntropy\] shows the behavior of $H^X$, $H^Y$, and $E$. In both cases $E$ eventually increases monotonically and $H^X$ and $H^Y$ asymptotically decrease. The agents show a tendency to decrease choice uncertainty and to switch between almost deterministic actions. $H^X$ and $H^Y$ oscillate over the range $[0, \log 2]$ for $\epsilon_X=-0.1$ and $\epsilon_Y = 0.05$ and over $[0, \log 3]$ for $\epsilon_X=0.1$ and $\epsilon_Y = -0.05$. ### Dissipative Dynamics If the memory loss rates ($\alpha_X$ and $\alpha_Y$) are positive, the dynamics becomes dissipative in the information space $\bf U$ and exhibits limit cycles and chaotic attractors. (See Fig. \[fig:DissipativeChaos\].)
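The entropy time series plotted in Figs. \[fig:hEntropy\] and \[fig:dEntropy\] are straightforward to compute along a trajectory; a minimal Python sketch:

```python
import numpy as np

def shannon_entropy(x):
    """Choice uncertainty H = -sum_i x_i log x_i (natural log);
    ranges over [0, log N] for N actions."""
    x = np.asarray(x, float)
    return float(-np.sum(x * np.log(x)))
```

$H$ is maximal ($\log 3 \approx 1.10$ for three actions) at the uniform mixture and approaches $0$ near a simplex vertex, which is why the entropies collapse as orbits linger near the almost deterministic saddle vertices.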
![Dissipative adaptive dynamics: Stable limit cycle for $\alpha_X = 0.025$ (top), $\alpha_X = 0.021$ (middle) and chaotic attractors with $\alpha_X = 0.0198$ (bottom). All cases have $\epsilon_X = 0.5$, $\epsilon_Y=-0.3$ and $\alpha_Y = 0.01$. Period-doubling bifurcation to chaos occurs with decreasing $\alpha_X$. []{data-label="fig:DissipativeChaos"}](figures/2RSPLimitCycle2gif.eps "fig:") ![Dissipative adaptive dynamics: Stable limit cycle for $\alpha_X = 0.025$ (top), $\alpha_X = 0.021$ (middle) and chaotic attractors with $\alpha_X = 0.0198$ (bottom). All cases have $\epsilon_X = 0.5$, $\epsilon_Y=-0.3$ and $\alpha_Y = 0.01$. Period-doubling bifurcation to chaos occurs with decreasing $\alpha_X$. []{data-label="fig:DissipativeChaos"}](figures/2RSPLimitCycle4gif.eps "fig:") ![Dissipative adaptive dynamics: Stable limit cycle for $\alpha_X = 0.025$ (top), $\alpha_X = 0.021$ (middle) and chaotic attractors with $\alpha_X = 0.0198$ (bottom). All cases have $\epsilon_X = 0.5$, $\epsilon_Y=-0.3$ and $\alpha_Y = 0.01$. Period-doubling bifurcation to chaos occurs with decreasing $\alpha_X$. []{data-label="fig:DissipativeChaos"}](figures/2RSPChaosgif.eps "fig:") ![Bifurcation diagram (top) of dissipative dynamics (adapting with memory loss) projected onto coordinate $v_3$ from the Poincaré section ($\dot{u}_3>0$, $\dot{v}_3=0$) and the largest two Lyapunov exponents $\lambda_1$ and $\lambda_2$ (bottom) as a function of $\alpha_X \in [0.01,0.03]$. Here $\epsilon_X = 0.5$, $\epsilon_Y=-0.3$, and $\alpha_Y = 0.01$. Simulations show that $\lambda_3$ and $\lambda_4$ are always negative. []{data-label="fig:BifnLCE"}](figures/BifnLCEgif.eps) ![Dynamics of $H^X$, $H^Y$, and $E$ in dissipative adaptive dynamics: $\epsilon_X=0.5$, $\epsilon_Y = -0.3$, and $\alpha_Y = 0.01$ for both. $\alpha_X = 0.025$ for the left plot and $\alpha_X = 0.01$ for the right. $t^* \approx 10^8$ in the right figure is the (rather long) transient time.
In both cases $E$ does not diverge due to memory loss. []{data-label="fig:dEntropy"}](figures/dEntropiesgif.eps) Figure \[fig:BifnLCE\] (top) shows a diverse range of bifurcations as a function of $\alpha_X$: it shows the dynamics on the surface specified by $\dot{u}_3>0$ and $\dot{v}_3=0$ projected onto $v_3$. The fixed point $({\bf x}^*, {\bf y}^*)$ becomes unstable when $\alpha_X$ falls below $\alpha_c \approx 0.055008938$. Typically, period-doubling bifurcation to chaos occurs with decreasing $\alpha_X$. Chaos can occur only when $\epsilon_X + \epsilon_Y > 0$ [@Sat03]. Figure \[fig:dEntropy\] shows the dynamics of $H^X$, $H^Y$, and $E$ in the dissipative case. For both cases shown, $E$ does not diverge, due to memory loss. When $\alpha_X=0.025$, $H^X$ and $H^Y$ converge to oscillations over the range $[\log 2, \log 3]$. When $\alpha_X=0.01$, $H^X$ and $H^Y$ exhibit chaotic behavior over the range $[0, \log 3]$. Figure \[fig:BifnLCE\] (bottom) shows that the largest Lyapunov exponent in ${\bf U}$ is positive across a significant fraction of the parameter space, indicating that chaos is common. The dual aspects of chaos, coherence and irregularity, imply that agents may behave cooperatively or competitively (or switch between both). This ultimately derives from the agents’ successive mutual adaptation and memory loss in non-transitive interactions, such as the RSP game, as was explained in Sec. \[Sec:InfoSpace\]. Note that such global organization of behavior is induced by each agent’s self-interested and myopic adaptation and by the “weak” uncertainty of its environment. Three Agents Adapting under Rock-Scissors-Paper Interaction ----------------------------------------------------------- Consider three agents adapting via (an extension of) the RSP interaction.
Here the environment is given by the following interaction $$a_{ijk} = \left\{ \begin{array}{ll} 2 &~~~\mbox{Win over the others.}\\ -2 &~~~\mbox{Lose to the other two.}\\ 1 &~~~\mbox{Win over one other.}\\ -1 &~~~\mbox{Lose to one other.}\\ \epsilon_X &~~~\mbox{Tie.}\\ \end{array} \right.$$ and similarly for $b_{jki}$ and $c_{kij}$, with $i, j, k = \{R, S, P\}$. Here $\epsilon_X, \epsilon_Y, \epsilon_Z \in (-1.0, 1.0)$. (See App. \[ReinforcementSchemesInteractionMatrices\] for the detailed listing of the reinforcement scheme.) As before we use normalized $a'_{ijk}$, $b'_{jki}$, and $c'_{kij}$: $$a'_{ijk} = \left\{ \begin{array}{ll} 2-\frac{\epsilon_X}{5} &~~~\mbox{Win over the others.}\\ -2-\frac{\epsilon_X}{5} &~~~\mbox{Lose to the other two.}\\ 1-\frac{\epsilon_X}{5} &~~~\mbox{Win over one other.}\\ -1-\frac{\epsilon_X}{5} &~~~\mbox{Lose to one other.}\\ \frac45{\epsilon_X} &~~~\mbox{Tie.}\\ \end{array} \right.$$ The normalization does not affect the dynamics. The Nash equilibrium $({\bf x}^*,{\bf y}^*, {\bf z}^*)$ is at the simplex center: $$({\bf x}^*,{\bf y}^*, {\bf z}^*) = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}) ~.$$ It is also a fixed point of the adaptation dynamics. The Jacobian there is $$J=\left( \begin{array}{cccccc} -\alpha_X & 0 &\frac13&\frac23&\frac13&\frac23\\ 0&-\alpha_X&-\frac23&-\frac13&-\frac23&-\frac13\\ \frac13&\frac23&-\alpha_Y&0&\frac13&\frac23\\ -\frac23&-\frac13&0&-\alpha_Y&-\frac23&-\frac13\\ \frac13&\frac23&\frac13&\frac23&-\alpha_Z&0\\ -\frac23&-\frac13&-\frac23&-\frac13&0&-\alpha_Z\\ \end{array} \right) ~.$$ When $\alpha_X=\alpha_Y=\alpha_Z=\alpha$, its eigenvalues are $$\lambda_i+\alpha = \frac{i}{\sqrt{3}} (-1,-1,-2,1,1,2) ~.$$ ![Flows on the simplex edges in three-agent RSP: Arrows indicate the direction of adaptation dynamics on $\Delta$’s boundary when the $\alpha$s vanish. 
[]{data-label="fig:3RSPInteraction"}](figures/3RSPInteraction.eps) ![Periodic orbit (top: $\epsilon_X = 0.5$, $\epsilon_Y=-0.365$, $\epsilon_Z=0.8$) and chaotic orbit (bottom: $\epsilon_X = 0.5$, $\epsilon_Y=-0.3$, $\epsilon_Z=0.6$) with the other parameters $\alpha_X=\alpha_Y = \alpha_Z = 0.01$. The Lyapunov spectrum for chaotic dynamics is $(\lambda_1,\ldots,\lambda_6)= (+45.2, +6.48, -0.336, -19.2, -38.5, -53.6)\times 10^{-3}$. []{data-label="fig:3PDissipativeChaos"}](figures/3RSPLimitCyclegif.eps "fig:") ![Periodic orbit (top: $\epsilon_X = 0.5$, $\epsilon_Y=-0.365$, $\epsilon_Z=0.8$) and chaotic orbit (bottom: $\epsilon_X = 0.5$, $\epsilon_Y=-0.3$, $\epsilon_Z=0.6$) with the other parameters $\alpha_X=\alpha_Y = \alpha_Z = 0.01$. The Lyapunov spectrum for chaotic dynamics is $(\lambda_1,\ldots,\lambda_6)= (+45.2, +6.48, -0.336, -19.2, -38.5, -53.6)\times 10^{-3}$. []{data-label="fig:3PDissipativeChaos"}](figures/3RSPChaoticAttractorgif.eps "fig:") In the perfect memory case ($\alpha_X=\alpha_Y=\alpha_Z=0$), trajectories near $({\bf x}^*, {\bf y}^*, {\bf z}^*)$ are neutrally stable periodic orbits, since the $\lambda$s are pure imaginary. In the memory loss case ($\alpha_X, \alpha_Y, \alpha_Z > 0$), $({\bf x}^*, {\bf y}^*, {\bf z}^*)$ is asymptotically stable, since all Re($\lambda_i$) are strictly negative. One expects multiple attractors in this case. The collective state space $\Delta$ is now six-dimensional, being the product of three two-dimensional agent simplices $\Delta=\Delta_X\times\Delta_Y\times\Delta_Z$. The flow on $\Delta$’s boundary is shown in Fig. \[fig:3RSPInteraction\], giving the adaptation dynamics on the edges of $\Delta$ when the $\alpha$s vanish. We give two examples with $\alpha_X=\alpha_Y=\alpha_Z=0.01$, $\epsilon_X=0.5$, $\epsilon_Y=-0.365$, $\epsilon_Z = 0.8$ (top: limit cycle) and $\epsilon_X=0.5$, $\epsilon_Y=-0.3$, $\epsilon_Z = 0.6$ (bottom: chaos) in Fig. \[fig:3PDissipativeChaos\].
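The eigenvalue claim for the three-agent Jacobian can be checked numerically. The sketch below (our own verification) builds the $6\times 6$ matrix from its $2\times 2$ blocks and confirms that, for $\alpha=0$, the spectrum is $\pm i/\sqrt{3}$ (each twice) and $\pm 2i/\sqrt{3}$:

```python
import numpy as np

alpha = 0.0
off = np.array([[1/3, 2/3],
                [-2/3, -1/3]])                     # inter-agent coupling block
blocks = [[off if r != c else -alpha * np.eye(2)   # -alpha*I on the diagonal
           for c in range(3)] for r in range(3)]
J = np.block(blocks)
lam = np.linalg.eigvals(J)
# Expect lam + alpha = (i/sqrt(3)) * (-2, -1, -1, 1, 1, 2)
```

The same construction with $\alpha > 0$ shifts every eigenvalue's real part to $-\alpha$, matching the asymptotic stability discussed above.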
Chaos is typically observed when $\epsilon_X+\epsilon_Y+\epsilon_Z >0$. The limit cycles are highly complex, shaped by the heteroclinic network on the boundary of the 6-dimensional simplex. The Lyapunov spectrum for the chaotic dynamics is $(\lambda_1, \ldots, \lambda_6)=( +45.2, +6.48,-0.336$, $-19.2, -38.5, -53.6)\times 10^{-3}$. The dynamics has two positive Lyapunov exponents. Note that this dynamics could have many neutrally stable subspaces in three or more dimensions. These subspaces act as quasistable attractors and may even have symplectic structure. These properties of high-dimensional dynamics will be reported elsewhere. Concluding Remarks {#Sec:TheEnd} ================== We developed a class of dynamical systems for collective adaptation. We started with very simple agents, whose adaptation was a dynamic balance between adaptation to environmental constraints and memory loss. A macroscopic description of a network of adaptive agents was produced. In one special case we showed that the dynamical system reduces to replicator equations, familiar in evolutionary game theory and population biology. In a more general setting, we investigated several of the resulting periodic, intermittent, and chaotic behaviors in which agent-agent interactions were explicitly given as game interactions. Self-organization induced by information flux was discussed from an information-theoretic viewpoint. We pointed out that, unlike single-agent adaptation, information flow in collective adaptation is multidimensional, that global information maximization is of doubtful utility, and that a dynamic view of adaptation is more appropriate. We also noted that, even with only two agents interacting through nontransitive games, a horseshoe can be produced in the information space, driven by the agents’ local adaptation, which amplifies fluctuations in behavior, and by memory loss, which stabilizes behavior.
Since deterministic chaos occurs even in this simple setting, one expects that in higher-dimensional and heterogeneous adaptive systems intrinsic unpredictability would become a dominant collective behavior. When dynamic memory stored in collectives emerges, collective adaptation becomes a non-trivial problem. A detailed information-theoretic and dynamical-systems analysis will be reported elsewhere. We close by indicating some future directions in which to extend the model. First, as we alluded to during the development, there are difficulties in scaling the model to large numbers of agents. We focused on collectives with global coupling between all agents. However, in this case the complexity of the interaction terms grows exponentially with the number of agents, which is impractical for both analysis and simulation, and unrealistic for natural systems that are large collectives. The solution to this, given in App. \[NetworkInteractions\], is either to develop spatially distributed agent collectives or to extend the equations to include explicit communication networks between agents. Both of these extensions will be helpful in modeling the many adaptive collectives noted in the introduction. Second, and important for applications, is to develop the stochastic generalization of the deterministic equations of motion, which accounts for the effects of finite and fluctuating numbers of agents and also finite histories for adaptation. Each of these introduces its own kind of sampling stochasticity and will require a statistical dynamics analysis reminiscent of that found in population genetics [@Nimw97a]. It is also important to consider the effects of asynchrony of adaptive behavior in this case. Third, one necessary and possibly difficult extension will be to agents that adapt continuous-valued actions—say, learning the spatial location of objects—to their environments.
Mathematically, this requires a continuous-space extension of the adaptation equations (Eq. (\[LearningEquations\])) and this results in models that are described by PDEs [@Hofb97a]. Finally, another direction, especially useful if one attempts to quantify global function in large collectives, will be structural and information-theoretic analyses of local and global adaptive behaviors [@Sha84; @Cru89]. Analyzing the stored information and the causal architecture [@Crut98d; @Crut01a] in each agent versus that in the collective, communication in networks, and emerging hierarchical structures in collective adaptation are projects now made possible using this framework. Continuous Time {#ContinuousTimeLimits} =============== Here we give the derivation of the continuous-time limits that lead to the differential equations from the original stochastic discrete-time adaptation model. Denote the agent-agent interaction time scale, number of interactions per adaptation interval, and adaptation time scale as $d\tau$, $T$, and $t$, respectively. We assume that adaptation is very slow compared to agent-agent interactions and take the limits $d\tau\rightarrow 0$ and $T\rightarrow\infty$, keeping $dt=T d\tau$ finite. Then we take the limit $dt\rightarrow 0$ to get the derivative of the vector ${\bf Q}^X(t)$. With Eq. (\[MultiMemoryUpdate\]) and $Q_i^X(0)=0$, we have $$Q_i^X(T)=\frac1T\sum_{k=1}^{T} \left[ \sum_{m=1}^{M}\delta_{im}(k) r_{im}^X(k) - \alpha_X Q_i^X (k)\right] ~.$$ Thus, for continuous-time, when action $i$ is chosen by $X$ at step $t$, $$\begin{aligned} &&\frac{Q_i^X (t+dt) - Q_i^X (t)}{dt} \nonumber\\ &=& \frac{1}{T dt} \sum_{k=Tt}^{T(t+dt)}\left[\sum_{m=1}^{M} \delta_{im}(\frac kT)r_{im}^X(\frac{k}{T}) - \alpha_X Q_i^X (\frac{k}{T})\right] ~. 
\nonumber\\\end{aligned}$$ Taking $T\rightarrow \infty$ and $d\tau\rightarrow 0$, we have $$\begin{aligned} &&\frac{Q_i^X(t+dt) - Q_i^X(t)}{dt} \nonumber\\ &=& \frac{1}{dt} \int_{t}^{t+dt} \left[ \sum_{m=1}^{M}\delta_{im}(s)r_{im}^X(s)\right]ds \nonumber\\ &-& \alpha_X\frac{1}{dt}\int_{t}^{t+dt}Q_i^X (s) ds ~. \label{LearningLimit}\end{aligned}$$ Assuming that $r_{ij}^X(t)$ changes as slowly as the adaptive dynamics, it is effectively constant during the adaptation interval $[t, t+dt]$. If we assume in addition that the behaviors of the two agents $X$ and $Y$ are statistically independent at time $t$, then the law of large numbers gives $$\begin{aligned} &&\frac{1}{dt}\int_{t}^{t+dt} \left[\sum_{m=1}^{M}\delta_{im}(s)r_{im}^X(s) \right]ds \nonumber\\ &\rightarrow& \sum_{m=1}^{M} r_{im}^X(t)y_m(t)\equiv R_i^X(t) ~. \label{InteractionLimit}\end{aligned}$$ Now take $dt\rightarrow 0$. Eqs. (\[LearningLimit\]) and (\[InteractionLimit\]) together give $$\dot{Q}_{i}^X(t)= R_i^X(t)-\alpha_X Q_{i}^X(t) ~,$$ for the continuous-time updating of the reinforcement memory. When the environment is static, given by $r_{ij}^X(t)=a_{ij}$, then $$R_i^X(t) = \sum_{n=1}^N a_{in} y_n(t) ~.$$ The single-agent case is given by holding ${\bf y}=(1,0,0,\ldots,0)$ fixed and setting $a_{i1}=a_i$, $i = 1,\ldots,N$. Network interactions {#NetworkInteractions} ==================== We can describe heterogeneous network interactions within our model. We give an example of a model for lattice interactions here. Agents $s=1, 2, \ldots, S$ are on a spatial lattice: agent $s$ interacts with agent $s-1$ through bi-matrices $(A^s, B^{s-1})$ and agent $s+1$ through $(B^s, A^{s+1})$. Each bi-matrix is $2\times 2$. See Fig. \[fig:lattice\]. ![Agent $s$ interacts with agent $s-1$ through bi-matrices $(A^s, B^{s-1})$ and agent $s+1$ through $(B^s, A^{s+1})$. []{data-label="fig:lattice"}](figures/LatticeDynamics.eps) Agents choose actions among the $2 \times 2$ action pairs for both the right and left neighboring agents.
The action pairs are $(1, 1), (1, 2), (2, 1), (2, 2)$ and are weighted with probabilities $x_{1}, \ldots, x_{4}$. Inserting the interaction bi-matrices into the S-agent adaptive dynamics of Eq. (\[MultiLearningEquations\]) gives $$\begin{aligned} \frac{\dot{x^s_i}}{x^s_i} &=& \beta_s \left[(A^s {\bf x}^{s-1})_i - {\bf p}^s\cdot A^s{\bf x}^{s-1} \right.\nonumber\\ &+& \left.(B^s {\bf x}^{s+1})_i - {\bf q}^s\cdot B^s{\bf x}^{s+1} \right] \nonumber\\ &+& \alpha_s (-\log x^s_i-\sum_{n=1}^{4} x^s_n \log x^s_n) ~, \label{LatticeLearningEquations}\end{aligned}$$ where $\Sigma x^s_i=1$ and ${\bf p}^s=(x^s_1+x^s_{2}, x^s_{3}+ x^s_{4})$, ${\bf q}^s=(x^s_1+x^s_{3}, x^s_{2}+x^s_{4})$. In a similar way, arbitrary network interactions can be described by our adaptive dynamics given in Eqs. (\[MultiLearningEquations\]). Nash Equilibria {#NashEquilibria} =============== The *Nash equilibria* $({\bf x}^*, {\bf y}^*)$ of the bi-matrix game $(A, B)$ are those states in which no player can do better by unilaterally changing its own state; that is, $${\bf x}^*A{\bf y}^*\ge{\bf x}A{\bf y}^* ~\mbox{and}~ {\bf y}^*B{\bf x}^*\ge{\bf y}B{\bf x}^* ~,$$ for all $({\bf x}, {\bf y}) \in \Delta_X \times \Delta_Y$. If they exist in the interior, the solutions of the following simultaneous equations are Nash equilibria: $$\begin{aligned} &&(A{\bf y})_i = (A{\bf y})_1 ~\mbox{and}~ (B{\bf x})_j=(B{\bf x})_1 \nonumber\\ &&\Longleftrightarrow (A{\bf y})_i-{\bf x}A{\bf y}=(B{\bf x})_j-{\bf y}B{\bf x}=0 ~, \label{NashEquations}\end{aligned}$$ where $\Sigma_{n=1}^N x_n = \Sigma_{m=1}^M y_m = 1$. It is known that $N=M$ is a necessary condition for the existence of a unique Nash equilibrium in the interior of $\Delta$. With $N=M$ in the perfect memory case ($\alpha_X = \alpha_Y = 0$), the unique Nash equilibrium, if it exists, is the fixed point given by the intersection of the $x$- and $y$-nullclines of Eqs. (\[LearningEquations-Constant\]).
This Nash equilibrium is not asymptotically stable, but the time average of trajectories converges to it. To see this, suppose that $x_i(t) > \delta$ for all sufficiently large $t$. Then we have $$\begin{aligned} \frac{d}{dt}(\log x_i) & = & \frac{\dot{x_i}}{x_i} = (A{\bf y})_i-{\bf x}A{\bf y} ~,\nonumber\\ \frac{d}{dt}(\log y_j) & = & \frac{\dot{y_j}}{y_j} = (B{\bf x})_j-{\bf y}B{\bf x} ~.\end{aligned}$$ Integrating both sides from $0$ to $T$ and dividing by $T$, we get $$\begin{aligned} \frac{\log x_i(T)-\log x_i(0)}{T} & = & \sum_{m=1}^M a_{im}\overline{y}_m -S_A ~, \nonumber\\ \frac{\log y_j(T)-\log y_j(0)}{T} & = & \sum_{n=1}^N b_{jn}\overline{x}_n -S_B ~,\end{aligned}$$ where $$\overline{x}_i = T^{-1} \int_0^Tx_i dt ~\mbox{and}~ \overline{y}_j = T^{-1} \int_0^Ty_j dt ~,$$ and $$S_A = T^{-1} \int_0^T{\bf x}A{\bf y} dt ~\mbox{and}~ S_B = T^{-1} \int_0^T{\bf y}B{\bf x}dt ~.$$ Letting $T\rightarrow\infty$, the left-hand sides converge to $0$. Thus, the time averages $\overline{\bf x}$ and $\overline{\bf y}$ are a solution of Eqs. (\[NashEquations\]). (This proof follows Ref. [@Sch81a].) Hamiltonian Dynamics {#HamiltonianFormInformationSpace} ==================== If a game $(A,B)$ admits an interior Nash equilibrium $({\bf x}^*, {\bf y}^*) \in \Delta_X\times\Delta_Y$ and is zero-sum ($B=-A^T$), then $$E = \beta_X^{-1} D({\bf x}^*\parallel {\bf x}) + \beta_Y^{-1} D({\bf y}^*\parallel{\bf y})$$ is a constant of the motion. This follows by direct calculation: $$\begin{aligned} \frac{dE}{dt} & = & -\frac{1}{\beta_X}\sum_{n=1}^N x_n^*\frac{\dot{x}_n}{x_n} -\frac{1}{\beta_Y}\sum_{m=1}^M y_m^*\frac{\dot{y}_m}{y_m} \nonumber\\ & = & -({\bf x}^*A{\bf y}-{\bf x}A{\bf y}) -({\bf y}^*B{\bf x}-{\bf y}B{\bf x}) \nonumber\\ & = & ({\bf x}^*-{\bf x})A({\bf y}^*-{\bf y})+({\bf y}^*-{\bf y}) B({\bf x}^*-{\bf x}) \nonumber\\ & = & 0 ~. \end{aligned}$$ This holds for any number of agents.
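As a concrete sanity check of this conservation law, the following sketch (a minimal illustration, assuming NumPy, the zero-sum Rock-Scissors-Paper game with $B=-A^T$, $\beta_X=\beta_Y=1$, and an arbitrarily chosen interior initial condition) integrates the perfect-memory dynamics $\dot{x}_i = x_i[(A{\bf y})_i-{\bf x}A{\bf y}]$, $\dot{y}_j = y_j[(B{\bf x})_j-{\bf y}B{\bf x}]$ with a fourth-order Runge-Kutta step and monitors $E$:

```python
import numpy as np

# Zero-sum RSP: B = -A^T, interior Nash equilibrium at the simplex center.
A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])
B = -A.T
star = np.full(3, 1 / 3)  # (x*, y*)

def flow(s):
    # Perfect-memory dynamics with beta_X = beta_Y = 1.
    x, y = s[:3], s[3:]
    dx = x * (A @ y - x @ (A @ y))
    dy = y * (B @ x - y @ (B @ x))
    return np.concatenate([dx, dy])

def rk4(s, dt):
    k1 = flow(s)
    k2 = flow(s + 0.5 * dt * k1)
    k3 = flow(s + 0.5 * dt * k2)
    k4 = flow(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(s):
    # E = D(x*||x) + D(y*||y).
    x, y = s[:3], s[3:]
    return np.sum(star * np.log(star / x)) + np.sum(star * np.log(star / y))

s = np.array([0.5, 0.3, 0.2, 0.2, 0.3, 0.5])  # hypothetical initial state
E0 = energy(s)
for _ in range(2000):
    s = rk4(s, 0.005)
```

Up to the integrator's error, $E$ stays constant along the orbit, and each strategy vector remains on its simplex.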
Give the agents equal numbers of actions ($N=M$), set the $\alpha$s to zero (perfect memory), and let all $\beta$s be finite and positive. Then the adaptive dynamics is Hamiltonian in the information space ${\bf U}=({\bf u}, {\bf v})$ with the above constant of motion $E$, $$\dot{\bf U}=J \nabla_{\bf U} E ~,$$ with Poisson structure $J$, $$J=\left(\begin{array}{cc} O&P\\ -P^T&O\\ \end{array} \right) ~~\mbox{with}~~ P = \beta_X \beta_Y A ~. \label{PoissonStructure}$$ *Proof*: $$\begin{aligned} &&\frac{\partial E}{\partial u_i} = \frac{\partial}{\partial u_i} \left[ \beta_X^{-1} \sum_{n=1}^N x_n^* \log x_n^* + \beta_Y^{-1} \sum_{n=1}^N y_n^* \log y_n^* \right. \nonumber\\ & & ~~~~~~\left. + \beta_X^{-1} \left( \sum_{n=1}^N x_n^* u_n + \log(\sum_{n=1}^N e^{-u_n}) \right) \right. \nonumber\\ & & ~~~~~~\left. + \beta_Y^{-1} \left( \sum_{n=1}^N y_n^* v_n + \log(\sum_{n=1}^N e^{-v_n}) \right) \right] \nonumber\\ & & ~~~~~~= \beta_X^{-1} (x_i^*-\frac{e^{-u_i}}{\sum_{n=1}^N e^{-u_n}}) = \beta_X^{-1} (x_i^*-x_i) ~, \\ &&\frac{\partial E}{\partial v_j} = \beta_Y^{-1} (y_j^*-y_j) ~. \end{aligned}$$ Since $({\bf x}^*, {\bf y}^*)$ is an interior Nash equilibrium, with Eq. (\[NormalRewards\]), $(A{\bf y}^*)_i=(B{\bf x}^*)_j=0$. Thus, $$\begin{aligned} A\frac{\partial E}{\partial {\bf v}} & = & -\frac1\beta_Y A{\bf y} ~, \nonumber\\ B\frac{\partial E}{\partial {\bf u}} & = & -\frac1\beta_X B{\bf x} ~.\end{aligned}$$ and $$\begin{aligned} J \nabla_{\bf U} E &=& \left[ \begin{array}{l} \beta_X\beta_Y A \frac{\partial E}{\partial {\bf v}}\\ -(\beta_X\beta_Y A)^T \frac{\partial E}{\partial {\bf u}} \end{array} \right] \nonumber\\ &=& \left[ \begin{array}{l} -\beta_X A {\bf y}\\ -\beta_Y B {\bf x} \end{array} \right] = \left[ \begin{array}{l} \dot{\bf u}\\ \dot{\bf v} \end{array} \right] = \dot{\bf U} ~.
\label{HamiltonianForm}\end{aligned}$$ We can transform ${\bf U}=({\bf u}, {\bf v})$ to canonical coordinates ${\bf U}'=({\bf p}, {\bf q})$: $$\dot{\bf U}'=S\nabla_{{\bf U}'} E ~,$$ with $$S = \left( \begin{array}{cc} O&-I\\ I&O\\ \end{array} \right)$$ where $I$ is the $N\times N$ identity matrix; a linear transformation ${\bf U}'=M{\bf U}$ brings the dynamics to Hamiltonian form. $\Box$ Reinforcement Schemes and Interaction Matrices {#ReinforcementSchemesInteractionMatrices} ============================================== Here we give the reinforcement scheme interaction matrices for the constant-environment collectives investigated in Sec. \[Sec:Examples\]. Matching Pennies ---------------- This game describes a non-transitive competition. Each agent chooses a coin, which turns up either heads (H) or tails (T). Agent $X$ wins when the coins differ; otherwise agent $Y$ wins. Table \[table:2mpgame\] gives the reinforcement scheme for the various possible plays. Note that the $\epsilon$s determine the size of the winner’s rewards. When $\epsilon_X+\epsilon_Y=0$, the game is zero-sum. The Nash equilibrium is ${\bf x}^*={\bf y}^*=(1/2, 1/2)$. Various extensions of Matching Pennies to more than two players are known. We give the *Even-Odd* game as an example for three agents $X$, $Y$, and $Z$ in a collective. All flip a coin. Agents $X$ and $Y$ win when the number of heads is even; otherwise $Z$ wins. Table \[table:3mpgame\] gives the reinforcement scheme. When the $\epsilon$s add to zero, the game is zero-sum. The unique mixed Nash equilibrium is ${\bf x}^*={\bf y}^*={\bf z}^* = (\frac12, \frac12)$—the simplex center.

  X   Y   $r^X$            $r^Y$
  --- --- ---------------- ----------------
  H   H   $-\epsilon_X$    $-\epsilon_Y$
  H   T   $\epsilon_{X}$   $\epsilon_{Y}$
  T   H   $\epsilon_{X}$   $\epsilon_{Y}$
  T   T   $-\epsilon_X$    $-\epsilon_Y$

  : The two-person Matching Pennies game: $\epsilon_X\in(0.0,1.0]$ and $\epsilon_Y\in[-1.0,0.0)$.
[]{data-label="table:2mpgame"}

  X   Y   Z   $r^X$            $r^Y$            $r^Z$
  --- --- --- ---------------- ---------------- ---------------
  H   H   H   $-\epsilon_X$    $-\epsilon_Y$    $-\epsilon_Z$
  H   H   T   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_Z$
  H   T   H   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_Z$
  H   T   T   $-\epsilon_X$    $-\epsilon_Y$    $-\epsilon_Z$
  T   H   H   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_Z$
  T   H   T   $-\epsilon_X$    $-\epsilon_Y$    $-\epsilon_Z$
  T   T   H   $-\epsilon_X$    $-\epsilon_Y$    $-\epsilon_Z$
  T   T   T   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_Z$

  : The three-player Even-Odd game: $\epsilon_X\in(0.0,1.0]$ and $\epsilon_Y, \epsilon_Z\in[-1.0,0.0)$. []{data-label="table:3mpgame"}

Rock-Scissors-Paper ------------------- This game describes a non-transitive three-sided competition between two agents: rock (R) beats scissors (S), scissors beats paper (P), but paper beats rock. Table \[table:2rspgame\] gives the reinforcement scheme. The $\epsilon$s here control the rewards for ties. When they add to zero, the game is zero-sum. The unique mixed Nash equilibrium is ${\bf x}^*={\bf y}^*=(\frac13, \frac13, \frac13)$—again, the center of the simplex. The extension of RSP interaction to three agents is straightforward. The reinforcement scheme is given in Table \[table:3rspgame\]. When $\epsilon_X+\epsilon_Y+\epsilon_Z=0$, the game is zero-sum. The Nash equilibrium is ${\bf x}^*={\bf y}^*={\bf z}^*=(1/3, 1/3, 1/3)$.

  X   Y   $r^X$          $r^Y$
  --- --- -------------- --------------
  R   R   $\epsilon_X$   $\epsilon_Y$
  R   S   1              -1
  R   P   -1             1
  S   R   -1             1
  S   S   $\epsilon_X$   $\epsilon_Y$
  S   P   1              -1
  P   R   1              -1
  P   S   -1             1
  P   P   $\epsilon_X$   $\epsilon_Y$

  : The two-person Rock-Scissors-Paper game:  $\epsilon_X, \epsilon_Y\in(-1.0,1.0)$.
[]{data-label="table:2rspgame"}

  X Y Z   $r^X$            $r^Y$            $r^Z$            X Y Z   $r^X$            $r^Y$            $r^Z$            X Y Z   $r^X$            $r^Y$            $r^Z$
  ------- ---------------- ---------------- ---------------- ------- ---------------- ---------------- ---------------- ------- ---------------- ---------------- ----------------
  R R R   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$   S R R   -2               1                1                P R R   2                -1               -1
  R R S   1                1                -2               S R S   -1               2                -1               P R S   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$
  R R P   -1               -1               2                S R P   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$   P R P   1                -2               1
  R S R   1                -2               1                S S R   -1               -1               2                P S R   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$
  R S S   2                -1               -1               S S S   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$   P S S   -2               1                1
  R S P   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$   S S P   1                1                -2               P S P   -1               2                -1
  R P R   -1               2                -1               S P R   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$   P P R   1                1                -2
  R P S   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$   S P S   1                -2               1                P P S   -1               -1               2
  R P P   -2               1                1                S P P   2                -1               -1               P P P   $\epsilon_{X}$   $\epsilon_{Y}$   $\epsilon_{Z}$

  : The 3-person Rock-Scissors-Paper game:  $\epsilon_X, \epsilon_Y, \epsilon_Z\in(-1.0,1.0)$. []{data-label="table:3rspgame"}
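As a quick sanity check on the two-person RSP scheme, the sketch below (assuming NumPy, with example tie rewards chosen so that $\epsilon_X+\epsilon_Y=0$) builds the two payoff matrices from Table \[table:2rspgame\] and confirms that the simplex center equalizes the expected rewards of all three actions, the defining property of the interior mixed Nash equilibrium:

```python
import numpy as np

eps_X, eps_Y = 0.25, -0.25  # example tie rewards; their sum is 0, so zero-sum

# Rows = own action (R, S, P), columns = opponent's action, entries from the table.
A = np.array([[eps_X, 1., -1.],
              [-1., eps_X, 1.],
              [1., -1., eps_X]])   # rewards r^X
B = np.array([[eps_Y, 1., -1.],
              [-1., eps_Y, 1.],
              [1., -1., eps_Y]])   # rewards r^Y

center = np.full(3, 1 / 3)
# Against (1/3,1/3,1/3) every action earns eps/3: no unilateral deviation
# helps, so the simplex center is the mixed Nash equilibrium.
print(A @ center, B @ center)
```

With $\epsilon_Y=-\epsilon_X$ one can also verify $B=-A^T$ directly, i.e., the game is zero-sum in matrix form.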
--- abstract: 'Production of multi-strange hadrons such as the cascade ($\Xi$) and omega ($\Omega$) baryons is studied microscopically using rate equations at the Large Hadron Collider (LHC) energy $\sqrt{s_{NN}}$=2.76 TeV. The rate equations for $\Xi$ and $\Omega$ are solved simultaneously with those for other strange hadrons in an expanding medium. The results for $\Xi$ and $\Omega$ are compared with the data obtained from Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV by the ALICE experiment. The ratios of the $\Xi$ and $\Omega$ yields to the $\pi$ yield are analysed for various initial conditions and compared with the experimental observations at various charged-particle multiplicities.' address: | $^1$ Dept. of Applied Physics and Ballistics, F. M. University, Balasore, Odisha.\ $^2$ Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata-700064, India. author: - '*Purabi Ghosh$^1$, Jajati K. Nayak$^{*2}$, Sushant K. Singh$^2$ and Santosh K. Agarwalla$^1$*' title: Cascade and Omega production from Pb+Pb Collisions at LHC energy --- Heavy ion collision, Large Hadron Collider, quark gluon plasma, strangeness production, multi-strange hadrons, cascade and omega hyperons. 25.75.-q,25.75.Dw,24.85.+p \[sec:intro\] Introduction ========================== Recent measurements of the multi-strange baryons $\Xi$ and $\Omega$ from p-p, p-Pb and Pb-Pb collisions at LHC energies [@alicenature17; @gyula_alice; @multistrange_alice_plb14] show interesting results that have led to intense theoretical activity. The ratios of the yields of $\Xi$ and $\Omega$ baryons to pions are observed to be enhanced with multiplicity in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV [@multistrange_alice_plb14]. A similar trend has been observed recently in high-multiplicity p-p and p-Pb collisions at $\sqrt{s_{NN}}$= 7 TeV and 5.02 TeV respectively [@alicenature17; @gyula_alice; @adam16].
From the measurements in Pb-Pb collisions at 2.76 TeV, data on $\Xi/\pi$ and $\Omega/\pi$ are not available at lower multiplicities, below $dN_{ch}/d\eta$=35 (corresponding to 60-80$\%$ centrality with $N_{\text{part}}$=22.5) [@multistrange_alice_plb14], for comparison with the measurements from p-p (7 TeV) and p-Pb (5.02 TeV) collisions [@alicenature17; @gyula_alice; @adam16]. However, when all available data on $\Xi /\pi$ and $\Omega/\pi$ at various multiplicities and colliding energies are put together, they indicate a steady rise of the multi-strange hadron yield with multiplicity across all collision systems, followed by a possible saturation. This smooth rise is not manifested strongly in the case of $\Xi/\pi$, as is clear from the data point corresponding to the lowest multiplicity of the 2.76 TeV Pb-Pb collisions [@multistrange_alice_plb14; @alicenature17]. These data have not yet been explained in microscopic detail. Here an attempt is made to understand the microscopic production of multi-strange hadrons in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV. Strange meson production in the hadronic phase has been studied using several models. However, none of these models explains the production and evolution of multi-strange baryons and their enhancement over p-p collisions. The statistical hadronisation model evaluated the integrated yields at these energies, including RHIC and LHC, assuming a common chemical freeze-out temperature for all species [@andronic06; @andronicplb09]. However, it could not explain the ratios of multi-strange hadrons at 0-20% centrality in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV while fitting the $p/\pi$ ratio. Similarly, the production of kaons and antikaons at higher colliding energies, such as at RHIC and LHC (and also at the higher SPS energies), has been explained using models with strange-quark evolution assuming a QGP phase [@BT07; @jknprc10].
But multi-strange production is not explained there. The enhancement of multi-strange baryons at SPS energy was addressed using URQMD in [@bassplb99], but the data were not explained well, and it was argued that the enhanced production might be due to topological defects arising from the formation of disoriented chiral condensates (DCC) at the initial stages of the collision, where the density is very high. The authors of [@kolomeitsev12] made a novel attempt to explain the HADES data using a minimal statistical hadronisation model and tried to explain the ratios $\Xi^-/\Lambda$ and $\Omega^-/\Xi^-$ in [@kolomeitsev15]; they could not reproduce the data, although they obtained the same trend. In this article, we focus for the first time on the microscopic production of the multi-strange baryons $\Xi$ & $\Omega$ and their evolution in an expanding hot and dense system, as produced in relativistic heavy ion collisions, using rate equations. An extensive analysis has been carried out with various initial conditions, and the results are compared with the observations of $\Xi/\pi$ and $\Omega/\pi$ from Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV [@multistrange_alice_plb14]. In the next section the production of $\Xi$ and $\Omega$ and their interactions in hadronic matter are discussed in detail. In section \[sec:strange\_evolve\], the evolution of strange hadrons is discussed along with secondary production using rate equations. In this section, the equations for the evolution of temperature and baryon chemical potential are also highlighted. The results are presented in section \[sec:results\] and compared with the experimental observations. Finally, the work is summarised in section \[sec:summary\]. \[sec:multistrangeprod\] Multi-strange production in hadronic matter ===================================================================== When the energy deposition in a heavy ion collision exceeds a certain threshold value, an initial quark gluon system may be produced.
Hadrons are then produced from the quarks through hadronization as the system expands. On the other hand, when the energy deposition is lower, an initial hadronic state is plausible. The hadronic system then evolves through secondary collisions until the freeze-out of the hadronic species occurs. Whatever the energy deposition, the produced system passes through a hadronic medium, as at RHIC and LHC. The present article focuses on the production and evolution of the multi-strange hadrons $\Xi$ and $\Omega$ in a hadronic medium. To study the cascade and omega it is important to discuss the hadronic interactions that govern the system. The interactions considered for $\Xi$ and $\Omega$ production are as follows: $ \bar{K}N \rightarrow K \Xi$, $ \bar{K}\Lambda \rightarrow \pi \Xi $, $ \bar{K}\Sigma \rightarrow \pi \Xi $, $ \Lambda\Lambda \rightarrow N \Xi$, $ \Lambda\Sigma \rightarrow N \Xi $, $ \Sigma\Sigma \rightarrow N \Xi$, $\Lambda \bar{K}\rightarrow \Omega^{-} K^0$, $\Sigma^{0} \bar{K}\rightarrow \Omega^{-} K^0$, $\bar{p} p\rightarrow \Omega \bar{\Omega}$, $ p\bar{p} \rightarrow \Xi \bar{\Xi}$, $ \pi\Xi \rightarrow \Omega K$, where $N$ represents a nucleon. We have also considered the production of other strange mesons and baryons along with $\Xi$ and $\Omega$, which is discussed in the next section. Isospin combinations are also taken into account. The production of the strange hadrons is then studied using transport equations, discussed later. Along with these channels, the inverse processes have also been considered using the principle of detailed balance [@cugnon84]. There are also other $2\rightarrow 3$ and $2\rightarrow 4$ channels that contribute to strange production, but their production rates are much smaller due to phase-space suppression and hence are not considered here. All hadronic interactions for strange production are broadly categorised as meson-meson, meson-baryon and baryon-baryon interactions.
Each category dominates in a different domain of colliding energy, depending on whether the system has mesonic or baryonic abundance. The channels for single-strange ($S=-1$) production and their cross sections are given in [@Brown1; @amslar08]; see [@jkn19] for details. **Cross sections of Cascade ($\Xi$) and Omega ($\Omega$) production** -------------------------------------------------------------------- The strangeness content of $\Xi$ ($S=-2$) and $\Omega$ ($S=-3$) is higher. Hence these baryons are produced mostly through strangeness exchange reactions. Producing a baryon with $S=-2$ or $-3$ from reactions involving only non-strange hadrons in the initial channel is more expensive and less probable. ### **Cross sections for $\Xi$ production** The channels involved in cascade production are $\Lambda\Lambda\rightarrow N\Xi$, $\Lambda\Sigma\rightarrow N\Xi$, $\Sigma\Sigma\rightarrow N\Xi$, $\bar{K} \Lambda \rightarrow \pi \Xi$, $\bar{K} \Sigma \rightarrow \pi \Xi$, $\bar{K} N \rightarrow K \Xi$, $\bar{p} p \rightarrow \Xi \bar{\Xi}$, $K \Omega \rightarrow \pi \Xi$. Some of them are strangeness exchange reactions and some are not.
The cross sections for the strangeness exchange reactions have been obtained from a gauged flavor SU(3)-symmetric Lagrangian density [@liprc85; @linpa02], $$\begin{aligned} \mathcal{L} &= i\text{Tr}\left(\bar{B}\not\partial B\right)+\text{Tr}\left[\partial_{\mu} P^{+}\partial^{\mu}P\right] \nonumber \\ &+ g' \text{Tr}\left[ \left(2\alpha-1\right)\bar{B}\gamma^5\gamma^{\mu} B\partial_{\mu}P+ \bar{B}\gamma^5\gamma^{\mu}\left(\partial_{\mu} P\right)B\right] \label{lagrangianeq1}\end{aligned}$$ where, $$B= \begin{bmatrix} \frac{\Sigma^0}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}}& \Sigma^+ & p \\ \Sigma^{-} & \frac{-\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} & n \\ -\Xi^{-} & \Xi^{0} & -\sqrt{\frac{2}{3}}\Lambda\\ \end{bmatrix}$$ $$P=\frac{1}{\sqrt{2}} \begin{bmatrix} \frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_8}{\sqrt{6}}+\frac{\eta_1}{\sqrt{3}} & \pi^{+} & K^{+}\\ \pi^{-} & \frac{-\pi^0}{\sqrt{2}}+\frac{\eta_8}{\sqrt{6}}+\frac{\eta_1}{\sqrt{3}} & K^0\\ K^{-} & \bar{K}^{0} & -\sqrt{\frac{2}{3}}\eta_8+\frac{\eta_1}{\sqrt{3}}\\ \end{bmatrix}$$ with $B$ and $P$ representing the baryon and pseudoscalar meson octets respectively. $P$ is a linear combination of both the pseudoscalar octet ($\pi, K, \eta_8$) and singlet ($\eta_1$) mesons. $g'$ is the universal coupling constant between the baryons ($B$) and pseudoscalar mesons ($P$). $\alpha$ is a parameter obtained from the coupling constants of the $D$-type and $F$-type interactions of $P$ and $B$; its value is taken to be 0.64 [@adelseck90]. To include vector mesons in the interactions of baryons and pseudoscalar mesons, the vector mesons are treated as gauge particles, implemented by replacing the partial derivative $\partial_{\mu}$ with the covariant derivative $D_{\mu}$, where $$D_{\mu}=\partial_{\mu}-ig[V_{\mu}]\label{covderiv}.$$ Here $g$ is the other universal coupling constant, which sets the strength of the vector-meson interaction with pseudoscalar mesons and baryons [@linpa02]. For details see [@jkn19].
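A small symbolic check of the matrices above, assuming SymPy is available: the baryon octet matrix $B$ is traceless, while the trace of the meson matrix $P$ is carried entirely by the singlet $\eta_1$, as the octet/singlet decomposition requires.

```python
import sympy as sp

Sig0, Lam, eta8, eta1, pi0 = sp.symbols('Sigma0 Lambda eta8 eta1 pi0')

# Diagonal entries of the baryon matrix B and meson matrix P, as displayed
# in the text (the overall 1/sqrt(2) in P is irrelevant for tracelessness).
B_diag = [Sig0/sp.sqrt(2) + Lam/sp.sqrt(6),
          -Sig0/sp.sqrt(2) + Lam/sp.sqrt(6),
          -sp.sqrt(sp.Rational(2, 3))*Lam]
P_diag = [pi0/sp.sqrt(2) + eta8/sp.sqrt(6) + eta1/sp.sqrt(3),
          -pi0/sp.sqrt(2) + eta8/sp.sqrt(6) + eta1/sp.sqrt(3),
          -sp.sqrt(sp.Rational(2, 3))*eta8 + eta1/sp.sqrt(3)]

print(sp.simplify(sum(B_diag)))   # octet part: traceless
print(sp.simplify(sum(P_diag)))   # only the singlet eta_1 survives in the trace
```

The $\pi^0$, $\Sigma^0$, and $\eta_8$ contributions cancel along the diagonal, leaving $\text{Tr}\,B=0$ and $\text{Tr}$ of the bracketed $P$ matrix proportional to $\eta_1$.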
Assuming $SU(3)$-invariant tensor interactions of $D$ and $F$ type, the interaction Lagrangian can then be written as $$\mathcal{L}^t =\frac{g^t}{2m} \text{Tr}[(2\alpha-1)\bar{B}\sigma^{\mu\nu}B\partial_{\mu}V_{\nu}+\bar{B}\sigma^{\mu\nu}(\partial_\mu V_\nu)B]$$ where $g^t$ is the universal tensor coupling constant, obtained from the empirical $\rho-N$ tensor coupling $g^t_{\rho NN}=19.8$ [@holzenkamp89], and $m$ represents the degenerate baryon mass. It may be noted that the contributions from the axial vector mesons $a_1(1260)$ and $K_1(1270)$ are expected to be small because of their large masses and hence are not considered in this work. The cross sections for the strangeness exchange reactions $\bar{K} \Lambda \rightarrow \pi \Xi$ and $\bar{K} \Sigma \rightarrow \pi \Xi$ have been calculated from the amplitudes in the Born approximation using a coupled-channel approach [@linpa02]. The finite size of the hadrons at the interaction vertices is accounted for by a monopole form factor. The cross sections are given by [@chen04] $$\begin{aligned} \sigma_{\bar{K}\Lambda\rightarrow \pi \Xi} &=& \frac{1}{4}\frac{p_{\pi}}{p_{\bar{K}}}\mid M_{\bar{K}\Lambda\rightarrow \pi \Xi}\mid^{2} \nonumber\\ \sigma_{\bar{K}\Sigma\rightarrow \pi \Xi} &=& \frac{1}{12}\frac{p_{\pi}}{p_{\bar{K}}}\mid M_{\bar{K}\Sigma\rightarrow \pi \Xi}\mid^{2} \label{casccross1}\end{aligned}$$ where $\mid M_{\bar{K}\Lambda\rightarrow \pi \Xi}\mid^{2}=34.7~\frac{s_0}{s}$ and $\mid M_{\bar{K}\Sigma\rightarrow \pi \Xi}\mid^{2}=318(1-\frac{s_0}{s})^{0.6}\times(\frac{s_0}{s})^{1.7}$, with $p_i$ denoting the centre-of-mass momenta and $\sqrt{s_0}=\sum_i{m_i}$ the threshold energy, where the $m_i$ are the masses of the incoming particles.
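For orientation, the parametrised cross sections of Eq. (\[casccross1\]) are straightforward to evaluate. The sketch below is a minimal illustration only: it assumes approximate PDG masses in GeV, assumes the parametrised $|M|^2$ yields $\sigma$ directly in mb, and uses a sample energy $\sqrt{s}=1.9$ GeV chosen purely for illustration.

```python
import numpy as np

def p_cm(rs, m1, m2):
    # Two-body centre-of-mass momentum at total energy sqrt(s) = rs (GeV).
    return np.sqrt((rs**2 - (m1 + m2)**2) * (rs**2 - (m1 - m2)**2)) / (2 * rs)

# Assumed approximate PDG masses in GeV.
mK, mLam, mSig, mpi, mXi = 0.494, 1.116, 1.193, 0.138, 1.318

def sigma_KLam_piXi(rs):
    # sigma = (1/4)(p_pi/p_Kbar)|M|^2 with |M|^2 = 34.7 s0/s, sqrt(s0) = mK + mLam.
    s0 = (mK + mLam) ** 2
    return 0.25 * p_cm(rs, mpi, mXi) / p_cm(rs, mK, mLam) * 34.7 * s0 / rs**2

def sigma_KSig_piXi(rs):
    # sigma = (1/12)(p_pi/p_Kbar)|M|^2 with |M|^2 = 318 (1 - s0/s)^0.6 (s0/s)^1.7.
    s0 = (mK + mSig) ** 2
    r = s0 / rs**2
    return (1 / 12) * p_cm(rs, mpi, mXi) / p_cm(rs, mK, mSig) * 318 * (1 - r)**0.6 * r**1.7

rs = 1.9  # sample sqrt(s), above both thresholds (~1.61 and ~1.69 GeV)
print(sigma_KLam_piXi(rs), sigma_KSig_piXi(rs))
```

Both channels come out at the few-mb level at this energy, the scale relevant for the thermal rates $\langle\sigma v\rangle$ discussed below.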
The cross sections for the inverse reactions, obtained from the principle of detailed balance, are $$\begin{aligned} \sigma_{\pi \Xi\rightarrow \bar{K}\Lambda } &=& \frac{1}{3}\frac{p_{\bar{K}}^2}{p_{\pi}^2} \sigma_{\bar{K}\Lambda\rightarrow \pi \Xi}\nonumber\\ \sigma_{\pi \Xi\rightarrow \bar{K}\Sigma} &=& \frac{p_{\bar{K}}^2}{p_{\pi}^2} \sigma_{\bar{K}\Sigma\rightarrow \pi \Xi} \label{casccross1in}\end{aligned}$$ Similarly, the cross sections for the $Y Y \rightarrow N \Xi$ reactions $\Lambda\Lambda\rightarrow N\Xi$, $\Lambda\Sigma\rightarrow N\Xi$, $\Sigma\Sigma\rightarrow N\Xi$ are as follows: $$\begin{aligned} \sigma_{\Lambda\Lambda\rightarrow N\Xi}&=&37.15\frac{p_N}{p_\Lambda}\left(\sqrt{s}-\sqrt{s_0}\right)^{-0.16} ~~\text{mb} \nonumber\\ \sigma_{\Lambda\Sigma\rightarrow N\Xi}&=&25.12\left(\sqrt{s}-\sqrt{s_0}\right)^{-0.42} ~~\text{mb}\nonumber\\ \sigma_{\Sigma\Sigma\rightarrow N\Xi}&=&8.51\left(\sqrt{s}-\sqrt{s_0}\right)^{-0.395} ~~\text{mb} \label{casccross2a}\end{aligned}$$ The above parametrisation is valid for $0<(\sqrt{s}-\sqrt{s_0})<0.6$ GeV, which covers the range required in our calculation. The calculations are performed in the Born approximation only. The other category of reactions producing cascades is $\bar{K} B\rightarrow K \Xi$, *i.e.* $\bar{K} N\rightarrow K \Xi$. Their cross sections were measured experimentally [@bellefon72] and recently compared with a phenomenological calculation in [@sharov11]. The following parameterised cross sections for the isospin channels have been used: $$\begin{aligned} \sigma_{K^-p \rightarrow K^+\Xi^-}&=&235.6\left(1-\frac{\sqrt{s_0}}{\sqrt{s}}\right)^{2.4}\left(\frac{\sqrt{s_0}}{\sqrt{s}}\right)^{16.6} \text{ mb} \nonumber\\ \sigma_{K^-p \rightarrow K^0\Xi^0}&=&7739.9\left(1-\frac{\sqrt{s_0}}{\sqrt{s}}\right)^{3.8}\left(\frac{\sqrt{s_0}}{\sqrt{s}}\right)^{26.5} \text{ mb}\nonumber\\ \sigma_{K^-n \rightarrow K^0\Xi^-}&=&235.6\left(1-\frac{\sqrt{s_0}}{\sqrt{s}}\right)^{2.4}\left(\frac{\sqrt{s_0}}{\sqrt{s}}\right)^{16.6} \text{ mb}.
\label{casccross3}\end{aligned}$$ Averaging over the isospin channels, the cross section for the $\bar{K}N\rightarrow K\Xi$ channel is $$\begin{aligned} \sigma_{\bar{K}N\rightarrow K\Xi}=0.5\left[\sigma_{K^-p \rightarrow K^+\Xi^-}+\sigma_{K^-p \rightarrow K^0\Xi^0}+\sigma_{K^-n \rightarrow K^0\Xi^-}\right]\end{aligned}$$ This parametrisation is valid within $0 \leq \left({\sqrt{s}-\sqrt{s_0}}\right) \leq 1~{\text{GeV}}$. $\Xi$ production also proceeds through an important category of reactions whose initial channel contains no strange hadron, $B \bar{B} \rightarrow \Xi \overline{\Xi}$, *i.e.* $p \bar{p} \rightarrow \Xi^- \overline{\Xi}^+$ and $p\bar{p}\rightarrow \overline{\Xi}^0\Xi^0$. The cross section has been calculated using the quark gluon string model (QGSM) and compared with experiment [@kaidalov94]. The cross sections of the two (outgoing) isospin channels are related by $\sigma_{\bar{p}p \rightarrow \overline{\Xi}^+\Xi^-}=16\sigma_{\bar{p}p\rightarrow \overline{\Xi}^0\Xi^0}$, where $$\sigma_{\bar{p}p \rightarrow \overline{\Xi}^0\Xi^0}=\frac{16}{81\pi}\frac{[\sigma_{\bar{p}p\rightarrow \bar{\Lambda}\Lambda}]^2}{2\Lambda_1}\text{exp}\left[\Lambda_1 t_{DC}\right] \label{crosslambdapp}$$ Here the cross section of $\Xi$ production is related to that of $\bar{p} p\rightarrow \bar{\Lambda}\Lambda$. The parameter $\Lambda_1$ appearing in Eq. \[crosslambdapp\] is the slope of the differential cross section of $\bar{p}p\rightarrow \bar{\Lambda}\Lambda$, and its value is taken to be 9 $\text{GeV}^{-2}$ [@kaidalov94]. For the other factor appearing in the exponent see [@kaidalov94; @jkn19]. ![Rate (R=$\langle \sigma v\rangle$) of cascade production as a function of temperature from the reactions $YY \rightarrow N\Xi$ and $p\bar{p}\rightarrow \Xi \bar{\Xi}$.
Dashed, solid, dot-dashed and dotted lines (colour online) represent the contributions from $\Lambda \Lambda\rightarrow N\Xi$, $\Lambda \Sigma\rightarrow N\Xi$, $\Sigma \Sigma\rightarrow N\Xi$ and $p\bar{p}\rightarrow \Xi\bar{\Xi}$, respectively.[]{data-label="fig_cascaderate1"}](cascaderate1.eps) ![Rate, R(T) (=$\langle \sigma v\rangle$), of cascade production with $\bar{K}\Sigma, \bar{K}\Lambda, \bar{K}N, K\Omega$ in the initial channels.[]{data-label="fig_cascaderate2"}](cascaderate2.eps) ![Rate (R=$\langle \sigma v\rangle$) of omega production from the $\pi\Xi \rightarrow \Omega K$, ${\bar K}\Lambda \rightarrow \Omega K$ and ${\bar K}\Sigma \rightarrow \Omega K$ reactions at various temperatures.[]{data-label="fig_omegarate1"}](omegarate1.eps) ![Rate of omega production from the $p\bar{p} \rightarrow \Omega \bar{\Omega}$ reaction at various temperatures.[]{data-label="fig_omegarate2"}](omegarate2.eps) ### **Cross sections for $\Omega$ production** $\Omega$ is the hyperon with maximum strangeness, and its production channels are not clearly understood. The following channels are considered in this work: $K^{-} \Lambda \rightarrow \Omega^{-} K^{0}$, $K^{-}\Sigma^0 \rightarrow \Omega^{-} K^{0}$, $\pi^{0}\Xi \rightarrow \Omega^{-}K^{0}$, $p \bar{p} \rightarrow \Omega \bar{\Omega}$. Reactions such as $\Xi Y \rightarrow \Omega N$ and $\bar{K} \Xi \rightarrow \Omega \pi$ also produce $\Omega$, but a clear understanding of their production cross sections is lacking. Some authors argue that the production proceeds like $\bar{K} N \rightarrow \pi Y$ [@koch89], but the necessary experimental coupling is missing. The authors in [@gaitanos16] describe $\Omega$ production from $\pi \Xi \rightarrow \Omega K$ ($\pi^0 \Xi^- \rightarrow \Omega^- K^0$) and $\bar{K} Y \rightarrow K \Omega$ ($\bar{K} \Lambda \rightarrow K^0 \Omega^{-}$, $\bar{K} \Sigma^{0}\rightarrow K^0 \Omega^{-}$) using cross sections from a PYTHIA simulation.
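The isospin-channel fits for $\bar K N \rightarrow K\Xi$ and their average (Eq. \[casccross3\] and below) can be evaluated directly. In the Python sketch below, $\sqrt{s_0}$ is assumed to be the $K\Xi$ production threshold $m_K+m_\Xi$ with rounded PDG masses; both are assumptions of this sketch.

```python
M_K, M_XI = 0.494, 1.318   # assumed PDG masses in GeV
SQRT_S0 = M_K + M_XI       # sqrt(s0): assumed K-Xi production threshold

def _fit(sqrt_s, amp, p, q):
    """Generic fit  amp * (1 - sqrt(s0)/sqrt(s))^p * (sqrt(s0)/sqrt(s))^q  in mb."""
    if sqrt_s <= SQRT_S0:
        return 0.0
    x = SQRT_S0 / sqrt_s
    return amp * (1.0 - x)**p * x**q

def sigma_kmp_to_kpxim(sqrt_s):   # K^- p -> K^+ Xi^-
    return _fit(sqrt_s, 235.6, 2.4, 16.6)

def sigma_kmp_to_k0xi0(sqrt_s):   # K^- p -> K^0 Xi^0
    return _fit(sqrt_s, 7739.9, 3.8, 26.5)

def sigma_kmn_to_k0xim(sqrt_s):   # K^- n -> K^0 Xi^-
    return _fit(sqrt_s, 235.6, 2.4, 16.6)

def sigma_kbarN_to_KXi(sqrt_s):
    """Isospin-averaged Kbar N -> K Xi cross section; the fits are quoted as
    valid for 0 <= sqrt(s) - sqrt(s0) <= 1 GeV."""
    return 0.5 * (sigma_kmp_to_kpxim(sqrt_s)
                  + sigma_kmp_to_k0xi0(sqrt_s)
                  + sigma_kmn_to_k0xim(sqrt_s))
```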
The cross sections are parametrised as follows: $$\begin{aligned} \sigma_{K^{-} \Lambda \rightarrow \Omega^{-} K^{0}}&=&a_0+a_1 \, p_{\text{lab}}+a_2\, p_{\text{lab}}^2+a_3 \, \text{exp}(-a_4 p_{\text{lab}}) \nonumber\\ \sigma_{K^{-} \Sigma^0 \rightarrow \Omega^{-} K^{0}}&=&b_0+b_1 p_{\text{lab}}+b_2 p_{\text{lab}}^2+b_3 \, \text{exp}(-b_4 p_{\text{lab}}) \nonumber\\ \sigma_{\pi^{0} \Xi^{-} \rightarrow \Omega^{-} K^{0}}&=&c_0+c_1 \, p_{\text{lab}}+c_2\, p_{\text{lab}}^2+c_3/p_{\text{lab}}+\nonumber\\ &&c_4/(p_{\text{lab}}^2)+c_5 \text{exp}(-p_{\text{lab}}) \end{aligned}$$ The parameters $a_i$, $b_i$, $c_i$ and the ranges of laboratory-frame momentum ($p_{\text{lab}}$) over which the cross sections are valid are listed in table \[table\_parameters\]. The cross section for the annihilation of $p$ and $\bar{p}$ is taken from [@kaidalov94] and reads as follows: $$\begin{aligned} \sigma_{\bar{p} p\rightarrow \Omega^{-} \bar{\Omega}^+}= \frac{4^3}{\pi^2}\times\frac{[\sigma_{p \bar{p}\rightarrow \bar{\Lambda}\Lambda}]^3}{\Lambda_1^2} \times \exp[\Lambda_1 t_{DO}]\end{aligned}$$ where $t_{DO}=t_{min}^{\Lambda\Xi}-t_{min}^{p\Lambda}+t_{min}^{\Xi\Omega}-t_{min}^{p\Lambda}$ and $t_{min}^{ij}=-\frac{s}{2}+m_{i}^{2}+m_{j}^{2}+\frac{1}{2}\sqrt{(s-4m_{i}^{2})(s-4m_{j}^{2})}$. For details see [@jkn19]. Of these four channels, proton-antiproton annihilation is the primary $\Omega$-producing channel; the remaining three are secondary channels.
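As a worked example, the first of these polynomial fits can be evaluated with the $a_i$ parameters from table \[table\_parameters\]; the cross section is assumed to be in mb, and returning zero outside the quoted validity range is a choice of this sketch.

```python
import math

# Fit parameters a_0..a_4 for sigma(K^- Lambda -> Omega^- K^0),
# taken from table [table_parameters]
A = (0.155591, -0.0473326, 0.00362302, -0.29776, 0.917116)
P_MIN, P_MAX = 1.011, 6.55   # validity range of the fit, in GeV

def sigma_kmlam_to_omk0(p_lab):
    """sigma = a0 + a1 p + a2 p^2 + a3 exp(-a4 p), valid for P_MIN <= p_lab <= P_MAX."""
    if not (P_MIN <= p_lab <= P_MAX):
        return 0.0   # outside the quoted validity range (a choice of this sketch)
    a0, a1, a2, a3, a4 = A
    return a0 + a1 * p_lab + a2 * p_lab**2 + a3 * math.exp(-a4 * p_lab)
```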
**Cross sections for other strange hadrons** -------------------------------------------- The production of other strange mesons and singly strange baryons through the reactions $\pi \pi \rightarrow K \bar{K}$, $\pi \rho \rightarrow K \bar{K}$, $\rho \rho \rightarrow K \bar{K}$, $\pi N \rightarrow \Lambda K$, $\rho N \rightarrow \Lambda K$, $\pi N \rightarrow \Sigma K$, $\bar{K} N \rightarrow \Lambda \pi$, $\bar{K} N \rightarrow \Sigma \pi$, $\bar{p} p \rightarrow \Lambda \bar{\Lambda}$, $\bar{p} p \rightarrow \Sigma^- \bar{\Sigma^+}$, $\bar{p} p \rightarrow K^- \bar{K^+}$, $N \Xi \rightarrow \Lambda \Lambda$, $N \Xi \rightarrow \Lambda \Sigma$, $N \Xi \rightarrow \Sigma \Sigma$, $K \Xi \rightarrow \bar{K} N$, $\pi \Xi \rightarrow \bar{K} \Lambda$, $\pi \Xi \rightarrow \bar{K} \Sigma$, $K \Omega \rightarrow \bar{K} \Sigma$, $K\Omega \rightarrow {\bar K}\Lambda$, etc., is considered simultaneously in calculating the multi-strange baryon yields. The cross sections are described in [@Brown1; @amslar08; @liprc85; @kaidalov94; @cugnonnpa84; @linpa97; @jknprc10].
$\sigma_{K^{-} \Lambda \rightarrow \Omega^{-} K^{0}}$ ($1.011\leq p_{\text{lab}}\,(\text{GeV})\leq 6.55$)\

  $a_0$      $a_1$        $a_2$        $a_3$      $a_4$
  ---------- ------------ ------------ ---------- ----------
  0.155591   -0.0473326   0.00362302   -0.29776   0.917116

  : Parameters for $\Omega$ production

$\sigma_{K^{-} \Sigma^0 \rightarrow \Omega^{-} K^{0}}$ ($1.19\leq p_{\text{lab}}\,(\text{GeV})\leq 5.991$)\

  $b_0$      $b_1$        $b_2$        $b_3$       $b_4$
  ---------- ------------ ------------ ----------- ----------
  0.137027   -0.0422865   0.00327658   -0.281588   0.942457

  : Parameters for $\Omega$ production

$\sigma_{\pi^{0} \Xi \rightarrow \Omega^{-} K^{0}}$ ($1.033\leq p_{\text{lab}}\,(\text{GeV})\leq 5.351$)\

  $c_0$       $c_1$       $c_2$        $c_3$    $c_4$       $c_5$
  ----------- ----------- ------------ -------- ----------- ----------
  -0.414988   -0.025499   0.00628967   2.1816   -0.639193   -2.85555

  : Parameters for $\Omega$ production

\[table\_parameters\] **Rate of production** ---------------------- We consider the thermal rates of the above binary interactions for strangeness production and evolution in hadronic matter. The rate $R$ at a temperature $T$ for a particular reaction channel of the type $a+b \rightarrow c+d$ is given by [@kapusta86; @gondolo91], $$\begin{aligned} % \label{eqn_reaction_rate} \langle \sigma v\rangle &=&\frac{T^4}{4m_a^2m_b^2K_2(m_a/T)K_2(m_b/T)}\int _{z_0}^{\infty} \, dz\, [z^2-\nonumber\\ && ((m_a+m_b)/T)^2][z^2-((m_a-m_b)/T)^2]\sigma K_1(z) \nonumber\end{aligned}$$ where $z_0=\text{max}(m_a+m_b,m_c+m_d)/T$, $z=\frac{\sqrt{s}}{T}$, $\sigma$ is the cross section of the particular channel under consideration, $m_a, m_b$ are the incoming masses, $v$ is the Møller relative velocity, and the $K$'s are modified Bessel functions.
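The thermally averaged rate above can be evaluated numerically. The sketch below implements the quoted formula with simple trapezoidal quadrature, using the integral representation $K_n(z)=\int_0^\infty e^{-z\cosh t}\cosh(nt)\,dt$ for the modified Bessel functions; the truncation of the $z$ integral at $z_0+40$ and the step counts are numerical choices of this sketch.

```python
import math

def bessel_k(n, z, steps=1000, tmax=30.0):
    """Modified Bessel function K_n(z) via K_n(z) = int_0^inf e^{-z cosh t} cosh(nt) dt
    (trapezoidal quadrature, adequate for this illustration)."""
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(n * t)
    return total * h

def thermal_rate(sigma, ma, mb, mc, md, T, zsteps=500, zspan=40.0):
    """<sigma v> for a + b -> c + d as quoted in the text:
    T^4 / (4 ma^2 mb^2 K2(ma/T) K2(mb/T)) times the z-integral from z0,
    truncated at z0 + zspan (a numerical choice). sigma is a function of sqrt(s)."""
    z0 = max(ma + mb, mc + md) / T
    pref = T**4 / (4.0 * ma**2 * mb**2 * bessel_k(2, ma / T) * bessel_k(2, mb / T))
    h = zspan / zsteps
    total = 0.0
    for i in range(zsteps + 1):
        z = z0 + i * h
        w = 0.5 if i in (0, zsteps) else 1.0
        f = (z**2 - ((ma + mb) / T)**2) * (z**2 - ((ma - mb) / T)**2)
        total += w * f * sigma(z * T) * bessel_k(1, z)
    return pref * total * h
```

For example, `thermal_rate(lambda srts: 1.0, 1.116, 1.116, 0.938, 1.318, 0.150)` gives the rate for $\Lambda\Lambda\rightarrow N\Xi$ with a constant unit cross section at $T=150$ MeV (masses assumed from the PDG).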
\[sec:strange\_evolve\] Strangeness evolution in Hadronic Medium with secondary productions =========================================================================================== The evolution of $\Xi$ and $\Omega$ and their yield in terms of number density are studied using momentum integrated Boltzmann equation or rate equation for a hadronic medium. The equations for all strange hadrons are mentioned below. Each rate equation contains several production terms according to various reaction channels and a dilution term due to the expansion of the system. Pions which contribute maximally to the total entropy of the system provide the thermal background where the strange hadrons are assumed to be away from equilibrium initially. The hadronic system evolves as the temperature falls. The rate equations are as follows. $$\begin{aligned} \frac{dn_{K}}{dt}&=& n_{\pi}n_{\pi} \langle\sigma v\rangle_{\pi\pi\rightarrow K\bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v \rangle_{K\bar{K}\rightarrow \pi\pi} \nonumber\\ && +n_{\rho} n_{\rho} \langle\sigma v\rangle_{\rho\rho\rightarrow K\bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v\rangle_{K\bar{K}\rightarrow \rho\rho} \nonumber\\ && +n_{\pi} n_{\rho}\langle\sigma v\rangle_{\pi\rho\rightarrow K\bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v\rangle_{K\bar{K}\rightarrow \pi\rho} \nonumber\\ && +n_{\pi} n_N\langle\sigma v\rangle_{\pi N\rightarrow \Lambda K} -n_{\Lambda} n_K\langle\sigma v\rangle_{\Lambda K\rightarrow \pi N} \nonumber\\ && +n_{\rho} n_N\langle\sigma v\rangle_{\rho N\rightarrow \Lambda K} -n_{\Lambda} n_K\langle\sigma v\rangle_{\Lambda K\rightarrow \rho N} \nonumber\\ && +n_{\pi} n_N\langle\sigma v\rangle_{\pi N\rightarrow \Sigma K} -n_{\Sigma} n_K\langle\sigma v\rangle_{\Sigma K\rightarrow \pi N} \nonumber\\ && +n_{\bar K} n_N\langle\sigma v\rangle_{\bar{K}N\rightarrow K\Xi} -n_K n_{\Xi}\langle\sigma v\rangle_{K\Xi\rightarrow \bar{K}N} \nonumber\\ && +n_p n_{\bar{p}} \langle\sigma v\rangle_{ p \bar{p}\rightarrow K \bar{K}} -n_K 
n_{\bar{K}}\langle\sigma v\rangle_{K \bar{K} \rightarrow p \bar{p}} \nonumber\\ && +n_{\bar{K}} n_{\Lambda} \langle\sigma v\rangle_{\bar{K} \Lambda\rightarrow \Omega K} -n_{\Omega}n_{K}\langle\sigma v\rangle_{\Omega K \rightarrow \bar{K}\Lambda} \nonumber\\ && +n_{\bar{K}} n_{\Sigma} \langle\sigma v\rangle_{\bar{K} \Sigma\rightarrow \Omega K} -n_{\Omega}n_{K}\langle\sigma v\rangle_{\Omega K \rightarrow \bar{K}\Sigma} \nonumber\\ && +n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi \Xi \rightarrow K\Omega} -n_{\Omega} n_{K}\langle\sigma v\rangle_{ \Omega K \rightarrow \pi \Xi} -\frac{n_K}{t} \nonumber\end{aligned}$$ $$\begin{aligned} \frac{dn_{\bar{K}}}{dt}&=& n_{\pi}n_{\pi} \langle\sigma v\rangle_{\pi\pi\rightarrow K\bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v \rangle_{K\bar{K}\rightarrow \pi\pi} \nonumber\\ && +n_{\rho} n_{\rho} \langle\sigma v\rangle_{\rho\rho\rightarrow K\bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v\rangle_{K\bar{K}\rightarrow \rho\rho} \nonumber\\ && +n_{\pi} n_{\rho}\langle\sigma v\rangle_{\pi\rho\rightarrow K\bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v\rangle_{K\bar{K}\rightarrow \pi\rho} \nonumber\\ && -n_{\bar{K}} n_N \langle\sigma v\rangle_{\bar{K}N\rightarrow \Lambda \pi} +n_{\Lambda}n_{\pi}\langle\sigma v\rangle_{\Lambda \pi\rightarrow \bar{K}N} \nonumber\\ && -n_{\bar{K}} n_N \langle\sigma v\rangle_{\bar{K}N\rightarrow \Sigma\pi} +n_{\Sigma}n_{\pi}\langle\sigma v\rangle_{\Sigma \pi\rightarrow \bar{K}N} \nonumber\\ && -n_{\bar K} n_N\langle\sigma v\rangle_{\bar{K}N\rightarrow K\Xi} +n_K n_{\Xi}\langle\sigma v\rangle_{K\Xi\rightarrow \bar{K}N} \nonumber\\ && -n_{\bar{K}} n_{\Lambda}\langle\sigma v\rangle_{\bar{K}\Lambda\rightarrow \pi\Xi} +n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi\Xi \rightarrow \bar{K}\Lambda} \nonumber\\ && -n_{\bar{K}} n_{\Sigma}\langle\sigma v\rangle_{\bar{K}\Sigma\rightarrow \pi\Xi} +n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi\Xi \rightarrow \bar{K}\Sigma} \nonumber\\ && +n_p n_{\bar{p}} \langle\sigma v\rangle_{ p\bar{p} \rightarrow 
K \bar{K}} -n_{K}n_{\bar{K}}\langle\sigma v\rangle_{ K \bar{K} \rightarrow p \bar{p}} \nonumber\\ && -n_{\bar{K}} n_{\Lambda} \langle\sigma v\rangle_{\bar{K} \Lambda\rightarrow \Omega K} +n_{\Omega}n_{K}\langle\sigma v\rangle_{\Omega K \rightarrow \bar{K}\Lambda} \nonumber\\ && -n_{\bar{K}} n_{\Sigma} \langle\sigma v\rangle_{\bar{K} \Sigma\rightarrow \Omega K} +n_{\Omega}n_{K}\langle\sigma v\rangle_{\Omega K \rightarrow \bar{K}\Sigma} -\frac{n_{\bar{K}}}{t} \nonumber\end{aligned}$$ $$\begin{aligned} \frac{dn_{\Lambda}}{dt}&=& n_{\pi} n_N\langle\sigma v\rangle_{\pi N\rightarrow \Lambda K} -n_{\Lambda} n_K\langle\sigma v\rangle_{\Lambda K\rightarrow \pi N} \nonumber\\ && +n_{\rho} n_N\langle\sigma v\rangle_{\rho N\rightarrow \Lambda K} -n_{\Lambda} n_K\langle\sigma v\rangle_{\Lambda K\rightarrow \rho N} \nonumber\\ && -n_{\Lambda} n_{\Lambda}\langle\sigma v\rangle_{\Lambda\Lambda\rightarrow N\Xi} +n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Lambda\Lambda}\nonumber\\ &&-n_{\Lambda} n_{\Sigma}\langle\sigma v\rangle_{\Lambda\Sigma\rightarrow N\Xi} +n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Lambda\Sigma} \nonumber\\ && -n_{\bar K} n_{\Lambda}\langle\sigma v\rangle_{\bar{K}\Lambda\rightarrow \pi\Xi} +n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi\Xi \rightarrow \bar{K}\Lambda} \nonumber\\ && +n_{\bar K} n_{N}\langle\sigma v\rangle_{\bar{K}N\rightarrow {\Lambda}\pi} -n_{\Lambda} n_{\pi}\langle\sigma v\rangle_{\Lambda\pi \rightarrow \bar{K}N} \nonumber\\ && +n_p n_{\bar{p}}\langle\sigma v\rangle_{p \bar{p}\rightarrow \Lambda\bar{\Lambda}} -n_{\Lambda} n_{\bar{\Lambda}}\langle\sigma v\rangle_{ \Lambda \bar{\Lambda}\rightarrow p\bar{p}} \nonumber\\ &&+n_{K} n_{\Omega}\langle\sigma v\rangle_{K{\Omega} \rightarrow {\bar K}{\Lambda}} -n_{{\bar K}} n_{\Lambda}\langle\sigma v\rangle_{{\bar K} {\Lambda}\rightarrow K \Omega}- \frac{n_{\Lambda}}{t} \nonumber\end{aligned}$$ $$\begin{aligned} \frac{dn_{\Sigma}}{dt}&=& n_{\pi} n_N\langle\sigma v\rangle_{\pi N\rightarrow 
\Sigma K} -n_{\Sigma} n_K\langle\sigma v\rangle_{\Sigma K\rightarrow \pi N} \nonumber\\ && -n_{\Lambda} n_{\Sigma}\langle\sigma v\rangle_{\Lambda\Sigma\rightarrow N\Xi} +n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Lambda\Sigma} \nonumber\\ && -n_{\Sigma} n_{\Sigma}\langle\sigma v\rangle_{\Sigma\Sigma\rightarrow N\Xi} +n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Sigma\Sigma}\nonumber\\ && -n_{\bar{K}} n_{\Sigma}\langle\sigma v\rangle_{\bar{K}\Sigma\rightarrow \pi\Xi} +n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi\Xi \rightarrow \bar{K}\Sigma} \nonumber\\ && +n_{\bar K} n_{N}\langle\sigma v\rangle_{\bar{K}N\rightarrow {\Sigma}\pi} -n_{\Sigma} n_{\pi}\langle\sigma v\rangle_{\Sigma\pi \rightarrow \bar{K}N} \nonumber\\ && +n_p n_{\bar{p}}\langle\sigma v\rangle_{p \bar{p}\rightarrow \Sigma\bar{\Sigma}} -n_{\Sigma} n_{\bar{\Sigma}}\langle\sigma v\rangle_{ \Sigma \bar{\Sigma}\rightarrow p\bar{p}} \nonumber\\ && +n_{K} n_{\Omega}\langle\sigma v\rangle_{K{\Omega} \rightarrow \bar{K}{\Sigma}} -n_{\bar{K}} n_{\Sigma}\langle\sigma v\rangle_{{\bar K} {\Sigma}\rightarrow K \Omega}- \frac{n_{\Sigma}}{t} \nonumber\end{aligned}$$ $$\begin{aligned} \frac{dn_{\Xi}}{dt}&=& n_{\Lambda} n_{\Lambda}\langle\sigma v\rangle_{\Lambda\Lambda\rightarrow N\Xi} -n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Lambda\Lambda} \nonumber\\ && +n_{\Lambda} n_{\Sigma}\langle\sigma v\rangle_{\Lambda\Sigma\rightarrow N\Xi} -n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Lambda\Sigma} \nonumber\\ && +n_{\Sigma} n_{\Sigma}\langle\sigma v\rangle_{\Sigma\Sigma\rightarrow N\Xi} -n_N n_{\Xi}\langle\sigma v\rangle_{N \Xi\rightarrow \Sigma\Sigma}\nonumber\\ && +n_{\bar K} n_N\langle\sigma v\rangle_{\bar{K}N\rightarrow K\Xi} -n_K n_{\Xi}\langle\sigma v\rangle_{K\Xi\rightarrow \bar{K}N} \nonumber\\ && +n_{\bar{K}} n_{\Lambda}\langle\sigma v\rangle_{\bar{K}\Lambda\rightarrow \pi\Xi} -n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi\Xi \rightarrow \bar{K}\Lambda} \nonumber\\ && +n_{\bar{K}} 
n_{\Sigma}\langle\sigma v\rangle_{\bar{K}\Sigma\rightarrow \pi\Xi} -n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi\Xi \rightarrow \bar{K}\Sigma}\nonumber\\ &&+n_p n_{\bar{p}}\langle\sigma v\rangle_{p \bar{p}\rightarrow \Xi\bar{\Xi}} -n_{\Xi} n_{\bar{\Xi}}\langle\sigma v\rangle_{ \Xi \bar{\Xi}\rightarrow p\bar{p}} \nonumber\\ && +n_{\Omega} n_{K}\langle\sigma v\rangle_{ \Omega K\rightarrow {\pi}{\Xi}}- n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi {\Xi}\rightarrow \Omega K}- \frac{n_{\Xi}}{t} \nonumber\end{aligned}$$ $$\begin{aligned} \frac{dn_{\Omega}}{dt}&=& n_p n_{\bar{p}}\langle\sigma v\rangle_{p \bar{p}\rightarrow \Omega\bar{\Omega}} -n_{\Omega} n_{\bar{\Omega}}\langle\sigma v\rangle_{ \Omega \bar{\Omega}\rightarrow p\bar{p}} \nonumber\\ && +n_{\pi} n_{\Xi}\langle\sigma v\rangle_{\pi {\Xi}\rightarrow \Omega K} -n_{\Omega} n_{K}\langle\sigma v\rangle_{ \Omega K\rightarrow {\pi}{\Xi}} \nonumber\\ && +n_{{\bar K}} n_{\Lambda}\langle\sigma v\rangle_{{\bar K} {\Lambda}\rightarrow K \Omega} -n_{K} n_{\Omega}\langle\sigma v\rangle_{ K\Omega\rightarrow{\bar K}{\Lambda}}\nonumber\\ &&+n_{\bar K} n_{\Sigma}\langle\sigma v\rangle_{{\bar K} {\Sigma}\rightarrow K \Omega} -n_{K} n_{\Omega}\langle\sigma v\rangle_{K\Omega\rightarrow{\bar K}{\Sigma}} - \frac{n_{\Omega}}{t} \end{aligned}$$ Along with the rate equations, the evolution of the baryonic chemical potential and temperature has also been considered, using relativistic Bjorken hydrodynamic expansion [@bjorken]; the chemical potential has been constrained by the values obtained from the statistical hadronization model.
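The structure of these coupled equations (gain terms, detailed-balance loss terms and a $1/t$ dilution term) can be illustrated with a single-channel toy version. The sketch below evolves $n_\Xi$ with the $\Lambda\Lambda \rightarrow N\Xi$ gain and its inverse only, keeps the $\Lambda$ and nucleon backgrounds in non-relativistic Boltzmann equilibrium, and cools the system with a Bjorken-like law $T(t)=T_0(t_0/t)^{c_s^2}$; the units, the thermal-background approximation and the constant $\langle\sigma v\rangle$ inputs are assumptions of this sketch, not values from the text.

```python
import math

def n_eq(m, T, g=2):
    """Non-relativistic Boltzmann equilibrium density in GeV^3 (an approximation)."""
    return g * (m * T / (2.0 * math.pi))**1.5 * math.exp(-m / T)

def evolve_xi(sv_gain, sv_loss, T0=0.155, Tf=0.144, t0=5.07, cs2=0.2,
              dt=0.01, offeq=0.8):
    """Toy single-channel version of the Xi rate equation,
        dn/dt = n_Lam^2 <sv>_gain - n_N n_Xi <sv>_loss - n_Xi / t,
    cooled with a Bjorken-like law T(t) = T0 (t0/t)^{cs2}.
    Natural units: t in GeV^-1 (t0 = 5.07 GeV^-1 ~ 1 fm/c), <sv> in GeV^-2."""
    m_lam, m_N, m_xi = 1.116, 0.938, 1.318   # assumed PDG masses in GeV
    t, T = t0, T0
    n_xi = offeq * n_eq(m_xi, T0)            # start away from equilibrium
    while T > Tf:
        n_lam, n_N = n_eq(m_lam, T), n_eq(m_N, T)  # backgrounds kept thermal
        dn = n_lam**2 * sv_gain - n_N * n_xi * sv_loss - n_xi / t
        n_xi = max(n_xi + dt * dn, 0.0)      # forward-Euler step
        t += dt
        T = T0 * (t0 / t)**cs2
    return n_xi, T, t
```

The full calculation evolves all the species above simultaneously with the complete set of gain and loss terms; this sketch only mirrors the shape of one such equation.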
  ----------------- ------------ -------------- -------------- -------------- -------------- --------------
  $dn_{ch}/d\eta$   $N_{part}$   I              II             III            IV             V
                                 $T_{f_{1}}$    $T_{f_{2}}$    $T_{f_{3}}$    $T_{f_{4}}$    $T_{f_{5}}$
                                 $\Xi,\Omega$   $\Xi,\Omega$   $\Xi,\Omega$   $\Xi,\Omega$   $\Xi,\Omega$
  1447.5            356.1        0.144          0.144          0.144          0.154          0.134, 0.145
  966               260.1        0.142          0.144          0.144          0.154          0.141, 0.144
  537.5             157.2        0.140          0.144          0.144          0.154          0.143, 0.143
  205               68.6         0.132          0.144          0.144          0.154          0.137, 0.137
  55                22.5         0.116          0.144          0.144          0.154          0.118, 0.118
  ----------------- ------------ -------------- -------------- -------------- -------------- --------------

  : Initial conditions (freeze-out temperatures $T_F$, in GeV) for various multiplicities under scenarios I, II, III, IV and V

\[table\_IC\] \[sec:results\] Results ======================= The rates, $R$ (=$\langle \sigma v\rangle$), of multi-strange hadron production have been evaluated for all the channels mentioned, using the cross sections given in the previous section. The cascade production rates are displayed in Figs. \[fig\_cascaderate1\] & \[fig\_cascaderate2\]. The rates from $\Lambda\Lambda \rightarrow N\Xi$, $\Lambda\Sigma \rightarrow N\Xi$ and $\Sigma\Sigma \rightarrow N\Xi$ do not vary much with temperature. The cross sections for these reactions decrease very slowly with the centre-of-mass energy of the colliding channel (Eq. \[casccross2a\]) beyond the threshold, while the centre-of-mass energy increases slowly with temperature within the range where the thermal rates are shown. Hence the rates for these reactions appear constant (although they increase slightly with temperature) once the Boltzmann factor is taken into account. However, the contribution from $\Lambda \Sigma$ interactions is found to be 7-8 times larger than that from $\Sigma \Sigma$ and 2-3 times larger than that from $\Lambda \Lambda$.
The rate for $\bar{K} \Sigma \rightarrow \pi\Xi$ is found to be larger than those for $\bar{K} \Lambda \rightarrow \pi\Xi$ and $\bar{K} N \rightarrow K\Xi$, as shown in Fig. \[fig\_cascaderate2\]. The rates from $\bar{K} \Sigma$ and $\bar{K} N$ are also found not to vary much within this temperature range. Production from $K\Omega \rightarrow \pi \Xi$ does not contribute much, as shown in Fig. \[fig\_cascaderate2\]. $\Lambda \Sigma \rightarrow N\Xi$ is the dominant channel for cascade production, and the net cascade yield is decided by the $\Lambda$, $\Sigma$ and $K$ interactions. The rates of $\Xi$ and $\Omega$ production from non-strange initial channels are smaller than those of the strangeness-exchange reactions, since their cross sections are smaller. This can be seen by comparing the production from the channels $p\bar{p} \rightarrow \Xi \bar{\Xi}$ and $\Lambda \Lambda \rightarrow N \Xi$ or $\bar{K} \Lambda \rightarrow \pi \Xi$: the rate of production for $p\bar{p} \rightarrow \Xi \bar{\Xi}$ is about $10^6$ times smaller. The omega production rates from $\pi\Xi\rightarrow K\Omega$, ${\bar K}\Lambda\rightarrow K\Omega$, ${\bar K}\Sigma\rightarrow K\Omega$ and $p\bar{p}\rightarrow \Omega\bar{\Omega}$ are shown in Figs. \[fig\_omegarate1\] & \[fig\_omegarate2\]. The contribution of $\pi \Xi \rightarrow K \Omega$ is the dominant one, as its threshold is lower than those of the other channels and the pion abundance is higher. The cascade and omega yields have been calculated from momentum-integrated Boltzmann transport equations which consider the production and evolution of all strange hadrons simultaneously. The yields of these particles are normalised to thermal pions. The study has been done for various initial conditions. The initial number densities of the strange hadrons are unknown parameters and are considered to be away from their equilibrium values initially. The hadronic system is assumed to start at $T_c$=155 MeV.
This value is taken from a recent first-principles lattice calculation of quantum chromodynamics [@swagato17], which suggests 154$\pm$9 MeV as the transition temperature. Different scenarios with different initial conditions are then assumed to analyse the data obtained from Pb+Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV, the LHC energy [@multistrange_alice_plb14; @alicenature17]. The initial time is constrained by the multiplicity. The yields of cascade and omega baryons have been measured for various multiplicities and normalised to the charged-pion data. The corresponding centralities and $N_{\text{part}}$ for the various multiplicities are shown in table \[table\_IC\]. Theoretical results are obtained for the following scenarios. In scenario-I, the initial number density is assumed to be 20 $\%$ away from equilibrium. Different freeze-out temperatures ($T_F$) have been considered for the various multiplicities, with velocity of sound $c_s^2$=1/5. The results, in terms of the ratios of the yields of $(\Xi^- +\bar{\Xi^+})$ and $(\Omega^-+\bar{\Omega^+})$ to $(\pi^++\pi^-)$, are shown in Figs. \[fig\_multistrangebypiratio-1\] & \[fig\_multistrangebypiratio-2\]. The filled symbols are data measured by the ALICE collaboration, taken from [@alicenature17; @multistrange_alice_plb14; @gyula_alice], and the solid lines are the results of the theoretical calculation. A higher freeze-out temperature is considered for higher multiplicity in scenario-I, and the values are tabulated in table \[table\_IC\]. The ratios $(\Omega^-+\overline{\Omega^+})/({\pi^++\pi^-})$ are explained quite successfully for all multiplicities with this initial condition. However, the ratio $(\Xi^-+\overline{\Xi^+})/({\pi^++\pi^-})$ fails to explain the top two data points with higher multiplicities. Similarly, in scenario-II, the system is allowed to evolve with initial cascade and omega number densities 40$\%$ away from their equilibrium values.
Here the results are analysed with a constant freeze-out temperature $T_F$=144 MeV for all multiplicities, but the evaluation does not reproduce the data. Varying the initial number densities to be 20 $\%$ away from the equilibrium values, still with a constant $T_F$=144 MeV for all multiplicities, also does not explain the data; this is depicted as scenario-III in Fig.\[fig\_multistrangebypiratio-1\]. It does, however, give a clue to look for a constant freeze-out scenario. Results have also been obtained for a constant $T_F$=154 MeV, following the thermal-model prediction pointed out (Fig. 4 therein) by Adam [*et al.*]{} for the ALICE collaboration [@alice_adam2016]. In this case, we assume the initial strange densities to be 40$\%$ away from their equilibrium values. This is depicted as scenario-IV in Figs.\[fig\_multistrangebypiratio-3\] & \[fig\_multistrangebypiratio-4\]. Here the yield ratios of $\Xi$ and $\Omega$ to $\pi$ remain almost constant with multiplicity and explain the data points except the measurement at the lowest multiplicity. The constant freeze-out scenario is thus ruled out only by the measurement at the lowest multiplicity. ![Ratio of the yield of cascade to pion with multiplicity (centrality). The solid points are data points from 2.76 TeV Pb+Pb collisions measured by the ALICE collaboration. The solid lines are the results of the theoretical calculation with initial conditions for scenarios I, II and III.[]{data-label="fig_multistrangebypiratio-1"}](cascadebypiratio-123.eps) ![Ratio of the yield of omega to pion with multiplicity (centrality). The solid points are data points from 2.76 TeV Pb+Pb collisions measured by the ALICE collaboration. The solid lines are the results of the theoretical calculation with initial conditions for scenarios I, II and III.[]{data-label="fig_multistrangebypiratio-2"}](omegabypiratio-123.eps) ![Ratio of the yield of cascade to pion with multiplicity (centrality).
The solid lines are the results of the theoretical calculation with initial conditions for scenarios IV & V.[]{data-label="fig_multistrangebypiratio-3"}](cascadebypiratio-45.eps) ![Ratio of the yield of omega to pion with multiplicity (centrality). The solid lines are the results of the theoretical calculation with initial conditions for scenarios IV & V.[]{data-label="fig_multistrangebypiratio-4"}](omegabypiratio-45.eps) This constant freeze-out scenario generates almost the same yield ratio for the various multiplicities, because the rates of production for cascade and omega do not change much with temperature. It may also indicate that final-state effects dominate the yield at such energies. Finally, scenario-V considers initial conditions with different freeze-out temperatures; it explains the data nicely with initial number densities 20$\%$ away from the equilibrium values and is plotted in Fig.\[fig\_multistrangebypiratio-4\]. A velocity of sound given by $c_s^2$=1/5 is considered in the above calculations. Taking $c_s^2$=1/3 throughout the evolution, the yields for cascade and omega have also been calculated with initial number densities 20 $\%$ and 40 $\%$ away from equilibrium; however, the theoretical estimate then overestimates the data for all multiplicities. Hence $c_s^2$=1/3 has been ruled out for the hadronic phase here. The yields of $\Xi$ and $\Omega$ depend very strongly on the equation of state, i.e., on the velocity of sound. A stiff equation of state, i.e., a high velocity of sound, leads to overproduction in the system with the present evolution, and the calculation then overestimates the $\Xi/\pi$ and $\Omega/\pi$ data. \[sec:summary\] Summary ======================= The $\Xi$ & $\Omega$ productions have been evaluated microscopically for the first time for Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV using rate equations with cross sections from the various possible hadronic interactions.
The details of the cross sections are discussed with reference to the available literature; most of them are constrained by experimental observations. The thermal rates for the multi-strange hadrons are shown, and the yields have been calculated for various initial conditions using the rate equations. Largely, the conditions that explain the omega data suggest a lower freeze-out temperature at smaller multiplicity (scenarios I & V), increasing with $dN_{ch}/d\eta$, that is, when one moves from peripheral to central collisions, from a region of smaller overlap to a region of larger overlap of the colliding nuclei. In the case of $\Xi$, the deviation occurs at large multiplicities, and its explanation requires a lower freeze-out temperature (scenario-V). A calculation with a constant freeze-out temperature $T_F$=154 MeV and initial densities 20% away from the equilibrium values (scenario-IV) also explains most of the data points, putting a question mark on the similarity of systems produced at different colliding energies with the same multiplicity. This motivates an investigation of small systems with similar multiplicities. It has been observed that the calculation with $c_s^2$=1/3 fails to reproduce the data, overestimating it for all multiplicities. The yield depends strongly on the velocity of sound: a stiff equation of state leads to an overproduction of multi-strange hadrons. Incorporation of a temperature-dependent $c_s^2(T)$ may improve the calculation. When the calculation is extended to analyse the yields of the other hadrons $K, \Lambda$, a multiple freeze-out scenario emerges for Pb-Pb collisions at the LHC energy [@jkn19]. [*This microscopic calculation will provide a guideline for the key question of whether the systems produced at different colliding energies with similar multiplicity are similar or not, which may be addressed in future work*]{}. [**Acknowledgment:**]{} Author P. Ghosh thanks VECC for partial support from CNT project vide no.
3/5/2012/VECC/R&D-I/14802 during the stay at VECC. [99]{} J. Adam [*et al.*]{} (ALICE Collaboration), Nature Physics [**13**]{} (2017) 535. Gyula Bencédi, on behalf of the ALICE Collaboration, arXiv:1801.03350. B. Abelev [*et al.*]{} (ALICE Collaboration), Phys. Lett. B [**728**]{} (2014) 216. J. Adam [*et al.*]{} (ALICE Collaboration), arXiv:1512.07227. J. Rafelski and B. Muller, Phys. Rev. Lett. [**48**]{} (1982) 1066. J. Kapusta and A. Mekjian, Phys. Rev. D [**33**]{} (1986) 1304. B. Tomasik [*et al.*]{}, Eur. Phys. J. C [**49**]{}, 115 (2007); B. Tomasik, nucl-th/0509101; J. Letessier and J. Rafelski, Eur. Phys. J. A [**35**]{}, 221 (2008); J. Rafelski and J. Letessier, Acta Phys. Polon. B [**30**]{}, 3559 (1999); S. Chatterjee, R. M. Godbole and Sourendu Gupta, Phys. Rev. C [**81**]{}, 044907 (2010). J. Cleymans, H. Oeschler, K. Redlich and S. Wheaton, hep-ph/0510283. M. Gazdzicki, J. Phys. G [**30**]{} (2004) S701. A. Andronic, P. Braun-Munzinger and J. Stachel, Nucl. Phys. A [**772**]{} (2006) 167; Erratum ibid. [**678**]{}, 516 (2009). Jajati K. Nayak, J. Alam, B. Mohanty, P. Roy and A. K. Dutt-Mazumder, Acta Phys. Slov. [**56**]{}, 27 (2006). Jajati K. Nayak, S. Banik and J. Alam, Phys. Rev. C [**82**]{}, 024914 (2010). A. Tawfik, Fizika B [**18**]{}, 141 (2009). A. Andronic, P. Braun-Munzinger and J. Stachel, Phys. Lett. B [**673**]{}, 142 (2009). S. Soff, S. A. Bass, M. Bleicher, L. Bravina, E. Zabrodin, H. St[$\ddot{o}$]{}cker and W. Greiner, Phys. Lett. B [**471**]{} (1999) 89. E. E. Kolomeitsev, B. Tomasik and D. N. Voskresensky, Phys. Rev. C [**86**]{}, 054909 (2012). B. Tomasik [*et al.*]{}, Eur. Phys. J. A, doi:10.1140/epja/i2016-16251-6 \[arXiv:1510.04349 (2015)\]. J. Cugnon [*et al.*]{}, Lettere al Nuovo Cimento [**41**]{} (1984) 213. G. E. Brown [*et al.*]{}, Phys. Rev. C [**43**]{} (1991) 1881. C. Amsler [*et al.*]{}, Phys. Lett.
B [**667**]{} (2008) 1. P. Ghosh, J. K. Nayak, S. Singh and S. Agarwalla, arXiv:1909.07885. F. Li, L. Chen, C. M. Ko and S. Lee, Phys. Rev. C [**85**]{} (2012) 064902. A. B. Kaidalov and P. E. Volkovitsky, Z. Phys. C [**63**]{}, 517 (1994). J. Cugnon and R. M. Lombard, Nucl. Phys. A [**422**]{} (1984) 635. G. Q. Li, C.-H. Lee and G. E. Brown, Nucl. Phys. A [**625**]{} (1997) 372. C. H. Li and C. M. Ko, Nucl. Phys. A [**712**]{} (2002) 110. R. A. Adelseck and B. Saghai, Phys. Rev. C [**42**]{} (1990) 108. B. Holzenkamp, K. Holinde and J. Speth, Nucl. Phys. A [**500**]{} (1989) 485. L. W. Chen, C. M. Ko and Y. H. Zheng, Phys. Lett. B [**584**]{} (2004) 269. A. de Bellefon [*et al.*]{}, Nuovo Cimento A [**7**]{}, 567 (1972); J. P. Berge [*et al.*]{}, Phys. Rev. [**147**]{}, 945 (1966); E. Briefel [*et al.*]{}, Phys. Rev. D [**12**]{}, 1859 (1975); E. Briefel [*et al.*]{}, Phys. Rev. D [**16**]{}, 2706 (1977); G. Burgun [*et al.*]{}, Nucl. Phys. B [**8**]{}, 447 (1968); J. R. Charlson [*et al.*]{}, Phys. Rev. D [**7**]{}, 2533 (1973); D. D. Carmony [*et al.*]{}, Phys. Rev. Lett. [**12**]{}, 482 (1964); P. M. Dauber [*et al.*]{}, Phys. Rev. [**179**]{}, 1262 (1969); J. Griselin [*et al.*]{}, Nucl. Phys. B [**93**]{}, 189 (1975); M. Haque [*et al.*]{}, Phys. Rev. [**152**]{}, 1148 (1966). D. A. Sharov, V. L. Krotkikh and D. E. Lanskoy, Eur. Phys. J. A [**47**]{}, 109 (2011). P. Koch and C. B. Dover, Phys. Rev. C [**40**]{} (1989) 145. T. Gaitanos [*et al.*]{}, Nucl. Phys. A [**954**]{} (2016) 308. P. Gondolo and G. Gelmini, Nucl. Phys. B [**360**]{} (1991) 145. J. D. Bjorken, Phys. Rev. D [**27**]{} (1983) 140. A. Bazavov [*et al.*]{}, Phys. Rev. D [**95**]{} (2017) 054504. J. Adam [*et al.*]{} (ALICE Collaboration), arXiv:1512.07227 (2016).
Introduction {#sec:intro}
============

The II–VI semiconductor ZnO has become a frequently studied material in surface science because of its wide range of technological applications. ZnO is a basic material for varistors, thyristors and optical coatings. In addition, its direct band gap makes it an interesting candidate for blue and UV emitting LEDs and laser diodes.[@laser] The electronic and structural properties of the ZnO surfaces are particularly important for its applications as a chemical sensor in gas detecting systems and as a catalyst for hydrogenation and dehydrogenation reactions. In combination with Cu particles at the surface, ZnO is a very efficient catalyst for the methanol synthesis,[@catalysis] where it is employed on an industrial scale. The mechanism behind the enhanced catalytic activity when combined with Cu is poorly understood. However, before this interesting interplay between the ZnO substrate and the Cu particles can be addressed, a thorough understanding of the underlying clean ZnO surfaces is necessary. From a physical/chemical point of view, ZnO is a very interesting material because of the mixed covalent/ionic aspects in the chemical bonding. ZnO crystallizes in the hexagonal wurtzite structure (B4), which consists of hexagonal Zn and O planes stacked alternately along the $c$–axis (see Fig. \[fig:bulk\]). Anions and cations are both 4–fold coordinated, as in the closely related zincblende structure. A tetrahedrally coordinated bulk structure is typical for rather covalent semiconductors. On the other hand, ZnO shows great similarities with ionic insulators such as MgO.[@cox] This is why ZnO is often called the ‘ionic extreme’ of tetrahedrally coordinated semiconductors. Wurtzite crystals are dominated by four low Miller index surfaces: the nonpolar (10$\bar{1}$0) and (11$\bar{2}$0) surfaces and the polar zinc terminated (0001)–Zn and the oxygen terminated (000$\bar{1}$)–O surfaces (see Fig. \[fig:bulk\]).
By ion sputtering and annealing at not too high temperatures, all four surfaces can be prepared in a bulk-terminated, unreconstructed state in which the surface atoms only undergo symmetry-conserving relaxations. A typical p(1$\times$1) pattern is observed in low-energy electron diffraction (LEED) and other diffraction experiments.[@duke1; @duke2; @duke3; @noguera] Although a recent He–scattering experiment[@woell] showed that O–terminated (000$\bar{1}$) surfaces with p(1$\times$1) LEED patterns are usually hydrogen covered, whereas the clean O–terminated surface exhibits a (3$\times$1) reconstruction, we will focus in this study on the clean, unreconstructed surfaces of ZnO. In the present paper, we investigate all four main crystal terminations of ZnO. The fully relaxed geometric structures and the surface/cleavage energies have been calculated using a first-principles density-functional theory (DFT) method. We have employed both a local-density approximation (LDA) and a generalized-gradient approximation (GGA) functional. We will discuss the relative stability of the four surfaces and how the surface relaxations of the nonpolar faces are connected to the covalency/ionicity of the chemical bond in ZnO. Finally, a detailed comparison with existing theoretical and experimental results will be given. The nonpolar (10$\bar{1}$0) surface of ZnO has been the focus of several experimental and theoretical studies. However, the form of the relaxation of the surface atoms is still very controversial. Duke et al.[@duke2] concluded from their best LEED analysis[@comment1] that the top-layer zinc ion is displaced downwards by $\Delta d_\perp$(Zn)=$-$0.45$\pm$0.1[Å]{} and likewise the top-layer oxygen by $\Delta d_\perp$(O)=$-$0.05$\pm$0.1[Å]{}, leading to a tilt of the Zn–O dimer of 12$^\circ\pm$5$^\circ$.
No compelling evidence for lateral distortions within the first layer or for second-layer relaxations was obtained, but small improvements could be achieved by assuming a lateral displacement of the Zn ion toward oxygen by $\Delta d_\parallel$(Zn)=0.1$\pm$0.2[Å]{}.[@comment2] The strong inward relaxation of the Zn ion was later confirmed by Göpel et al.[@goepel] in an angle-resolved photoemission experiment. By comparing the relative position of a particular surface state with its theoretically predicted geometry dependence, a Zn displacement downwards by $\Delta d_\perp$(Zn)=$-$0.4[Å]{} was deduced. In contrast, Jedrecy et al.[@jedrecy1] found best agreement with their grazing-incidence X–ray diffraction (GIXD) data for a structural model in which the top-layer zinc atom is displaced downwards by only $\Delta d_\perp$(Zn)=$-$0.06$\pm$0.02[Å]{} and shifted toward oxygen by $\Delta d_\parallel$(Zn)=0.05$\pm$0.02[Å]{}. However, for their samples they observed a high density of steps, and from their best-fit model they predict rather high vacancy concentrations in the first two surface layers, with occupancy factors of 0.77$\pm$0.02 and 0.90$\pm$0.04 for the first and second layer, respectively. On the other hand, Parker et al.[@parker] reported scanning tunneling microscopy (STM) images of the nonpolar (10$\bar{1}$0) surfaces with atomic resolution where large flat terraces are found and no defects are visible in areas as large as 11$\times$14 surface unit cells. Due to the small scattering contribution, the position of oxygen could not be determined very accurately in the GIXD experiment of Ref. . The result of the best fit was that O relaxes further toward the bulk than Zn, with $\Delta d_\perp$(O)=$-$0.12$\pm$0.06[Å]{}. This would be very unusual, since to our knowledge no (10$\bar{1}$0) wurtzite or (110) zincblende surface structure has been reported where the surface dimer tilts with the cation above the anion.
First theoretical investigations of the (10$\bar{1}$0) surface were done using empirical tight-binding (TB) models. With two very different TB models Wang and Duke[@wang] found a strong displacement of $\Delta d_\perp$(Zn)=$-$0.57[Å]{}, whereas Ivanov and Pollmann[@ivanov] obtained an almost bulk-like surface geometry. A recent calculation with atomistic potentials based on a shell model[@catlow] predicted $\Delta d_\perp$(Zn)=$-$0.25[Å]{} and a rather strong upward relaxation of the second-layer Zn of +0.165[Å]{}. Several ab-initio studies (DFT-LDA,[@schroer1] Hartree-Fock (HF),[@jaffe1] and a hybrid HF and DFT method using the B3LYP functional[@wander1]), all employing Gaussian orbitals as basis functions to solve the electronic structure problem, favor small inward relaxations of Zn and small tilts of the Zn–O dimers of 2$^\circ$–5$^\circ$. However, it is questionable whether these studies represent fully converged results. There is only one recent first-principles DFT-LDA calculation using plane waves[@filippetti] in which larger relaxations with a tilt of 11.7$^\circ$ were obtained. The nonpolar (11$\bar{2}$0) ZnO surface has been studied less frequently than its (10$\bar{1}$0) counterpart. The two tight-binding models[@wang; @ivanov] predicted the same relaxation behavior for the (11$\bar{2}$0) as for the (10$\bar{1}$0) surface: Wang and Duke[@wang] found a strong zinc displacement of $\Delta d_\perp$(Zn)=$-$0.54[Å]{} toward the bulk, whereas the TB model of Ivanov and Pollmann preferred an almost bulk-like surface structure. With a first-principles hybrid B3LYP method Wander and Harrison[@wander2] found much smaller relaxations for the (11$\bar{2}$0) surface than for the (10$\bar{1}$0) face, but not all degrees of freedom were relaxed in this study. To our knowledge there has been no quantitative experimental investigation.

  --------------------- ----------------- ----------------- ---------
                        LDA               PBE               Expt.
  $a$ \[Å\]             3.193 ($-$1.7%)   3.282 ($+$1.0%)   3.250
  $c$ \[Å\]             5.163 ($-$0.8%)   5.291 ($+$1.6%)   5.207
  $c$/$a$               1.617             1.612             1.602
  $u$                   0.3783            0.3792            0.3825
  $B_0$ \[GPa\]         161               128               143
  $p_{\rm T}$ \[GPa\]   9.0               11.8              9.0–9.5
  --------------------- ----------------- ----------------- ---------

  : \[tab:bulk\] Computed and experimental values of the structural parameters for bulk ZnO. $a$ and $c$ are the lattice constants, $u$ is an internal coordinate of the wurtzite structure which determines the relative position of the anion and cation sublattices along the $c$ axis, $B_0$ is the bulk modulus, and $p_{\rm T}$ is the transition pressure between the wurtzite (B4) and rocksalt (B1) structures of ZnO. Experimental values are from Refs. . Relative deviations from experiment are given in parentheses.

Coming to the polar surfaces, we encounter the fundamental problem that in an ionic model these surfaces are unstable and should not exist. They are so-called ‘Tasker type 3’ surfaces,[@tusker] and with simple electrostatic arguments it can be shown that the surface energy diverges for such a configuration.[@tusker] To stabilize the polar surfaces, a rearrangement of charges between the O– and the Zn–terminated surfaces needs to take place, in which the Zn–terminated side becomes less positively charged and the O–terminated face less negative. In fact, most polar surfaces show massive surface reconstructions or exhibit faceting to accommodate the charge transfer.[@noguera] Randomly distributed vacancies, impurity atoms in the surface layers, or the presence of charged adsorbates are also possible mechanisms to stabilize polar surfaces.
However, the polar ZnO surfaces are remarkably stable, and many experiments suggest that they are in an unreconstructed, clean and fully ordered state.[@noguera] Despite many investigations it is still an open question how the polar ZnO surfaces are stabilized.[@noguera] Assuming clean and unreconstructed surfaces, the reduction in surface charge density can only occur through a redistribution of the electrons. Negative charge has to be transferred from the O–terminated face to the Zn–terminated side, leading to partially occupied bands at the surface. This so-called ‘metallization of the surface’ has been used in all previous ab-initio calculations[@wander3; @carlsson; @noguera] to model the polar ZnO surfaces and will also be employed in the present study. However, whether or not the surfaces are metallic will depend on the width of the partially occupied bands. From another point of view, if the polar surfaces were stabilized by vacancies, defects or adsorbates, many defect states would be created. If we now think of somehow averaging over the surface, the defect states would form a partially filled band. In this sense, the ‘metallization’ may be regarded as a ‘mean-field’ description of a situation where many defect states are present. Several attempts have been made to determine the layer relaxations of the unreconstructed polar surfaces. In an early dynamical LEED analysis Duke and Lubinsky[@duke4] found an outer Zn–O double-layer spacing of $d_{12}$=0.607[Å]{} for the Zn–terminated surface and $d_{12}$=0.807[Å]{} for the O–terminated face. Unfortunately, this analysis was based on an early bulk structure of ZnO, see Ref. , in which the bulk double-layer spacing was assumed to be 0.807[Å]{} instead of 0.612[Å]{}. For the Zn–terminated surface, it was concluded from the comparison of X–ray photo-diffraction (XPD) data with scattering simulations[@xpd1] that any inward relaxation of the surface Zn layer can be ruled out.
Coaxial impact-collision ion-scattering spectroscopy[@caiciss] (CAICISS) proposed an expansion of $d_{12}$ by +0.35[Å]{}. An expansion of $d_{12}$ by +0.05[Å]{} for the Zn–terminated surface was also found in a GIXD measurement.[@jedrecy2] In this experiment, the X–ray data could best be fitted by assuming a random removal of 1/4 of the Zn atoms in the surface layer. On the other hand, from the shadowing and blocking edges of a low-energy alkali ion scattering[@overbury] (LEIS) experiment no evidence for substantial quantities of point defects in either the Zn–terminated or the O–terminated surface was found. For the O–terminated surface, it was concluded from LEIS[@overbury] that the Zn–O double-layer spacing $d_{12}$ is close to its bulk value. An XPD study[@xpd2] found a contraction of $d_{12}$ by 25%, but as in the LEED analysis,[@duke4] the wrong bulk structure of Ref.  was used in the scattering simulations. A GIXD measurement[@jedrecy2] also predicted an inward relaxation of the topmost O–layer by $-$0.33[Å]{} and an outward relaxation of the underlying Zn–plane by +0.08[Å]{}. The occupancy probabilities were fitted, resulting in 1.3(!) and 0.7 for the first-bilayer O and Zn, respectively. After considerably improved sample preparation was achieved, the same authors reinvestigated the O–terminated polar surface.[@jedrecy3] Best agreement with their GIXD data was now found for a structural model where both the upper O and Zn planes relax inwards, by $-$0.19$\pm$0.02[Å]{} and $-$0.07$\pm$0.01[Å]{}, respectively, with occupancy factors of 1.0 in the oxygen plane and 0.75$\pm$0.03 in the underlying Zn plane. The inward relaxation of the O–layer has been confirmed by another surface X–ray diffraction measurement[@wander3] where $\Delta d_{12}$=$-$0.24$\pm$0.06[Å]{} and $\Delta d_{23}$=+0.04$\pm$0.05[Å]{} was obtained.
Ab-initio calculations on polar slabs[@wander3; @carlsson; @noguera] consistently predict contractions of the first Zn–O double-layer distance for both surface terminations, with a larger inward relaxation at the O–terminated surface. In view of the above discussed discrepancies between different experimental and theoretical investigations, it is our aim to provide a consistent set of fully converged calculations for the four main ZnO surfaces. We attempt to overcome the restrictions of previous theoretical studies such that the current study can be regarded as a reference for perfectly ordered, defect-free surfaces. An accurate set of uniform theoretical data may then allow us to discuss the differences between theory and experiment in terms of deviations between the model of ideal, unreconstructed surfaces as assumed in the ab-initio simulations and the structure of the surfaces occurring in nature. In particular, for the polar surfaces this may give new insight into how these surfaces are stabilized.

Theoretical details {#sec:theorie}
===================

Method of calculation and bulk properties {#sec:method}
-----------------------------------------

We have carried out self-consistent total-energy calculations within the framework of density-functional theory (DFT).[@hks] The exchange and correlation effects were treated within both the local-density approximation (LDA)[@ca; @sic] and the generalized-gradient approximation (GGA), where we used the functional of Perdew, Burke, and Ernzerhof[@pbe] (PBE). Two different pseudopotential schemes were applied: For the study of the nonpolar surfaces we used pseudopotentials of the Vanderbilt ultrasoft type.[@van-usp] The electronic wave functions were expanded in a plane-wave basis set including plane waves up to a cut-off energy of 25 Ry. A conjugate-gradient technique as described in Ref.  was employed to minimize the Kohn-Sham total-energy functional.
For the calculations on the polar surfaces we used norm-conserving pseudopotentials[@van-pp] together with a mixed basis consisting of plane waves and non-overlapping localized orbitals for the O–$2p$ and the Zn–$3d$ electrons.[@mb] A plane-wave cut-off energy of 20 Ry was sufficient to get well-converged results. To improve convergence in the presence of partly occupied bands, a Gaussian broadening[@kmh] with a smearing parameter of 0.1 eV was included. For several configurations representing nonpolar surfaces we repeated the calculations with the mixed-basis approach. No significant differences compared to the results from the ultrasoft-pseudopotential method could be seen. It is a well-known shortcoming of LDA and GGA that both predict the Zn–$d$ bands to be roughly 3 eV too high in energy as compared to experiment.[@goepel; @girard] In consequence, the Zn–$d$ states hybridize more strongly with the O–$p$ valence bands, thereby shifting them unphysically close to the conduction band. The underestimation of the band gap is therefore even more severe in ZnO than in other semiconductors. In our calculations we obtained band gaps of 0.78 eV and 0.74 eV with LDA and PBE, respectively, as opposed to the experimental value of 3.4 eV. The band gap and the position of the Zn–$d$ bands can be improved significantly if a self-interaction correction (SIC) is used.[@sic] Usually SIC calculations are very demanding, but if the SIC effects are incorporated into the pseudopotential,[@sic-pp] the additional computational cost is modest. Unfortunately, the SIC pseudopotential scheme does not improve the structural properties of ZnO[@sic-pp] and also causes some problems when accurate atomic forces are needed.[@diss-vogel] Therefore, and since we are mostly interested in accurate relaxed geometries of the surfaces and not so much in their electronic structure, we have omitted the use of SIC in our calculations. The computed structural parameters for bulk ZnO are shown in Table \[tab:bulk\].
Mixed-basis and ultrasoft-pseudopotential calculations give the same results within the accuracy displayed in Table \[tab:bulk\]. As is typical for these functionals, LDA underestimates the lattice constants by 1–2%, and GGA overestimates them by roughly the same amount. The $c/a$ ratio strongly influences the internal parameter $u$: for $u=1/4+a^2/(3c^2)$, all nearest-neighbor bonds are equal. Since the $c/a$ ratio is slightly overestimated in our calculations, we get $u$ values that are slightly smaller than observed in experiment. The construction of appropriate supercells for the study of the surfaces will be detailed in the following subsection. All atomic configurations were fully relaxed by minimizing the atomic forces using a variable-metric scheme.[@numrec] Convergence was assumed when the forces on the ions were less than 0.005 eV/Å.

Surfaces, slab structures, and the stability problem {#sec:surfgeom}
----------------------------------------------------

All surfaces were represented by periodically repeated slabs consisting of several atomic layers and separated by a vacuum region of 9.4 to 12.4 Å. For the polar surfaces a dipole correction[@bengtsson; @bm] was used to prevent artificial electrostatic interactions between the repeated units. To simulate the underlying bulk structure, the slab lattice constant in the direction parallel to the surface was always set equal to the theoretical equilibrium bulk value (see Table \[tab:bulk\]). The nonpolar surfaces are obtained by cutting the crystal perpendicular to the hexagonal Zn– and O–layers (see Fig. \[fig:bulk\]). In both cases, for the (10$\bar{1}$0) and the (11$\bar{2}$0) planes, two equivalent surfaces are created, so that stoichiometric slabs with the same surface termination on top and on bottom can always be formed. The (10$\bar{1}$0) surface geometry is sketched in Fig. \[fig:1010\]. Each surface layer contains one ZnO dimer.
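The ideal-$u$ relation quoted above is easy to check numerically. A minimal sketch in Python, using the experimental lattice constants from Table \[tab:bulk\] (the function name is ours, chosen for illustration):

```python
# Ideal wurtzite internal parameter: for u = 1/4 + a^2/(3 c^2)
# all four nearest-neighbor bonds have equal length.
def ideal_u(a, c):
    return 0.25 + a * a / (3.0 * c * c)

# Experimental ZnO lattice constants (Table [tab:bulk]): a = 3.250 A, c = 5.207 A.
u_ideal = ideal_u(3.250, 5.207)
print(round(u_ideal, 4))   # ~0.3799, slightly below the measured u = 0.3825
```

Because $u$ decreases with growing $c/a$, the slightly overestimated computed $c/a$ ratios translate directly into the slightly too small $u$ values in Table \[tab:bulk\].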
The dimers form characteristic rows along the \[1$\bar{2}$10\] direction which are separated by trenches. Slabs with 4–20 atomic layers were used, thus containing up to 40 atoms, and the Brillouin-zone of the supercell was sampled with a (4$\times$2$\times$2) Monkhorst-Pack[@mp] k–point grid. No differences were found when going to a (6$\times$4$\times$2) mesh. The surface layers of the (11$\bar{2}$0) surface are built up by two ZnO dimers which form zig-zag lines along the surface (see Fig. \[fig:1120\]). The two dimers are equivalent and are related by a glide plane symmetry. This symmetry is not destroyed by the atomic relaxations of the surface.[@duke3] The slabs in our calculations were built of 4–8 atomic layers with up to 32 atoms, and a (2$\times$2$\times$2) Monkhorst-Pack[@mp] k–point mesh was used. Again, a denser (4$\times$4$\times$2) mesh did not alter the results. Cleaving the crystal perpendicular to the $c$–axis (see Fig. \[fig:bulk\]) always creates simultaneously a Zn– and an O–terminated polar (0001) and (000$\bar{1}$) surface, respectively. If we only consider cuts where the surface atoms stay 3–fold coordinated, all slabs representing polar surfaces are automatically stoichiometric and are inevitably Zn–terminated on one side and O–terminated on the other side. Figure \[fig:0001\] sketches the characteristic sequence of Zn–O double-layers of the polar slabs. In our calculations slabs with 4–20 Zn–O double-layers were used, thus containing 8–40 atoms. k–point convergence was achieved with a (6$\times$6$\times$1) Monkhorst-Pack[@mp] grid, and tests with up to (12$\times$12$\times$1) k–points were made. Each Zn–O double-layer in Fig. \[fig:0001\] exhibits a dipole moment perpendicular to the surface. If we assume for simplicity a purely ionic model for ZnO and assign the fixed formal charges $+Ze$ and $-Ze$ to the Zn– and O–ions, respectively, then a slab of $N$ double-layers will exhibit a dipole moment of $m\!=\!N\,Ze\,(1\!-\!2u)\,c/2$ (see Fig. 
\[fig:slab\]). This corresponds to a spontaneous polarization of $P_{\rm s}\!=\!Ze\,(1\!-\!2u)$ which is independent of the thickness of the slab. If the external electric field is zero, inside the slab an electric field of $E\!=\!-4\pi P_{\rm s}$ will be present. Therefore, no matter how thick we choose our slab, the inner part will [*never*]{} become bulk-like, and the surface energy, defined as the difference between the energy of the slab and the energy of the same number of atoms in the bulk environment, will [*diverge*]{} with slab thickness.[@tusker] Thus, the polar surfaces are [*not stable*]{}. On the other hand, it can easily be seen that if we modify the charge in the top and bottom layer of the slab from $\pm Ze$ to $\pm (Z\!-\!\delta)e$ with $\delta\!=\!(1\!-\!2u)\,Z \approx Z/4$, then the dipole moment of the slab will become [*independent*]{} of the slab thickness and the internal electric field [*vanishes*]{}. This charge transfer is equivalent to applying an external dipole which compensates the internal electric field. For most polar surfaces the rearrangement of the charges is accomplished by a modification of the surface layer composition with respect to the bulk. If this does not occur, the internal electric field will ‘tilt’ the band structure by which the upper edge of the valence band close to the O–terminated surface will become higher in energy than the lower edge of the conduction band at the Zn–terminated face (see Fig. \[fig:band\]). The slab can now lower its energy (thereby reducing the internal electric field) by transferring electrons from the valence band at the O–terminated side to the conduction band at the Zn–terminated face. This will happen ‘automatically’ in any self-consistent electronic structure calculation that makes use of a slab geometry. This is what is usually referred to as ‘the metallization of polar surfaces’. 
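The charge-compensation argument can be made concrete with a toy point-charge model. The sketch below is ours, not the paper's calculation: the formal charge $Z$, the plane positions, and the function name are simplifying assumptions made for illustration.

```python
# Toy point-charge model of a polar slab: N Zn-O double-layers stacked along c,
# charges +Z e (Zn) and -Z e (O), intra-double-layer spacing d = (1 - 2u) c / 2.
# Reducing the two outermost layer charges by delta = (1 - 2u) Z makes the
# total dipole moment independent of the slab thickness N.
u, c, Z = 0.3825, 5.207, 2.0        # experimental u, c (Angstrom); formal charge (assumed)
d = (1.0 - 2.0 * u) * c / 2.0       # Zn-O spacing within one double-layer (~0.61 A)

def slab_dipole(N, delta=0.0):
    """Total dipole (e*Angstrom) of N double-layers; O planes at j*c/2, Zn at j*c/2 + d."""
    total = sum(Z * (j * c / 2.0 + d) - Z * (j * c / 2.0) for j in range(N))
    total -= delta * ((N - 1) * c / 2.0 + d)  # top Zn plane charge reduced to (Z - delta) e
    # bottom O plane sits at z = 0, so raising its charge to -(Z - delta) e adds nothing here
    return total

delta = (1.0 - 2.0 * u) * Z                   # = 0.47 e, i.e. roughly Z/4
print(slab_dipole(5), slab_dipole(10))        # uncompensated: dipole grows linearly with N
print(slab_dipole(5, delta), slab_dipole(10, delta))   # compensated: independent of N
```

With $u=0.3825$ one has $1-2u=0.235\approx 1/4$, which is why the required charge transfer corresponds to removing roughly a quarter of the formal surface charge.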
However, one problem still remains: electrons move from the O– to the Zn–terminated surface until the upper valence-band edge at the O–terminated side has reached the same energy as the lower edge of the conduction band at the Zn–terminated face, as sketched in Fig. \[fig:band\]. In this situation, the internal electric field is not [*fully*]{} removed for a finite slab with thickness $D$. The residual electric field depends on the band gap and vanishes only as 1/$D$. In our calculations we found that for slabs with up to 6 Zn–O double-layers the residual electric field is still so strong that the slabs are not stable: there is no energy barrier when the O– and Zn–layers are shifted simultaneously and rigidly toward each other. Therefore, to get well-converged results for the surface geometries and energies, very thick slabs have to be used, which makes the investigation of the polar surfaces computationally very demanding. Ideally, one should calculate all quantities of interest for different slab thicknesses $D$ and extrapolate the results to 1/$D \longrightarrow 0$. In the present study we obtained the relaxations of the surface layers (see Fig. \[fig:relax\]) as well as the cleavage energy of the polar surfaces (see Fig. \[fig:energy\]) by extrapolating the results of slab calculations containing up to 20 Zn–O double-layers.
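The $1/D\longrightarrow 0$ extrapolation amounts to a linear least-squares fit in the variable $1/D$. A sketch with synthetic data (the numbers below are illustrative, not taken from the paper):

```python
# Fit y(D) = y_inf + s / D by least squares in x = 1/D and return the
# intercept y_inf, i.e. the value extrapolated to infinite slab thickness.
def extrapolate(D_values, y_values):
    x = [1.0 / D for D in D_values]
    n = len(x)
    mx, my = sum(x) / n, sum(y_values) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y_values))
    slope /= sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx

# Synthetic interlayer distances (fractions of c) obeying y = 0.090 + 0.12/D exactly.
D = [6, 8, 10, 14, 20]
y = [0.090 + 0.12 / Di for Di in D]
print(round(extrapolate(D, y), 4))   # recovers the 1/D -> 0 limit, 0.09
```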
  -------------------- --------------- --------------- --------------- ---------------
                       (10$\bar{1}$0)                  (11$\bar{2}$0)
                       LDA             PBE             LDA             PBE
  $\Delta_{1,\perp}$   $+$0.106$a$     $+$0.100$a$     $+$0.076$a$     $+$0.073$a$
  $\Delta_{2,\perp}$   $-$0.041$a$     $-$0.038$a$     $-$0.016$a$     $-$0.015$a$
  Bulk
  $\Delta_{1,y}$       0.6531$c$       0.6539$c$       0.6506$c$       0.6516$c$
  $\Delta_{2,y}$       0.6243$c$       0.6231$c$       0.6230$c$       0.6221$c$
  Bulk
  $\Delta_{1,x}$                                       0.083$a$        0.077$a$
  $\Delta_{2,x}$                                       0.020$a$        0.016$a$
  Bulk
  $d_{12,\perp}$       0.1445$a$       0.1447$a$       0.4093$a$       0.4089$a$
  $d_{23,\perp}$       0.6328$a$       0.6337$a$       0.5215$a$       0.5222$a$
  Bulk
  $d_{12,y}$           0.5355$c$       0.5357$c$       0.5259$c$       0.5266$c$
  $d_{23,y}$           0.5013$c$       0.5017$c$       0.5014$c$       0.5009$c$
  Bulk
  $d_{1,x}$                                            0.4381$a$       0.4399$a$
  $d_{2,x}$                                            0.5515$a$       0.5556$a$
  Bulk
  -------------------- --------------- --------------- --------------- ---------------

  : \[tab:nonpolar\] Summary of the structural relaxations of the first two surface layers for the nonpolar (10$\bar{1}$0) and (11$\bar{2}$0) surfaces. The definitions of the independent structural parameters are shown in Figs. \[fig:1010\] and \[fig:1120\]. All relaxations are given in fractions of the theoretical bulk lattice constants $a$ and $c$ (see Table \[tab:bulk\]). The rows labeled ‘Bulk’ are the corresponding values for the unrelaxed surface.

  ------------------ ------------------------ ----------------------- ----------------- ----------------
                     $\omega$                 $C_{\rm B,\parallel}$   $C_{\rm B}$(Zn)   $C_{\rm B}$(O)
  LDA, this study    10.7$^\circ$             $-$6.7                  $-$2.8            $-$3.2
  PBE, this study    10.1$^\circ$             $-$7.2                  $-$3.1            $-$3.4
  LEED, Ref.         12$^\circ\pm$5$^\circ$   $-$3$\pm$6
  LDA+pw, Ref.       11.7$^\circ$             $-$6.0
  LDA+Gauss, Ref.    3.6$^\circ$              $-$7.9                  $-$5.2            $-$2.7
  HF, Ref.           2.3$^\circ$              $-$7.2                  $-$3.6            $-$3.4
  B3LYP, Ref.        $+$5.2$^\circ$           $-$4.9                  $-$2.9            $-$0.5
  ------------------ ------------------------ ----------------------- ----------------- ----------------

  : \[tab:1010\] Tilt angle $\omega$ of the surface dimer (see Fig. \[fig:1010\]) and relative bond-length contraction $C_{\rm B}$ (in % of the corresponding bulk value) of the surface bonds for the nonpolar (10$\bar{1}$0) surface in comparison with the LEED experiment and previous calculations. $C_{\rm B,\parallel}$ refers to the Zn–O dimer bond parallel to the surface, $C_{\rm B}$(Zn) to the back bond of zinc to oxygen in the second layer, and $C_{\rm B}$(O) to the respective back bond of the surface O atom. Bulk values of the surface and the back bonds are $u\,c$ and $\big( (1/2\!-\!u)^2+a^2/3c^2\big)^{1/2}c$, respectively.

Results and discussion {#sec:results}
======================

The nonpolar (10$\mathbf\bar{1}$0) and (11$\mathbf\bar{2}$0) surfaces {#sec:nonpolar}
---------------------------------------------------------------------

The nonpolar wurtzite (10$\bar{1}$0) surface and the closely related zincblende (110) surface have been studied experimentally and theoretically for a wide range of III–V and II–VI compound semiconductors. It was found that all surfaces show the same basic relaxation, with the surface cation moving inwards and the anion staying above, resulting in a tilt of the surface anion–cation dimers; the magnitude of the relaxation is determined by a competition between dehybridization and charge-transfer effects.[@duke5; @duke6; @pollmann; @filippetti] At the surface (this applies also to the (11$\bar{2}$0) surface), the coordination of the surface atoms is reduced from 4–fold to 3–fold, thereby creating an occupied dangling bond at the anion and an empty dangling bond at the cation. Two limiting cases may now be distinguished: In a dominantly covalently bonded compound the cation will rehybridize from sp$^3$ to sp$^2$ and will move downwards until it lies nearly in the plane of its three anion neighbors. The anion stays behind (often even an outward relaxation is observed), tending toward p–like bonds to its neighbors.
The result is a strong tilt of the surface anion–cation dimer (up to 30$^\circ$ is observed) with only a small change of the bond length. In a dominantly ionic solid, electrostatics prevails over dehybridization effects. To obtain better screening, both anion and cation move toward the bulk. The tilt of the anion–cation dimer will be small, but the bond length can be significantly reduced. Therefore, the relaxation of the surface dimers directly reflects the covalency or ionicity of the chemical bond in the compound under consideration. Our results for the relaxation of the (10$\bar{1}$0) surface are given in Tables \[tab:nonpolar\] and \[tab:1010\]. All lengths are expressed as fractions of the theoretical lattice parameters given in Table \[tab:bulk\]. Using these dimensionless relative quantities, no significant differences between the LDA and GGA calculations can be seen. For two structural parameters the decay of the surface relaxations into the bulk is illustrated in Fig. \[fig:deeplayer\]. Compared to the topmost surface layer, the tilt angle $\omega$ and the in-plane bond-length contraction $C_{\rm B,\parallel}$ of the Zn–O dimers are already much smaller in the second and the subsequent layers, but significant deviations from the bulk structure can still be seen as deep as five or six layers below the surface. The relatively small angle of $\omega\approx 10^\circ$ for the tilt of the surface Zn–O dimer together with the Zn–O bond contraction of $C_{\rm B,\parallel}\approx 7$% confirms that the chemical bond in ZnO is highly ionic but with significant covalent contributions. A tilt of 10$^\circ$ is at the lower boundary of what has been observed for other III–V and II–VI compounds.[@duke6] Only the nitride semiconductors show tilt angles that are similarly small.[@filippetti] The calculated surface relaxations in Tables \[tab:nonpolar\] and \[tab:1010\] agree very well with the DFT-LDA study of Ref.  
and with the results from the LEED analysis.[@duke2] Relative to the central layer of the slab we find a downward relaxation of the surface atoms of $\Delta d_\perp$(Zn)=$-$0.36[Å]{} and $\Delta d_\perp$(O)=$-$0.04[Å]{} with a shift parallel to the surface of $\Delta d_\parallel$(Zn)=0.18[Å]{}, compared to $\Delta d_\perp$(Zn)=$-$0.45$\pm$0.1[Å]{}, $\Delta d_\perp$(O)=$-$0.05$\pm$0.1[Å]{}, and $\Delta d_\parallel$(Zn)=0.1$\pm$0.2[Å]{} from the LEED experiment. Rotation angles of $\omega$=2$^\circ$–5$^\circ$ seem anomalously small in the context of what is known for other compounds. Even for the very ionic AlN a tilt angle of $\omega$=7.5$^\circ$ has been reported.[@filippetti] The smaller relaxations obtained in Ref.  may be due to incompletely converged calculations: in part very thin slabs were used, or only the first one or two surface layers were relaxed. In Ref.  no relaxation of the Zn ions parallel to the surface was allowed. The convergence of the localized basis sets employed in these studies and of the k–point sampling may also have been a problem. As a test we did a slab calculation where we fixed the atoms at their bulk positions and allowed only the first surface layer to relax. The tilt angle $\omega$ then reduces to roughly half of its value. Coarsening the k–point mesh also results in changes of 2$^\circ$–3$^\circ$ in $\omega$. Since we did our calculations with two very different pseudopotential approaches, we can exclude any bias caused by the use of pseudopotentials. Tables \[tab:nonpolar\] and \[tab:1120\] show our results for the relaxation of the (11$\bar{2}$0) surface. The atomic displacements are of the same order of magnitude as found for the (10$\bar{1}$0) surface. Again, no significant differences between the LDA and GGA calculations can be seen.
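As a rough consistency check (our estimate, ignoring the parallel shift of Zn, so only an approximation to the actual relaxed geometry), the tilt angle follows from the vertical Zn–O offset and the relaxed dimer bond length:

```python
import math

def tilt_angle(dz_Zn, dz_O, bond_length):
    """Dimer tilt in degrees from the vertical offset of Zn below O."""
    return math.degrees(math.asin(abs(dz_Zn - dz_O) / bond_length))

# LDA values quoted in the text: dZn = -0.36 A, dO = -0.04 A; the surface
# bond u*c ~ 1.95 A contracted by ~6.7% gives roughly 1.82 A.
print(round(tilt_angle(-0.36, -0.04, 1.82), 1))   # ~10 degrees, close to the computed omega
```

The same estimate applied to the LEED displacements ($-$0.45 Å and $-$0.05 Å) gives a tilt near the reported 12$^\circ$.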
The tilt of the surface dimers of 7.5$^\circ$ and the reduction of the Zn–O dimer bond length by about 6% fit nicely into the picture of ZnO being at the borderline between ionic and covalent solids. In a hybrid B3LYP study[@wander2] much smaller relaxations for the (11$\bar{2}$0) surface were reported. However, in that study only three degrees of freedom per surface layer were relaxed. The authors claimed that the positions of the Zn and O ions are constrained by symmetry. This is not correct. Of the two Zn–O dimers in each surface layer, the atoms of one dimer can move freely in all three Cartesian directions, leading to six degrees of freedom per surface layer (see Fig. \[fig:1120\]). The position of the second dimer is then determined by the glide-plane symmetry (see also Ref. ).

  ----------------- ------------- ----------------------- ----------------- ----------------
                    $\omega$      $C_{\rm B,\parallel}$   $C_{\rm B}$(Zn)   $C_{\rm B}$(O)
  LDA, this study   7.6$^\circ$   $-$5.8                  $-$1.4            $-$1.7
  PBE, this study   7.4$^\circ$   $-$6.4                  $-$1.5            $-$1.8
  ----------------- ------------- ----------------------- ----------------- ----------------

  : \[tab:1120\] Tilt angle $\omega$ of the surface dimer (see Fig. \[fig:1120\]) and relative bond-length contraction $C_{\rm B}$ of the surface bonds for the nonpolar (11$\bar{2}$0) surface. The same notation as in Table \[tab:1010\] is used.

The polar (0001)–Zn and (000$\mathbf\bar{1}$)–O surfaces {#sec:polar}
--------------------------------------------------------

In Figure \[fig:relax\] we have plotted the calculated distances between the topmost surface layers of the polar (0001) and (000$\bar{1}$) surfaces as a function of the slab thickness $D$. As expected from the thickness dependence of the residual electric field inside the slab, the $1/D$ plots reveal a nicely linear behavior of the interlayer distances. By extrapolating $1/D\longrightarrow 0$, all distances may now be obtained in the limit of a vanishing internal electric field.
The extrapolated results for the relaxations of the polar surfaces are summarized in Tables \[tab:polar\] and \[tab:0001\]. Very good agreement with the results of previous ab-initio calculations is found. In general, all double-layers are contracted and the distances between the double-layers are increased relative to the bulk spacings. For finite slabs, the residual internal electric field further amplifies this characteristic relaxation pattern. The largest relaxation is found for the O–terminated surface, where the outermost double-layer distance is compressed by $\approx$50%. This agrees reasonably well with the results of the X–ray experiments[@wander3; @jedrecy2; @jedrecy3] where contractions of 40%, 54%, and 20% were found. On the other hand, from LEED analysis[@duke4] and LEIS[@overbury] measurements it was concluded that the Zn–O double-layer spacing for the O–terminated surface is close to its bulk value. The recent finding of Wöll et al.[@woell] may help to resolve this contradiction. With helium scattering it was shown that after commonly used preparation procedures the O–terminated surfaces are usually hydrogen-covered. To test how much hydrogen may influence the surface relaxations, we repeated the calculation with hydrogen adsorbed on top of the O–terminated side of the slab. We find that in this case the outermost Zn–O double-layer expands again, and the Zn–O separation goes back close to the bulk distance. A similar result was also reported by Wander and Harrison.[@wander4] For the Zn–terminated surface there is a clear discrepancy between theory and experiment. All calculations consistently predict a contraction of the first Zn–O double-layer of 20–30%, whereas in experiment no contraction[@xpd1] or even an outward relaxation of the topmost Zn–layer is found.[@caiciss; @jedrecy2] This may indicate that the ’metallization’ used in all theoretical studies is not an adequate model for the polar Zn–terminated surface.
Recently Dulub and Diebold[@diebold] proposed a new stabilization mechanism for the Zn–terminated surface. With scanning tunneling microscopy (STM) they found that many small islands with a height of one double-layer and many pits one double-layer deep are present on the (0001)–Zn surface. Assuming that the step edges are O–terminated, an analysis of the island and pit size distribution yielded a decrease of the surface Zn concentration of roughly 25%. Such a reduction of Zn atoms at the surface would be enough to accomplish the charge transfer needed to stabilize the polar surface. It would not be in contradiction with the observed p(1$\times$1) LEED pattern, since a long range correlation between the different terraces remains. A removal of 25% of the surface Zn atoms was also obtained by Jedrecy et al.[@jedrecy2] as the best fit to their GIXD data.

-------------- -------- -------- -------- --------
               LDA      PBE      LDA      PBE
$d_{12}$       0.0952   0.0883   0.0594   0.0645
$d_{23}$       0.3947   0.3995   0.4022   0.3962
$d_{34}$       0.1172   0.1132   0.1044   0.1107
$d_{45}$       0.3811   0.3857   0.3817   0.3784
$d_{56}$       0.1187   0.1186   0.1194   0.1251
\[4pt\] Bulk
-------------- -------- -------- -------- --------

: \[tab:polar\] Summary of the surface relaxations for the polar Zn–terminated (0001) and the O–terminated (000$\bar{1}$) surface (see Fig. \[fig:0001\]). All distances are given in fractions of the theoretical bulk lattice constant $c$ (see Table \[tab:bulk\]).

---------------------- ----------------- ----------------- ----------------- -----------------
                       $\Delta d_{12}$   $\Delta d_{23}$   $\Delta d_{12}$   $\Delta d_{23}$
LDA, this study        $-$22%            $+$5.1%           $-$51%            $+$4.7%
PBE, this study        $-$27%            $+$5.3%           $-$47%            $+$4.5%
\[8pt\] B3LYP, Ref.    $-$23%            $+$3.5%           $-$41%            $+$3.0%
GGA, Ref.              $-$31%            $+$7.0%           $-$52%            $+$6.5%
GGA, Ref. 
$-$25% $-$41% ---------------------- ----------------- ----------------- ----------------- ----------------- : \[tab:0001\] Relaxation of the surface layers of the polar ZnO surfaces in comparison with previous ab-initio calculations.

A structure where the surface is stabilized by many small islands and pits with a Zn deficiency at the step edges is, of course, far away from the model of a clean, perfectly ordered (0001)–Zn surface used in the theoretical calculations. Basically all surface Zn–atoms will be next to a step edge, and therefore very different relaxations may occur. Unfortunately, it is presently beyond the reach of our ab-initio method to do calculations on slabs representing such an island and pit structure. For the O–terminated surface, on the other hand, the STM measurements show a very different picture. Smooth and flat terraces, separated mostly by steps two double-layers high, are observed. The number of single double-layer steps was far too small to account for a stabilization mechanism similar to that of the Zn–terminated surface.

Surface/cleavage energies {#sec:surfenergy}
-------------------------

For the nonpolar surfaces we can obtain the surface energy directly from our slab calculations, since the slabs are always terminated by the same surface on both sides. This is not possible for the polar surfaces, since inevitably both surface terminations are present in a slab calculation. Only the cleavage energy of the crystal is well defined. To be able to compare the relative stability of the nonpolar and polar surfaces, we will discuss in the following only the cleavage energies. The surface energies of the nonpolar surfaces are just given by half of their cleavage energy. Like the interlayer distances, the 1/$D$–plot of the cleavage energy for the polar surfaces in Fig. \[fig:energy\] exhibits a simple linear behavior.
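The distinction between surface and cleavage energy drawn above can be made explicit. As a hedged sketch (the symbols $E_{\rm slab}(N)$, $E_{\rm bulk}$, $N$, and $A$ are our notation, not taken from the calculations themselves), cleaving the crystal creates two surfaces, so for a stoichiometric slab of $N$ ZnO units with surface unit-cell area $A$,

$$E_{\mathrm{cleav}}=\frac{E_{\mathrm{slab}}(N)-N\,E_{\mathrm{bulk}}}{A}\,.$$

For a nonpolar slab with two equivalent faces this gives the surface energy $E_{\mathrm{surf}}=E_{\mathrm{cleav}}/2$, whereas a polar slab necessarily exposes one Zn– and one O–terminated face, so only the sum of the two surface energies, i.e. $E_{\mathrm{cleav}}$, is well defined.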
As can be seen, the cleavage energy does not change too much with the slab thickness, so that moderate slab sizes would be sufficient to obtain reasonably converged results. The extrapolated values for the cleavage energy of the polar surfaces, together with the results for the nonpolar faces and the findings of previous studies, are summarized in Table \[tab:energy\]. The nonpolar (10$\bar{1}$0) surface is the most stable face of ZnO with the lowest cleavage energy. But the energy of the (11$\bar{2}$0) surface is only slightly higher. The cleavage energy for the polar surface is roughly a factor of two larger than for the nonpolar surfaces. This is surprisingly low compared to what has been found in other systems, for example MgO, where a ’metallization’ was also assumed as the stabilization mechanism for the polar surfaces.[@pojani] Therefore, for ZnO the ’metallization’ mechanism can well compete with other stabilization mechanisms like reconstructions or randomly distributed vacancies and cannot be ruled out by energetic considerations alone.

[lcc]{}
& $E_{\rm cleav}$ & $E_{\rm relax}$\
(10$\bar{1}$0) surface:\
LDA, this study & 2.3 & 0.37\
PBE, this study & 1.6 & 0.37\
LDA+pw, Ref.  & 1.7 & 0.37\
B3LYP, Ref.  & 2.3 &\
HF, Ref.  & 2.7 & 0.38\
Shell model, Ref.  & 2.0\
(11$\bar{2}$0) surface:\
LDA, this study & 2.5 & 0.29\
PBE, this study & 1.7 & 0.30\
(0001)/(000$\bar{1}$) surface:\
LDA, this study & 4.3 & 0.28\
PBE, this study & 3.4 & 0.28\
B3LYP, Ref.  & 4.0 &\

The LDA and GGA results in Table \[tab:energy\] show the same ordering for the cleavage energies of the different surfaces, but the absolute GGA energies are roughly 30% lower than the LDA results. This is a well known improvement of the GGA, where a much better description of the rapidly decaying charge density into the vacuum region is achieved. The cleavage energies agree well with previous theoretical results as given in the Table.
Surprisingly, the results of the hybrid B3LYP studies are much closer to LDA than to the GGA results. Interestingly, the relaxation energy is roughly the same for all surfaces when normalized to one Zn–O pair. This means that despite the partially filled bands at the polar surfaces, the strength of the relaxation is almost the same as for the insulating nonpolar faces.

Summary and conclusions {#sec:summary}
=======================

A first-principles density-functional pseudopotential approach was used to determine the fully relaxed atomic structures and the surface/cleavage energies of the nonpolar (10$\bar{1}$0) and (11$\bar{2}$0) surfaces and the polar Zn–terminated (0001) and the O–terminated (000$\bar{1}$) basal surface planes of ZnO. The main results of the presented investigation are an extensive set of reliable data for the structural parameters and the energetics of the various ZnO surfaces within the LDA and the PBE approximation, which we consider to be a reference for future studies (see in particular the compilations in Tables \[tab:nonpolar\], \[tab:polar\], and \[tab:energy\]). For the nonpolar surfaces we could resolve the discrepancy between experiment and several previous ab-initio studies by showing that if calculations are carefully converged, a moderate tilt of the Zn–O surface dimers with a relatively strong contraction of the dimer bond length is obtained. Such a relaxation pattern is typical for rather ionic compounds with a strong covalent contribution to the chemical bonding. Our results are in line with the LEED analysis and fit very well with the systematic trends that are observed for other more or less ionic II–VI and III–V semiconductors. The polar surfaces can only be stable if a rearrangement of charges between the Zn– and the O–terminated surfaces takes place.
In our calculations the polar surfaces were stabilized by allowing the electrons to move from the (000$\bar{1}$)–O to the (0001)–Zn surface, thereby quenching the internal electric field. Nevertheless, even for thick slabs a finite residual electric field is present inside the slabs, which affects the results for the structural parameters and the surface energy. To get well converged results in the limit of a vanishing internal electric field, we repeated all calculations with slabs consisting of different numbers of Zn–O double-layers and extrapolated the results to the limit of an infinitely thick slab. For both polar surfaces we obtain a strong contraction of the outermost double-layer spacing. This agrees well with experiments for the O–terminated surface but not for the Zn termination, indicating that the electron transfer may not be an adequate model to describe the stabilization mechanism of the polar Zn–terminated surface. Since the contraction is consistently predicted by all calculations, it is very likely that other mechanisms, such as defect formation, hydroxylation, and/or the mechanism proposed by Dulub and Diebold might stabilize the (0001)–Zn surface. Concerning the surface energies, we find very similar values for the two nonpolar surfaces with a slightly lower value for the (10$\bar{1}$0) surface. The cleavage energy for the polar surfaces is predicted to be roughly a factor of two larger than for the (10$\bar{1}$0) face.

Acknowledgments
===============

We wish to thank Volker Staemmler, Karin Fink, and Christof Wöll for fruitful discussions. The work was supported by SFB 558 and FCI.

D.M. Bagnall, Y.F. Chen, Z. Zhu, T. Yao, S. Koyama, M.Y. Shen, and T. Goto, Appl. Phys. Lett. [**70**]{}, 2230 (1997). J.B. Hansen, [*Handbook of Heterogeneous Catalysis*]{}, G. Ertl, H. Knötzinger, J. Weitkamp (Eds.), Wiley–VCH, Weinheim, 1997. P.A. Cox, [*Transition Metal Oxides: An Introduction to Their Electronic Structure and Properties*]{}, Clarendon Press, Oxford, 1992. C.B.
Duke, A.R. Lubinsky, S.C. Chang, B.W. Lee, and P. Mark, Phys. Rev. B [**15**]{}, 4865 (1977). C.B. Duke, R.J. Meyer, A. Paton, and P. Mark, Phys. Rev. B [**18**]{}, 4225 (1978). C.B. Duke, J. Vac. Sci. Technol. [**14**]{}, 870 (1977). C. Noguera, J. Phys.: Condens. Matter [**12**]{}, R367 (2000). Ch. Wöll et al., to be published. Several publications[@jedrecy1; @catlow; @schroer1] quote an earlier LEED analysis of Duke et al., Ref. , where a smaller relaxation of the top-layer Zn of $-$0.3[Å]{} and a larger displacement of O of $-$0.1[Å]{} was found, leading to a smaller tilt of the Zn–O dimers. As is stated in Ref. , Ref.  is an analysis of the same experimental data, but the wrong structural bulk model of Ref.  was used. Additionally, several conceptual improvements were made in the reanalysis Ref. . Under these circumstances, the earlier publication should be disregarded in favour of the results of Ref. . The LEED experiments are sometimes interpreted in the literature as concluding that the surface dimer distance is [*expanded*]{} compared to the bulk situation.[@jedrecy1; @jaffe1; @wander1; @filippetti] In these cases, the authors either refer to the older LEED analysis of Ref. , or they neglect the lateral displacement $\Delta d_\parallel$(Zn), or they misinterpret $\Delta d_\parallel$(Zn) as a shift in the wrong direction. Indeed, the sign convention for the lateral displacements is not very clear in Ref. , but from the absolute atomic positions given in the summary of Ref.  it becomes clear that Zn relaxes [*toward*]{} the O ions, thereby [*shortening*]{} the Zn–O distance. W. Göpel, J. Pollmann, I. Ivanov, and B. Reihl, Phys. Rev. B [**26**]{}, 3144 (1982). N. Jedrecy, S. Gallini, M. Sauvage-Simkin, and R. Pinchaux, Surf. Sci. [**460**]{}, 136 (2000). T.M. Parker, N.G. Condon, R. Lindsay, F.M. Leibsle, and G. Thornton, Surf. Sci. [**415**]{}, L1046 (1998). Y.R. Wang and C.B. Duke, Surf. Sci. [**192**]{}, 309 (1987). I. Ivanov and J. Pollmann, Phys. Rev.
B [**24**]{}, 7275 (1981). L. Whitmore, A.A. Sokol, and C.R.A. Catlow, Surf. Sci. [**498**]{}, 135 (2002). P. Schröer, P. Krüger, and J. Pollmann, Phys. Rev. B [**49**]{}, 17092 (1994). J.E. Jaffe, N.M. Harrison, and A.C. Hess, Phys. Rev. B [**49**]{}, 11153 (1994). A. Wander and N.M. Harrison, Surf. Sci. [**457**]{}, L342 (2000). A. Filippetti, V. Fiorentini, G. Cappellini, and A. Bosin, Phys. Rev. B [**59**]{}, 8026 (1999). A. Wander and N.M. Harrison, Surf. Sci. [**468**]{}, L851 (2000). , Landolt–Börnstein, New Series Group III, Vol. 17a and 22a, edited by K.-H. Hellwege and O. Madelung, Springer, New York, 1982. S.C. Abrahams and J.L. Bernstein, Acta Cryst. [**B25**]{}, 1233 (1969); T.M. Sabine and S. Hogg, Acta Cryst. [**B25**]{}, 2254 (1969). C.H. Bates, W.B. White, and R. Roy, Science [**137**]{}, 993 (1962); W. Class, A. Ianucci, and H. Nesor, Morelco Rep. [**13**]{}, 87 (1966); J.C. Jamieson, Phys. Earth Planet. Inter. [**3**]{}, 201 (1970). P.W. Tasker, J. Phys. C: Solid State Phys. [**12**]{}, 4977 (1979). A. Wander, F. Schedin, P. Steadman, A. Norris, R. McGrath, T.S. Turner, G. Thornton, and N.M. Harrison, Phys. Rev. Lett. [**86**]{}, 3811 (2001). J.M. Carlsson, Comp. Mat. Sci. [**22**]{}, 24 (2001). C.B. Duke and A.R. Lubinsky, Surf. Sci. [**50**]{}, 605 (1975). R.W.G. Wyckoff, [*Crystal Structures*]{}, Vol. I, 2nd ed., Wiley, New York, 1963, p. 111–112. M. Sambi, G. Granozzi, G.A. Rizzi, M. Casari, and E. Tondello, Surf. Sci. [**319**]{}, 149 (1994). H. Maki, N. Ichinose, N. Ohashi, H. Haneda, and J. Tanaka, Surf. Sci. [**457**]{}, 377 (2000). N. Jedrecy, M. Sauvage-Simkin, and R. Pinchaux, Appl. Surf. Sci. [**162-163**]{}, 69 (2000). S.H. Overbury, P.V. Radulovic, S. Thevuthasan, G.S. Herman, M.A. Henderson, and C.H.F. Peden, Surf. Sci. [**410**]{}, 106 (1998). M. Galeotti, A. Atrei, U. Bardi, G. Rovida, M. Torrini, E. Zanazzi, A. Santucci, and A. Klimov, Chem. Phys. Lett. [**222**]{}, 349 (1994). N. Jedrecy, S. Gallini, M. Sauvage-Simkin, and R.
Pinchaux, Phys. Rev. B [**64**]{}, 085424 (2001). P. Hohenberg and W. Kohn, Phys. Rev. [**136**]{}, B864 (1964); W. Kohn and L.J. Sham, Phys. Rev. [**140**]{}, A1133 (1965). D.M. Ceperley and B.J. Alder, Phys. Rev. Lett. [**45**]{}, 566 (1980). J.P. Perdew and A. Zunger, Phys. Rev. B [**23**]{}, 5048 (1981). J.P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996); Phys. Rev. Lett. [**78**]{}, 1396 (1997). D. Vanderbilt, Phys. Rev. B [**41**]{}, 7892 (1990). R.D. King-Smith and D. Vanderbilt, Phys. Rev. B [**49**]{}, 5828 (1994). D. Vanderbilt, Phys. Rev. B [**32**]{}, 8412 (1985). B. Meyer, C. Elsässer, and M. Fähnle, FORTRAN 90 program for mixed-basis pseudopotential calculations for crystals, Max–Planck Institut für Metallforschung, Stuttgart (unpublished). C.-L. Fu and K.M. Ho, Phys. Rev. B [**28**]{}, 5480 (1983). R.T. Girard, O. Tjernberg, G. Chiaia, S. Söderholm, U.O. Karlsson, C. Wigren, H. Hylén, and I. Lindau, Surf. Sci. [**373**]{}, 409 (1997). D. Vogel, P. Krüger, and J. Pollmann, Phys. Rev. B [**54**]{}, 5495 (1996); Phys. Rev. B [**52**]{}, R14316 (1995). D. Vogel, Dissertation, Universität Münster, Germany, 1998. W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, [*Numerical Recipes*]{}, Cambridge University Press, New York, 1986. L. Bengtsson, Phys. Rev. B [**59**]{}, 12301 (1999). B. Meyer and D. Vanderbilt, Phys. Rev. B [**63**]{}, 205426 (2001). H.J. Monkhorst and J.D. Pack, Phys. Rev. B [**13**]{}, 5188 (1976). The data given in Ref.  is not consistent. In Table \[tab:1010\] we cite the bond length values of Table 3, Ref. . Using the atomic displacements listed in Table 1, Ref.  will lead to different results for the back bond lengths. C.B. Duke and Y.R. Wang, J. Vac. Sci. Technol. A [**7**]{}, 2035 (1989). C.B. Duke, J. Vac. Sci. Technol. A [**10**]{}, 2032 (1992). J. Pollmann, P. Krüger, M. Rohlfing, M. Sabisch, and D. Vogel, Appl. Surf. Sci. [**104/105**]{}, 1 (1996). A. Wander and N.M. Harrison, J. Chem.
Phys. [**115**]{}, 2312 (2001). O. Dulub and U. Diebold, to be published. A. Pojani, F. Finocchi, J. Goniakowski, and C. Noguera, Surf. Sci. [**387**]{}, 354 (1997).
---
abstract: 'We find necessary and sufficient conditions for a Banach space operator $T$ to satisfy the generalized Browder’s theorem, and we obtain new necessary and sufficient conditions to guarantee that the spectral mapping theorem holds for the $B$-Weyl spectrum and for polynomials in $T$.  We also prove that the spectral mapping theorem holds for the $B$-Browder spectrum and for analytic functions on an open neighborhood of $\sigma (T)$.  As applications, we show that if $T$ is algebraically $M$-hyponormal, or if $T$ is algebraically paranormal, then the generalized Weyl’s theorem holds for $f(T)$, where $f\in H(\sigma (T))$, the space of functions analytic on an open neighborhood of $\sigma (T)$.  We also show that if $T$ is reduced by each of its eigenspaces, then the generalized Browder’s theorem holds for $f(T)$, for each $f\in H(\sigma (T))$.'
address:
- 'Department of Mathematics, University of Iowa, Iowa City, IA 52242-1419.'
- 'Department of Mathematics, Kyunghee University, Seoul, Korea 130-701.'
author:
- 'Raúl E. Curto and Young Min Han'
title: |
  Generalized Browder’s and Weyl’s theorems\
  for Banach space operators
---

[^1]

\[sect1\]Introduction
=====================

In [@Weyl], H. Weyl proved, for hermitian operators on Hilbert space, his celebrated theorem on the structure of the spectrum (Equation (\[eq11\]) below).  Weyl’s theorem has been extended from hermitian operators to hyponormal and Toeplitz operators ([@Co]), and to several classes of operators including seminormal operators ([@Ber1], [@Ber2]).  Recently, M. Berkani and J.J. Koliha [@Berkani4] introduced the concepts of generalized Weyl’s theorem and generalized Browder’s theorem, and they showed that $T$ satisfies the generalized Weyl’s theorem whenever $T$ is a normal operator on Hilbert space. In this paper we extend this result to several classes much larger than that of normal operators.  
We first find necessary and sufficient conditions for a Banach space operator $T$ to satisfy the generalized Browder’s theorem (Theorem \[thm21\]).  We then characterize the smaller class of operators satisfying the generalized Weyl’s theorem (Theorem \[thm24\]).  Next, we obtain a new necessary and sufficient condition to guarantee that the spectral mapping theorem holds for the $B$-Weyl spectrum and for polynomials in $T$ (Theorem \[thm210\]); this result is then refined in the case when $T$ already satisfies the generalized Browder’s theorem (Theorem \[thm211\]).  Along the way we prove that the spectral mapping theorem always holds for the $B$-Browder spectrum and for analytic functions on an open neighborhood of $\sigma (T)$ (Theorem \[thm29\]).  We have three main applications of our results: if $T$ is algebraically $M$-hyponormal, or if $T $ is algebraically paranormal, then the generalized Weyl’s theorem holds for $f(T)$, for each $f\in H(\sigma (T))$, the space of functions analytic on an open neighborhood of $\sigma (T)$ (Theorems \[thm36\] and \[algpara\], respectively); and if $T$ is reduced by each of its eigenspaces, then the generalized Browder’s theorem holds for $f(T)$, for each $f\in H(\sigma (T))$ (Corollary \[cor219\]). As we shall see below, the concept of Drazin invertibility plays an important role for the class of $B$-Fredholm operators.  Let $\mathcal{A}$ be a unital algebra.  
We say that $x\in \mathcal{A}$ is Drazin invertible of degree $k$ if there exists an element $a\in \mathcal{A}$ such that $$x^{k}ax=x^{k},\;\;\;axa=a,\;\;\;\text{and \ \ \ }xa=ax.$$For $a\in \mathcal{A}$, the Drazin spectrum is defined as $$\sigma _{D}(a):=\{\lambda \in \mathbb{C}:a-\lambda \text{ is not Drazin invertible}\}.$$In the case of $T\in \mathcal{B(X)}$, it is well known that $T$ is Drazin invertible if and only if $T$ has finite ascent and descent, which is also equivalent to having $T$ decomposed as $T_{1}\oplus T_{2}$, where $T_{1}$ is invertible and $T_{2}$ is nilpotent.   Throughout this note let $\mathcal{B(X)}$, $\mathcal{B}_{0}\mathcal{(X)}$ and $\mathcal{B}_{00}\mathcal{(X)}$ denote, respectively, the algebra of bounded linear operators, the ideal of compact operators, and the set of finite rank operators acting on an infinite dimensional Banach space $\mathcal{X}$.  If $T\in \mathcal{B(X)}$ we shall write $N(T)$ and $R(T)$ for the null space and range of $T$.  Also, let $\alpha (T):=\dim \;N(T)$, $\beta (T):=\dim \;\mathcal{X}/R(T)$, and let $\sigma (T)$, $\sigma _{a}(T)$, $\sigma _{p}(T)$, $\sigma _{pi}(T)$, $p_{0}(T)$ and $\pi _{0}(T)$ denote the spectrum, approximate point spectrum, point spectrum, the eigenvalues of infinite multiplicity of $T$, the set of poles of $T$, and the set of all eigenvalues of $T$ which are isolated in $\sigma (T)$, respectively.  An operator $T\in \mathcal{B(X)}$ is called *upper semi-Fredholm* if it has closed range and finite dimensional null space, and is called *lower semi-Fredholm* if it has closed range and its range has finite co-dimension.  If $T\in \mathcal{B(X)}$ is either upper or lower semi-Fredholm, then $T$ is called *semi-Fredholm*; the *index* of a semi-Fredholm operator $T\in \mathcal{B(X)}$ is defined as $$i(T):=\alpha (T)-\beta (T).$$If both $\alpha (T)$ and $\beta (T)$ are finite, then $T$ is called *Fredholm*. 
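Returning to Drazin invertibility, the equivalence with the decomposition $T=T_{1}\oplus T_{2}$ quoted above is easy to verify in one direction; the following sketch (the choice of the candidate inverse $a$ is ours) checks the three defining identities. If $T_{1}$ is invertible and $T_{2}$ is nilpotent of order $k$ (i.e. $T_{2}^{k}=0$), set $a:=T_{1}^{-1}\oplus 0$.  Then $$T^{k}aT=(T_{1}^{k}T_{1}^{-1}T_{1})\oplus (T_{2}^{k}\cdot 0\cdot T_{2})=T_{1}^{k}\oplus T_{2}^{k}=T^{k},\qquad aTa=(T_{1}^{-1}T_{1}T_{1}^{-1})\oplus 0=a,$$and $$Ta=(T_{1}T_{1}^{-1})\oplus 0=(T_{1}^{-1}T_{1})\oplus 0=aT,$$so $T$ is Drazin invertible of degree $k$.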
$\ T\in \mathcal{B(X)}$ is called *Weyl* if it is Fredholm of index zero, and *Browder* if it is Fredholm “of finite ascent and descent;” equivalently, ([@Har2 Theorem 7.9.3]) if $T$ is Fredholm and $T-\lambda $ is invertible for sufficiently small $\lambda \neq 0$ in $\mathbb{C}$.  The essential spectrum, $\sigma _{e}(T)$, the Weyl spectrum, $\omega (T)$, and the Browder spectrum, $\sigma _{b}(T)$, are defined as ([@Har1],[@Har2]) $$\sigma _{e}(T):=\{\lambda \in \mathbb{C}:T-\lambda \ \text{is not Fredholm}\},$$$$\omega (T):=\{\lambda \in \mathbb{C}:T-\lambda \ \text{is not Weyl}\},$$and $$\sigma _{b}(T):=\{\lambda \in \mathbb{C}:T-\lambda \ \text{is not Browder}\},$$respectively.  Evidently $$\sigma _{e}(T)\subseteq \omega (T)\subseteq \sigma _{b}(T)=\sigma _{e}(T)\cup \operatorname*{acc}\;\sigma (T),$$where we write $\operatorname*{acc}\;K$ for the accumulation points of ${K}\subseteq \mathbb{C}$.  For $T\in \mathcal{B(X)}$ and a nonnegative integer $n$ we define $T_{[n]}$ to be the restriction of $T$ to $R(T^{n})$, viewed as a map from $R(T^{n})$ into $R(T^{n})$ (in particular $T_{[0]}=T$).  If for some integer $n$ the range $R(T^{n})$ is closed and $T_{[n]}$ is upper (resp. lower) semi-Fredholm, then $T$ is called *upper* (resp. *lower*) *semi*-$B$-*Fredholm*.  Moreover, if $T_{[n]}$ is Fredholm, then $T$ is called $B$-Fredholm.  $T$ is called *semi*-$B$-*Fredholm* if it is upper or lower $\text{semi}$-$B$-$\text{Fredholm}$. \[def11\]Let $T\in \mathcal{B(X)}$ and let $$\Delta (T):=\{n\in \mathbb{Z}_{+}:m\in \mathbb{Z}_{+},m\geq n\Rightarrow R(T^{n})\cap N(T)\subseteq R(T^{m})\cap N(T)\}.$$The *degree of stable iteration of* $T$ is defined as $\operatorname*{dis}\;T:=\inf \;\Delta (T)$. Let $T$ be $\text{semi}$-$B$-$\text{Fredholm}$ and let $d$ be the degree of stable iteration of $T$.  It follows from [@Berkani6 Proposition 2.1] that $T_{[m]}$ is semi-Fredholm and $i(T_{[m]})=i(T_{[d]})$ for every $m\geq d$.  
This enables us to define the *index* of a *semi*-$B$-*Fredholm* operator $T$ as the index of the semi-Fredholm operator $T_{[d]}$.  Let $BF(\mathcal{X})$ be the class of all $B$-Fredholm operators.  In [@Berkani1] the author studied this class of operators and proved [@Berkani1 Theorem 2.7] that $T\in \mathcal{B(X)}$ is $B$-Fredholm if and only if $T=T_{1}\oplus T_{2}$, where $T_{1}$ is Fredholm and $T_{2}$ is nilpotent. An operator $T\in \mathcal{B(X)}$ is called $B$-Weyl if it is $B$-Fredholm of index $0$.  The $B$-Fredholm spectrum, $\sigma _{BF}(T)$, and the $B$-Weyl spectrum, $\sigma _{BW}(T)$, are defined as$$\sigma _{BF}(T):=\{\lambda \in \mathbb{C}:T-\lambda \text{ is not }B\text{-Fredholm}\}$$and $$\sigma _{BW}(T):=\{\lambda \in \mathbb{C}:T-\lambda \text{ is not }B\text{-Weyl}\}.$$It is well known that the following equality holds [@Berkani2]:$$\sigma _{BW}(T)=\bigcap \{\sigma _{D}(T+F):F\in B_{00}(\mathcal{X})\}.$$We now introduce the $B$-Browder spectrum $\sigma _{BB}(T)$, defined as $$\sigma _{BB}(T):=\bigcap \{\sigma _{D}(T+F):F\in B_{00}(\mathcal{X})\text{ and }TF=FT\}.$$Clearly, $\sigma _{BW}(T)\subseteq \sigma _{BB}(T)$.  In this note we shall show that the $B$-Browder spectrum plays an important role in determining whether an operator satisfies the generalized Browder’s theorem. If we write $\operatorname*{iso}\;K=K\setminus \operatorname*{acc}\;K$ then we let$$\pi _{00}(T):=\{\lambda \in \operatorname*{iso}\;\sigma (T):0<\alpha (T-\lambda )<\infty \}$$and $$p_{00}(T):=\sigma (T)\setminus \sigma _{b}(T).$$Given $T\in \mathcal{B(X)}$, we say that Weyl’s theorem holds for $T$ (or that $T$ satisfies Weyl’s theorem, in symbols, $T\in \mathcal{W}$) if $$\sigma (T)\setminus \omega (T)=\pi _{00}(T), \label{eq11}$$and that Browder’s theorem holds for $T$ (in symbols, $T\in \mathcal{B}$) if $$\sigma (T)\setminus \omega (T)=p_{00}(T). 
\label{eq12}$$We also say that the generalized Weyl’s theorem holds for $T$ (and we write $T\in g\mathcal{W}$) if $$\sigma (T)\setminus \sigma _{BW}(T)=\pi _{0}(T), \label{eq13}$$and that the generalized Browder’s theorem holds for $T$ (in symbols, $T\in g\mathcal{B}$) if $$\sigma (T)\setminus \sigma _{BW}(T)=p_{0}(T). \label{eq14}$$It is easy to see that $$g\mathcal{W}\subseteq g\mathcal{B}\bigcap \mathcal{W} \label{gw}$$and that $$g\mathcal{B}\bigcup \mathcal{W}\subseteq \mathcal{B}\text{.} \label{gb}$$Moreover, given $T\in g\mathcal{B}$, it is clear that $T\in g\mathcal{W}$ if and only if $p_{0}(T)=\pi _{0}(T)$. An operator $T\in \mathcal{B(X)}$ is called *isoloid* if every isolated point of $\sigma (T)$ is an eigenvalue of $T$.  If $T\in \mathcal{B(X)}$, we write $r(T)$ for the spectral radius of $T$; it is well known that $r(T)\leq ||T||$.  An operator $T\in \mathcal{B(X)}$ is called *normaloid* if $r(T)=||T||$.  An operator $X\in \mathcal{B(X)}$ is called a quasiaffinity if it has trivial kernel and dense range.  An operator $S\in \mathcal{B(X)}$ is said to be a quasiaffine transform of $T\in \mathcal{B(X)}$ (in symbols, $S\prec T$) if there is a quasiaffinity $X\in \mathcal{B(X)}$ such that $XS=TX$.  If both $S\prec T$ and $T\prec S$, then we say that $S$ and $T$ are quasisimilar. We say that $T\in \mathcal{B(X)}$ has the *single valued extension property* (SVEP) at $\lambda _{0}$ if for every open set $U\subseteq \mathbb{C}$ containing $\lambda _{0}$ the only analytic solution $f:U\longrightarrow \mathcal{X}$ of the equation $$(T-\lambda )f(\lambda )=0\;\;\;(\lambda \in U)$$is the zero function ([@Fin],[@Lau2]).  An operator $T$ is said to have SVEP if $T$ has SVEP at every $\lambda \in \mathbb{C}$.  
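To see that SVEP is a genuine restriction, recall a standard example (not taken from this paper): let $T$ be the backward shift on $\ell ^{2}(\mathbb{N})$, $T(x_{0},x_{1},x_{2},\dots ):=(x_{1},x_{2},x_{3},\dots )$.  For $|\lambda |<1$ the function $$f(\lambda ):=(1,\lambda ,\lambda ^{2},\dots )\in \ell ^{2}(\mathbb{N})$$is analytic on the open unit disc, nowhere zero, and satisfies $(T-\lambda )f(\lambda )=0$, so $T$ fails to have SVEP at every point of the open disc.  By contrast, it is well known that any operator whose point spectrum has empty interior has SVEP.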
Given $T\in \mathcal{B(X)}$, the *local resolvent set* $\rho _{T}(x)$ of $T$ at the point $x\in \mathcal{X}$ is defined as the union of all open subsets $U\subseteq \mathbb{C}$ for which there is an analytic function $f:U\longrightarrow \mathcal{X}$ such that $$(T-\lambda )f(\lambda )=x\ \quad \;(\lambda \in U).$$The *local spectrum* $\sigma _{T}(x)$ of $T$ at $x$ is then defined as $$\sigma _{T}(x):=\mathbb{C\setminus \rho }_{T}(x).$$For $T\in \mathcal{B(X)}$, we define the *local* (resp. *glocal*) *spectral subspaces* of $T$ as follows.  Given a set $F\subseteq \mathbb{C}$ (resp. a closed set $G\subseteq \mathbb{C}$), $$X_{T}(F):=\{x\in \mathcal{X}:\sigma _{T}(x)\subseteq F\}$$(resp. $$\begin{aligned} \mathcal{X}_{T}(G)& :=\{x\in \mathcal{X}:\text{there exists an analytic function } \\ f& :\mathbb{C\setminus }G\rightarrow \mathcal{X}\text{ such that }(T-\lambda )f(\lambda )=x\text{ for all }\lambda \in \mathbb{C}\setminus G\}).\end{aligned}$$An operator $T\in \mathcal{B(X)}$ has *Dunford’s property* (C) if the local spectral subspace $X_{T}(F)$ is closed for every closed set $F\subseteq \mathbb{C}$.  We also say that $T$ has *Bishop’s property* ($\beta $) if for every open set $U\subseteq \mathbb{C}$ and every sequence of analytic functions $f_{n}:U\rightarrow \mathcal{X}$ such that $(T-\lambda )f_{n}\rightarrow 0$ uniformly on compact subsets of $U$, it follows that $f_{n}\rightarrow 0$ uniformly on compact subsets of $U$. It is well known [@Lau1; @Lau2] that $$\text{Bishop's property }({\beta })\Longrightarrow \text{Dunford's property }(C)\Longrightarrow \text{SVEP}.$$

\[sect2\]Structural Properties of Operators in $g\mathcal{B}$ and $g\mathcal{W}$
================================================================================

\[thm21\]Let $T\in \mathcal{B(X)}$.  Then the following statements are equivalent:

(i)  $T\in g\mathcal{B}$;

(ii)  $\sigma _{BW}(T)=\sigma _{BB}(T)$;

(iii)  $\sigma (T)=\sigma _{BW}(T)\cup \pi _{0}(T)$;

(iv)  $\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)$;

(v)  $\sigma (T)\setminus \sigma _{BW}(T)\subseteq \pi _{0}(T)$.

\(i) $\Longrightarrow $ (ii): Suppose that $T\in g\mathcal{B}$.  Then $\sigma (T)\setminus \sigma _{BW}(T)=p_{0}(T)$.  
Let $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$; then $\lambda \in p_{0}(T)$, so $T-\lambda $ is Drazin invertible.  Let $F\in \mathcal{B}_{00}\mathcal{(X)}$ with $TF=FT$.  It follows from [@Berkani3 Theorem 2.7] that $T+F-\lambda $ is also Drazin invertible.  Therefore $\lambda \notin \sigma _{D}(T+F)$, and hence $\lambda \notin \sigma _{BB}(T)$.  Thus, $\sigma _{BB}(T)\subseteq \sigma _{BW}(T)$.  On the other hand, it follows from [@Berkani2 Theorem 4.3] that $\sigma _{BW}(T)=\cap \{\sigma _{D}(T+F):F\in \mathcal{B}_{00}\mathcal{(X)}\}$.  Therefore $\sigma _{BW}(T)\subseteq \sigma _{BB}(T)$, and hence $\sigma _{BW}(T)=\sigma _{BB}(T)$. \(ii) $\Longrightarrow $ (i):  We assume that $\sigma _{BW}(T)=\sigma _{BB}(T)$ and we will establish that $\sigma (T)\setminus \sigma _{BW}(T)=p_{0}(T)$.  Suppose first that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $\lambda \in \sigma (T)\setminus \sigma _{BB}(T)$, and thus there is a finite rank operator $F$ such that $TF=FT$ and $T+F-\lambda $ is Drazin invertible, but $T-\lambda $ is not invertible. Since $TF=FT$, it follows from [@Berkani3 Theorem 2.7] that $T-\lambda $ is Drazin invertible.  Therefore $T-\lambda $ has finite ascent and descent.  Since $\lambda \in \sigma (T)$, we have $\lambda \in p_{0}(T)$. Thus $\sigma (T)\setminus \sigma _{BW}(T)\subseteq p_{0}(T)$. Conversely, suppose that $\lambda \in p_{0}(T)$.  Then $T-\lambda $ is Drazin invertible but not invertible.  Since $\lambda $ is an isolated point of $\sigma (T)$, [@Berkani2 Theorem 4.2] implies that $T-\lambda $ is $B$-Weyl.  Therefore $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Thus $p_{0}(T)\subseteq \sigma (T)\setminus \sigma _{BW}(T)$. \(ii) $\Longrightarrow $ (iii):  Let $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $\lambda \in \sigma (T)\setminus \sigma _{BB}(T)$, and so there exists a finite rank operator $F$ such that $TF=FT$ and $T+F-\lambda $ is Drazin invertible, but $T-\lambda $ is not invertible.  
Therefore, by [@Berkani3 Theorem 2.7], $T-\lambda $ is Drazin invertible but not invertible.  Hence $\lambda \in \sigma (T)\setminus \sigma _{D}(T)$, and so $\lambda \in \pi _{0}(T)$. Thus $\sigma (T)\subseteq \sigma _{BW}(T)\cup \pi _{0}(T)$.  Since $\sigma _{BW}(T)\cup \pi _{0}(T)\subseteq \sigma (T)$ always holds, we must have $\sigma (T)=\sigma _{BW}(T)\cup \pi _{0}(T)$. \(iii) $\Longrightarrow $ (ii):  Suppose that $\sigma (T)=\sigma _{BW}(T)\cup \pi _{0}(T)$.  To show that $\sigma _{BW}(T)=\sigma _{BB}(T)$ it suffices to show that $\sigma _{BB}(T)\subseteq \sigma _{BW}(T)$. Suppose that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $T-\lambda $ is $B$-Weyl but not invertible.  Since $\sigma (T)=\sigma _{BW}(T)\cup \pi _{0}(T)$, we see that $\lambda \in \pi _{0}(T)$.  In particular, $\lambda $ is an isolated point of $\sigma (T)$.  It follows from [@Berkani2 Theorem 4.2] that $T-\lambda $ is Drazin invertible. Therefore $\lambda \notin \sigma _{D}(T)$.  If $F$ is a finite rank operator and $FT=TF$ then by [@Berkani3 Theorem 2.7] $\sigma _{D}(T)=\sigma _{D}(T+F)$.  Hence $\lambda \notin \sigma _{BB}(T)$, and so $\sigma _{BW}(T)=\sigma _{BB}(T)$. \(i) $\Longleftrightarrow $ (iv):  Suppose that $T\in g\mathcal{B}$.  Then $\sigma (T)\setminus \sigma _{BW}(T)=p_{0}(T)$.  Let $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $\lambda \in p_{0}(T)$, and so $\lambda $ is an isolated point of $\sigma (T)$.  Therefore $\lambda \in \sigma (T)\setminus \operatorname*{acc}\;\sigma (T)$, and hence $\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)$. Conversely, let $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Since $\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)$, it follows that $\lambda \in \operatorname*{iso}\;\sigma (T)$ and $T-\lambda $ is $B$-Weyl.  By [@Berkani3 Theorem 2.3], we must have $\lambda \in p_{0}(T)$. Therefore $\sigma (T)\setminus \sigma _{BW}(T)\subseteq p_{0}(T)$.  For the reverse inclusion, suppose that $\lambda \in p_{0}(T)$.  
Then $\lambda $ is a pole of the resolvent of $T$, and so $\lambda $ is an isolated point of $\sigma (T)$.  Therefore $\lambda \in \sigma (T)\setminus \operatorname*{acc}\;\sigma (T)$.  It follows from [@Berkani3 Theorem 2.3] that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Thus $p_{0}(T)\subseteq \sigma (T)\setminus \sigma _{BW}(T)$, and so $T\in g\mathcal{B}$. \(iv) $\Longleftrightarrow $ (v): Suppose that $\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)$, and let $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $T-\lambda $ is $B$-Weyl but not invertible. Since $\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)$, $\lambda $ is an isolated point of $\sigma (T)$.  It follows from [@Berkani3 Theorem 2.3] that $\lambda $ is a pole of the resolvent of $T$.  Therefore $\lambda \in \pi _{0}(T)$, and hence $\sigma (T)\setminus \sigma _{BW}(T)\subseteq \pi _{0}(T)$.  Conversely, suppose that $\sigma (T)\setminus \sigma _{BW}(T)\subseteq \pi _{0}(T)$ and let $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $\lambda \in \pi _{0}(T)$, and so $\lambda $ is an isolated point of $\sigma (T)$.  Therefore $\lambda \in \sigma (T)\setminus \operatorname*{acc}\;\sigma (T)$, which implies that $\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)$. \[cor22\]Let $T$ be quasinilpotent or algebraic.  Then $T\in g\mathcal{B}$. Straightforward from Theorem \[thm21\] and the fact that $\operatorname*{acc}\;\sigma (T)=\emptyset $ whenever $T$ is quasinilpotent or algebraic. Recall that $g\mathcal{W}\subseteq g\mathcal{B}$ (cf. (\[gw\])). However, the reverse inclusion does not hold, as the following example shows. 
\[ex23\]Let $\mathcal{X}=\ell _{p}$, let $T_{1},T_{2}\in \mathcal{B(X)}$ be given by $$T_{1}(x_{1},x_{2},x_{3},\cdots ):=(0,\frac{1}{2}x_{1},\frac{1}{3}x_{2},\frac{1}{4}x_{3},\cdots )\text{ and }T_{2}:=0,$$and let $$T:=\begin{pmatrix} T_{1} & 0 \\ 0 & T_{2}\end{pmatrix}\in \mathcal{B(X}\oplus \mathcal{X)}.$$Then  $$\sigma (T)=\omega (T)=\sigma _{BW}(T)=\pi _{0}(T)=\{0\}$$and $$p_{0}(T)=\emptyset .$$Therefore, $T\in g\mathcal{B}\setminus g\mathcal{W}$. The next result gives simple necessary and sufficient conditions for an operator $T\in g\mathcal{B}$ to belong to the smaller class $g\mathcal{W}$. \[thm24\]Let $T\in g\mathcal{B}$.  The following statements are equivalent.(i)  $T\in g\mathcal{W}$;(ii)  $\sigma _{BW}(T)\cap \pi _{0}(T)=\emptyset $;(iii)  $p_{0}(T)=\pi _{0}(T)$. \(i) $\Rightarrow $ (ii): Assume $T\in g\mathcal{W}$, that is, $\sigma (T)\setminus \sigma _{BW}(T)=\pi _{0}(T)$.  It then follows easily that $\sigma _{BW}(T)\cap \pi _{0}(T)=\emptyset $, as required for (ii). \(ii) $\Rightarrow $ (iii): Let $\lambda \in \pi _{0}(T)$.  The condition in (ii) implies that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$, and since $T\in g\mathcal{B}$, we must then have $\lambda \in p_{0}(T)$.  It follows that $\pi _{0}(T)\subseteq p_{0}(T)$, and since the reverse inclusion always holds, we obtain (iii). \(iii) $\Rightarrow $ (i): Since $T\in g\mathcal{B}$, we know that $\sigma (T)\setminus \sigma _{BW}(T)=p_{0}(T)$, and since we are assuming $p_{0}(T)=\pi _{0}(T)$, it follows that $\sigma (T)\setminus \sigma _{BW}(T)=\pi _{0}(T)$, that is, $T\in g\mathcal{W}$. Let $T\in \mathcal{B(X)}$ and let $f\in H(\sigma (T))$, where $H(\sigma (T))$ is the space of functions analytic in an open neighborhood of $\sigma (T)$.  It is well known that $\omega (f(T))\subseteq f(\omega (T))$ holds.  The following theorem shows that a similar result holds for the $B$-Weyl spectrum.  To prove this we begin with the following lemma. \[lem25\]([@Berkani2 Theorem 3.2]) Let $S$ and $T$ be two commuting $B$-Fredholm operators.  
Then the product $ST$ is a $B$-Fredholm operator and $i(ST)=i(S)+i(T)$. \[thm26\]Let $T\in \mathcal{B(X)}$ and let $f\in H(\sigma (T))$.  Then $$\sigma _{BW}(f(T))\subseteq f(\sigma _{BW}(T)). \label{261}$$ Observe that if $S$ and $T$ are two commuting $B$-Weyl operators then the product $ST$ is a $B$-Weyl operator.  Indeed, suppose that $S$ and $T$ are both $B$-Weyl.  Then $S$ and $T$ are both $B$-Fredholm of index $0$.  It follows from Lemma \[lem25\] that $ST$ is $B$-Fredholm and $$i(ST)=i(S)+i(T)=0.$$Therefore $ST$ is $B$-Weyl.  Let $f$ be an analytic function on an open neighborhood of $\sigma (T)$. Now we show that $\sigma _{BW}(f(T))\subseteq f(\sigma _{BW}(T))$.  Suppose that $\lambda \notin f(\sigma _{BW}(T))$. Let $$f(T)-\lambda =c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T),$$where $c_{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  Since $\lambda \notin f(\sigma _{BW}(T))$, $c_{0}(\mu -\lambda _{1})(\mu -\lambda _{2})\cdots (\mu -\lambda _{n})g(\mu )\neq 0$ for every $\mu \in \sigma _{BW}(T)$.  Therefore $\mu \neq \lambda _{i}$ for every $\mu \in \sigma _{BW}(T)$, and hence $T-\lambda _{i}$ is $B$-Weyl ($i=1,2,\dots ,n$).  Since $g(T)$ is invertible, it follows from the previous observation that $$i(f(T)-\lambda )=\sum_{j=1}^{n}i(T-\lambda _{j})+i(g(T))=0.$$Therefore $f(T)-\lambda $ is $B$-Weyl, and hence $\lambda \notin \sigma _{BW}(f(T))$.  Thus $\sigma _{BW}(f(T))\subseteq f(\sigma _{BW}(T))$. It is well known that $\sigma _{b}(T)=\sigma _{e}(T)\cup \operatorname*{acc}\;\sigma (T)$.  A similar result holds for the $B$-Browder spectrum. \[thm27\] Let $T\in \mathcal{B(X)}$.  Then $\sigma _{BB}(T)=\sigma _{BF}(T)\cup \operatorname*{acc}\;\sigma (T)$. Suppose that $\lambda \notin \sigma _{BB}(T)$.  Since $\sigma _{BB}(T)=\cap \{\sigma _{D}(T+F):F\in \mathcal{B}_{00}\mathcal{(X)}$ and $TF=FT\}$, there exists a finite rank operator $F$ such that $TF=FT$ and $\lambda \notin \sigma _{D}(T+F)$.  
Since $T+F-\lambda $ is Drazin invertible and $TF=FT$, it follows from [@Berkani3 Theorem 2.7] that $T-\lambda $ is Drazin invertible.  Therefore $T-\lambda $ has finite ascent and descent, and hence $T-\lambda $ can be decomposed as $T-\lambda =T_{1}\oplus T_{2}$, where $T_{1}$ is invertible and $T_{2}$ is nilpotent.  It follows from [@Berkani2 Lemma 4.1] that $T-\lambda $ is $B$-Fredholm.  On the other hand, since $T-\lambda $ has finite ascent and descent, $\lambda $ is an isolated point of $\sigma (T)$.  Hence $\lambda \notin \sigma _{BF}(T)\cup \operatorname*{acc}\;\sigma (T)$. Conversely, suppose that $\lambda \notin \sigma _{BF}(T)\cup \operatorname*{acc}\;\sigma (T)$.  Then $T-\lambda $ is $B$-Fredholm and $\lambda $ is an isolated point of $\sigma (T)$.  Since $T-\lambda $ is $B$-Fredholm, it follows from [@Berkani1 Theorem 2.7] that $T-\lambda $ can be decomposed as $T-\lambda =T_{1}\oplus T_{2}$, where $T_{1}$ is Fredholm and $T_{2}$ is nilpotent.  We consider two cases. **Case I**.  Suppose that $T_{1}$ is invertible.  Then $T-\lambda $ is Drazin invertible.  Thus, if $F$ is a finite rank operator and $TF=FT$, then $T+F-\lambda $ is Drazin invertible by [@Berkani3 Theorem 2.7].  Therefore $\lambda \notin \sigma _{BB}(T)$. **Case II**.  Suppose that $T_{1}$ is not invertible.  Then $0$ is an isolated point of $\sigma (T_{1})$.  But $T_{1}$ is a Fredholm operator, hence it follows from the punctured neighborhood theorem that $T_{1}$ is Browder.  Therefore there exists a finite rank operator $S_{1}$ such that $T_{1}+S_{1}$ is invertible and $T_{1}S_{1}=S_{1}T_{1}$.  Put $F:=S_{1}\oplus 0$.  Then $F$ is a finite rank operator, $TF=FT$ and $$T-\lambda +F=T_{1}\oplus T_{2}+S_{1}\oplus 0=(T_{1}+S_{1})\oplus T_{2}$$is Drazin invertible.  Hence $\lambda \notin \sigma _{BB}(T)$. In general, the spectral mapping theorem does not hold for the $B$-Weyl spectrum, as shown by the following example. 
\[ex28\] Let $U\in \mathcal{B}(\ell _{2})$ be the unilateral shift and consider the operator $$T:=U\oplus (U^{\ast }+2).$$Let $p(z):=z(z-2)$.  Since $U$ is Fredholm with $i(U)=-1$ and since $U-2$ and $U^{\ast }+2$ are both invertible, it follows that $T$ and $T-2$ are Fredholm with indices $-1$ and $1$, respectively.  Therefore $T$ and $T-2$ are both $B$-Fredholm but $T$ is not $B$-Weyl.  On the other hand, it follows from the index product theorem that $$i(p(T))=i(T(T-2))=i(T)+i(T-2)=0,$$hence $p(T)$ is Weyl.  Thus $0\notin \sigma _{BW}(p(T))$, whereas $0=p(0)\in p(\sigma _{BW}(T))$. By contrast, the spectral mapping theorem does hold for the Browder spectrum and analytic functions.  The following theorem shows that a similar result holds for the $B$-Browder spectrum. \[thm29\]Let $T\in \mathcal{B(X)}$ and let $f\in H(\sigma (T))$.  Then $$\sigma _{BB}(f(T))=f(\sigma _{BB}(T)).$$ Suppose that $\mu \notin f(\sigma _{BB}(T))$ and set $$h(\lambda ):=f(\lambda )-\mu .$$Then $h$ has no zeros in $\sigma _{BB}(T)$.  Since $\sigma _{BB}(T)=\sigma _{BF}(T)\cup \operatorname*{acc}\;\sigma (T)$ by Theorem \[thm27\], we conclude that $h$ has finitely many zeros in $\sigma (T)$.  Now we consider two cases. **Case I**.  Suppose that $h$ has no zeros in $\sigma (T)$.  Then $h(T)=f(T)-\mu $ is invertible, and so $\mu \notin \sigma _{BB}(f(T))$. **Case II**.  Suppose that $h$ has at least one zero in $\sigma (T)$.  Then $$h(\lambda )\equiv c_{0}(\lambda -\lambda _{1})(\lambda -\lambda _{2})\cdots (\lambda -\lambda _{n})g(\lambda ),$$where $c_{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(\lambda )$ is a nonvanishing analytic function on an open neighborhood of $\sigma (T)$.  Therefore $$h(T)=c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T),$$where $g(T)$ is invertible.  Since $\mu \notin f(\sigma _{BB}(T))$, $\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\notin \sigma _{BB}(T)$. 
Therefore $T-\lambda _{i}$ is $B$-Browder, and hence each $T-\lambda _{i}$ is $B$-Weyl ($i=1,2,\dots ,n$).  But each $\lambda _{i}$ is an isolated point of $\sigma (T)$, hence it follows from [@Berkani3 Theorem 2.3] that each $\lambda _{i}$ is a pole of the resolvent of $T$.  Therefore $T-\lambda _{i}$ has finite ascent and descent ($i=1,2,\dots ,n$), so $(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})$ has finite ascent and descent by [@A.E.Taylor Theorem 7.1].  Since $g(T)$ is invertible, $h(T)$ has finite ascent and descent.  Therefore $h(T)$ is Drazin invertible, and so $0\notin \sigma _{D}(h(T))$.  Hence $\mu \notin \sigma _{BB}(f(T))$.  It follows from Cases I and II that $\sigma _{BB}(f(T))\subseteq f(\sigma _{BB}(T))$. Conversely, suppose that $\lambda \notin \sigma _{BB}(f(T))$.  Then $f(T)-\lambda $ is $B$-Browder.  We again consider two cases. **Case I**.  Suppose that $f(T)-\lambda $ is invertible.  Then $\lambda \notin \sigma (f(T))=f(\sigma (T))$, and hence $\lambda \notin f(\sigma _{BB}(T))$. **Case II**.  Suppose that $\lambda \in \sigma (f(T))\setminus \sigma _{BB}(f(T))$.  Write $$f(T)-\lambda \equiv c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T),$$where $c_{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  Since $f(T)-\lambda $ is $B$-Browder, there is a finite rank operator $F$ such that $F(f(T)-\lambda )=(f(T)-\lambda )F$ and $f(T)-\lambda +F$ is Drazin invertible.  It follows from [@Berkani3 Theorem 2.7] that $f(T)-\lambda $ is Drazin invertible.  Therefore $f(T)-\lambda =c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T)$ has finite ascent and descent, and hence $T-\lambda _{i}$ has finite ascent and descent for every $i=1,2,\dots ,n$ by [@A.E.Taylor Theorem 7.1].  Therefore each $T-\lambda _{i}$ is Drazin invertible, and so $\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\notin \sigma _{BB}(T)$.   We now wish to prove that $\lambda \notin f(\sigma _{BB}(T))$.  
Assume not; then there exists a $\mu \in \sigma _{BB}(T)$ such that $f(\mu )=\lambda $.  Since $g(\mu )\neq 0$, we must have $\mu =\lambda _{i}$ for some $i=1,...,n$, which implies $\lambda _{i}\in \sigma _{BB}(T)$, a contradiction.  Hence $\lambda \notin f(\sigma _{BB}(T))$, and so $f(\sigma _{BB}(T))\subseteq \sigma _{BB}(f(T))$.  This completes the proof. A sufficient condition for the spectral mapping theorem to hold for the $B$-Weyl spectrum and analytic functions can be given in terms of the set $$\mathcal{P(X)}:=\{T\in \mathcal{B(X)}:i(T-\lambda )i(T-\mu )\geq 0\text{ for all }\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(T)\}.$$ \[thm210\]Let $T\in \mathcal{B(X)}$.  Then the following statements are equivalent:(i)  $T\in \mathcal{P(X)}$;(ii)  $f(\sigma _{BW}(T))=\sigma _{BW}(f(T))$  for every $f\in H(\sigma (T))$. \(i) $\Longrightarrow $ (ii):  Suppose that $T\in \mathcal{P(X)}$.  Since $\sigma _{BW}(f(T))\subseteq f(\sigma _{BW}(T))$ by Theorem \[thm26\], it suffices to show that $f(\sigma _{BW}(T))\subseteq \sigma _{BW}(f(T))$. Suppose that $\lambda \notin \sigma _{BW}(f(T))$ and write $$f(T)-\lambda \equiv c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T), \label{eq2101}$$where $c_{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  Since $\lambda \notin \sigma _{BW}(f(T))$, the operator $f(T)-\lambda $ is $B$-Weyl.  Therefore $f(T)-\lambda $ is $B$-Fredholm with index $0$.  Since the operators on the right-hand side of (\[eq2101\]) commute, it follows from [@Berkani1 Corollary 3.3] that $T-\lambda _{i}$ is $B$-Fredholm (all $i=1,2,\dots ,n$).  Since $T\in \mathcal{P(X)}$, $i(T-\lambda )i(T-\mu )\geq 0\ $ for all $\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(T)$.  So we consider two cases. **Case I**.  Suppose that $i(T-\lambda )\leq 0\ $ for all $\lambda \in \mathbb{C}\setminus \sigma _{BF}(T)$.  
Since $f(T)-\lambda $ is $B$-Fredholm with index $0$, we have $$i(f(T)-\lambda )=\sum_{j=1}^{n}i(T-\lambda _{j})+i(g(T))=0, \label{eq2102}$$and since every summand $i(T-\lambda _{j})$ is nonpositive and $i(g(T))=0$, each $i(T-\lambda _{j})$ must vanish.  Hence $T-\lambda _{i}$ is $B$-Weyl (all $i=1,2,\dots ,n$). Therefore $\lambda \notin f(\sigma _{BW}(T))$, and hence $f(\sigma _{BW}(T))\subseteq \sigma _{BW}(f(T))$. **Case II**.  Suppose that $i(T-\lambda )\geq 0$ for all $\lambda \in \mathbb{C}\setminus \sigma _{BF}(T)$.  Since each $T-\lambda _{i}$ is $B$-Fredholm, $i(T-\lambda _{i})\geq 0$ (all $i=1,2,\dots ,n$).  Since $f(T)-\lambda $ is $B$-Fredholm with index $0$, the same argument applied to (\[eq2102\]) shows that $T-\lambda _{i}$ is $B$-Weyl ($i=1,2,\dots ,n$).  Therefore $\lambda \notin f(\sigma _{BW}(T))$, and hence $f(\sigma _{BW}(T))\subseteq \sigma _{BW}(f(T))$. From Cases I and II, it follows that $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$. \(ii) $\Longrightarrow $ (i):  Suppose that $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$ for every $f\in H(\sigma (T))$.  Assume to the contrary that $T\notin \mathcal{P(X)}$.  Then there exist $\lambda _{1},\lambda _{2}\in \mathbb{C}\setminus \sigma _{BF}(T)$ such that $i(T-\lambda _{1})<0$ and $i(T-\lambda _{2})>0$.  Let $m:=-i(T-\lambda _{1})$ and $n:=i(T-\lambda _{2})$.  Define $f(z):=(z-\lambda _{1})^{n}(z-\lambda _{2})^{m}$.  Then $f(T)=(T-\lambda _{1})^{n}(T-\lambda _{2})^{m}$ is $B$-Fredholm and $$i(f(T))=i((T-\lambda _{1})^{n}(T-\lambda _{2})^{m})=ni(T-\lambda _{1})+mi(T-\lambda _{2})=0.$$Therefore $f(T)$ is $B$-Weyl, and hence $0\notin \sigma _{BW}(f(T))$.  On the other hand, $$0=f(\lambda _{2})\in f(\sigma _{BW}(T))=\sigma _{BW}(f(T)),$$a contradiction.  Hence $T\in \mathcal{P(X)}$. In Theorem \[thm29\], we proved that the spectral mapping theorem holds for the $B$-Browder spectrum and analytic functions.  This might suggest that the validity of the generalized Browder’s theorem for $T$ provides the right framework for analyzing the equality in (\[261\]).  The following result confirms this. \[thm211\]Let $T\in \mathcal{B(X)}$.  
Suppose that $T\in g\mathcal{B}$.  Then the following statements are equivalent:(i)  $T\in \mathcal{P(X)}$;(ii)  $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$ for every $f\in H(\sigma (T))$;(iii)  $f(T)\in g\mathcal{B}$ for every $f\in H(\sigma (T))$. \(i) $\Longleftrightarrow $ (ii):  This is straightforward from Theorem \[thm210\]. \(ii) $\Longleftrightarrow $ (iii):  Suppose that $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$ for every $f\in H(\sigma (T))$.  By Theorem \[thm29\], $$\sigma _{BB}(f(T))=f(\sigma _{BB}(T))=f(\sigma _{BW}(T))=\sigma _{BW}(f(T)),$$whence $f(T)\in g\mathcal{B}$ by Theorem \[thm21\]. Conversely, suppose that $f(T)\in g\mathcal{B}$ for every $f\in H(\sigma (T))$.  It follows from Theorem \[thm21\] that $\sigma _{BW}(f(T))=\sigma _{BB}(f(T))$.  By Theorem \[thm29\], we have$$\sigma _{BW}(f(T))=\sigma _{BB}(f(T))=f(\sigma _{BB}(T))=f(\sigma _{BW}(T)),$$and hence $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$. As a consequence, we obtain the following theorem, which extends a result in [@Curto1]. \[thm212\]Let $S,T\in \mathcal{B(X)}$.  If $T$ has SVEP and $S\prec T$, then $f(S)\in g\mathcal{B}$ for every $f\in H(\sigma (S))$. Suppose that $T$ has SVEP.  Since $S\prec T$, it follows from the proof of [@Curto1 Theorem 3.2] that $S$ has SVEP.  We now show that $S\in g\mathcal{B}$.  Let $\lambda \in \sigma (S)\setminus \sigma _{BW}(S)$; then $S-\lambda $ is $B$-Weyl but not invertible.  Since $S-\lambda $ is $B$-Weyl, it follows from [@Berkani2 Lemma 4.1] that $S-\lambda $ admits the decomposition $S-\lambda =S_{1}\oplus S_{2}$, where $S_{1}$ is Weyl and $S_{2}$ is nilpotent.  Since $S$ has SVEP, $S_{1}$ and $S_{2}$ also have SVEP.  Therefore Browder’s theorem holds for $S_{1}$, and hence $\omega (S_{1})=\sigma _{b}(S_{1})$.  Since $S_{1}$ is Weyl, $S_{1}$ is Browder. Hence $\lambda $ is an isolated point of $\sigma (S)$.  It follows from Theorem \[thm21\] that $S\in g\mathcal{B}$.   Now let $f\in H(\sigma (S))$; we shall show that $\sigma _{BW}(f(S))=f(\sigma _{BW}(S))$.  
To prove this, by Theorem \[thm211\] it suffices to show that $i(S-\lambda )i(S-\mu )\geq 0$ for every $\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(S)$.  Let $\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(S)$.  Then $S-\lambda $ and $S-\mu $ are both $B$-Fredholm, and so it follows from [@Berkani1 Theorem 2.7] that $S-\lambda $ and $S-\mu $ can be decomposed as $S-\lambda =S_{1}\oplus S_{2}$ and $S-\mu =S_{3}\oplus S_{4}$, where $S_{1}$ and $S_{3}$ are both Fredholm, and $S_{2}$ and $S_{4}$ are nilpotent.  Since $S$ has SVEP, $S_{1}$ and $S_{3}$ have SVEP.  By [@Aie Theorem 2.6], $S-\lambda $ and $S-\mu $ have finite ascent, which implies $i(S-\lambda )i(S-\mu )\geq 0$.  Thus $\sigma _{BW}(f(S))=f(\sigma _{BW}(S))$.  It follows from Theorem \[thm211\] that $f(S)\in g\mathcal{B}$. We now recall that the generalized Weyl’s theorem may not hold for quasinilpotent operators, and that it does not necessarily transfer to or from adjoints. \[ex213\]On $\mathcal{X}\equiv \ell _{p}$ let $$T(x_{1},x_{2},x_{3},\cdots ):=(\text{$\frac{1}{2}$}x_{2},\text{$\frac{1}{3}$}x_{3},\text{$\frac{1}{4}$}x_{4},\cdots ).$$Then $$\sigma (T^{\ast })=\sigma _{BW}(T^{\ast })=\{0\}$$and $$\pi _{0}(T^{\ast })=\emptyset .$$Therefore $T^{\ast }\in g\mathcal{W}$.  On the other hand, since $\sigma (T)=\omega (T)=\pi _{00}(T)$, $T\notin \mathcal{W}$.  Hence $T\notin g\mathcal{W}$. By contrast, the generalized Browder’s theorem does transfer to adjoints. Let $T\in \mathcal{B(X)}$.  Then the following statements are equivalent:(i)  $T\in g\mathcal{B}$;(ii)  $T^{\ast }\in g\mathcal{B}$. Recall that $$\sigma (T)=\sigma (T^{\ast })\text{ and }\sigma _{BW}(T)=\sigma _{BW}(T^{\ast }).$$Therefore, $$\operatorname*{acc}\;\sigma (T)\subseteq \sigma _{BW}(T)\Longleftrightarrow \operatorname*{acc}\;\sigma (T^{\ast })\subseteq \sigma _{BW}(T^{\ast }).$$It follows from Theorem \[thm21\] that $T\in g\mathcal{B}$ if and only if $T^{\ast }\in g\mathcal{B}$. 
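Before leaving this section, we note that the spectral data asserted in Example \[ex213\] can be verified directly; the following sketch records the computation (everything follows from the displayed norm identity for the weighted shift):

```latex
% For T(x_1,x_2,\dots)=(\tfrac{1}{2}x_2,\tfrac{1}{3}x_3,\dots) on \ell_p,
% iterating the shift gives
(T^{n}x)_{k}=\frac{x_{k+n}}{(k+1)(k+2)\cdots (k+n)},
\qquad\text{so}\qquad
\|T^{n}\|=\sup_{k\geq 1}\frac{1}{(k+1)\cdots (k+n)}=\frac{1}{(n+1)!}.
% Hence the spectral radius is
r(T)=\lim_{n\to\infty}\|T^{n}\|^{1/n}
    =\lim_{n\to\infty}\bigl((n+1)!\bigr)^{-1/n}=0,
% so T is quasinilpotent and \sigma(T)=\sigma(T^{\ast})=\{0\}.  Moreover
N(T)=\operatorname{span}\{e_{1}\},\qquad N(T^{\ast})=\{0\},
% which yields \pi_0(T)=\pi_{00}(T)=\{0\}, while \pi_0(T^{\ast})=\emptyset.
```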
\[sect3\]Operators Reduced by Their Eigenspaces =============================================== Let $\mathcal{H}$ be an infinite dimensional Hilbert space and suppose that $T\in \mathcal{B(H)}$ is reduced by each of its eigenspaces.  If we let $$\mathfrak{M}:=\bigvee \{N(T-\lambda ):\ \lambda \in \sigma _{p}(T)\},$$it follows that $\mathfrak{M}$ reduces $T$.  Let $T_{1}:=T|\mathfrak{M}$ and $T_{2}:=T|\mathfrak{M}^{\perp }$.  By [@Ber2 Proposition 4.1] we have: - $T_{1}$ is a normal operator with pure point spectrum; - $\sigma _{p}(T_{1})=\sigma _{p}(T)$; - $\sigma (T_{1})=\text{cl}\,\sigma _{p}(T_{1})$ (here cl denotes closure); - $\sigma _{p}(T_{2})=\emptyset $. In [@Ber2 Definition 5.4], Berberian defined $$\tau (T):=\sigma (T_{2})\cup \operatorname*{acc}\;\sigma _{p}(T)\cup \sigma _{pi}(T);$$we shall call $\tau (T)$ the *Berberian spectrum* of $T$.  Berberian proved that $\tau (T)$ is a nonempty compact subset of $\sigma (T)$.  In the following theorem we establish a relation amongst the $B$-Weyl, the $B$-Browder and the Berberian spectra. \[thm215\]Suppose that $T\in \mathcal{B(H)}$ is reduced by each of its eigenspaces.  Then $$\sigma _{BW}(T)=\sigma _{BB}(T)\subseteq \tau (T). \label{eq2151}$$ Let $\mathfrak{M}$ be the closed linear span of the eigenspaces $N(T-\lambda )$ ($\lambda \in \sigma _{p}(T)$) and write $$T_{1}:=T|\mathfrak{M}\text{ and }T_{2}:=T|\mathfrak{M}^{\perp }.$$From the preceding arguments it follows that $T_{1}$ is normal, $\sigma _{p}(T_{1})=\sigma _{p}(T)$ and $\sigma _{p}(T_{2})=\emptyset $.  Toward (\[eq2151\]) we will show that $$\sigma _{BW}(T)\subseteq \tau (T) \label{eq2152}$$and $$\sigma _{BB}(T)\subseteq \sigma _{BW}(T). \label{eq2153}$$To establish (\[eq2152\]) suppose that $\lambda \in \sigma (T)\setminus \tau (T)$.  Then $T_{2}-\lambda $ is invertible and $\lambda \in \pi _{0}(T_{1})$.  Since $\sigma _{pi}(T)\subseteq \tau (T)$, we see that $\lambda \in \pi _{00}(T_{1})$.  
Since $T_{1}$ is normal, it follows from [@Berkani2 Theorem 4.5] that $T_{1}\in g\mathcal{W}$.  Therefore $\lambda \in \sigma (T_{1})\setminus \sigma _{BW}(T_{1})$, and hence $T-\lambda $ is $B$-Weyl.  This proves (\[eq2152\]). Toward (\[eq2153\]) suppose that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $T-\lambda $ is $B$-Weyl but not invertible.  Observe that if $\mathcal{H}_{1}$ is a Hilbert space and an operator $R\in \mathcal{B(H}_{1}\mathcal{)}$ satisfies $\sigma _{BW}(R)=\sigma _{BF}(R)$, then $$\sigma _{BW}(R\oplus S)=\sigma _{BW}(R)\cup \sigma _{BW}(S), \label{eq2154}$$for every Hilbert space $\mathcal{H}_{2}$ and $S\in \mathcal{B(H}_{2}\mathcal{)}$.  Indeed, if $\lambda \notin \sigma _{BW}(R)\cup \sigma _{BW}(S)$, then $R-\lambda $ and $S-\lambda $ are both $B$-Weyl.  Therefore $R-\lambda $ and $S-\lambda $ are $B$-Fredholm with index $0$.  Hence $R-\lambda \oplus S-\lambda $ is $B$-Fredholm; moreover, $$i\begin{pmatrix} R-\lambda & 0 \\ 0 & S-\lambda\end{pmatrix}=i(R-\lambda )+i(S-\lambda )=0.$$Therefore $R\oplus S-\lambda $ is $B$-Weyl, and so $\lambda \notin \sigma _{BW}(R\oplus S)$, which implies $\sigma _{BW}(R\oplus S)\subseteq \sigma _{BW}(R)\cup \sigma _{BW}(S)$.  Conversely, suppose that $\lambda \notin \sigma _{BW}(R\oplus S)$.  Then $R\oplus S-\lambda $ is $B$-Fredholm with index $0$.  Since $i(R\oplus S-\lambda )=i(R-\lambda )+i(S-\lambda )$ and $i(R-\lambda )=0$, we must have $i(S-\lambda )=0$.  Therefore $R-\lambda $ and $S-\lambda $ are both $B$-Weyl.  Hence $\lambda \notin \sigma _{BW}(R)\cup \sigma _{BW}(S)$, which implies $\sigma _{BW}(R)\cup \sigma _{BW}(S)\subseteq \sigma _{BW}(R\oplus S)$.  Since $T_{1}$ is normal, we can now apply (\[eq2154\]) to $T_{1}$ in place of $R$ to show that $T_{1}-\lambda $ and $T_{2}-\lambda $ are both $B$-Weyl.  But since $\sigma _{p}(T_{2})=\emptyset $, we see that $T_{2}-\lambda $ is Weyl and injective.  
Therefore $T_{2}-\lambda $ is invertible, and so $\lambda \in \sigma (T_{1})\setminus \sigma _{BW}(T_{1})$.  Since $T_{1}$ is normal, it follows from [@Berkani2 Theorem 4.5] that $T_{1}\in g\mathcal{W}$, which implies $\lambda \in \pi _{0}(T_{1})$.  Hence $\lambda $ is an isolated point of $\sigma (T_{1})$ and $T_{2}-\lambda $ is invertible.  Now observe that if $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ are Hilbert spaces then the following equality holds with no other restriction on either $R$ or $S$: $$\sigma _{BB}(R\oplus S)=\sigma _{BB}(R)\cup \sigma _{BB}(S), \label{eq2155}$$for every $R\in B(\mathcal{H}_{1})$ and $S\in B(\mathcal{H}_{2})$.  Indeed, if $\lambda \notin \sigma _{BB}(R)\cup \sigma _{BB}(S)$, then $R-\lambda $ and $S-\lambda $ are both $B$-Browder, and hence there are finite rank operators $F_{1}$ and $F_{2}$ such that $RF_{1}=F_{1}R$, $SF_{2}=F_{2}S$, and $R+F_{1}-\lambda $ and $S+F_{2}-\lambda $ are both Drazin invertible.  Set $$F:=\begin{pmatrix} F_{1} & 0 \\ 0 & F_{2}\end{pmatrix}\text{ and }V:=\begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}.$$Then $F$ is a finite rank operator such that $$VF=FV\text{ and }V+F-\lambda \equiv \begin{pmatrix} R+F_{1}-\lambda & 0 \\ 0 & S+F_{2}-\lambda\end{pmatrix}$$is Drazin invertible.  Therefore $\lambda \notin \sigma _{BB}\begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}$, and hence $\sigma _{BB}\begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}\subseteq \sigma _{BB}(R)\cup \sigma _{BB}(S)$.  Conversely, suppose that $\lambda \notin \sigma _{BB}\begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}$.  It follows from Theorem \[thm27\] that $\begin{pmatrix} R-\lambda & 0 \\ 0 & S-\lambda\end{pmatrix}$ is $B$-Fredholm and $\lambda $ is an isolated point of $\sigma \begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}$.  
Since $\sigma \begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}=\sigma (R)\cup \sigma (S)$, it follows that $R-\lambda $ and $S-\lambda $ are both $B$-Fredholm, and $\lambda $ is not an accumulation point of $\sigma (R)$ or of $\sigma (S)$.  It follows from Theorem \[thm27\] that $R-\lambda $ and $S-\lambda $ are both $B$-Browder.  Therefore $\lambda \notin \sigma _{BB}(R)\cup \sigma _{BB}(S)$, and hence $\sigma _{BB}(R)\cup \sigma _{BB}(S)\subseteq \sigma _{BB}\begin{pmatrix} R & 0 \\ 0 & S\end{pmatrix}$.  This proves (\[eq2155\]). By Theorem \[thm27\] and (\[eq2155\]), we have $\lambda \notin \sigma _{BB}(T)$.  This proves (\[eq2153\]) and completes the proof. In [@Oberai2], Oberai showed that if $T\in \mathcal{B(X)}$ is isoloid and if $T\in \mathcal{W}$ then for any polynomial $p$, $p(T)\in \mathcal{W}$ if and only if $\omega (p(T))=p(\omega (T))$.  We now show that a similar result holds for the generalized Weyl’s theorem.  We begin with the following two lemmas, essentially due to Oberai [@Oberai2]; we include proofs for the reader’s convenience. \[lem216\]Let $T\in \mathcal{B(X)}$ and let $f\in H(\sigma (T))$.  Then $$\sigma (f(T))\setminus \pi _{0}(f(T))\subseteq f(\sigma (T)\setminus \pi _{0}(T)).$$ Suppose that $\lambda \in \sigma (f(T))\setminus \pi _{0}(f(T))$.  By the spectral mapping theorem, it follows that $\lambda \in f(\sigma (T))\setminus \pi _{0}(f(T))$.  We consider two cases. **Case I**.  Suppose that $\lambda $ is not an isolated point of $f(\sigma (T))$.  Then there exists a sequence $\{\lambda _{n}\}\subseteq f(\sigma (T))$ with $\lambda _{n}\neq \lambda $ such that $\lambda _{n}\rightarrow \lambda $.  Since $\lambda _{n}\in f(\sigma (T))$, $\lambda _{n}=f(\mu _{n})$ for some $\mu _{n}\in \sigma (T)$.  By the compactness of $\sigma (T)$, there is a convergent subsequence $\{\mu _{n_{k}}\}$ such that $\mu _{n_{k}}\rightarrow \mu \in \sigma (T)$.  It follows that $f(\mu _{n_{k}})\rightarrow \lambda $, and therefore $\lambda =f(\mu )$.  
But $\mu \in \sigma (T)\setminus \pi _{0}(T)$, whence $\lambda =f(\mu )\in f(\sigma (T)\setminus \pi _{0}(T))$. **Case II**.  Suppose now that $\lambda $ is an isolated point of $f(\sigma (T))$.  Since $\lambda \notin \pi _{0}(f(T))$ by assumption and $\lambda $ is isolated in $\sigma (f(T))=f(\sigma (T))$, it follows that $\lambda $ cannot be an eigenvalue of $f(T)$.  Let $$f(T)-\lambda =c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T), \label{eq2161}$$where $c_{0},\lambda _{1},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  Since $f(T)-\lambda $ is injective, and the operators on the right-hand side of (\[eq2161\]) commute, none of $\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}$ can be an eigenvalue of $T$.  Therefore $\lambda \in f(\sigma (T)\setminus \pi _{0}(T))$. From Cases I and II we obtain the desired conclusion. \[lem217\]Let $T\in \mathcal{B(X)}$ and assume that $T$ is isoloid. Then for any $f\in H(\sigma (T))$ we have $$\sigma (f(T))\setminus \pi _{0}(f(T))=f(\sigma (T)\setminus \pi _{0}(T)).$$ In view of Lemma \[lem216\] it suffices to prove that $f(\sigma (T)\setminus \pi _{0}(T))\subseteq \sigma (f(T))\setminus \pi _{0}(f(T))$. Suppose that $\lambda \in f(\sigma (T)\setminus \pi _{0}(T))$.  Then by the spectral mapping theorem, we must have $\lambda \in \sigma (f(T))$.  Assume that $\lambda \in \pi _{0}(f(T))$.  Then clearly, $\lambda $ is an isolated point of $\sigma (f(T))$.  Let $$f(T)-\lambda =c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T),$$where $c_{0},\lambda _{1},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  If for some $i=1,...,n$, $\lambda _{i}\in \sigma (T)$, then $\lambda _{i}$ would be an isolated point of $\sigma (T)$.  But $T$ is isoloid, hence $\lambda _{i}$ would also be an eigenvalue of $T$, and therefore $\lambda _{i}\in \pi _{0}(T)$.  On the other hand, $\lambda \in f(\sigma (T)\setminus \pi _{0}(T))$, so $\lambda =f(\mu )$ for some $\mu \in \sigma (T)\setminus \pi _{0}(T)$; since $g(\mu )\neq 0$, we must have $\mu =\lambda _{i}$ for some $i$, contradicting $\lambda _{i}\in \pi _{0}(T)$. 
Therefore $\lambda \notin \pi _{0}(f(T))$, so that $\lambda \in \sigma (f(T))\setminus \pi _{0}(f(T))$. \[thm218\] Suppose that $T\in \mathcal{B(X)}$ is isoloid and $T\in g\mathcal{W}$.  Then for any $f\in H(\sigma (T))$, $$f(T)\in g\mathcal{W}\Longleftrightarrow f(\sigma _{BW}(T))=\sigma _{BW}(f(T)).$$ $(\Longrightarrow )$ Suppose $f(T)\in g\mathcal{W}$.  Then $\sigma _{BW}(f(T))=\sigma (f(T))\setminus \pi _{0}(f(T))$.  Since $T$ is isoloid, it follows from Lemma \[lem217\] that $f(\sigma (T)\setminus \pi _{0}(T))=\sigma (f(T))\setminus \pi _{0}(f(T))$.  But $T\in g\mathcal{W}$, hence $\sigma _{BW}(T)=\sigma (T)\setminus \pi _{0}(T)$, which implies $f(\sigma _{BW}(T))=f(\sigma (T)\setminus \pi _{0}(T))$.  Therefore $$\begin{aligned} f(\sigma _{BW}(T)) &=&f(\sigma (T)\setminus \pi _{0}(T)) \\ &=&\sigma (f(T))\setminus \pi _{0}(f(T))=\sigma _{BW}(f(T)).\end{aligned}$$ $(\Longleftarrow )$ Suppose that $f(\sigma _{BW}(T))=\sigma _{BW}(f(T))$. Since $T$ is isoloid, it follows from Lemma \[lem217\] that $f(\sigma (T)\setminus \pi _{0}(T))=\sigma (f(T))\setminus \pi _{0}(f(T))$.  Since $T\in g\mathcal{W}$, we have $\sigma _{BW}(T)=\sigma (T)\setminus \pi _{0}(T)$.  Therefore $$\begin{aligned} \sigma _{BW}(f(T)) &=&f(\sigma _{BW}(T)) \\ &=&f(\sigma (T)\setminus \pi _{0}(T))=\sigma (f(T))\setminus \pi _{0}(f(T)),\end{aligned}$$and hence $f(T)\in g\mathcal{W}$. As applications of Theorems \[thm215\] and \[thm218\] we will obtain below several corollaries. \[cor219\] Suppose that $T\in \mathcal{B}(\mathcal{H})$ is reduced by each of its eigenspaces.  Then $f(T)\in g\mathcal{B}$ for every $f\in H(\sigma (T))$.  In particular, $T\in g\mathcal{B}$. By Theorem \[thm215\] we have $\sigma _{BW}(T)=\sigma _{BB}(T)$, so that $T\in g\mathcal{B}$ by Theorem \[thm21\].  On the other hand, since $T$ is reduced by each of its eigenspaces, $i(T-\lambda )i(T-\mu )\geq 0\ $for all $\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(T)$.  
It follows that $T\in \mathcal{P(X)}$, so Theorem \[thm210\] implies that $$\sigma _{BW}(f(T))=f(\sigma _{BW}(T))=f(\sigma _{BB}(T))=\sigma _{BB}(f(T)).$$Hence $f(T)\in g\mathcal{B}$. In Example \[ex213\] we already noticed that the generalized Weyl’s theorem does not transfer to or from adjoints.  However, we have: \[cor220\] Suppose that $T\in \mathcal{B}(\mathcal{H})$ is reduced by each of its eigenspaces, and assume that $\sigma (T)$ has no isolated points.  Then $T,T^{\ast }\in g\mathcal{W}$.  Moreover, if $f\in H(\sigma (T))$ then $f(T)\in g\mathcal{W}$. We first show that $T\in g\mathcal{W}$.  Since $T$ is reduced by each of its eigenspaces, it follows from Theorem \[thm215\] that $T\in g\mathcal{B}$.  By Theorem \[thm21\], $\sigma (T)\setminus \sigma _{BW}(T)\subseteq \pi _{0}(T)$.  But $\operatorname*{iso}\;\sigma (T)=\emptyset $, hence $\pi _{0}(T)=\emptyset $, which implies $\sigma _{BW}(T)=\sigma (T)$. Therefore, $T\in g\mathcal{W}$.  On the other hand, observe that $$\sigma (T^{\ast })=\overline{\sigma (T)},\ \sigma _{BW}(T^{\ast })=\overline{\sigma _{BW}(T)},$$and $$\pi _{0}(T^{\ast })=\overline{\pi _{0}(T)}=\emptyset .$$Hence $T^{\ast }\in g\mathcal{W}$.  Let $f\in H(\sigma (T))$.  Since $T$ is reduced by each of its eigenspaces, $i(T-\lambda )i(T-\mu )\geq 0\ $ for all $\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(T)$.  Therefore $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$ by Theorem \[thm210\].  But $\sigma (T)$ has no isolated points, hence $T$ is isoloid.  It follows from Theorem \[thm218\] that the generalized Weyl’s theorem holds for $f(T)$. For the next result, we recall that an operator $T$ is called *reduction-isoloid* if the restriction of $T$ to every reducing subspace is isoloid; it is well known that hyponormal operators are reduction-isoloid [@Sta]. \[cor221\] Suppose that $T\in \mathcal{B}(\mathcal{H})$ is both reduction-isoloid and reduced by each of its eigenspaces.  Then $f(T)\in g\mathcal{W}$ for every $f\in H(\sigma (T))$.
We first show that $T\in g\mathcal{W}$.  In view of Theorem \[thm215\], it suffices to show that $\pi _{0}(T)\subseteq \sigma (T)\setminus \sigma _{BW}(T)$.  Suppose that $\lambda \in \pi _{0}(T)$.  Then, with the preceding notations, $$\lambda \in \pi _{0}(T_{1})\cap \lbrack \operatorname*{iso}\;\sigma (T_{2})\cup \rho (T_{2})].$$If $\lambda \in \operatorname*{iso}\;\sigma (T_{2})$, then since $T_{2}$ is isoloid we have $\lambda \in \sigma _{p}(T_{2})$.  But $\sigma _{p}(T_{2})=\emptyset $, hence we must have $\lambda \in \pi _{0}(T_{1})\cap \rho (T_{2})$.  Since $T_{1}$ is normal, $T_{1}\in g\mathcal{W}$.  Hence $T_{1}-\lambda $ is $B$-Weyl and so is $T-\lambda $, which implies $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Therefore $\pi _{0}(T)\subseteq \sigma (T)\setminus \sigma _{BW}(T)$, and hence $T\in g\mathcal{W}$.  Now, let $f\in H(\sigma (T))$.  Since $T$ is reduced by each of its eigenspaces, $i(T-\lambda )i(T-\mu )\geq 0\ $for all $\lambda ,\mu \in \mathbb{C}\setminus \sigma _{BF}(T)$.  It follows from Theorem \[thm210\] that $f(\sigma _{BW}(T))=\sigma _{BW}(f(T))$.  Therefore $f(T)\in g\mathcal{W}$ by Theorem \[thm218\]. Applications ============ In [@Berkani2] and [@Berkani3], the authors showed that the generalized Weyl’s theorem holds for normal operators.  In this section we extend this result to algebraically $M$-hyponormal operators and to algebraically paranormal operators, using the results in Sections \[sect2\] and \[sect3\].  We begin with the following definition. \[defMh\]An operator $T\in \mathcal{B}(\mathcal{H})$ is said to be *$M$-hyponormal* if there exists a positive real number $M$ such that $$M||(T-\lambda )x||\geq ||(T-\lambda )^{\ast }x||\quad \text{for all}\ x\in \mathcal{H}\text{, }\lambda \in \mathbb{C}.$$We say that $T\in \mathcal{B}(\mathcal{H})$ is *algebraically $M$-hyponormal* if there exists a nonconstant complex polynomial $p$ such that $p(T)$ is $M$-hyponormal. 
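Definition \[defMh\] can be illustrated concretely in two dimensions (a numerical sketch added here for illustration, not part of the argument; all helper names are ours): a normal matrix satisfies the defining inequality with $M=1$, while the nilpotent Jordan block fails it for every $M$, consistent with the fact that a quasinilpotent $M$-hyponormal operator must be the zero operator.

```python
# Illustrative check of the M-hyponormality inequality
# M||(T - l)x|| >= ||(T - l)* x|| for 2x2 complex matrices.
# (All names here are ours; this is not code from the paper.)

def matvec(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def adjoint(A):
    # conjugate transpose
    return [[A[0][0].conjugate(), A[1][0].conjugate()],
            [A[0][1].conjugate(), A[1][1].conjugate()]]

def norm(x):
    return (abs(x[0])**2 + abs(x[1])**2) ** 0.5

def shifted(A, l):
    return [[A[0][0] - l, A[0][1]],
            [A[1][0], A[1][1] - l]]

# A normal (here diagonal) matrix satisfies the inequality with M = 1:
D = [[1+0j, 0j], [0j, 2+0j]]
x = [0.6+0.8j, -1.0+0.5j]
l = 0.3+0.7j
lhs = norm(matvec(shifted(D, l), x))
rhs = norm(matvec(adjoint(shifted(D, l)), x))
assert abs(lhs - rhs) < 1e-12          # equality, so M = 1 suffices

# The nilpotent Jordan block is not M-hyponormal for any M:
# at l = 0 and x = e_1 the left side is 0 but the right side is 1.
J = [[0j, 1+0j], [0j, 0j]]
e1 = [1+0j, 0j]
assert norm(matvec(J, e1)) == 0.0
assert norm(matvec(adjoint(J), e1)) == 1.0
```

No constant $M$ can repair the Jordan block at the vector $e_{1}$, which is exactly what forces quasinilpotent $M$-hyponormal operators to vanish.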
The following implications hold: $$\text{hyponormal}\Longrightarrow \text{$M$-hyponormal}\Longrightarrow \text{algebraically $M$-hyponormal}.$$The following result follows from Definition \[defMh\] and some well known facts about $M$-hyponormal operators.

\(i) If $T$ is algebraically $M$-hyponormal then so is $T-\lambda $ for every $\lambda \in \mathbb{C}$.

\(ii) If $T$ is algebraically $M$-hyponormal and $\mathcal{M}\subseteq \mathcal{H}$ is invariant under $T$, then $T|\mathcal{M}$ is algebraically $M$-hyponormal.

\(iii) If $T$ is $M$-hyponormal, then $N(T-\lambda )\subseteq N((T-\lambda )^{\ast })$ for every $\lambda \in \mathbb{C}$.

\(iv) Every quasinilpotent $M$-hyponormal operator is the zero operator.

In [@Arora], Arora and Kumar proved that Weyl’s theorem holds for every $M$-hyponormal operator.  We shall show that the generalized Weyl’s theorem holds for algebraically $M$-hyponormal operators.  To do this, we need several preliminary results. \[lem32\]Let $T\in \mathcal{B}(\mathcal{H})$ be $M$-hyponormal, let $\lambda \in \mathbb{C}$, and assume that $\sigma (T)=\{\lambda \}$.  Then $T=\lambda $. Since $T$ is $M$-hyponormal, $T-\lambda $ is also $M$-hyponormal.  Since $T-\lambda $ is quasinilpotent, (iv) above implies that $T-\lambda =0$. \[lem33\] Let $T\in \mathcal{B}(\mathcal{H})$ be a quasinilpotent algebraically $M$-hyponormal operator.  Then $T$ is nilpotent. Let $p$ be a nonconstant polynomial such that $p(T)$ is $M$-hyponormal. Since $\sigma (p(T))=p(\sigma (T))$, the operator $p(T)-p(0)$ is quasinilpotent.  It follows from Lemma \[lem32\] that $c\ T^{m}(T-\lambda _{1})\cdots (T-\lambda _{n})\equiv p(T)-p(0)=0$.  Since $T-\lambda _{i}$ is invertible for every $\lambda _{i}\neq 0$, we must have $T^{m}=0$. It is well known that every $M$-hyponormal operator is isoloid.  We can extend this result to the algebraically $M$-hyponormal operators. \[lem34\] Let $T\in \mathcal{B}(\mathcal{H})$ be an algebraically $M$-hyponormal operator.  Then $T$ is isoloid.
Let $\lambda $ be an isolated point of $\sigma (T)$.  Using the spectral projection $P:=\frac{1}{2\pi i}\int_{\partial B}(\mu -T)^{-1}d\mu $, where $B $ is a closed disk of center $\lambda $ which contains no other points of $\sigma (T)$, we can represent $T$ as the direct sum $T=T_{1}\oplus T_{2}$,$\ $where$\ \sigma (T_{1})=\{\lambda \}\ $and$\ \sigma (T_{2})=\sigma (T)\setminus \{\lambda \}$. $\ $Since $T$ is algebraically $M$-hyponormal, $p(T)$ is $M$-hyponormal for some nonconstant polynomial $p$.  Since $\sigma (T_{1})=\{\lambda \}$, $\sigma (p(T_{1}))=p(\sigma (T_{1}))=\{p(\lambda )\}$.  Therefore $p(T_{1})-p(\lambda )$ is quasinilpotent.  Since $p(T_{1})$ is $M$-hyponormal, it follows from Lemma \[lem32\] that $p(T_{1})-p(\lambda )=0$.  Put $q(z):=p(z)-p(\lambda )$.  Then $q(T_{1})=0$, and hence $T_{1}$ is algebraically $M$-hyponormal.  Since $T_{1}-\lambda $ is quasinilpotent and algebraically $M$-hyponormal, it follows from Lemma \[lem33\] that $T_{1}-\lambda $ is nilpotent.  Therefore $\lambda \in \sigma _{p}(T_{1})$, and hence $\lambda \in \sigma _{p}(T)$.  This shows that $T$ is isoloid. \[lem35\] Let $T\in \mathcal{B}(\mathcal{H})$ be an algebraically $M$-hyponormal operator.  Then $T$ has finite ascent.  In particular, every algebraically $M$-hyponormal operator has SVEP. Suppose $p(T)$ is $M$-hyponormal for some nonconstant polynomial $p$. Since $M$-hyponormality is translation-invariant, we may assume $p(0)=0$. If $p(\lambda )\equiv a_{0}\lambda ^{m}$, then $N(T^{m})=N(T^{2m})$ because $M$-hyponormal operators are of ascent 1.  Thus we write $p(\lambda )\equiv a_{0}\,\lambda ^{m}(\lambda -\lambda _{1})\cdots (\lambda -\lambda _{n})$ ($m\neq 0$; $\lambda _{i}\neq 0$ for $1\leq i\leq n$).  We then claim that $$N(T^{m})=N(T^{m+1}). \label{eq351}$$To show (\[eq351\]), let $0\neq x\in N(T^{m+1})$.  
Then we can write $$p(T)x=(-1)^{n}\,a_{0}\,\lambda _{1}\cdots \lambda _{n}\,T^{m}x.$$Thus we have $$\begin{aligned} |a_{0}\lambda _{1}\cdots \lambda _{n}|^{2}||T^{m}x||^{2}& =(p(T)x,\ p(T)x) \\ & \leq ||p(T)^{\ast }p(T)x||\,||x|| \\ & \leq M||p(T)^{2}x||\,||x||\quad \text{(because $p(T)$ is $M$-hyponormal)} \\ & =M||a_{0}^{2}\,(T-\lambda _{1}I)^{2}\cdots (T-\lambda _{n}I)^{2}T^{2m}x||\,||x|| \\ & =0,\end{aligned}$$which implies $x\in N(T^{m})$.  Therefore $N(T^{m+1})\subseteq N(T^{m})$ and the reverse inclusion is always true.  Since every algebraically $M$-hyponormal operator has finite ascent, it follows from [@Lau1 Proposition 1.8] that every algebraically $M$-hyponormal operator has SVEP. \[thm36\] Let $T\in \mathcal{B}(\mathcal{H})$ be an algebraically $M$-hyponormal operator.  Then $f(T)\in g\mathcal{W}$ for every $f\in H(\sigma (T))$. We first show that $T\in g\mathcal{W}$.  Suppose that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $T-\lambda $ is $B$-Weyl but not invertible.  Since $T$ is algebraically $M$-hyponormal, there exists a nonconstant polynomial $p$ such that $p(T)$ is $M$-hyponormal.  Since every algebraically $M$-hyponormal operator has SVEP by Lemma \[lem35\], $T$ has SVEP.  It follows from Theorem \[thm212\] that $T\in g\mathcal{B}$. Therefore $\sigma _{BW}(T)=\sigma _{BB}(T)$.  But $\sigma _{BB}(T)=\sigma _{BW}(T)\cup $ $\operatorname*{acc}\;\sigma (T)$ by Theorem \[thm27\], hence $\lambda $ is an isolated point of $\sigma (T)$.  Since every algebraically $M$-hyponormal operator is isoloid by Lemma \[lem34\], $\lambda \in \pi _{0}(T)$. Conversely, suppose that $\lambda \in \pi _{0}(T)$.  Then $\lambda $ is an isolated eigenvalue of $T$.  
Since $\lambda $ is an isolated point of $\sigma (T)$, using the Riesz idempotent $E:=\frac{1}{2\pi i}\int_{\partial D}(\mu -T)^{-1}d\mu $, where $D$ is a closed disk of center $\lambda $ which contains no other points of $\sigma (T)$, we can represent $T$ as the direct sum $T=T_{1}\oplus T_{2}$,$\ $where$\ \sigma (T_{1})=\{\lambda \}\ $and$\ \sigma (T_{2})=\sigma (T)\setminus \{\lambda \}$. $\ $Since $T$ is algebraically $M$-hyponormal, $p(T)$ is $M$-hyponormal for some nonconstant polynomial $p$.  Since $\sigma (T_{1})=\{\lambda \}$, we have $\sigma (p(T_{1}))=p(\sigma (T_{1}))=\{p(\lambda )\}$.  Therefore $p(T_{1})-p(\lambda )$ is quasinilpotent.  Since $p(T_{1})$ is $M$-hyponormal, it follows from Lemma \[lem32\] that $p(T_{1})-p(\lambda )=0$.  Define $q(z):=p(z)-p(\lambda )$.  Then $q(T_{1})=0$, and hence $T_{1}$ is algebraically $M$-hyponormal.  Since $T_{1}-\lambda $ is quasinilpotent and algebraically $M$-hyponormal, it follows from Lemma \[lem33\] that $T_{1}-\lambda $ is nilpotent.  Since $T-\lambda =(T_{1}-\lambda )\oplus (T_{2}-\lambda )$ is the direct sum of an invertible operator and a nilpotent operator, $T-\lambda $ is $B$-Weyl.  Hence $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Therefore $\sigma (T)\setminus \sigma _{BW}(T)=\pi _{0}(T)$, and hence $T\in g\mathcal{W}$. Now let $f\in H(\sigma (T))$.  We shall show that $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$.  In view of Theorem \[thm26\] it suffices to show that $f(\sigma _{BW}(T))\subseteq \sigma _{BW}(f(T))$. Suppose that $\lambda \notin \sigma _{BW}(f(T))$, and let $$f(T)-\lambda =c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T), \label{eq361}$$where $c_{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  Since $\lambda \notin \sigma _{BW}(f(T))$, $f(T)-\lambda $ is $B$-Weyl.  Therefore $f(T)-\lambda $ is $B$-Fredholm with index 0.
Since the operators on the right-hand side of (\[eq361\]) commute, it follows from [@Berkani1 Corollary 3.3] that $T-\lambda _{i}$ is $B$-Fredholm for every $i=1,2,\dots ,n$.  Since $T$ is algebraically $M$-hyponormal, $T|\mathcal{M}$ is also algebraically $M$-hyponormal, where $\mathcal{M}$ is any closed invariant subspace of $T$.  It follows from Lemma \[lem35\] that $T$ has finite ascent.  Hence $T|\mathcal{M}$ also has finite ascent.  Therefore $i(T-\lambda )\leq 0$ for every $\lambda \in \mathbb{C}\setminus \sigma _{BF}(T)$.  Since $i(T-\lambda )\leq 0$ for every $\lambda \in \mathbb{C}\setminus \sigma _{BF}(T)$ and $$i(f(T)-\lambda )=\sum_{j=1}^{n}i(T-\lambda _{j})+i(g(T))=0,$$each summand $i(T-\lambda _{j})$ is nonpositive while $i(g(T))=0$; hence $i(T-\lambda _{j})=0$ for every $j$, and $T-\lambda _{i}$ is $B$-Weyl for every $i=1,2,\dots ,n$.  Therefore $\lambda \notin f(\sigma _{BW}(T))$, and hence $f(\sigma _{BW}(T))\subseteq \sigma _{BW}(f(T))$.  Since every algebraically $M$-hyponormal operator is isoloid by Lemma \[lem34\], it follows from Lemma \[lem217\] that $\sigma (f(T))\setminus \pi _{0}(f(T))=f(\sigma (T)\setminus \pi _{0}(T))$. Hence, $$\begin{aligned} \sigma (f(T))\setminus \pi _{0}(f(T)) &=&f(\sigma (T)\setminus \pi _{0}(T)) \\ &=&f(\sigma _{BW}(T))=\sigma _{BW}(f(T)),\end{aligned}$$which implies that $f(T)\in g\mathcal{W}$. \[defparanormal\]An operator $T\in \mathcal{B}(\mathcal{H})$ is said to be *paranormal* if $$||Tx||^{2}\leq ||T^{2}x||\quad \text{for all }x\in \mathcal{H}\text{, }||x||=1.$$We say that $T\in \mathcal{B}(\mathcal{H})$ is *algebraically paranormal* if there exists a nonconstant complex polynomial $p$ such that $p(T)$ is paranormal. The following implications hold: $$\begin{aligned} \text{hyponormal} &\Longrightarrow &\text{$p$-hyponormal} \\ &\Longrightarrow &\text{paranormal}\Longrightarrow \text{algebraically paranormal}.\end{aligned}$$The following facts follow from Definition \[defparanormal\] and some well known facts about paranormal operators.
\(i) If $T\in \mathcal{B}(\mathcal{H})$ is algebraically paranormal then so is $T-\lambda $ for every $\lambda \in \mathbb{C}$.

\(ii) If $T\in \mathcal{B}(\mathcal{H})$ is algebraically paranormal and $\mathcal{M}\subseteq \mathcal{H}$ is invariant under $T$, then $T|\mathcal{M}$ is algebraically paranormal.

In [@Curto2] we showed that if $T$ is an algebraically paranormal operator then $f(T)\in \mathcal{W}$ for every $f\in H(\sigma (T))$.  We can now extend this result to the generalized Weyl’s theorem.  To prove this we need several lemmas. \[lem38\]Let $T\in \mathcal{B}(\mathcal{H})$ be $B$-Fredholm.  The following statements are equivalent:

\(i) $T$ does not have SVEP at $0$;

\(ii) $a(T)=\infty $;

\(iii) $0\in \operatorname*{acc}\;\sigma _{p}(T)$.

Suppose that $T$ is $B$-Fredholm.  It follows from [@Berkani1] that $T$ can be decomposed as $$T=T_{1}\oplus T_{2}\,\;\text{(}T_{1}\text{ Fredholm, }T_{2}\text{ nilpotent)}.$$(i)$\Longleftrightarrow $(ii): Suppose that $T$ does not have SVEP at $0$. Since $T_{2}$ is nilpotent, $T_{2}$ has SVEP.  Therefore $T_{1}$ does not have SVEP at $0$.  Since $T_{1}$ is Fredholm, it follows from [@Aie Theorem 2.6] that $a(T)=\infty $. Conversely, suppose that $a(T)=\infty $.  Since $T_{2}$ is nilpotent, $T_{2}$ has finite ascent.  Therefore $a(T_{1})=\infty $.  But $T_{1}$ is Fredholm, hence $T_{1}$ does not have SVEP at $0$ by [@Aie Theorem 2.6]. (i)$\Longleftrightarrow $(iii): Suppose that $T$ does not have SVEP at $0$.  Then $T_{1}$ does not have SVEP at $0$.  Since $T_{1}$ is Fredholm, it follows from [@Aie Theorem 2.6] that $0\in \operatorname*{acc}\;\sigma _{p}(T_{1})$.  Therefore $0\in \operatorname*{acc}\;\sigma _{p}(T)$. Conversely, suppose that $0\in \operatorname*{acc}\;\sigma _{p}(T)$.  Since $T_{2}$ is nilpotent, $0\in \operatorname*{acc}\;\sigma _{p}(T_{1})$.  But $T_{1}$ is Fredholm, hence $T_{1}$ does not have SVEP at $0$ by [@Aie Theorem 2.6]. Therefore $T$ does not have SVEP at $0$. Suppose that $T\in \mathcal{B}(\mathcal{H})$ is $B$-Fredholm with $i(T)>0$.
Then $T$ does not have SVEP at $0$. Suppose that $T$ is $B$-Fredholm with $i(T)>0$.  Then by [@Berkani1], $T$ can be decomposed as $$T=T_{1}\oplus T_{2}\,\;\text{(}T_{1}\text{ Fredholm, }T_{2}\text{ nilpotent)}.$$Moreover, $i(T)=i(T_{1})$.  But $i(T)>0$, hence $i(T_{1})>0$.  Since $T_{1}$ is Fredholm, it follows from [@Fin Corollary 11] that $T_{1}$ does not have SVEP at $0$.  Therefore $T$ does not have SVEP at $0$. Suppose that $T\in \mathcal{B}(\mathcal{H})$ is $B$-Fredholm.  Then $$T^{\ast }\text{ does not have SVEP at }0\ \Longleftrightarrow \ d(T)=\infty .$$Moreover, if $T$ and $T^{\ast }$ have SVEP at $0$ then $T$ is $B$-Fredholm with index $0$. Since $T$ is $B$-Fredholm, $T$ can be decomposed as $$T=T_{1}\oplus T_{2}\,\;\text{(}T_{1}\text{ Fredholm, }T_{2}\text{ nilpotent)}.$$But $T_{1}$ is Fredholm if and only if $T_{1}^{\ast }$ is Fredholm, hence $T$ is $B$-Fredholm if and only if $T^{\ast }$ is $B$-Fredholm.  Since $T_{1}$ is Fredholm, $a(T_{1})=d(T_{1}^{\ast })$.  Also, since $T_{2}$ is nilpotent, $a(T_{2})=d(T_{2})=a(T_{2}^{\ast })=d(T_{2}^{\ast })$.  It follows from [@A.E.Taylor Theorem 6.1] that $$\begin{split} a(T^{\ast })& =a(T_{1}^{\ast }\oplus T_{2}^{\ast }) \\ & =\max \{a(T_{1}^{\ast }),a(T_{2}^{\ast })\} \\ & =\max \{d(T_{1}),d(T_{2})\} \\ & =d(T_{1}\oplus T_{2}) \\ & =d(T). \end{split}$$Therefore by Lemma \[lem38\], $$T^{\ast }\ \text{does not have SVEP at }0\Longleftrightarrow \ a(T^{\ast })=\infty \Longleftrightarrow \ d(T)=\infty .$$Moreover, suppose that $T$ and $T^{\ast }$ have SVEP at $0$.  Then by Lemma \[lem38\], $a(T)=d(T)<\infty $, and hence $T$ is $B$-Fredholm with index $0$. \[lem311\]([@Curto2 Lemmas 2.1, 2.2, 2.3]) Let $T\in \mathcal{B(H)}$ be an algebraically paranormal operator. Then

\(i) If $\sigma (T)=\{\lambda \}$, then $T=\lambda $;

\(ii) If $T$ is quasinilpotent, then it is nilpotent;

\(iii) $T$ is isoloid.

\[algpara\]Let $T\in \mathcal{B(H)}$ be an algebraically paranormal operator.
Then $f(T)\in g\mathcal{W}$ for every $f\in H(\sigma (T))$. We first show that $T\in g\mathcal{W}$.  Suppose that $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$.  Then $T-\lambda $ is $B$-Weyl but not invertible.  Since $T$ is an algebraically paranormal operator, there exists a nonconstant polynomial $p$ such that $p(T)$ is paranormal.  Since every paranormal operator has SVEP, $p(T)$ has SVEP.  Therefore $T$ has SVEP.  It follows from Theorem \[thm212\] that $T\in g\mathcal{B}$. Therefore $\sigma _{BW}(T)=\sigma _{BB}(T)$.  But $\sigma _{BB}(T)=\sigma _{BW}(T)\cup \operatorname*{acc}\;\sigma (T)$ by Theorem \[thm27\], hence $\lambda $ is an isolated point of $\sigma (T)$.  Since every algebraically paranormal operator is isoloid by Lemma \[lem311\], $\lambda \in \pi _{0}(T)$. Conversely, suppose that $\lambda \in \pi _{0}(T)$.  Let $P:=\frac{1}{2\pi i}\int_{\partial D}(\mu -T)^{-1}d\mu $ be the associated Riesz idempotent, where $D$ is an open disk of center $\lambda $ which contains no other points of $\sigma (T)$.  Then we can represent $T$ as the direct sum $T=T_{1}\oplus T_{2}$, where $\sigma (T_{1})=\{\lambda \}$ and $\sigma (T_{2})=\sigma (T)\setminus \{\lambda \}$.  Now we consider two cases: **Case I.**  Suppose that $\lambda =0$.  Then $T_{1}$ is algebraically paranormal and quasinilpotent.  It follows from Lemma \[lem311\] that $T_{1}$ is nilpotent.  Therefore $T$ is the direct sum of a nilpotent operator and an invertible operator, and hence $T$ is $B$-Weyl by [@Berkani2 Lemma 4.1].  Thus, $0\in \sigma (T)\setminus \sigma _{BW}(T)$. **Case II.  **Suppose that $\lambda \neq 0$.  Since $T$ is algebraically paranormal, $p(T)$ is paranormal for some nonconstant polynomial $p$.  Since $\sigma (T_{1})=\{\lambda \}$, we have $\sigma (p(T_{1}))=p(\sigma (T_{1}))=\{p(\lambda )\}$.  Therefore $p(T_{1})-p(\lambda )$ is quasinilpotent.  Since $p(T_{1})$ is paranormal, it follows from Lemma \[lem311\] that $p(T_{1})-p(\lambda )=0$.  Define $q(z):=p(z)-p(\lambda )$.
Then $q(T_{1})=0$, and hence $T_{1}$ is algebraically paranormal.  Since $T_{1}-\lambda $ is quasinilpotent and algebraically paranormal, it follows from Lemma \[lem311\] that $T_{1}-\lambda $ is nilpotent.  Since $T-\lambda =\begin{pmatrix} T_{1}-\lambda & 0 \\ 0 & T_{2}-\lambda\end{pmatrix}$ is the direct sum of a nilpotent operator and an invertible operator, $T-\lambda $ is $B$-Weyl.  Therefore $\lambda \in \sigma (T)\setminus \sigma _{BW}(T)$. Thus $T\in g\mathcal{W}$. Now we claim that $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$ for every $f\in H(\sigma (T))$.  Let $f\in H(\sigma (T))$.  Since $\sigma _{BW}(f(T))\subseteq f(\sigma _{BW}(T))$ with no other restriction on $T$ by Theorem \[thm26\], it suffices to show that $f(\sigma _{BW}(T))\subseteq \sigma _{BW}(f(T))$.  Suppose that $\lambda \notin \sigma _{BW}(f(T))$. Then $f(T)-\lambda $ is $B$-Weyl and $$f(T)-\lambda \equiv c_{0}(T-\lambda _{1})(T-\lambda _{2})\cdots (T-\lambda _{n})g(T), \label{eq3121}$$where $c_{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{n}\in \mathbb{C}$ and $g(T)$ is invertible.  Since the operators on the right-hand side of (\[eq3121\]) commute, every $T-\lambda _{i}$ is $B$-Fredholm by [@Berkani1 Corollary 3.3].  Since $T$ is algebraically paranormal, $T$ has SVEP.  It follows from Lemma \[lem38\] that $i(T-\lambda _{i})\leq 0$ (all $i=1,2,\dots ,n$).  Therefore $\lambda \notin f(\sigma _{BW}(T))$, and hence $\sigma _{BW}(f(T))=f(\sigma _{BW}(T))$.  Since $T$ is algebraically paranormal, it follows from Lemma \[lem311\] that $T$ is isoloid. Therefore by Lemma \[lem217\], $$\sigma (f(T))\setminus \pi _{0}(f(T))=f(\sigma (T)\setminus \pi _{0}(T)).$$Hence $$\sigma (f(T))\setminus \pi _{0}(f(T))=f(\sigma (T)\setminus \pi _{0}(T))=f(\sigma _{BW}(T))=\sigma _{BW}(f(T)),$$which implies that $f(T)\in g\mathcal{W}$. [99]{} P. Aiena and O. Monsalve, Operators which do not have the single valued extension property, *J. Math. Anal. Appl.* **250** (2000), 435–448. S.C. Arora and R.
Kumar, $M$-hyponormal operators, *Yokohama Math. J.* **28** (1980), 41–44. S.K. Berberian, An extension of Weyl’s theorem to a class of not necessarily normal operators, *Michigan Math. J.* **16** (1969), 273–279. S.K. Berberian, The Weyl spectrum of an operator, *Indiana Univ. Math. J.* **20** (1970), 529–544. M. Berkani, On a class of quasi-Fredholm operators, *Integral Equations Operator Theory* **34** (1999), 244–249. M. Berkani, Index of $B$-Fredholm operators and generalization of a Weyl theorem, *Proc. Amer. Math. Soc.* **130** (2002), 1717–1723. M. Berkani, $B$-Weyl spectrum and poles of the resolvent, *J. Math. Anal. Appl.* **272** (2002), 596–603. M. Berkani and J.J. Koliha, Weyl type theorems for bounded linear operators, *Acta Sci. Math. (Szeged)* **69** (2003), 359–376. M. Berkani and M. Sarih, On semi $B$-Fredholm operators, *Glasgow Math. J.* **43** (2001), 457–465. L.A. Coburn, Weyl’s theorem for nonnormal operators, *Michigan Math. J.* **13** (1966), 285–288. R.E. Curto and Y.M. Han, Weyl’s theorem, $a$-Weyl’s theorem, and local spectral theory, *J. London Math. Soc.* (2) **67** (2003), 499–509. R.E. Curto and Y.M. Han, Weyl’s theorem holds for algebraically paranormal operators, *Integral Equations Operator Theory* **47** (2003), 307–314. J.K. Finch, The single valued extension property on a Banach space, *Pacific J. Math.* **58** (1975), 61–69. R.E. Harte, Fredholm, Weyl and Browder theory, *Proc. Royal Irish Acad.* **85A** (1985), 151–176. R.E. Harte, *Invertibility and Singularity for Bounded Linear Operators*, Marcel Dekker, New York, 1988. R.E. Harte and W.Y. Lee, Another note on Weyl’s theorem, *Trans. Amer. Math. Soc.* **349** (1997), 2115–2124. K.B. Laursen, Operators with finite ascent, *Pacific J. Math.* **152** (1992), 323–336. K.B. Laursen and M.M. Neumann, *An Introduction to Local Spectral Theory*, London Mathematical Society Monographs New Series 20, Clarendon Press, Oxford, 2000. K.K.
Oberai, On the Weyl spectrum (II), *Illinois J. Math.* **21** (1977), 84–90. J. Stampfli, Hyponormal operators, *Pacific J. Math.* **12** (1962), 1453–1458. A.E. Taylor, Theorems on ascent, descent, nullity and defect of linear operators, *Math. Ann.* **163** (1966), 18–49. H. Weyl, Über beschränkte quadratische Formen, deren Differenz vollstetig ist, *Rend. Circ. Mat. Palermo* **27** (1909), 373–392. [^1]: The first named author was partially supported by NSF grants DMS-0099357 and DMS-0400741. The second named author was supported by Kyung Hee University Research Fund grant KHU - 20040910
--- abstract: 'In this paper we discuss a natural generalization of the Stern–Brocot tree which comes from the introduction of weighted mediants. We focus our attention on the case $k = 3$, in which $(2a + c)/(2b + d)$ and $(a + 2c)/(b + 2d)$ are the two mediants inserted between $a/b$ and $c/d$. Our main result is a determination of which rational numbers between the starting terms appear in the tree. We extend this result to arbitrary reduction schemes as well.' author: - | Dhroova Aiylam\ MIT\ <[email protected]> - | Tanya Khovanova\ MIT\ <[email protected]> title: 'Stern-Brocot Trees from Weighted Mediants' --- Introduction {#sec:intro} ============ The Stern–Brocot tree is an object of classical interest in number theory. Discovered independently by Moritz Stern [@Stern] in 1858 and Achille Brocot [@Brocot] in 1861, it was originally used as a way to find rational approximations of certain kinds to specific numbers. As a consequence, the Stern-Brocot tree is deeply connected to the theory of continued fractions [@CTN]. It also comes up in a variety of other contexts, including Farey Sequences, Ford Circles, and Hurwitz’ theorem [@CW; @SDS; @Reg; @Gold]. The classical Stern-Brocot tree is generated row by row, as follows: the zeroth row has entries $\frac{0}{1}$ and $\frac{1}{0}$. In each subsequent row, all entries from the previous row are copied and between every pair of neighboring entries $\frac{a}{b}$ and $\frac{c}{d}$ the mediant fraction $\frac{a + c}{b + d}$ is inserted. This process is repeated ad infinitum; the result is the Stern–Brocot tree [@SB]. The classical Stern-Brocot tree is well-understood, but there are several different variants that are natural candidates for study. For instance, one could consider varying the starting terms of the tree and ask which of the properties of the classical Stern-Brocot tree continue to hold, and to what extent. This question was addressed in detail in [@PRIMES].
The main result of that paper is a proof that regardless of the initial terms the Stern-Brocot tree contains every rational number between them. In Section \[sec:defs\], we define precisely what is meant by the Stern-Brocot tree from weighted mediants given a pair of starting terms, an idea originally proposed by Prof. James Propp [@Propp]. We also define the cross-determinant and discuss its role in fraction reduction. In Section \[sec:numbersappear\], for an arbitrary Stern-Brocot tree we characterize which rational numbers between the starting terms appear. We turn this characterization into a simple, explicit description of these fractions. Finally, in Section \[sec:reduction\] we consider how non-uniform reduction of fractions impacts the Stern-Brocot tree and, in the process, introduce the idea of a reduction scheme. We expand our earlier result to deal with arbitrary reduction schemes. Weighted Mediants: Notation and Definitions {#sec:defs} =========================================== For a fixed parameter $k$, we say the weighted mediants of two fractions $a/b$, $c/d$ are $$\frac{(k - 1)a + c}{(k - 1)b + d}, \; \; \frac{(k - 2)a + 2c}{(k - 2)b + 2d}, \; \; \dots, \; \; \frac{a + (k - 1)c}{b + (k - 1)d}$$ whence there are $k - 1$ mediants in all. We stipulate that each of these fractions be reduced to lowest terms. As in the classical Stern-Brocot tree, the tree begins with two starting terms and each row is obtained by inserting mediants between consecutive fractions in the previous row. With this notation, the classical Stern-Brocot tree is the case $k = 2$ with starting terms $0/1$ and $1/0$. The next row of this tree is $0/1$, $1/1$, and $1/0$. The two halves of the tree with respect to the mid-line are equivalent. Indeed if we swap numerators and denominators and reverse the order, the first part of the tree becomes the second part of the tree. For this reason, many researchers study only the first half of the tree. 
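The weighted-mediant rule above is easy to write out as a short routine (an illustrative sketch; the function name and the pair representation are ours, chosen so that the formal fraction $1/0$ of the classical tree is allowed):

```python
from math import gcd

def weighted_mediants(a, b, c, d, k):
    """Return the k - 1 weighted mediants of a/b and c/d, each reduced
    to lowest terms.  Fractions are (numerator, denominator) pairs so
    that the formal fraction 1/0 is representable."""
    meds = []
    for j in range(1, k):
        p, q = (k - j) * a + j * c, (k - j) * b + j * d
        g = gcd(p, q)
        meds.append((p // g, q // g))
    return meds

# k = 2 recovers the classical mediant of the Stern-Brocot tree:
assert weighted_mediants(0, 1, 1, 0, 2) == [(1, 1)]
# k = 3 gives the two mediants (2a + c)/(2b + d) and (a + 2c)/(b + 2d):
assert weighted_mediants(0, 1, 1, 1, 3) == [(1, 3), (2, 3)]
```

For $k = 3$ the routine returns exactly the left and right mediants studied in the rest of the paper.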
In this paper we will restrict our attention to the case $k = 3$. While we treat the problem in fully general terms, one tree of particular interest to us is the one with starting terms $0/1$ and $1/1$. We call this $k = 3$ Stern-Brocot tree the *unit tree*. Here is what the unit tree looks like: $$\frac{0}{1} \; \; \; \frac{1}{1}$$ $$\frac{0}{1} \; \; \; \frac{1}{3} \; \; \; \frac{2}{3} \; \; \; \frac{1}{1}$$ $$\frac{0}{1} \; \; \; \frac{1}{5} \; \; \; \frac{2}{7} \; \; \; \frac{1}{3} \; \; \; \frac{4}{9} \; \; \; \frac{5}{9} \; \; \; \frac{2}{3} \; \; \; \frac{5}{7} \; \; \; \frac{4}{5} \; \; \; \frac{1}{1}$$ It is easy to see that all the denominators in this tree must be odd. Later we will show that any number between 0 and 1 with an odd denominator in lowest terms appears in the tree. Let us now introduce some notation and definitions. If $\frac{p}{q}$ and $\frac{r}{s}$ are rational numbers in lowest terms, their *weighted mediants* are the numbers $\frac{2p + r}{2q + s}$ and $\frac{p + 2r}{q + 2s}$ in lowest terms. We call these the left and right mediants of $\frac{p}{q}$ and $\frac{r}{s}$, respectively. Next let $SB(\frac{a}{b}, \frac{c}{d})$ stand for the ($k = 3$) Stern-Brocot tree with starting terms $\frac{a}{b}, \frac{c}{d}$, and denote the $i$-th row of this tree by $SB_i(\frac{a}{b}, \frac{c}{d})$. Thus $SB_0(\frac{a}{b}, \frac{c}{d}) = \{ \frac{a}{b}, \frac{c}{d}\}$ is the $0$-th row of the tree, and in general $SB_{i + 1}(\frac{a}{b}, \frac{c}{d})$ is obtained by copying all terms from $SB_{i}(\frac{a}{b}, \frac{c}{d})$ and inserting between every pair of consecutive fractions $\frac{p}{q}, \frac{r}{s} \in SB_{i}(\frac{a}{b}, \frac{c}{d})$ their weighted mediants. Thus the $i$-th row of the tree has $3^i+1$ numbers. As a matter of convention, we assume $\frac{a}{b} < \frac{c}{d}$ (of course, the reverse tree would simply be the reflection) and that $b, d \ge 0$.
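The row-by-row construction is easy to simulate (an illustrative sketch with our own naming); it reproduces the rows of the unit tree displayed above, as well as the count of $3^i + 1$ entries in the $i$-th row:

```python
from math import gcd

def next_row(row):
    """One step of the k = 3 construction: between every pair of
    neighbours insert both weighted mediants, reduced to lowest terms."""
    out = [row[0]]
    for (p, q), (r, s) in zip(row, row[1:]):
        for n, m in ((2*p + r, 2*q + s), (p + 2*r, q + 2*s)):
            g = gcd(n, m)
            out.append((n // g, m // g))
        out.append((r, s))
    return out

rows = [[(0, 1), (1, 1)]]          # starting terms of the unit tree
for _ in range(3):
    rows.append(next_row(rows[-1]))

# The computed rows match the ones displayed above:
assert rows[1] == [(0, 1), (1, 3), (2, 3), (1, 1)]
assert rows[2] == [(0, 1), (1, 5), (2, 7), (1, 3), (4, 9),
                   (5, 9), (2, 3), (5, 7), (4, 5), (1, 1)]
# The i-th row has 3^i + 1 entries:
assert all(len(r) == 3**i + 1 for i, r in enumerate(rows))
```

Every denominator produced here is odd, in line with the observation about the unit tree made above.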
At first it might seem odd to permit $b, d = 0$, but recall the starting terms of the classical Stern-Brocot tree are $\frac{0}{1}$ and $\frac{1}{0}$. In the classical case, $\frac{1}{0}$ is interpreted as $+ \infty$ and all the relevant conclusions (and their proofs) are the same. It stands to reason we should allow $b = 0$ or $d = 0$, but not both: if $b = d = 0$, every fraction between them would have denominator $0$, which is a trivial case. We say the *cross-determinant* of two fractions $\frac{p}{q}$ and $\frac{r}{s}$ is $\mathcal{C}(\frac{p}{q}, \frac{r}{s}) = qr - ps$. We will be most interested in the cross-determinant of consecutive numbers in $SB_i(\frac{a}{b}, \frac{c}{d})$; as was the case when $k = 2$ [@PRIMES], the cross-determinant essentially determines how fractions in the Stern-Brocot tree are capable of reducing. In particular, the factor by which a weighted mediant is reduced to lowest terms divides $\mathcal{C}(\frac{p}{q}, \frac{r}{s})$, as we will prove in Lemma \[thm:cdetred\]. We will see that the cross-determinant of the starting terms is also important in determining which fractions will ultimately appear in the tree. \[thm:cdetred\] The factor by which a weighted mediant of two fractions $\frac{p}{q}$ and $\frac{r}{s}$ is reduced divides $\mathcal{C}(\frac{p}{q}, \frac{r}{s})$. Before reduction, the left mediant of $\frac{p}{q}$ and $\frac{r}{s}$ is $\frac{2p + r}{2q + s}$ and the right mediant is $\frac{p + 2r}{q + 2s}$. Suppose the left mediant is reduced by a factor $g$, so that after reduction it has numerator $\frac{2p + r}{g}$ and denominator $\frac{2q + s}{g}$. Then $$\mathcal{C} \left( \frac{p}{q}, \frac{\frac{2p + r}{g}}{\frac{2q + s}{g}} \right) = \frac{qr - ps}{g}.$$ Of course, $\mathcal{C}$ only takes integer values, so $g | qr - ps$. By analogous reasoning, the same is true for the right mediant. The cross-determinant of two fractions is positive if the second fraction is larger than the first.
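Lemma \[thm:cdetred\] can also be spot-checked numerically (a sanity check, not a proof; the helper names are ours): for randomly chosen pairs of reduced fractions, the factor by which either weighted mediant reduces always divides the cross-determinant of the pair.

```python
import random
from math import gcd

def cross_det(p, q, r, s):
    # C(p/q, r/s) = qr - ps
    return q * r - p * s

random.seed(0)
for _ in range(500):
    # a random pair of fractions, each put in lowest terms
    p, q = random.randint(0, 30), random.randint(1, 30)
    r, s = random.randint(0, 30), random.randint(1, 30)
    p, q = p // gcd(p, q), q // gcd(p, q)
    r, s = r // gcd(r, s), s // gcd(r, s)
    C = cross_det(p, q, r, s)
    for n, m in ((2*p + r, 2*q + s), (p + 2*r, q + 2*s)):
        g = gcd(n, m)              # reduction factor of this mediant
        assert C % g == 0          # g divides the cross-determinant
```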
The cross-determinant is zero if and only if two fractions represent the same number. It is important to remember that the cross-determinant depends on the representation of rational numbers, not just on the numbers themselves. The cross-determinant is the smallest when both rational numbers are in their lowest terms. Finally, given a rational number $x/y$ and an interval $I = [\frac{p}{q}, \frac{r}{s}]$, whose endpoints are ratios, we define the *modulus* $m_I(x/y)$ of the number with respect to the interval’s representation as the sum of cross-determinants with its end-points: $$m_I(x/y) = \mathcal{C}\left(\frac{p}{q}, \frac{x}{y}\right) + \mathcal{C}\left(\frac{x}{y}, \frac{r}{s}\right).$$ We usually consider the modulus only of $\frac{x}{y} \in [\frac{p}{q}, \frac{r}{s}]$, so that $m_I(x/y) > 0$. Notice that if $m_I(x/y) = 1$, then $x/y$ must coincide with one of the end points of the interval. As we will see in Section \[sec:numbersappear\], the modulus is critical in the proof of which rationals appear in the Stern-Brocot tree. In Section \[sec:numbersappear\] we classify the numbers which appear in the Stern-Brocot tree. Rational Numbers in the Stern-Brocot tree {#sec:numbersappear} ========================================= Given a Stern-Brocot tree $SB(\frac{a}{b}, \frac{c}{d})$, imagine we want to determine whether some target rational $\frac{x}{y} \in [\frac{a}{b}, \frac{c}{d}]$ appears in the tree. Writing out the first few rows of any such tree makes it fairly clear that not all $x/y$ between the endpoints will appear. Indeed, from the first few rows of $SB(\frac{0}{1}, \frac{1}{1})$ it seems only fractions with odd denominator can ever appear in the tree. In fact, this is a special case of the following lemma: \[thm:modular\] Let $\frac{a}{b}$ and $\frac{c}{d}$ be fractions in lowest terms. 
All rational numbers $\frac{x}{y} \in SB(\frac{a}{b}, \frac{c}{d})$ satisfy either $(x, y) \equiv (a, b) \pmod{2}$ or $(x, y) \equiv (c, d) \pmod{2}$, where the congruence is componentwise. Moreover, if $(a, b) \not\equiv (c, d) \pmod{2}$, then any two consecutive fractions in the tree alternate which of the parity equations they satisfy. We prove by induction on $i$ that this holds for $SB_i(\frac{a}{b}, \frac{c}{d})$. Clearly the result holds when $i = 0$. Now suppose it holds when $i = n$, and consider $SB_{n + 1}(\frac{a}{b}, \frac{c}{d})$. All terms in this row either were also in $SB_{n}(\frac{a}{b}, \frac{c}{d})$, whence they satisfy the claim by the induction hypothesis, or are a mediant of two consecutive terms $\frac{x}{y}, \frac{z}{w} \in SB_{n}(\frac{a}{b}, \frac{c}{d})$. The left mediant of these two fractions is the fraction $\frac{2x + z}{2y + w}$, reduced to lowest terms. Notice that the numerator and denominator are not both even, since this would force $z$ and $w$ to both be even, contradicting that $\frac{z}{w}$ is in lowest terms. Then whatever factor we reduce this fraction by must be odd, so that the parities of the numerator and denominator of the left mediant satisfy $(2x + z, 2y + w) \equiv (z, w) \pmod{2}$. Yet by the induction hypothesis, $(z, w) \equiv (a, b) \pmod{2}$ or $(z, w) \equiv (c, d) \pmod{2}$. Thus the same is true for the left mediant of $\frac{x}{y}, \frac{z}{w}$, and by analogous reasoning, for the right mediant. The claim now follows by induction. \[thm:parity\] If $(a, b) \not\equiv (c, d) \pmod{2}$, then $\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)$ is odd, and for any number $\frac{x}{y}$ in the tree exactly one of the cross-determinants $\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)$ or $\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)$ is odd. For example, each row in the unit tree has fractions between $0/1$ and $1/1$ whose numerators alternate in parity.
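Lemma \[thm:modular\] can be observed directly by generating rows. A small Python check (our illustration) on the unit tree:

```python
from math import gcd

def next_row(row):
    """Insert the two reduced weighted mediants between consecutive entries."""
    out = [row[0]]
    for (p, q), (r, s) in zip(row, row[1:]):
        for num, den in ((2 * p + r, 2 * q + s), (p + 2 * r, q + 2 * s)):
            g = gcd(num, den)
            out.append((num // g, den // g))
        out.append((r, s))
    return out

# On the unit tree SB(0/1, 1/1): every entry is congruent to (0, 1) or
# (1, 1) mod 2 componentwise, and since (0, 1) and (1, 1) lie in different
# parity classes, consecutive entries alternate between the two classes.
row = [(0, 1), (1, 1)]
for _ in range(4):
    row = next_row(row)
parities = [(x % 2, y % 2) for x, y in row]
assert all(p in {(0, 1), (1, 1)} for p in parities)
assert all(u != v for u, v in zip(parities, parities[1:]))
```

In particular every denominator in the generated rows is odd, matching the observation about the unit tree above.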
Later, we will see that all the rational numbers with odd denominators in the range from 0 to 1 appear in the unit tree. This is a good starting point; in many cases, such as the unit tree, the numbers which do not appear in the tree are precisely the ones forbidden by the lemma. Yet consider the tree $SB(\frac{1}{3}, \frac{3}{1})$: $$\frac{1}{3} \; \; \; \frac{3}{1}$$ $$\frac{1}{3} \; \; \; \frac{5}{7} \; \; \; \frac{7}{5} \; \; \; \frac{3}{1}$$ $$\frac{1}{3} \; \; \; \frac{7}{13} \; \; \; \frac{11}{17} \; \; \; \frac{5}{7} \; \; \; \frac{17}{19} \; \; \; \frac{19}{17} \; \; \; \frac{7}{5} \; \; \; \frac{17}{11} \; \; \; \frac{13}{7} \; \; \; \frac{3}{1}$$ This tree has reciprocal symmetry about its midline; in particular, since each mediant operation inserts two (distinct) fractions between consecutive entries, every row has an even number of entries, so no entry lies exactly on the midline, and by the symmetry the fraction $\frac{1}{1}$ can never appear in this tree. Yet it is not ruled out by Lemma \[thm:modular\]. To cover cases such as these, we need a refinement of Lemma \[thm:modular\]. Let $\nu_{p}(n)$ denote the $p$-adic valuation of $n$. \[thm:2adic\] Let $\frac{a}{b}$ and $\frac{c}{d}$ be fractions in lowest terms. For all rational numbers $\frac{x}{y} \in SB(\frac{a}{b}, \frac{c}{d})$, $$\min \left(\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)\right) \right) = \nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right)\ \text{and}$$ $$\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right) < \max \left(\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)\right) \right).$$ Moreover, if $\frac{p}{q}$ is an even-indexed fraction (where we start indexing with 0) in a row of the tree, then $\nu_{2} \left(\mathcal{C}\left(\frac{a}{b}, \frac{p}{q}\right)\right) > \nu_{2} \left(\mathcal{C}\left(\frac{p}{q}, \frac{c}{d}\right)\right)$.
If instead $\frac{p}{q}$ has odd index, then $\nu_{2} \left(\mathcal{C}\left(\frac{a}{b}, \frac{p}{q}\right)\right) < \nu_{2} \left(\mathcal{C}\left(\frac{p}{q}, \frac{c}{d}\right)\right)$. We prove by induction on $i$ that this holds for $SB_i(\frac{a}{b}, \frac{c}{d})$. Clearly the result holds when $i = 0$. Now suppose it holds when $i = n$, and consider $SB_{n + 1}(\frac{a}{b}, \frac{c}{d})$. All terms in this row are either also in $SB_{n}(\frac{a}{b}, \frac{c}{d})$, or else the mediant of two consecutive terms $\frac{p}{q}, \frac{r}{s} \in SB_{n}(\frac{a}{b}, \frac{c}{d})$. In the first case, suppose the fraction occurs at index $I$ in $SB_{n}(\frac{a}{b}, \frac{c}{d})$; then in the next row $SB_{n + 1}(\frac{a}{b}, \frac{c}{d})$ it has index $3I \equiv I \pmod{2}$, whence the claim follows from the induction hypothesis. Now assume we are in the second case. The left mediant of these two fractions is the fraction $\frac{2p + r}{2q + s}$, reduced to lowest terms. As we saw in the proof of Lemma \[thm:modular\], the factor by which we reduce to lowest terms must be odd, and thus does not affect the $2$-adic valuation. Then we compute: $$\mathcal{C}\left(\frac{a}{b}, \frac{2p + r}{2q + s}\right) = 2(bp - aq) + (br - as) = 2\mathcal{C}\left(\frac{a}{b}, \frac{p}{q}\right) + \mathcal{C}\left(\frac{a}{b}, \frac{r}{s}\right)$$ and $$\mathcal{C}\left(\frac{2p + r}{2q + s}, \frac{c}{d}\right) = 2(cq - pd) + (cs - rd) = 2\mathcal{C}\left(\frac{p}{q}, \frac{c}{d}\right) + \mathcal{C}\left(\frac{r}{s}, \frac{c}{d}\right).$$ Suppose $\frac{p}{q}$ has even index in $SB_n(\frac{a}{b}, \frac{c}{d})$. Then $\nu_2 \left(\mathcal{C}(\frac{a}{b}, \frac{p}{q}) \right) > \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right) = \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{r}{s}) \right)$ since $\frac{r}{s}$ has odd index ($\frac{p}{q}, \frac{r}{s}$ are consecutive).
It follows that $\nu_2\left(\mathcal{C}\left(\frac{a}{b}, \frac{2p + r}{2q + s}\right)\right) = \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{r}{s}) \right) = \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right)$. On the other hand, $\nu_2 \left(\mathcal{C}(\frac{p}{q}, \frac{c}{d}) \right) = \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right) < \nu_2\left(\mathcal{C}(\frac{r}{s}, \frac{c}{d}) \right)$, so $\nu_2\left(\mathcal{C}\left(\frac{2p + r}{2q + s}, \frac{c}{d}\right)\right)$ is at least $1 + \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right) > \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right)$. If instead $\frac{p}{q}$ has odd index in $SB_n(\frac{a}{b}, \frac{c}{d})$, then $\nu_2 \left(\mathcal{C}(\frac{a}{b}, \frac{p}{q}) \right) = \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right) < \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{r}{s}) \right)$ since $\frac{r}{s}$ has even index ($\frac{p}{q}, \frac{r}{s}$ are consecutive). It follows that $\nu_2\left(\mathcal{C}\left(\frac{a}{b}, \frac{2p + r}{2q + s}\right)\right)$ is at least $1 + \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{p}{q}) \right) > \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right)$. On the other hand, $\nu_2 \left(\mathcal{C}(\frac{p}{q}, \frac{c}{d}) \right) > \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right) = \nu_2\left(\mathcal{C}(\frac{r}{s}, \frac{c}{d}) \right)$, so $\nu_2\left(\mathcal{C}\left(\frac{2p + r}{2q + s}, \frac{c}{d}\right)\right) = \nu_2\left(\mathcal{C}(\frac{r}{s}, \frac{c}{d})\right) = \nu_2\left(\mathcal{C}(\frac{a}{b}, \frac{c}{d})\right)$. In either case, if $\frac{p}{q}$ has index $I$ in $SB_n(\frac{a}{b}, \frac{c}{d})$ then the left mediant of $\frac{p}{q}, \frac{r}{s}$ has index $3I + 1$ which is of opposite parity, and so the claim holds. The reasoning for the right mediant of $\frac{p}{q}, \frac{r}{s}$ is entirely analogous, and we are done by induction. Now let us characterize those numbers which appear in $SB(\frac{a}{b}, \frac{c}{d})$; we turn this into a precise description afterwards.
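The $2$-adic bookkeeping just established can be verified numerically. The following Python sketch (ours, not part of the paper) checks both conclusions of Lemma \[thm:2adic\] against the third row of $SB(\frac{1}{3}, \frac{3}{1})$ displayed earlier, treating $\nu_2(0)$ as $+\infty$:

```python
def nu2(n):
    """2-adic valuation; nu2(0) is treated as +infinity."""
    if n == 0:
        return float('inf')
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# The third row of SB(1/3, 3/1), as displayed earlier.
row = [(1, 3), (7, 13), (11, 17), (5, 7), (17, 19),
       (19, 17), (7, 5), (17, 11), (13, 7), (3, 1)]
a, b, c, d = 1, 3, 3, 1
v0 = nu2(b * c - a * d)          # nu2(C(1/3, 3/1)) = nu2(8) = 3
for i, (x, y) in enumerate(row):
    vL = nu2(b * x - a * y)      # nu2(C(a/b, x/y))
    vR = nu2(y * c - x * d)      # nu2(C(x/y, c/d))
    assert min(vL, vR) == v0 and max(vL, vR) > v0
    assert (vL > vR) == (i % 2 == 0)   # even index: vL > vR; odd: vL < vR
```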
Before we do so, we need a short technical lemma. \[thm:onethird\] The difference between the left and right mediants of $\frac{p}{q}$ and $\frac{r}{s}$ is at most one third of the difference between $\frac{p}{q}$ and $\frac{r}{s}$. Of course, the possibility that the mediants are reduced is irrelevant; the value of the numbers is unchanged. Thus we compute $$\frac{p + 2r}{q + 2s} - \frac{2p + r}{2q + s} = \frac{3(qr - ps)}{(q + 2s)(2q + s)}.$$ Now $$\frac{r}{s} - \frac{p}{q} = \frac{qr - ps}{qs} = \frac{3(qr - ps)}{3qs}$$ whence it is enough to show that $$9qs \le (q + 2s)(2q + s) = 2q^2 + 2s^2 + 5qs \iff 2(q - s)^2 \ge 0,$$ which is clear, and so we are done. We can now characterize the numbers which appear in a particular tree $SB(\frac{a}{b}, \frac{c}{d})$. As we will see, it is actually more natural to characterize the numbers which do *not* appear in this tree. We have the following characterization: \[thm:numbersappear\] If a number $\frac{x}{y} \in [\frac{a}{b}, \frac{c}{d}]$ does not appear in $SB(\frac{a}{b}, \frac{c}{d})$, then $\frac{x}{y}$ is the mediant of two consecutive terms in some row of the tree. The mediant we speak of here is the ordinary mediant which appears in the case $k = 2$; thus the mediant of $\frac{p}{q}$ and $\frac{r}{s}$ is $\frac{p + r}{q + s}$. Assuming that $\frac{x}{y} \in [\frac{a}{b}, \frac{c}{d}]$ does not appear in the tree, we can find a sequence $\{I_n\}_{n \ge 0}$ with $I_0 = [\frac{a}{b}, \frac{c}{d}]$ and $I_n \supset I_{n+ 1}$ so that $x/y \in I_n$ and $I_n$ is the interval between two consecutive terms of the row $SB_{n}(\frac{a}{b}, \frac{c}{d})$.
Once we have found the endpoints $\frac{a_n}{b_n}$, $\frac{c_n}{d_n}$ of $I_n$, the next row of the tree divides the interval $I_n = [a_n/b_n, c_n/d_n]$ into three sub-intervals, namely $$I_n = \left[\frac{a_n}{b_n}, \frac{2a_n + c_n}{2b_n + d_n}\right] \cup \left[\frac{2a_n + c_n}{2b_n + d_n}, \frac{a_n + 2c_n}{b_n + 2d_n}\right] \cup \left[\frac{a_n + 2c_n}{b_n + 2d_n}, \frac{c_n}{d_n}\right].$$ Now consider $m_{I_n}(\frac{x}{y})$. If $I_{n + 1}$ is the first segment $[a_n/b_n, e/f]$, where $e/f = (2a_n + c_n)/(2b_n + d_n)$ or its reduced form, then $$m_{I_{n + 1}}\left(\frac{x}{y}\right) \leq (xb_n - ya_n) + (2a_n + c_n)y - (2b_n + d_n)x = m_{I_n}\left(\frac{x}{y}\right) - 2(xb_n - ya_n) < m_{I_n}\left(\frac{x}{y}\right),$$ where the inequality is strict because $xb_n - ya_n > 0$: the target $x/y$ does not appear in the tree, so it is not the endpoint $a_n/b_n$. The same is true if $I_{n + 1} = [(a_n + 2c_n)/(b_n + 2d_n), c_n/d_n]$ is the last segment (where the first endpoint might be reduced). Finally, if $I_{n + 1}$ is the middle interval, then $$m_{I_{n + 1}}\left(\frac{x}{y}\right) \leq x(2b_n+d_n) - y(2a_n+c_n) + (a_n+2c_n)y - (b_n+2d_n)x = m_{I_{n}}\left(\frac{x}{y}\right).$$ Equality holds only when there is no reduction. Since the modulus is a positive integer, it can strictly decrease only finitely many times; thus, in all but finitely many steps, $x/y$ falls into the middle interval and the new endpoints are not reduced. Equivalently, there is some interval $I_N = [a_N/b_N, c_N/d_N] \ni x/y$ such that in every following row of the tree $x/y$ lies in the middle one of the three new intervals created and the endpoints are not reduced. These intervals are nested closed intervals with diameter tending to $0$ (by Lemma \[thm:onethird\]), so their intersection is a single point. This point is $(a_N + c_N)/(b_N + d_N)$; indeed, the mediant of two fractions always lies between them, and $\frac{a_N + c_N}{b_N + d_N}$ is the mediant of every $I_n$, $n \ge N$. It follows that $x/y = (a_N + c_N)/(b_N + d_N)$ is the mediant of two consecutive terms in $SB(\frac{a}{b}, \frac{c}{d})$.
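The nested-interval argument suggests a concrete search procedure. The Python sketch below (our illustration; the function name and return labels are ours) follows the intervals $I_n$ toward a target rational and stops when the target is hit as an interval endpoint, or when it equals the ordinary mediant of the current endpoints, the two possibilities arising in the proof:

```python
from math import gcd
from fractions import Fraction

def locate(x, y, a, b, c, d, max_depth=50):
    """Follow the nested intervals of SB(a/b, c/d) toward x/y.
    Returns ('endpoint', depth) when x/y is hit as an interval endpoint
    (so it appears in the tree), or ('mediant', depth) when x/y equals
    the ordinary mediant of the current endpoints."""
    t = Fraction(x, y)
    for depth in range(max_depth):
        if t in (Fraction(a, b), Fraction(c, d)):
            return 'endpoint', depth
        if t == Fraction(a + c, b + d):
            return 'mediant', depth
        # the reduced weighted mediants split [a/b, c/d] into three parts
        (e, f), (u, v) = [(n // gcd(n, m), m // gcd(n, m))
                          for n, m in ((2*a + c, 2*b + d), (a + 2*c, b + 2*d))]
        if t <= Fraction(e, f):
            c, d = e, f
        elif t <= Fraction(u, v):
            a, b, c, d = e, f, u, v
        else:
            a, b = u, v
    return 'undecided', max_depth

print(locate(1, 1, 1, 3, 3, 1))   # ('mediant', 0): 1/1 never appears
print(locate(5, 7, 1, 3, 3, 1))   # ('endpoint', 1): 5/7 is in the tree
```

For instance, it reports that $\frac{1}{1}$ is the ordinary mediant of the starting terms of $SB(\frac{1}{3}, \frac{3}{1})$, consistent with the earlier observation that $\frac{1}{1}$ never appears in that tree.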
We can now turn this characterization into an explicit description of the fractions which appear in $SB(\frac{a}{b}, \frac{c}{d})$ using Lemmas \[thm:modular\] and \[thm:2adic\]. In particular, the criteria of Lemmas \[thm:modular\] and \[thm:2adic\] are both necessary and sufficient. \[thm:containsallrational\] Let $\frac{a}{b}$ and $\frac{c}{d}$ be fractions in lowest terms. The Stern-Brocot tree $SB(\frac{a}{b}, \frac{c}{d})$ contains all rational numbers $\frac{x}{y}$ between $\frac{a}{b}$ and $\frac{c}{d}$ which satisfy the following conditions: - $(x, y) \equiv (a, b) \pmod{2}$ or $(x, y) \equiv (c, d) \pmod{2}$ - $\min \left(\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)\right) \right) = \nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right) < \max \left(\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)\right) \right)$ By Lemma \[thm:modular\] and Lemma \[thm:2adic\], these conditions are necessarily satisfied by $\frac{x}{y} \in SB(\frac{a}{b}, \frac{c}{d})$. On the other hand, by Theorem \[thm:numbersappear\] we know the only rational numbers between $\frac{a}{b}$ and $\frac{c}{d}$ that fail to appear in $SB(\frac{a}{b}, \frac{c}{d})$ are the mediants of consecutive terms in the tree. Now consider the mediant $\frac{p + r}{q + s}$ of two consecutive terms $\frac{p}{q}$ and $\frac{r}{s}$ in $SB(\frac{a}{b}, \frac{c}{d})$. It is easy to see that unless this mediant is reduced by an even factor, it fails to meet the first condition of the theorem. Indeed, by Lemma \[thm:modular\] we have $(p, q) \equiv (a, b)$ or $(p, q) \equiv (c, d)$ modulo $2$, and the same for $(r, s)$. Since the equivalences alternate, $(p + r, q + s) \equiv (a + c, b + d) \pmod{2}$.
The mediant fraction cannot satisfy the first condition of the theorem unless either $p$ and $q$ are both even or $r$ and $s$ are both even. Either way, this contradicts the fact that $\frac{p}{q}$ and $\frac{r}{s}$ are in lowest terms. Note that $$\mathcal{C}\left(\frac{a}{b}, \frac{p + r}{q + s}\right)= \mathcal{C}\left(\frac{a}{b}, \frac{p}{q}\right) + \mathcal{C}\left(\frac{a}{b}, \frac{r}{s}\right).$$ Since $\nu_2(\mathcal{C}(\frac{a}{b}, \frac{p}{q})) \neq \nu_2( \mathcal{C}(\frac{a}{b}, \frac{r}{s}))$ and $\min(\nu_2(\mathcal{C}(\frac{a}{b}, \frac{p}{q})),\nu_2( \mathcal{C}(\frac{a}{b}, \frac{r}{s}))) = \nu_2(\mathcal{C}(\frac{a}{b}, \frac{c}{d}))$, we have $$\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{p + r}{q + s}\right)\right) = \nu_2\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right).$$ Similarly, $$\nu_{2}\left(\mathcal{C}\left(\frac{p + r}{q + s},\frac{c}{d}\right)\right) = \nu_2\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right).$$ Thus, if the mediant is reduced by an even factor, both valuations drop and we see that $$\max \left( \nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{p + r}{q + s}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{p + r}{q + s}, \frac{c}{d}\right)\right)\right) < \nu_2\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right),$$ violating the second condition. The theorem follows. Consider for example the tree $SB(\frac{1}{3},\frac{3}{1})$ we discussed above. The cross-determinant of the initial terms is 8: $\mathcal{C}\left(\frac{1}{3}, \frac{3}{1}\right)=8$. It follows from the theorem that the numbers in the tree are precisely the numbers $\frac{x}{y}$ (in lowest terms) in the range from $\frac{1}{3}$ to $\frac{3}{1}$ with odd numerator and denominator such that 8 divides both $3x-y$ and $3y-x$, and 16 divides exactly one of them. As another example, consider the unit tree. We have already seen that numerators in each row alternate in parity and denominators are always odd.
This means that the mediant of two consecutive numbers in a row has an odd numerator and an even denominator, and by Theorem \[thm:numbersappear\] these are exactly the numbers that do not appear in the tree. It follows that any rational number between 0 and 1 with an odd denominator (in lowest terms) appears in the tree. Reduction {#sec:reduction} ========= So far we have stipulated, according to the definition of the tree, that all fractions should appear in lowest terms. This condition was important; for instance, the second row of $SB(\frac{0}{1}, \frac{1}{1})$ contains the two consecutive entries $1/3$ and $4/9$. Their weighted mediants before reduction are $6/15$ and $9/21$, both of which are reducible by 3; the new entries in the third row are thus $2/5$ and $3/7$. In this section, we consider what happens when we relax this assumption. In the unit $k=2$ Stern-Brocot tree, fractions are always in lowest terms. Indeed, the cross-determinant is uniformly 1 in the unit $k = 2$ tree, and the factor by which fractions are reduced must divide their cross-determinants [@PRIMES]. In the case $k = 3$, on the other hand, reduction is unavoidable. That is, regardless of the choice of starting terms there will be mediants which need to be reduced. To see why, consider the Stern-Brocot tree $SB(\frac{a}{b}, \frac{c}{d})$, and suppose no fractions were reduced in the first two rows. Then the fractions $\frac{2a + c}{2b + d}$ and $\frac{5a + 4c}{5b + 4d}$ appear consecutively in the second row, and their mediants $\frac{9a + 6c}{9b + 6d}$, $\frac{12a + 9c}{12b + 9d}$ are both reducible. Let $R$ stand for a reduction scheme (a rule for how we reduce reducible fractions that appear in certain positions of the tree) and say the Stern-Brocot tree $SB(\frac{a}{b}, \frac{c}{d}, R)$ is generated exactly as the tree $SB(\frac{a}{b}, \frac{c}{d})$, except that reducible fractions are reduced according to $R$.
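A reduction scheme can be modeled simply as a function applied to each new, unreduced mediant. The Python sketch below (ours; `reduce_full` and `reduce_none` are our labels for full and no reduction) also exhibits the $1/3$, $4/9$ example from above:

```python
from math import gcd

def next_row(row, reduce_fn):
    """One generation of the k = 3 tree; reduce_fn is the reduction
    scheme applied to each new (unreduced) mediant."""
    out = [row[0]]
    for (p, q), (r, s) in zip(row, row[1:]):
        for num, den in ((2 * p + r, 2 * q + s), (p + 2 * r, q + 2 * s)):
            out.append(reduce_fn(num, den))
        out.append((r, s))
    return out

def reduce_full(n, d):      # uniform reduction to lowest terms
    g = gcd(n, d)
    return n // g, d // g

def reduce_none(n, d):      # no reduction at all
    return n, d

row_full = row_none = [(0, 1), (1, 1)]
for _ in range(3):
    row_full = next_row(row_full, reduce_full)
    row_none = next_row(row_none, reduce_none)
# the mediants of 1/3 and 4/9 are 6/15 and 9/21 before reduction:
assert (2, 5) in row_full and (3, 7) in row_full
assert (6, 15) in row_none and (9, 21) in row_none
```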
We will use $R_u$ to represent uniform reduction to lowest terms, so that all the above results are with respect to this reduction scheme. We also use $R_0$ to represent no reduction. These are the two most natural reduction schemes. Reduction schemes can in general be quite complex; for instance, we could flip a fair coin to decide whether or not to reduce a particular fraction, and make this choice independently for each fraction. We now generalize Theorem \[thm:containsallrational\] to a general reduction scheme. As we shall see, the proof proceeds almost identically once we critically examine which steps in the proof for $R = R_{u}$ depend, perhaps implicitly, on the reduction scheme, and strengthen the necessary hypotheses. \[thm:withred\] Let $\frac{a}{b}$ and $\frac{c}{d}$ be fractions and $R$ a reduction scheme. If neither $a \equiv b \equiv 0 \pmod{2}$ nor $c \equiv d \equiv 0 \pmod{2}$ (in particular, this is true if the starting terms are in lowest terms), then the Stern-Brocot tree $SB(\frac{a}{b}, \frac{c}{d}, R)$ contains a unique fraction representing each of the rational numbers $\frac{x}{y}$ (in lowest terms) between $\frac{a}{b}$ and $\frac{c}{d}$ which satisfy the following conditions: - $(x, y) \equiv (a, b) \pmod{2}$ or $(x, y) \equiv (c, d) \pmod{2}$ - $\min \left(\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)\right) \right) = \nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{c}{d}\right)\right) < \max \left(\nu_{2}\left(\mathcal{C}\left(\frac{a}{b}, \frac{x}{y}\right)\right), \nu_{2}\left(\mathcal{C}\left(\frac{x}{y}, \frac{c}{d}\right)\right) \right).$ First we see that Lemma \[thm:modular\] is no longer true as stated, since it relies on the fact that no fractions with even numerator and even denominator ever appear. Luckily, there is a simple fix.
If we inspect the proof, we see that fractions with both even numerator and even denominator appear only when such fractions appeared in the previous row. Thus as long as we stipulate that neither $a \equiv b \equiv 0 \pmod{2}$ nor $c \equiv d \equiv 0 \pmod{2}$, where $\frac{a}{b}$ and $\frac{c}{d}$ denote the starting terms as usual, the proof of Lemma \[thm:modular\] proceeds as before. Now that we are guaranteed that fractions cannot reduce by an even factor, the proof of Lemma \[thm:2adic\] extends immediately to arbitrary $R$. Indeed, this is the only fact on which the proof depends. The technical Lemma \[thm:onethird\] does not depend on reduction even implicitly, and so its proof goes unmodified. Next we consider Theorem \[thm:numbersappear\]. Again from the fact that the modulus is non-increasing we can conclude that unless a target rational $x/y$ falls, in all but finitely many steps, into the middle interval with unreduced endpoints, it appears in the tree. Once more, the unique intersection point of these nested closed intervals is the (ordinary) mediant of two consecutive terms in $SB(\frac{a}{b}, \frac{c}{d}, R)$. Finally, Theorem \[thm:containsallrational\] is proved from Lemmas \[thm:modular\] and \[thm:2adic\] (which are both true with the appropriate strengthened hypothesis) and some basic analysis of $2$-adic valuations. By our assumption that fractions in $SB(\frac{a}{b}, \frac{c}{d}, R)$ do not reduce by an even factor, the analysis is unaffected. Thus we have the desired extension of Theorem \[thm:containsallrational\]. We conclude with two remarks. First, notice Theorem \[thm:withred\] applies even when $\frac{a}{b}$, $\frac{c}{d}$ are not in lowest terms, as long as they cannot be reduced by an even factor. On the other hand, if, for instance, $a \equiv b \equiv 0 \pmod{2}$, there is no reasonable way to classify the numbers which appear in the tree for general $R$.
This is because we can carry fractions with even numerator and denominator as far down the tree as we like, and then reduce them to obtain fractions which may be in a new parity class altogether. For a specific example, let us consider the tree $SB(\frac{0}{2}, \frac{1}{1})$. In one reduction scheme, which we will call $R'$, new terms are reduced uniformly to lowest terms starting with the second row. The corresponding tree is: $$\frac{0}{2} \; \; \; \frac{1}{1}$$ $$\frac{0}{2} \; \; \; \frac{1}{5} \; \; \; \frac{2}{4} \; \; \; \frac{1}{1}$$ $$\frac{0}{2} \; \; \; \frac{1}{9} \; \; \; \frac{1}{6} \; \; \; \frac{1}{5} \; \; \; \frac{2}{7} \; \; \; \frac{5}{13} \; \; \; \frac{2}{4} \; \; \; \frac{5}{9} \; \; \; \frac{2}{3} \; \; \; \frac{1}{1}$$ Observe that this tree contains fractions which have even numerator and odd denominator. On the other hand, we can also consider this tree with the familiar reduction scheme $R_u$ for the new terms. In this case, $SB(\frac{0}{2}, \frac{1}{1}, R_u)$ is $$\frac{0}{2} \; \; \; \frac{1}{1}$$ $$\frac{0}{2} \; \; \; \frac{1}{5} \; \; \; \frac{1}{2} \; \; \; \frac{1}{1}$$ $$\frac{0}{2} \; \; \; \frac{1}{9} \; \; \; \frac{1}{6} \; \; \; \frac{1}{5} \; \; \; \frac{1}{4} \; \; \; \frac{1}{3} \; \; \; \frac{1}{2} \; \; \; \frac{3}{5} \; \; \; \frac{3}{4} \; \; \; \frac{1}{1}$$ Since all fractions except $\frac{0}{2}$ have an odd numerator, all mediants will have an odd numerator before reduction (and therefore, after reduction) except potentially the right mediant of $\frac{0}{2}$ and its neighbor. Yet it is easy to see (by induction) that the fraction to the right of $\frac{0}{2}$ in row $k$ of the tree is $\frac{1}{4k + 1}$, whence the right mediant of $\frac{0}{2}$ and its neighbor is $\frac{2}{8k + 4}$, which becomes $\frac{1}{4k + 2}$ when reduced to lowest terms. Thus $SB(\frac{0}{2}, \frac{1}{1})$ contains fractions with even numerator and odd denominator under some reduction schemes, but not others.
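Both trees above can be reproduced mechanically. A Python sketch (ours) generating the displayed second rows under $R'$ and $R_u$ and checking the parity discrepancy:

```python
from math import gcd

def next_row(row, reduce_new):
    """One tree generation; reduce_new decides whether the new
    mediants are reduced to lowest terms."""
    out = [row[0]]
    for (p, q), (r, s) in zip(row, row[1:]):
        for num, den in ((2 * p + r, 2 * q + s), (p + 2 * r, q + 2 * s)):
            if reduce_new:
                g = gcd(num, den)
                num, den = num // g, den // g
            out.append((num, den))
        out.append((r, s))
    return out

start = [(0, 2), (1, 1)]
# R': first row left unreduced, reduction from the second row on.
row_Rp = next_row(next_row(start, False), True)
# R_u: every new mediant reduced.
row_Ru = next_row(next_row(start, True), True)

assert (2, 3) in row_Rp   # even numerator, odd denominator under R'
assert all(x % 2 == 1 for x, y in row_Ru if (x, y) != (0, 2))
```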
So in some sense, Theorem \[thm:withred\] is the strongest possible statement. Acknowledgements ================ We would like to thank James Propp for suggesting the project and discussing it with us. [9]{} D. Aiylam, Modified Stern-Brocot Sequences, <http://arxiv.org/pdf/1301.6807v1.pdf> M. Benito, J. Javier Escribano, An Easy Proof of Hurwitz’s Theorem, *Amer. Math. Monthly*, Vol. 109, No. 10 (2002), pp. 916–918. A. Bogomolny, Stern-Brocot Tree, <http://www.cut-the-knot.org/blue/Stern.shtml> A. Brocot, Calcul des rouages par approximation, nouvelle méthode, *Revue Chronométrique*, Vol. 3 (1861), pp. 186–194. N. Calkin, H. S. Wilf, Recounting the Rationals, *Amer. Math. Monthly*, Vol. 107, No. 4 (2000), pp. 360–363. D. H. Lehmer, On Stern’s Diatomic Series, *Amer. Math. Monthly*, Vol. 36, No. 2 (1929), pp. 59–67. J. Propp, Farey-ish fractions from weighted mediants, (2011), available at: <http://faculty.uml.edu/jpropp/CCCC-Apr2011.pdf> B. Reznick, Regularity properties of the Stern enumeration of the rationals, *J. Integer Seq.*, Vol. 11 (2008), Article 08.4.1. M. A. Stern, Ueber eine zahlentheoretische Funktion, *Journal für die reine und angewandte Mathematik*, Vol. 55 (1858), pp. 193–220. E. W. Weisstein, Stern-Brocot Tree, Wolfram MathWorld, <http://mathworld.wolfram.com/Stern-BrocotTree.html>
--- abstract: 'We study an inverse scattering problem for Maxwell’s equations in terminating waveguides, where localized reflectors are to be imaged using a remote array of sensors. The array probes the waveguide with waves and measures the scattered returns. The mathematical formulation of the inverse scattering problem is based on the electromagnetic Lippmann-Schwinger integral equation and an explicit calculation of the Green tensor. The image formation is carried out with reverse time migration and with $\ell_1$ optimization.' author: - Liliana Borcea - 'Dinh-Liem Nguyen[^1]' bibliography: - 'ip-biblio.bib' title: Imaging with electromagnetic waves in terminating waveguides --- electromagnetic, terminating waveguide, inverse scattering. Introduction ============ We consider an inverse scattering problem for Maxwell’s equations in a waveguide which contains a few unknown reflectors. The setup is illustrated in Figure \[fig:setup\], where an array of sensors probes the waveguide with waves and records the returns over the duration of some time window. The inverse problem is to reconstruct the reflectors from these measurements. To carry out explicit calculations we assume that the waveguide has a simple geometry, with rectangular cross-section $\Omega = (0,L_1)\times(0,L_2)$, and introduce the system of coordinates $\vx = (\bx,x_3)$, with $\bx = (x_1,x_2) \in \Omega$ and $x_3 \le 0$. The waveguide terminates at $x_3 = 0$ and we denote its domain by $$W = (0,L_1)\times(0,L_2)\times(-\infty,0),$$ with boundary $\pa W$. For convenience we model the boundaries as perfectly conducting, but other boundary conditions may be used.
The electric field $\vec{\bm E}(\om,\vx)$, decomposed over frequencies $\om$, satisfies the equation $$\begin{aligned} \curl \curl \vec{\bm E}(\om,\vbx) - \om^2 \mu_o \eps(\om,\vbx) \vec{\bm E}(\om,\vbx) = i \om \mu_o \vbJ(\om, \bm x) \delta(x_3 + L), \label{eq:Maxwell1} \qquad \vx \in W,\end{aligned}$$ with boundary conditions $$\begin{aligned} \label{eq:boundaryCond} \vbn(\vx)\times\vec{\bm E}(\omega,\vbx) = 0 \quad \text{on } \pa W,\end{aligned}$$ where $\curl$ is the curl operator in $\mathbb{R}^3$ and $\vbn(\vx)$ is the unit outer normal at $\pa W$. There is also a radiation condition at $x_3 \to -\infty$, which states that $\vec{\bm E}(\om,\vx)$ is bounded and outgoing. The current source density $\vbJ$ models the excitation from the array located at distance $L$ from the terminating boundary at $x_3 = 0$. The waveguide is filled with a linear and isotropic homogeneous medium with electric permittivity $\eps_o$ and magnetic permeability $\mu_o$, and a few reflectors supported in the compact domain $D \subset W$, located between the array and the terminating boundary. The reflectors are modeled as linear and possibly anisotropic dielectrics with Hermitian, positive definite relative electric permittivity matrix $\eps_r(\om,\vbx)$. The term $\eps \vec{\bm E}$ in (\[eq:Maxwell1\]) is the electric displacement, and $\eps$ is the electric permittivity tensor satisfying $$\eps(\om,\vx) = \eps_o \Big[ 1_{_D}(\vx) \Big(\eps_r(\om,\vx) - I\Big) + I \Big]. \label{eq:3}$$ Here $I$ is the $3\times 3$ identity matrix and $1_{_D}(\vx)$ is the indicator function, equal to one for $\vx \in D$ and zero otherwise. ![Schematic of the imaging setup in a terminating waveguide with rectangular cross-section. The unknown reflector is supported in $D$. The array of sensors is far away from it, at distance $L$ from the terminating boundary.
[]{data-label="fig:setup"}](SCHEMATIC){width="12cm"} The inverse problem is to reconstruct the perturbation $\eps_r-I$ in (\[eq:3\]), or at least its support $D$, from measurements of the electric field $\vec{\bm E}(\om,\vbx)$ at points $\vbx =(\bx,-L)$ in the array aperture $A$, a subset of $\Omega$. Inverse scattering and inverse source problems in waveguides have been considered in the past in various setups relevant to applications in ocean acoustics, non-destructive evaluation and imaging in tunnels. We refer to [@Dediu2006; @Bourg2008; @Bourg2011; @Bourg2012; @Bourg2013; @Monk2012; @Arens2011; @Xu2000; @Roux2000; @tsogka2013selective] for mathematical studies of inverse scattering problems in acoustic and elastic waveguides with straight walls, and filled with homogeneous media. Random acoustic waveguides with finite cross-section are considered in [@Borce2010; @Issa2010], and with unbounded cross-section, as encountered in ocean acoustics, in [@borcea2014paraxial; @sabra2004blind]. Examples of inverse scattering problems in planar electromagnetic waveguides are in [@tamil1991spectral; @Mills1992; @Jorda1996], where the problem is reduced to one for the scalar Helmholtz equation by considering a single type of waves, transverse electric or magnetic. In this paper we give the mathematical formulation of the electromagnetic scattering problem in terminating waveguides and study with numerical simulations two imaging methods. The first is a reverse time migration approach, where the wave field measured at the array is time reversed and propagated to the imaging region using the electromagnetic Green’s tensor in the unperturbed waveguide. The second method uses $\ell_1$ optimization, and is motivated by the assumption that the perturbation of the electric permittivity has small spatial support $D$. The paper is organized as follows: We begin in section \[sect:FP\] with the formulation of the forward problem. 
We define the scattered electric field and show that it satisfies a Lippmann-Schwinger type equation. The solvability of this equation is analyzed using the Fredholm alternative. The data model used for inversion is given in section \[sect:FP4\] and the imaging methods are formulated in section \[sect:imag\]. The imaging results obtained with numerical simulations are in section \[sect:num\]. We end with a summary in section \[sect:sum\]. The scattering problem {#sect:FP} ====================== In this section we formulate the scattering problem. We begin in section \[sect:FP1\] with the expression of the electric field in the unperturbed (homogeneous) waveguide. Then we define in section \[sect:FP2\] the scattered wave field at the unknown reflectors and derive the radiation condition at $x_3 \to -\infty$. We state the scattering problem as a Lippmann-Schwinger integral equation and prove its Fredholm property in section \[sect:FP3\]. The homogeneous waveguide {#sect:FP1} ------------------------- In the absence of any reflector in the waveguide the electric field is denoted by $\vec{\bm E}^o$, and solves the boundary value problem $$\begin{aligned} \nonumber \curl\curl \vec{\bm{E}}^{o}(\vx) - k^2 \vec{\bm{E}}^{o}(\vx) &= i\omega\mu_o\vbJ(\bm x)\delta(x_3+L) \qquad \vx = (\bx,x_3)\in W, \\ \label{eq:forward4} \vbn(\vbx) \times\vec{\bm{E}}^{o}(\vx) &= 0 \qquad \vx \in \pa W, \end{aligned}$$ where $k = \om \sqrt{\eps_o \mu_o}$ is the wavenumber. Obviously, $\vec{\bm E}^o$ and $\vbJ$ depend on the frequency $\om$, but since we consider a constant $\om$ we simplify notation and drop it henceforth from the arguments of all fields. The expression of the electric field in infinite homogeneous waveguides is well known. See for example [@jackson chapter 8]. It is a superposition of a countable set of transverse electric and magnetic waves, called modes, which are either propagating away from the source or are decaying.
In the terminating waveguide we have a similar mode decomposition of $\vec{\bm E}^o$, as stated in Lemma \[lem.1\], but there are both outgoing (forward propagating) and incoming (backward propagating) waves due to the reflection at the terminating boundary at $x_3 = 0$, and the evanescent waves may be growing or decaying away from the source, in the interval $x_3 \in (-L,0)$. The mode decomposition in Lemma \[lem.1\] is obtained by expanding at each $x_3$ the field $\vec{\bm E}^o(\vx)$ in the eigenfunctions $\vec \Phi_n^{(s)}(\bx)$ of the vectorial Laplacian $$\begin{aligned} -\Delta \vec \Phi_n^{(s)}(\bx) &= \lambda_n \vec \Phi_n^{(s)}(\bx) \quad \bx \in \Omega, \nonumber \\ \bm{n}^\perp(\bx)\cdot \Phi_n^{(s)}(\bx) &= \Phi_{n,3}^{(s)}(\bx) = 0\quad \bx \in \pa\Omega, \nonumber \\ \nabla\cdot \Phi_n^{(s)}(\bx) &= 0 \quad \bx \in \pa\Omega. \label{eq:vectLapl}\end{aligned}$$ We refer to appendix \[sect:VEP\] for a proof that $\{\vec\Phi_n^{(s)}(\bm x)\}_{n \in \N^2_0, 1 \le s \le {m}_n}$ is an orthogonal basis of $\big(L^2(\Omega)\big)^3$, and to [@alonso2015electromagnetic section 3] for an explanation of why the basis is useful for the analysis of electromagnetic waves in waveguides with perfectly conducting boundaries. In (\[eq:vectLapl\]) the Laplacian $\Delta$ and divergence $\nabla \cdot$ are with respect to $\bx \in \Omega$, $\bm{n}$ is the outer normal at $\pa \Omega$, and $\bm{n}^\perp$ is its rotation by $90$ degrees, counter-clockwise. The vectors $\vec \Phi_n^{(s)} = (\Phi_n^{(s)},\Phi_{n,3}^{(s)})$ are written in terms of their two-dimensional projection $\Phi_n^{(s)}$ in the cross-section plane and the longitudinal part $\Phi_{n,3}^{(s)}$. The eigenvalues $\lambda_n$ and eigenvectors $\vec \Phi_n^{(s)}$ are indexed by $n \in \N^2_0 = \{(n_1,n_2): n_1^2+n_2^2 \neq 0\}$ and the multiplicity index $s = 1, \ldots, m_n$.
\[lem.1\] The solution of (\[eq:forward4\]) has the following mode decomposition $$\begin{aligned} \label{eq:Eo1} \vec{\bm{E}}^{o}(\vbx) = \sum_{ n\in\N^2_0}\sum_{s=1}^{m_n} \vec\Phi_n^{(s)}(\bm x) \Big(a^{+(s)}_{o,n} e^{i\beta_nx_3} + b^{+(s)}_{o,n}e^{-i\beta_nx_3}\Big), \quad \text{for } x_3 \in (-L,0),\end{aligned}$$ and $$\begin{aligned} \vec{\bm{E}}^{o}(\vbx) = \sum_{ n\in\N^2_0}\sum_{s=1}^{m_n} \vec\Phi_n^{(s)}(\bm x) \, b^{-(s)}_{o,n}e^{-i\beta_nx_3}, \quad \text{for } x_3 < -L, \label{eq:Eo2}\end{aligned}$$ where $a_{o,n}^{+(s)}$ and $b_{o,n}^{\pm(s)}$ are constant mode amplitudes determined by the current excitation $\vbJ(\bx)$, and the superscripts $\pm$ remind us that the field is evaluated in the forward direction (toward the terminating boundary) or away from it. The modes are waves with wavenumber $$\begin{aligned} \beta_n= \begin{cases} \sqrt{k^2 - \lambda_n}, & k^2 \geq \lambda_n, \\ i \sqrt{\lambda_n- k^2}, & k^2 < \lambda_n. \end{cases}\end{aligned}$$ For a finite number of indexes $n \in \N^2_0$ the wavenumbers $\beta_n$ are real valued and the waves are propagating. The remaining infinitely many waves are evanescent. Equations (\[eq:Eo1\])-(\[eq:Eo2\]) are obtained by solving (\[eq:forward4\]) with separation of variables. Since the eigenfunctions of the vectorial Laplacian in (\[eq:vectLapl\]) form an orthogonal basis of $\big(L^2(\Omega)\big)^3$, as shown in Appendix \[sect:VEP\], we can expand $\vec{\bm E}^o$ in this basis for each $x_3 \ne - L$. Equations (\[eq:Eo1\])-(\[eq:Eo2\]) follow by substitution in (\[eq:forward4\]) and straightforward calculation given in appendix \[sect:REF\]. The mode amplitudes are derived from jump conditions at the source coordinate $x_3 = -L$, reflection conditions at the terminating boundary at $x_3=0$, and the radiation condition at $x_3 \to -\infty$. The boundary conditions at $\pa \Omega$ are built into the expansion in the basis $\{\vec \Phi_n^{(s)}\}$.
Note that in the interval $x_3 \in (-L,0)$ between the source and the terminating boundary there are both forward and backward propagating waves and decaying and growing evanescent waves. On the other side of the source, for $x_3 < -L$, the propagating waves are outgoing and the evanescent waves are decaying, as imposed by the radiation condition. The simple geometry of the waveguide, with rectangular cross-section, allows us to write explicitly the mode decomposition in (\[eq:Eo1\])–(\[eq:Eo2\]). The eigenvalues are $$\lambda_n = \left(\frac{\pi n_1}{L_1}\right)^2 + \left(\frac{\pi n_2}{L_2}\right)^2, \qquad n = (n_1,n_2)\in \N_0^2, \label{eq:eigenvals}$$ and by assuming that $(L_1/L_2)^2$ is not a rational number, we ensure that $\lambda_n \ne \lambda_{n'}$ if $n = (n_1,n_2) \ne n' = (n_1',n_2')$. This limits the multiplicity $m_n$ of the eigenvalues to $$m_n = \left\{ \begin{array}{ll} 1 &\mbox{if}~ ~ n_1 n_2 = 0, \\ 3 &\mbox{otherwise}. \end{array} \right. \label{eq:multiplic}$$ For the index pairs satisfying $n_1 n_2 = 0$, the eigenvalues are simple, with eigenvectors $$\vec \Phi_n^{(1)}(\bx) = \delta_{n_20}\left( \begin{matrix} 0 \\ \sin \big( \frac{\pi n_1x_1}{L_1}\big) \\ 0 \end{matrix} \right) + \delta_{n_10} \left( \begin{matrix} \sin \big(\frac{\pi n_2x_2}{L_2}\big) \\ 0 \\ 0 \end{matrix} \right), \label{eq:eigvect1}$$ satisfying the divergence-free condition $\vec{\nabla} \cdot \vec \Phi_n^{(1)}(\vx) = 0$.
Otherwise, there is triple multiplicity of the eigenvalues, and the eigenvectors are given by $$\begin{aligned} \vec\Phi^{(1)}_n(\bx) &= \left( \begin{matrix} \frac{\pi n_2}{L_2} \cos \big(\frac{\pi n_1x_1}{L_1}\big) \sin \big(\frac{\pi n_2x_2}{L_2}\big) \\ -\frac{\pi n_1}{L_1}\sin \big( \frac{\pi n_1x_1}{L_1}\big) \cos \big( \frac{\pi n_2x_2}{L_2}\big) \\ 0 \end{matrix} \right), \label{eq:eigvect2} \\ \vec\Phi^{(2)}_n(\bx) &= \left( \begin{matrix} \frac{\pi n_1}{L_1} \cos \big(\frac{\pi n_1x_1}{L_1}\big) \sin \big(\frac{\pi n_2x_2}{L_2}\big)\\ \frac{\pi n_2}{L_2}\sin \big( \frac{\pi n_1x_1}{L_1}\big) \cos \big( \frac{\pi n_2x_2}{L_2}\big) \\ 0 \end{matrix}\right), \label{eq:eigvect3}\end{aligned}$$ which are vectors in the cross-range plane, satisfying the divergence-free condition $\vec{\nabla} \cdot \vec \Phi_n^{(1)}(\vx) = 0$ and the curl-free condition $\curl \vec \Phi_n^{(2)}(\bx) = 0$, and $$\begin{aligned} \vec\Phi^{(3)}_n(\bx) &= \left( \begin{matrix} 0 \\ 0 \\ \sin \big( \frac{\pi n_1x_1}{L_1}\big) \sin \big( \frac{\pi n_2x_2}{L_2}\big) \end{matrix}\right), \label{eq:eigvect4}\end{aligned}$$ which is in the longitudinal direction.
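The divergence-free and curl-free conditions stated above are easy to sanity-check numerically. The following Python sketch (the mode index $n=(2,3)$ and the side lengths are arbitrary illustrative choices, not values from the paper) verifies both conditions for the cross-range components by central finite differences:

```python
import numpy as np

# Arbitrary illustrative parameters: mode index n = (2, 3), cross-section 1.3 x 0.9
L1, L2, n1, n2 = 1.3, 0.9, 2, 3
a1, a2 = np.pi * n1 / L1, np.pi * n2 / L2

def phi1(x1, x2):
    """Cross-range components of the divergence-free eigenvector."""
    return np.array([ a2 * np.cos(a1 * x1) * np.sin(a2 * x2),
                     -a1 * np.sin(a1 * x1) * np.cos(a2 * x2)])

def phi2(x1, x2):
    """Cross-range components of the curl-free eigenvector."""
    return np.array([a1 * np.cos(a1 * x1) * np.sin(a2 * x2),
                     a2 * np.sin(a1 * x1) * np.cos(a2 * x2)])

h = 1e-5  # finite-difference step
def d1(f, x1, x2): return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
def d2(f, x1, x2): return (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)

rng = np.random.default_rng(1)
for x1, x2 in rng.uniform(0.1, 0.8, size=(5, 2)):
    div_phi1 = d1(phi1, x1, x2)[0] + d2(phi1, x1, x2)[1]   # divergence of phi1
    curl_phi2 = d1(phi2, x1, x2)[1] - d2(phi2, x1, x2)[0]  # in-plane curl of phi2
    assert abs(div_phi1) < 1e-6 and abs(curl_phi2) < 1e-6
print("divergence and curl conditions verified")
```

The two conditions hold identically because the two terms of each expression cancel in closed form; the finite-difference residual only measures the discretization error.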
Equations (\[eq:Eo1\])–(\[eq:Eo2\]) take the explicit form $$\begin{aligned} \label{eq:reference} \vec{\bm{E}}^{o}(\vbx) = \sum_{ n\in\N^2_0}\sum_{s=1}^{m_n} \Big[ & \delta_{s1} \vec\Phi_n^{(1)}(\bm x) (a^{+(1)}_{o,n} e^{i\beta_nx_3} + b^{+(1)}_{o,n}e^{-i\beta_nx_3}) \nonumber \\ &+ \Big( \delta_{s2} \vec\Phi_n^{(2)}(\bm x) - \frac{i \lambda_n}{\beta_n} \delta_{s3} \vec\Phi_n^{(3)}(\bx)\Big) a_{o,n}^{+(2)}e^{i \beta_n x_3} \nonumber \\ &+ \Big( \delta_{s2} \vec\Phi_n^{(2)}(\bm x) + \frac{i \lambda_n}{\beta_n} \delta_{s3} \vec\Phi_n^{(3)}(\bx)\Big) b_{o,n}^{+(2)} e^{-i \beta_n x_3} \Big], \quad \text{for } x_3 > -L,\end{aligned}$$ and $$\begin{aligned} \vec{\bm{E}}^{o}(\vbx) = \sum_{n\in\N^2_0}\sum_{s=1}^{m_n} \Big[& \delta_{s1} \vec\Phi_n^{(1)}(\bm x) b^{-(1)}_{o,n}e^{-i\beta_nx_3} + \nonumber \\ & \Big(\delta_{s2} \vec\Phi_n^{(2)}(\bm x) + \frac{i\lambda_n}{\beta_n} \delta_{s3} \vec\Phi_n^{(3)}(\bm x)\Big) b^{-(2)}_{o,n} e^{-i\beta_nx_3} \Big] , \quad \text{for } x_3 < -L. \label{eq:refLeft}\end{aligned}$$ The field $\vec{\bm E}^o$ is a superposition of transverse electric waves with amplitudes $a_{o,n}^{+(1)}$ and $b_{o,n}^{\pm(1)}$, and transverse magnetic waves with amplitudes $a_{o,n}^{+(2)}$ and $b_{o,n}^{\pm(2)}$. The name transverse electric refers to the fact that the third component of $\vec \Phi_n^{(1)}(\bx)$, corresponding to the longitudinal electric field, equals zero. Similarly, the name transverse magnetic refers to the fact that $$\vec{\bf e}_3 \cdot \curl \Big( \vec\Phi_n^{(2)}(\bm x) \pm \frac{i \lambda_n}{\beta_n} \vec\Phi_n^{(3)}(\bx) \Big) = \vec{\bf e}_3 \cdot \curl \vec\Phi_n^{(2)}(\bm x) = 0,$$ and thus the longitudinal magnetic field is zero by Faraday’s law.
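Only finitely many of these modes propagate. With the rectangular-waveguide eigenvalues (\[eq:eigenvals\]), the cutoff bookkeeping can be sketched in a few lines of Python (the cross-section dimensions below are illustrative stand-ins, much smaller than the values used in the paper's simulations):

```python
import numpy as np

def lambda_n(n1, n2, L1, L2):
    """Eigenvalue (pi n1 / L1)^2 + (pi n2 / L2)^2 of the vectorial Laplacian."""
    return (np.pi * n1 / L1) ** 2 + (np.pi * n2 / L2) ** 2

def beta_n(k, lam):
    """Mode wavenumber: real for propagating modes, imaginary for evanescent ones."""
    return np.sqrt(k**2 - lam) if k**2 >= lam else 1j * np.sqrt(lam - k**2)

def count_propagating(k, L1, L2, n_max=100):
    """Number of index pairs n in N_0^2 with lambda_n < k^2."""
    return sum(1 for n1 in range(n_max) for n2 in range(n_max)
               if (n1, n2) != (0, 0) and lambda_n(n1, n2, L1, L2) < k**2)

# Illustrative cross-section of 0.7 x 0.6 wavelengths: only the (1,0) and (0,1)
# index pairs are above cutoff; all other modes are evanescent.
k = 2 * np.pi  # wavenumber, in units of inverse wavelength
print(count_propagating(k, 0.7, 0.6))
print(beta_n(k, lambda_n(1, 1, 0.7, 0.6)))  # purely imaginary: evanescent
```

Cross-sections that are many wavelengths wide, like the one in section \[sect:num\], support hundreds of propagating modes by the same count.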
The transverse electric mode amplitudes are given by $$\begin{aligned} \label{eq:ampA1} a^{+(1)}_{o,n} = -\frac{\omega\mu_o\lin\vec\Phi^{(1)}_n,\vbJ\rin}{2\beta_n \|\vec \Phi_n^{(1)}\|^2} e^{i\beta_nL}, \qquad b^{+(1)}_{o,n} = \frac{\omega\mu_o\lin\vec\Phi^{(1)}_n,\vbJ\rin}{2\beta_n \|\vec \Phi_n^{(1)}\|^2}e^{i\beta_nL},\end{aligned}$$ for $x_3 \in (-L,0)$ and by $$\label{eq:ampB1} b^{-(1)}_{o,n} = \frac{\omega\mu_o \lin \vec\Phi^{(1)}_n,\vbJ \rin}{2\beta_n\|\vec \Phi_n^{(1)}\|^2} \left[ e^{i\beta_nL} - e^{-i\beta_nL} \right],$$ for $x_3 < -L$. Here $\lin \cdot, \cdot \rin$ denotes the inner product in $\big(L^2(\Omega)\big)^3$ and $\| \cdot \|$ is the induced norm. The transverse magnetic mode amplitudes are $$\begin{aligned} a^{+(2)}_{o,n} &= \left[-\frac{\om \mu_o\beta_n}{2k^2}\frac{\lin \vec\Phi^{(2)}_n,\vbJ\rin}{\|\vec \Phi_n^{(2)}\|^2} - \frac{i \om \mu_o}{2\lambda_n} \frac{\lin\vec\Phi^{(3)}_n,\vbJ\rin}{\|\vec \Phi_n^{(3)}\|^2}\right] e^{i\beta_nL}, \nonumber \\ b^{+(2)}_{o,n} &= \left[\frac{\omega\mu_o\beta_n}{2k^2}\frac{\lin\vec\Phi^{(2)}_n,\vbJ\rin}{ \|\vec\Phi^{(2)}_n\|^2} + \frac{i\omega\mu_o}{2\lambda_n}\frac{\lin\vec\Phi^{(3)}_n,\vbJ\rin}{ \|\vec\Phi^{(3)}_n\|^2} \right] e^{i\beta_nL}, \label{eq:ampAB2}\end{aligned}$$ for $x_3 \in (-L,0)$ and $$b^{-(2)}_{o,n} = \frac{\omega\mu_o\beta_n}{2k^2}\frac{\lin \vec\Phi^{(2)}_n,\vbJ \rin}{\|\vec\Phi^{(2)}_n\|^2} \left[ e^{i\beta_nL} - e^{-i\beta_nL} \right] + \frac{i\omega\mu_o}{2\lambda_n}\frac{\lin\vec\Phi^{(3)}_n,\vbJ\rin}{ \|\vec\Phi^{(3)}_n\|^2} \left[ e^{i\beta_nL} + e^{-i\beta_nL} \right], \label{eq:ampB2}$$ for $x_3 < -L$. The scattered field and radiation condition {#sect:FP2} ------------------------------------------- The scattered field due to the reflectors supported in $D \subset W$ is defined by $$\vec{\bm E}^{sc}(\vx) = \vec{\bm E}(\vx)-\vec{\bm E}^o(\vx), \label{eq:SC1}$$ where $\vec{\bm E}(\vx)$ is the solution of equation (\[eq:Maxwell1\]), with the electric permittivity tensor (\[eq:3\]). 
Explicitly, $\vec{\bm E}^{sc}$ satisfies $$\begin{aligned} \nonumber \curl\curl \vec{\bm{E}}^{sc}(\vx) - k^2\vec{\bm{E}}^{sc}(\vx) &= k^2 V(\vx) \vec{\bm{E}}(\vx) \qquad \vx \in W, \\ \label{eq:Scattered2} \vbn(\vx) \times\vec{\bm{E}}^{sc}(\vx) &= 0 \qquad \vx \in \pa W,\end{aligned}$$ where $$\label{eq:defV} V(\vx) = \frac{\eps(\vx)}{\eps_o} - I = 1_{_D}(\vx) \big(\eps_r(\vx) - I\big)$$ is the scattering potential. The radiation condition, which states that the scattered field is bounded and outgoing away from the reflectors, takes the form $$\begin{aligned} \hspace{-0.1in}\vec{\bm{E}}^{sc}(\vbx) = \sum_{n\in\N^2_0}\sum_{s=1}^{m_n} \Big[& \delta_{s1} \vec\Phi_n^{(1)}(\bm x) b^{-(1)}_{n}e^{-i\beta_nx_3} \nonumber \\&+ \Big(\delta_{s2} \vec\Phi_n^{(2)}(\bm x) + \frac{i\lambda_n}{\beta_n} \delta_{s3} \vec\Phi_n^{(3)}(\bm x)\Big) b^{-(2)}_{n} e^{-i\beta_nx_3}\Big] ,\label{eq:radiationCond}\end{aligned}$$ for locations $\vx = (\bx,x_3) \in W$ satisfying $x_3 < \inf\{x_3: \vbx=(x_1,x_2,x_3) \in D\}$. Note the similarity of (\[eq:radiationCond\]) with (\[eq:refLeft\]), the expression of the reference field $\vec{\bm E}^{(o)}$ on the left of the source. The mode amplitudes $b_n^{-(1)}$ and $b_n^{-(2)}$ contain the information about the reflectors supported in $D$ and their expression follows from the calculations in the next section. Solvability of the forward problem {#sect:FP3} ---------------------------------- Here we study the solvability of the forward scattering problem (\[eq:Scattered2\])–(\[eq:radiationCond\]). We begin with the derivation of the Green’s tensor $\mathbb{G}(\vx,\vby)$ and then restate the scattering problem as an electromagnetic Lippmann-Schwinger equation, for which we can prove the Fredholm property. The discussion assumes that the domain $D$ that supports the reflectors does not touch the boundary, and that the scattering potential $V$ is bounded, entrywise. 
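Before carrying out the vectorial construction, the Lippmann-Schwinger structure can be seen in a one-dimensional scalar toy problem (a sketch under simplifying assumptions, not the waveguide problem itself; the potential and grid below are made up). With the outgoing Green's function of the 1D Helmholtz operator, the total field solves a second-kind integral equation of the form $(I - \mathcal{L}(V\cdot))u = u^o$ after discretization, and for a weak potential a single Born iteration already approximates the full solve:

```python
import numpy as np

k = 2 * np.pi                          # wavenumber (unit wavelength)
x = np.linspace(-0.5, 0.5, 201)        # grid covering the scatterer region
dx = x[1] - x[0]

V = 0.05 * np.exp(-50 * x**2)          # weak, smooth scattering potential (made up)
u0 = np.exp(1j * k * x)                # incident plane wave

# Outgoing 1D Green's function g = -e^{ik|x-y|}/(2ik), satisfying g'' + k^2 g = -delta,
# so the scattered field is u - u0 = k^2 * integral of g V u (a second-kind equation).
g = -np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)

A = k**2 * g * (dx * V)[None, :]              # discretized integral operator L(V .)
u = np.linalg.solve(np.eye(x.size) - A, u0)   # full solve of (I - A) u = u0

u_born = u0 + A @ u0                   # single Born iteration
rel_err = np.max(np.abs(u - u_born)) / np.max(np.abs(u))
print(rel_err)                         # small, because V is weak
```

The full solve and the Born iteration differ at second order in the potential, which is why the linearization adopted later in the data model is accurate for weak reflectors.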
The Green’s tensor $\mathbb{G}(\vx,\vby) \in \mathbb{C}^{3 \times 3}$ satisfies $$\begin{aligned} \nonumber \curl\curl \mathbb{G}(\vx,\vby) - k^2 \mathbb{G}(\vx,\vby) &= -\delta(\vx -\vby) I \qquad \vx \in W, \\ \label{eq:Tensor2} \vbn(\vx) \times\mathbb{G}(\vx,\vby) &= 0\qquad \vx \in \pa W,\end{aligned}$$ where we recall that $I$ is the $3\times 3$ identity matrix, and the curl is taken columnwise. In addition, each column of $\mathbb{G}(\vx,\vby)$ satisfies a radiation condition similar to (\[eq:radiationCond\]) for $x_3 < y_3$, which says that the Green’s function is bounded and outgoing. The expression of $\mathbb{G}$ is given in the next lemma, proved in appendix \[sect:proofLem2\]. \[lem.2\] Let $\vx \ne \vby$ and $\vx, \vby \in W$. The Green’s tensor $\mathbb{G}(\cdot, \vby)$ is given by $$\begin{aligned} \label{eq:TensorForm} \mathbb{G}(\vx, \vby)= (\vec{G}_1,\vec{G}_2,\vec{G}_3)(\vx, \vby) + \frac{1}{k^2} \vec\nabla \divv(\vec{G}_1,\vec{G}_2,\vec{G}_3)(\vx, \vby),\end{aligned}$$ with divergence taken columnwise. The vectors $\vec{G}_j$ with $j = 1, \ldots, 3$ are defined by $$\label{eq:Gj} \vec{G}_j(\vx, \vby) = \sum_{ n\in\N^2_0} \sum_{s=1}^{m_n} \frac{\vec{\bm{e}}_j\cdot \vec\Phi^{(s)}_{n}(\bm{y})}{\| \vec\Phi^{(s)}_{n}\|^2}\big[e^{i\beta_n|x_3-y_3|} + (2\delta_{s3}-1)e^{-i\beta_n(x_3+y_3)}\big] \frac{\vec\Phi^{(s)}_n(\bx)}{2 i \beta_n}.$$ They satisfy equations $$\begin{aligned} \nonumber \Delta \vec{G}_j(\vx,\vby) + k^2 \vec{G}_j(\vx,\vby) &= \delta(\vx -\vby) \vec{\bm{e}_j} \qquad \vx \in W, \\ \label{eq:Green2} \vbn(\vx) \times \left[k^2\vec{G}_j(\vx,\vby) + \vec\nabla \divv\vec{G}_j(\vx,\vby)\right] &= 0\qquad \vx \in \pa W,\end{aligned}$$ and a radiation condition similar to (\[eq:radiationCond\]) for $x_3 < y_3$, which says that the components of $\vec{G}_j$ are outgoing or decaying waves. To state the scattering problem (\[eq:Scattered2\]) as a Lippmann-Schwinger equation, we follow the approach in [@Kirsc2007a].
For a finite $\bar{L}\ge L$, we define the truncated waveguide $$W_{\bar L} = (0,L_1)\times(0,L_2) \times(-\bar{L},0) \subset W,$$ and introduce the space $$H(\curll,W_{\bar{L}}):=\left\{\vbu \in (L^2(W_{\bar L}))^3: \curl \vbu \in (L^2(W_{\bar L}))^3\right\},$$ equipped with the inner product $$(\vec{\bm u},\vec{\bm v})_\icu = \int_{W_{\bar L}} d \vx \Big[ \vec{\bm u}(\vx) \cdot \overline{\vec{\bm v}(\vx)} + \curl \vec{\bm u}(\vx) \cdot \curl \overline{\vec{\bm v}(\vx)} \Big],$$ where the bar denotes complex conjugate. The induced norm is $ \|\vec{\bm u}\|_\icu = \sqrt{(\vec{\bm u},\vec{\bm u})_\icu}. $ From [@Kirsc2007a] we know that $\mathcal{M}:\big(L^2(D)\big)^3 \to H(\curll,W_{\bar{L}})$ defined by $$\mathcal{M}(\vec{\bm u})(\vx) = (k^2+\vec\nabla\divv)\int_D \, \frac{e^{i k |\vx-\vby|}}{4 \pi |\vx-\vby|} \vec{\bm u}(\vby)\d \vby,$$ is a linear bounded mapping. Moreover, $\vec{\bm v} = \mathcal{M}(\vec{\bm u})$ is the unique radiating variational solution of $\curl\curl \vbv - k^2\vbv = k^2\vec{\bm u}$, meaning that $$\begin{aligned} \int_{W_{\bar{L}}} \left(\curl \vbv\cdot\curl \ol\vbphi - k^2\vbv\cdot\ol\vbphi\right)\d \vx = k^2\int_D\vbu\cdot\ol\vbphi \d \vx\end{aligned}$$ for all $\vbphi \in H(\curll,W_{\bar{L}})$ with compact support in $W_{\bar{L}}$.
This result can be extended to our problem because the difference of Green’s functions $\vec{G}_j(\vx,\vby) - \frac{e^{i k |\vx-\vby|}}{4 \pi |\vx-\vby|}\vec{\bm e}_j$ is analytic and satisfies $$\big(\Delta + k^2 \big) \Big(\vec{G}_j(\vx,\vby) - \frac{e^{i k |\vx-\vby|}}{4 \pi |\vx-\vby|}\vec{\bm e}_j\Big) = 0.$$ Thus, the mapping $\mathcal{L}:\big(L^2(D)\big)^3 \to H(\curll,W_{\bar{L}})$ defined by $$\mathcal{L}(\vec{\bm u}) = (k^2+\vec\nabla\divv)\int_D \big(\vec{G}_1,\vec{G}_2,\vec{G}_3\big)(\cdot,\vby)\, \vec{\bm u}(\vby)\d \vby, \label{eq:Ki1}$$ is linear and bounded, and $\vec{\bm v} = \cL(\vbu)$ is the radiating variational solution of the equation $\curl\curl \vbv - k^2\vbv = k^2\vec{\bm u}$ in the waveguide. We are interested in $\vec{\bm u} = V \vec{\bm E}$, so that $\mathcal{L}(V \vec{\bm E})$ satisfies the partial differential equation (\[eq:Scattered2\]). To show that this is $\vec{\bm E}^{sc}$ it remains to check that $\mathcal{L}(V \vec{\bm E})$ satisfies the perfectly conducting boundary conditions. This follows from the boundary conditions in (\[eq:Green2\]), because $D$ does not touch the boundary, so we can write $$\vec{\bm n}(\vec{\bm x})\times \mathcal{L}(\vec{\bm u}) = \int_{D} \vec{\bm n}(\vx) \times (k^2+\vec\nabla\divv)\big(\vec{G}_1,\vec{G}_2,\vec{G}_3\big)(\cdot,\vby)\, \vec{\bm u}(\vby)\d \vby = 0, \qquad \vx \in \partial W.$$ We have now shown that $\vec{\bm E}^{sc}(\vx) = \mathcal{L}(V \vec{\bm E})$, or equivalently, that it solves the Lippmann-Schwinger equation $$\vec{\bm E}^{sc}(\vx) = (k^2+\vec\nabla\divv)\int_D \big(\vec{G}_1,\vec{G}_2,\vec{G}_3\big)(\cdot,\vby)\, V(\vby) \vec{\bm E}(\vby)\d \vby. \label{eq:Ki2}$$ The next theorem proves a Gårding inequality from which we can conclude the Fredholm property.
\[thm.1\] There exists a compact operator $\mathcal{K}:H(\curll,W_{\bar{L}}) \to H(\curll,W_{\bar{L}})$ and a positive constant $C$ such that $$\Re \big(\vec{\bm u} - \mathcal{L}(V\vec{\bm u}) + \mathcal{K} \vbu ,\vec{\bm u}\big)_\icu \ge C \|\vec{\bf u}\|^2_\icu, \qquad \forall \, \vbu \in H(\curll,W_{\bar{L}}). \label{eq:Gard}$$ Therefore, $I - \mathcal{L}(V\cdot)$ is a Fredholm operator. Let us define an auxiliary operator $\mathcal{L}_o:\big(L^2(D)\big)^3 \to H(\curll,W_{\bar{L}})$, $$\mathcal{L}_o(\vec{\bm u}) = (-1+\vec\nabla\divv)\int_D \big(\vec{\mathcal{G}}_1,\vec{\mathcal{G}}_2,\vec{\mathcal{G}}_3\big)(\cdot,\vby)\, \vec{\bm u}(\vby)\d \vby, \label{eq:PF1}$$ where $\vec{\mathcal{G}}_j$ solve $$\Delta \vec{\mathcal{G}}_j(\vx,\vby) - \vec{\mathcal{G}}_j(\vx,\vby) = \delta(\vx-\vby) \vec{\bf e}_j, \quad \vx \in W_{\bar{L}}.$$ These are like the partial differential equations in (\[eq:Green2\]), with $k$ replaced by the imaginary number $i$. From the analysis in [@Kirsc2007a], which applies to imaginary wavenumbers like $i$, we obtain that $\cL_o$ is a bounded linear operator and $\vbu = \cL_o(\vec{\bm f})$ is the weak solution of $\curl \curl \vbu + \vbu = - \vec{\bm f}$. Explicitly, we have for all $\vbphi \in H(\curll,W_{\bar{L}})$, $$\begin{aligned} \Big( \cL_o(\vec{\bm f}),\vbphi \Big)_\icu &= \int_{W_{\bar{L}}} d \vx \Big[\curl \cL_o(\vec{\bm f}) \cdot\curl \ol\vbphi + \cL_o(\vec{\bm f }) \cdot\ol\vbphi\Big] \nonumber \\ &= -\int_D d \vx \, \vec{\bm f }\cdot\ol\vbphi- \int_{\pa W_{\bar{L}}} ds \, \Big[\vec{\bm n} \times \curl \cL_o(\vec{\bm f})\Big] \cdot \Big[(\vec{\bm n} \times \ol\vbphi) \times \vec{\bm n}\Big], \label{eq:IBP}\end{aligned}$$ where we used the integration by parts result in [@Monk2003a Theorem 3.31].
Using this auxiliary operator we write $$\begin{aligned} \Big( \vbu - \cL(V\vbu),\vbu \Big)_\icu &= \Big( \vbu - \cL_o(V\vbu),\vbu \Big)_\icu \hspace{-0.05in} - \Big( (\cL-\cL_o)(V \vbu),\vbu\Big)_\icu \nonumber \\ &= \|\vbu\|_\icu^2 - \Big( \cL_o(V\vbu),\vbu \Big)_\icu \hspace{-0.05in}- \Big( (\cL-\cL_o)(V \vbu),\vbu \Big)_\icu,\end{aligned}$$ and from (\[eq:IBP\]) with $\vec{\bm f} = V \vbu$ and $\vbphi = \vbu$, we get $$\begin{aligned} \Big( \vbu - \cL(V\vbu),\vbu \Big)_\icu =&\|\vbu\|_\icu^2 + \int_D d \vx \, (\eps_r(\vx)-I) \vbu \cdot \ol\vbu \\ &\hspace{-0.8in}+ \int_{\pa W_{\bar{L}}} ds \, \Big[\vec{\bm n} \times \curl \cL_o( V\vbu)\Big] \cdot \Big[(\vec{\bm n} \times \ol\vbu) \times \vec{\bm n}\Big] - \Big( (\cL-\cL_o)(V \vbu),\vbu \Big)_\icu.\end{aligned}$$ Here we used the expression (\[eq:defV\]) of $V$. Because $\eps_r$ is positive definite by assumption, we conclude that there exists a positive constant $C$ such that $$\begin{aligned} \|\vbu\|_\icu^2 + \int_D d \vx \, (\eps_r(\vx)-I) \vbu \cdot \ol\vbu \ge C \|\vbu\|_\icu^2, \quad \forall \, \vbu \in H(\curll,W_{\bar L}).\end{aligned}$$ Substituting in the equation above and introducing the linear operators $\mathcal{K}_1$ and $\mathcal{K}_2$ from $H(\curll,W_{\bar L})$ to $H(\curll,W_{\bar L})$, defined by $$\begin{aligned} \mathcal{K}_1(\vbu) &= (\cL-\cL_o)(\vbu), \label{eq:K1} \\ \mathcal{K}_2(\vbu) &= -\int_{\pa W_{\bar{L}}} ds \, \Big[\vec{\bm n} \times \curl \cL_o( V\vbu)\Big] \cdot \Big[(\vec{\bm n} \times \ol\vbu) \times \vec{\bm n}\Big], \label{eq:K2}\end{aligned}$$ we obtain $$\Re \Big(\vbu - \cL(V \vbu) + \mathcal{K}_1(\vbu) + \mathcal{K}_2(\vbu),\vbu\Big)_\icu \ge C \|\vbu\|_\icu^2, \quad \forall \, \vbu \in H(\curll,W_{\bar L}). \label{eq:K3}$$ Result (\[eq:Gard\]) follows once we show that $\mathcal{K}_1$ and $\mathcal{K}_2$ are compact operators.
Since the differences $\vec{G}_j(\vx,\vby) - \frac{e^{i k |\vx-\vby|}}{4 \pi |\vx-\vby|}\vec{\bm e}_j$ and $\vec{\mathcal{G}}_j(\vx,\vby) - \frac{e^{- |\vx-\vby|}}{4 \pi |\vx-\vby|}\vec{\bm e}_j$ are analytic, we conclude that the singularity of the kernel in $\cL-\cL_o$ is as strong as that of $\Big[\frac{e^{i k |\vx-\vby|}}{4 \pi |\vx-\vby|} - \frac{e^{- |\vx-\vby|}}{4 \pi |\vx-\vby|}\Big]I$. Thus, we can use the results in [@Kirsc2007a] to conclude that $\mathcal{K}_1$ is a compact operator. To prove that $\mathcal{K}_2$ is compact, let us consider a neighborhood $\Gamma$ of the boundary $\pa W_{\bar L}$, such that $\Gamma \subset W_{\bar L}$ and $\Gamma$ does not intersect the support $D$ of the scattering potential. We define the operator $\mathcal{T}$ from $(L^2(D))^3$ to $(H^s(\Gamma))^3$, with $s > 1$, by restricting $\curl \cL_o(\vec{\bm f})$ to $\Gamma$, for all $\vec{\bm f} \in (L^2(D))^3$, $$\mathcal{T}(\vec{\bm f}) = - \int_{D} d \vby\, \nabla \times \Big(\vec{\mathcal{G}}_1,\vec{\mathcal{G}}_2,\vec{\mathcal{G}}_3\Big)(\cdot, \vby) \vec{\bm f}(\vby), \quad \mbox{in}~ \Gamma.$$ This operator is compact because its kernel is an analytic function on $\Gamma \times D$. Define also the trace space $$H^{-1/2}_\idiv(\pa W_{\bar L}) = \Big\{ \vec{\bm f} \in \Big(H^{-1/2}(\pa W_{\bar L})\Big)^3: ~\exists~ \vbu \in H(\curll, W_{\bar L}) ~ \mbox{satisfying}~ \vec{\bm n} \times \vbu|_{\pa W_{\bar L}} = \vec{\bm f}\Big\},$$ with norm $$\|\vec{\bm f}\|_{H^{-1/2}_\idiv(\pa W_{\bar L})} = \inf_{\vbu \in H(\curll, W_{\bar L}), \vec{\bm n} \times \vbu|_{\pa W_{\bar L}} = \vec{\bm f}} \|\vbu\|_\icu.$$ It is shown in [@Monk2003a Section 3.5] that $H^{-1/2}_\idiv(\pa W_{\bar L})$ is a Banach space. Due to the compactness of $\mathcal{T}$, the mapping $\vbu \to \vec{\bm n} \times \mathcal{T}(V \vbu)|_{\partial W_{\bar L}}$ is a compact operator from $H(\curll,W_{\bar L})$ to $H^{-1/2}_\idiv(\pa W_{\bar L})$. 
Note that the mapping $\vbu \to V \vbu$ is bounded from $H(\curll,W_{\bar L})$ to $(L^2(D))^3$ and $\mathcal{T}(\vbu) \to \vec{\bm n} \times \mathcal{T}(\vbu)|_{\partial W_{\bar L}}$ is bounded from $(H^s(\Gamma))^3$ to $H^{-1/2}_\idiv(\pa W_{\bar L})$. We also have from [@Monk2003a Section 3.5] that $\vbu \to (\vec{\bm n} \times \vbu|_{\pa W_{\bar L}}) \times \vec{\bm n}$ is a linear bounded mapping from $H(\curll,W_{\bar L})$ to $H^{-1/2}_\icu(\pa W_{\bar L})$, the dual space of $H^{-1/2}_\idiv(\pa W_{\bar L})$. To show that $\mathcal{K}_2$ is compact, let $\{\vbu_j\}$ be a sequence in $H(\curll,W_{\bar L})$ that converges weakly to $0$, and show that $\{\mathcal{K}_2(\vbu_j)\}$ converges strongly to $0$ in $H(\curll,W_{\bar L})$. Indeed we have $$\begin{aligned} \|\mathcal{K}_2 (\vbu_j)\|_\icu &= \sup_{\vec{\bm v} \in H(\curll,W_{\bar L})\setminus\{0\}} \frac{\Big| (\mathcal{K}_2(\vbu_j),\vec{\bm v})_\icu \Big|}{\|\vec{\bm v}\|_\icu} \nonumber \\ &\leq \sup_{ \vbv\in H(\curll,W_{ \bar L})\setminus \{0\}} \frac{ \| \vbn \times \curl \cL_o(V\vbu_j)\|_{H^{-1/2}_\idiv(\pa W_{\bar L})} \| \vbn\times \vbv \times \vbn \|_{H^{-1/2}_\icu(\pa W_{\bar L})}}{\| \vbv\|_\icu} \nonumber \\ &\leq C\| \vbn \times \curl \cL_o(V\vbu_j)\|_{H^{-1/2}_\idiv(\pa W_{\bar L})} \nonumber \\ &= C\| \vbn \times \mathcal{T}(V\vbu_j) \|_{H^{-1/2}_\idiv(\pa W_{\bar L})} \\ & \to 0, \qquad \mbox{as} ~ j \to \infty,\end{aligned}$$ where the first line is a definition, the second line follows by duality, the third line is due to the boundedness of the mapping $\vbv \to \vbn \times \vbv \times \vbn$ and the fourth line is by the definition of $\mathcal{T}$. The convergence to zero is by the compactness of the mapping $\vbu \to \vec{\bm n} \times \mathcal{T}(V \vbu)|_{\partial W_{\bar L}}$. We have now proved the Gårding inequality (\[eq:Gard\]), with $\mathcal{K} = \mathcal{K}_1 + \mathcal{K}_2$.
We obtain from it that $I - \cL(V \cdot)$ is the sum of the coercive operator $I - \cL(V \cdot) - \mathcal{K}$ and the compact operator $\mathcal{K}$. Thus, $I-\cL(V\cdot)$ is a Fredholm operator [@McLea2000]. We conclude the discussion on the solvability of the forward problem with the remark that when $\eps_r$ is $C^1$, one can extend the results in [@Kirsc2007a] to prove uniqueness of the solution of equation (\[eq:Ki2\]). The existence of the solution follows from the Fredholm property. Data model {#sect:FP4} ========== Since the array is far away from the support $D$ of the scattering potential $V$, at coordinate $x_3 = -L$, the results in the previous section give $$\vec{\bm E}^{sc}(\vx) \approx k^2 \int \mathbb{G}^P(\vx,\vby)\, \vec{\bm u}(\vby) \d \vby, \quad \vx = (\bx,-L). \label{eq:B01}$$ Here $$\vec{\bm u}(\vby) = V(\vby) \vec{\bm E}(\vby), \label{eq:B02}$$ is an effective source supported in $D$, representing the wave emitted by the unknown reflectors illuminated by the field $\vec{\bm E}(\vby)$. The approximation in (\[eq:B01\]) is because we replaced the Green’s tensor $\mathbb{G}$ defined in Lemma \[lem.2\] by its approximation $\mathbb{G}^P$ which neglects the evanescent waves. Explicitly, if we denote by $P$ the set of indexes of the propagating modes $$P = \{n \in \N_0^2 : \lambda_n < k^2\},$$ we have $$\mathbb{G}^P(\vx, \vby)= (\vec{G}^P_1,\vec{G}^P_2,\vec{G}^P_3)(\vx, \vby) + \frac{1}{k^2} \vec\nabla \divv(\vec{G}^P_1,\vec{G}^P_2,\vec{G}^P_3)(\vx, \vby), \label{eq:B2}$$ with $$\label{eq:B3} \vec{G}^P_j(\vx, \vby) = \sum_{ n\in P} \sum_{s=1}^{m_n}\frac{\vec{\bm{e}}_j\cdot \vec\Phi^{(s)}_{n}(\bm{y})}{\| \vec\Phi^{(s)}_{n}\|^2} \big[e^{i\beta_n(y_3+L)} + (2\delta_{s3}-1)e^{i\beta_n(L-y_3)}\big] \frac{\vec\Phi^{(s)}_n(\bx)}{2 i \beta_n},$$ where we used that $x_3 = -L$ at the array.
Let us denote by $\mathcal{S}_q$ the linear mapping from the effective source (\[eq:B02\]) to the $q$-th component of the scattered field at the array $$\big[\mathcal{S}_q(\vec{\bm u})\big](\bx) = k^2\int \vec{\bm e}_q \cdot \mathbb{G}^P((\bx,-L),\vby)\, \vec{\bm u}(\vby) \d \vby, \quad 1 \le q \le 3. \label{eq:B03}$$ Since the support of the source (\[eq:B02\]) is included in $D$, we may seek to reconstruct the domain $D$ by approximately inverting $\mathcal{S}_q$. The mapping that takes the scattering potential $V$ to the measurements is nonlinear, because the scattered field $\vec{\bm E}^{sc}$ enters the definition (\[eq:B02\]). Thus, we linearize it, meaning that we make the single scattering (Born) approximation $$\vec{\bm u}(\vby) \approx V(\vby) \vec{\bm E}^o(\vby). \label{eq:B04}$$ We denote by $\mathcal{B}$ the linear mapping from the scattering potential $V$ to the effective source $$\big[\mathcal{B}(V)\big](\vby) = V(\vby) \vec{\bm E}^o(\vby). \label{eq:B05}$$ Then, the forward map $\cF_q$ from the scattering potential $V$ to the $q$-th component of the electric field measured at the array is the composition of the mappings in (\[eq:B03\]) and (\[eq:B05\]), $$\cF_q(V) = \mathcal{S}_q \circ \mathcal{B} (V). \label{eq:B4}$$ The data are denoted by $d_q(\bx)$, for components $q = 1, \ldots, Q$, with $Q \le 3$, and $\bx \in A$, the aperture of the array, which is a subset of the waveguide cross-section $\Omega$. Imaging {#sect:imag} ======= Let ${\bf d}$ be the data vector, with entries given by $d_q(\bx)$ for all $\bx$ in $A$ and $q = 1, \ldots, Q$. Let also $\bV$ be the reflectivity vector consisting of the unknown components of the scattering potential $V$, discretized in the imaging window $D_I$ that contains the unknown support $D$. Then, we can state the imaging problem as finding an approximate solution $\bV$ of the linear system of equations $${\bf d} = \bF \bV.
\label{eq:B5}$$ The reflectivity-to-data matrix $\bF$ is defined by the discretization of the forward mapping (\[eq:B4\]). The system of equations (\[eq:B5\]) is usually underdetermined, so to find a unique approximation we regularize the inversion by minimizing either the $\ell_2$ or the $\ell_1$ norm of $\bV$. The first regularization is related to the reverse time migration approach, as described in section \[sect:TR\]. The imaging with $\ell_1$ minimization is discussed in section \[sect:L1\]. Reverse time migration {#sect:TR} ---------------------- The minimum $\ell_2$ norm solution of (\[eq:B5\]) is $$\bV = \bF^\dagger {\bf d}, \label{eq:TR1}$$ where $\bF^\dagger$ is the pseudo-inverse of $\bF$. If $\bF$ has full row rank, $\bF^\dagger = \bF^\star (\bF \bF^\star)^{-1}$, where the superscript denotes the adjoint. Moreover, if the rows of $\bF$ are nearly orthogonal, which requires proper placement of the receiver locations in the array aperture $A$, at distances of the order of the wavelength, matrix $\bF \bF^\star$ is nearly diagonal, so by replacing $\bF^\dagger$ in (\[eq:TR1\]) with $\bF^\star$ we get a similar answer, up to multiplicative factors. This replacement does not affect the support of the reconstruction and we denote the result by $$\bV^{^{\tiny \mbox{TR}}} = \bF^\star {\bf d}, \label{eq:TR2}$$ with superscript TR for “time reversal”. To explain where time reversal comes in, let us compute the adjoint of the forward mapping (\[eq:B4\]). Before discretizing the imaging window we have $$\begin{aligned} \big(\mathcal{F}(V),{\bf d}\big) &= \sum_{q=1}^Q \sum_{\bx \in A} \big[\mathcal{F}_q(V)\big](\bx) \overline{d_q(\bx)} \nonumber \\ &= k^2 \sum_{q=1}^Q \sum_{\bx \in A} \int d \vby \, \vec{\bf e}_q \cdot \mathbb{G}^P((\bx,-L),\vby) V(\vby) \vec{\bm E}^o(\vby) \overline{d_q(\bx)},\end{aligned}$$ by the definition (\[eq:B4\]) of the forward map and equation (\[eq:B03\]).
We rewrite this as $$\begin{aligned} \big(\mathcal{F}(V),{\bf d}\big) &= \sum_{l=1}^3 \int d \vby \, \big[V(\vby) \vec{\bm E}^o(\vby)\big]_l \, \Big[k^2 \sum_{q=1}^Q \sum_{\bx \in A} \mathbb{G}^P_{lq}(\vby, (\bx,-L)) \overline{d_q(\bx)}\Big], \label{eq:TR3}\end{aligned}$$ using the Rayleigh-Carson reciprocity relation $\mathbb{G}^P(\vbx,\vby) =\big[ \mathbb{G}^P(\vby,\vx)\big]^T$ of the Green’s tensor. The last factor, in the square brackets, is the electric field evaluated at points $\vby$ in the imaging window $D_I$, due to a source at the array which emits the data recordings $d_q$ reversed in time. The time reversal is equivalent to complex conjugation in the Fourier domain. The adjoint of the forward map follows from (\[eq:TR3\]), $$\begin{aligned} \big(\mathcal{F}(V),{\bf d}\big) &= \sum_{l,m=1}^3 \int V_{lm}(\vby) E^o_m(\vby) \Big[k^2 \sum_{q=1}^Q \sum_{\bx \in A} \mathbb{G}^P_{lq}(\vby, (\bx,-L)) \overline{d_q(\bx)}\Big] \nonumber = \big(V,\cF^\star({\bf d})\big),\end{aligned}$$ where the inner product in the right hand side is $$\big(V,U\big) = \int d\vby \, \mbox{trace} \big[V(\vby) U(\vby)\big],$$ for any complex valued matrix $U$. Recall that $V(\vby)$ is Hermitian. Thus, $\cF^\star(\bd)$ is a $3 \times 3$ complex matrix valued field, with components $$\big[\cF^\star(\bd)\big]_{ml}(\vby) = \Big[k^2 \sum_{q=1}^Q \sum_{\bx \in A} \mathbb{G}^P_{lq}(\vby, (\bx,-L)) \overline{d_q(\bx)}\Big]E_m^o(\vby). \label{eq:TR4}$$ The right hand side in the imaging formula (\[eq:TR2\]) is the discretization of (\[eq:TR4\]) over points $\vby$ in the imaging window. In the particular case of a diagonal scattering potential $V(\vby)$, which corresponds to the coordinate axes being the same as the principal axes of the dielectric material in the support of the reflectors, the adjoint operator acts from the data space to the space of diagonal, positive definite matrices.
The reconstruction is given by $$V^{^{\tiny \mbox{TR}}}_{ll}(\vby) = \Big[k^2 \sum_{q=1}^Q \sum_{\bx \in A} \mathbb{G}^P_{lq}(\vby, (\bx,-L)) \overline{d_q(\bx)}\Big]E_l^o(\vby), \label{eq:TR5}$$ where $\vby$ are the discretization points in $D_I$. Moreover, if the material is isotropic, so that $V$ is a multiple of the identity, the reconstruction is $V^{^{\tiny \mbox{TR}}} I$, with $$V^{^{\tiny \mbox{TR}}}(\vby) = \sum_{l=1}^3 \Big[k^2 \sum_{q=1}^Q \sum_{\bx \in A} \mathbb{G}^P_{lq}(\vby, (\bx,-L)) \overline{d_q(\bx)}\Big]E_l^o(\vby). \label{eq:TR6}$$ None of these formulae are quantitative approximations of $V$, so we may drop the factor $k^2$ and display their absolute values at points $\vby$ in the imaging window $D_I$. The estimate of the support $D$ of $V$ is given by the subset of $D_I$ where the displayed values are large. Imaging with $\ell_1$ optimization {#sect:L1} ---------------------------------- To incorporate the prior information that the reflectors have small support in the imaging window, we may reconstruct the scattering potential using $\ell_1$ optimization. This means solving the optimization problem $$\min \|\bV\|_{\ell_1} \quad \mbox{such that} \quad \bd = \bF \bV. \label{eq:L1}$$ The equality constraint may be replaced by the inequality $\|\bd - \bF \bV\|_{\ell_2}^2 \le$ some user-defined tolerance, which deals better with measurement and modeling noise. The $\ell_1$ optimization is carried out with the cvx package (http://cvxr.com/cvx/). Numerical simulations {#sect:num} ===================== We present in this section examples of reconstructions of the reflectors with reverse time migration and $\ell_1$ optimization. The simulations are for a waveguide with cross-section $\Omega = (0,13.9\lambda)\times(0,14.2\lambda)$, and the array is at distance $L = 41.8 \lambda$ from the end wall, where $\lambda$ is the wavelength.
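Before turning to the simulation results, the two inversions of section \[sect:imag\] can be mimicked on a toy linear model. The sketch below uses a random complex matrix as a stand-in for the discretized reflectivity-to-data matrix $\bF$, and an ISTA iteration as a Python stand-in for the cvx call; all sizes, indices, and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 120                            # data samples, image voxels (toy sizes)
F = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
F /= np.linalg.norm(F, axis=0)            # unit-norm columns (stand-in for F)

V_true = np.zeros(n, dtype=complex)
V_true[[7, 23, 71]] = [1.0, -0.8, 0.6j]   # sparse reflectivity vector
d = F @ V_true                            # noiseless data, d = F V

# Adjoint ("time reversal") image: peaks near the support, but not sparse.
V_tr = F.conj().T @ d

# l1-regularized inversion by ISTA: min 0.5 ||d - F V||^2 + mu ||V||_1.
mu = 0.05
step = 1.0 / np.linalg.norm(F, 2) ** 2    # 1 / Lipschitz constant of the gradient
V = np.zeros(n, dtype=complex)
for _ in range(2000):
    r = V + step * (F.conj().T @ (d - F @ V))                             # gradient step
    V = np.exp(1j * np.angle(r)) * np.maximum(np.abs(r) - step * mu, 0.0) # soft threshold

print(int(np.argmax(np.abs(V_tr))))                  # strongest adjoint-image peak
print(sorted(int(i) for i in np.argsort(np.abs(V))[-3:]))  # l1 image support
```

In this well-conditioned toy the $\ell_1$ image concentrates on the true support while $|V^{\rm TR}|$ shows the same peaks on top of a nonzero background, which is the qualitative contrast visible between the reverse time migration and $\ell_1$ figures that follow.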
The source density in (\[eq:forward4\]) is $$\label{eq:JSource} \vbJ(\bx) = \vec{p}\, \delta\Big(\bx - (6.95,7.1) \lambda\Big),$$ for constant vector $\vec{p}$, and the receiver sensors are located at uniform spacing of approximately $\lambda/18$ in the array aperture $A$. We present results with full aperture, where $A = \Omega$, and with $75\%$ aperture, where $A \subset \Omega$ is a rectangle of sides $10.5 \lambda$ and $10.65 \lambda$, with center at the waveguide axis. The receivers measure only the second component of $\vec{\bm E}^{sc}$. We compared the results with those obtained from all components of $\vec{\bm E}^{sc}$ at the array, and the images were essentially the same. The images displayed in Figures \[fi:2\]–\[fi:5\] are obtained with an approximation of the formulae in section \[sect:imag\], where only a subset of the $648$ propagating modes is used. This is because in practice the sensors record over a finite time window, and only the modes that propagate fast enough to arrive at the array during the duration of the measurements contribute. The polarization vector $\vec{p}$ in (\[eq:JSource\]) equals $(0,1,0)^T$ in the simulations with isotropic permittivity and $(1,1,1)^T$ in the case of anisotropic permittivity. \ [![Reverse time migration images of a point-like reflector located at $(6.95,4.73,-10.44)\lambda$. The images in the first two rows are with $75\%$ aperture and those in the last row with the full aperture. The first row is for $100$ modes and the other two rows for $350$ modes. We display in the left column the images in the plane $y_1 = 6.95\lambda$, and in the right column the images in the cross-range plane $y_3 = -10.44 \lambda$. The axes are in units of $\lambda$.[]{data-label="fi:2"}](One_350aper075_range.eps "fig:"){width="6cm"}]{}\ [![Reverse time migration images of a point-like reflector located at $(6.95,4.73,-10.44)\lambda$. The images in the first two rows are with $75\%$ aperture and those in the last row with the full aperture.
The first row is for $100$ modes and the other two rows for $350$ modes. We display in the left column the images in the plane $y_1 = 6.95\lambda$, and in the right column the images in the cross-range plane $y_3 = -10.44 \lambda$. The axes are in units of $\lambda$.[]{data-label="fi:2"}](One_350_range.eps "fig:"){width="6cm"}]{} In Figure \[fi:2\] we display the reverse time migration image of a point-like reflector located at $(6.95,4.73,-10.44)\lambda$, modeled by an isotropic scattering potential $V = v(\vby)I$ supported on a mesh cell in the imaging region. The mesh size is $\lambda/18$ in the cross-range plane and $\lambda/6$ in range. We note that the reflector is well localized in range and cross-range, and the results improve, as expected, when more modes are used to form the image. Moreover, the image at $75\%$ aperture is almost as good as that with full aperture. Naturally, the image deteriorates for smaller apertures. The images of the same reflector obtained with $\ell_1$ optimization are shown in Figure \[fi:4p\]. They are obtained with the first $350$ arriving modes and a $75\%$ aperture. The discretization of these images is in steps of $0.29\lambda$ in cross-range and $0.87 \lambda$ in range. As expected, these images give a sharper estimate of the support, in the sense that the spurious faint peaks in Figure \[fi:2\] are suppressed in Figure \[fi:4p\] by the sparsity-promoting optimization. ![Reconstructions of the same reflector as in Figure \[fi:2\] using $\ell_1$ optimization, $350$ modes and $75\%$ aperture. The axes are in units of $\lambda$. []{data-label="fi:4p"}](One_350aper075_l1_range.eps "fig:"){width="5.5cm"} ![Reconstructions of the same reflector as in Figure \[fi:2\] using $\ell_1$ optimization, $350$ modes and $75\%$ aperture. The axes are in units of $\lambda$. []{data-label="fi:4p"}](One_350aper075_l1_cross.eps "fig:"){width="4cm"} \ [![Reverse time migration images of a rectangular shell.
The results in the first row are in the terminating waveguide. Those in the second row are in an infinite waveguide. We use the first $350$ arriving modes and $75\%$ aperture. The images in the left column are in the plane $y_1 = 6.96\lambda$ and in the right column in the cross-range plane at $y_3 = -11.14\lambda$.[]{data-label="fi:3"}](Rectangle_350aper075Noback_range.eps "fig:"){width="6cm"}]{} [![Reverse time migration images of a rectangular shell. The results in the first row are in the terminating waveguide. Those in the second row are in an infinite waveguide. We use the first $350$ arriving modes and $75\%$ aperture. The images in the left column are in the plane $y_1 = 6.96\lambda$ and in the right column in the cross-range plane at $y_3 = -11.14\lambda$.[]{data-label="fi:3"}](Rectangle_350aper075Noback_cross.eps "fig:"){width="6cm"}]{}\ In Figures \[fi:3\] and \[fi:5\] we show images of an extended reflector shaped like a rectangular shell. The discretization of the imaging window in Figure \[fi:3\] is the same as in Figure \[fi:2\]. We note that reverse time migration estimates the support of the reflector, especially its back, better in the terminating waveguide (top left image) than in the infinite waveguide (bottom left image). The $\ell_1$ optimization images are in Figure \[fi:5\], where the discretization of the imaging window is the same as in Figure \[fi:4p\]. \ [![Reconstructions of the same rectangular shell as in Figure \[fi:3\], using $\ell_1$ optimization, 350 modes and 75% aperture. In the top line we show the images in the terminating waveguide and in the bottom line those in the infinite waveguide.
The images in the left column are in the plane $y_1 = 6.96\lambda$ and in the right column in the cross-range plane at $y_3 = -11.14\lambda$.[]{data-label="fi:5"}](Rectangle_350aper075_l1_range.eps "fig:"){width="6cm"}]{} [![Reconstructions of the same rectangle shell as in Figure \[fi:3\], using $\ell_1$ optimization, 350 modes and 75% aperture. In the top line we show the images in the terminating waveguide and in the bottom line those in the infinite waveguide. The images in the left column are in the plane $y_1 = 6.96\lambda$ and in the right column in the cross-range plane at $y_3 = -11.14\lambda$.[]{data-label="fi:5"}](Rectangle_350aper075_l1_cross.eps "fig:"){width="6cm"}]{} In the last simulations in Figure \[fi:4\] we present the images of an anisotropic point-like reflector, whose scattering potential is a diagonal matrix $V(\vby) = \mbox{diag} \big(3,1,5)v(\vby), $ with the same $v(\vby)$ as in Figure \[fi:2\]. We present only reverse time migration images and note that the estimates of the support of the components of $V(\vby)$ are similar to those in Figure \[fi:2\]. Specifically, we plot the absolute value of the right hand side of equation (\[eq:TR5\]) for $l = 1, 2, 3$. \ [![Reverse time migration images of an anisotropic point-like reflector located at $(6.95,4.73,-10.44)\lambda$, using reverse time migration. We use the first 350 arriving modes and 75% aperture. We display the absolute value of (\[eq:TR5\]) for $l = 1$ in the first line, $2$ in the second and $3$ in the third. The images in the left column are in the plane $y_1 = 6.95\lambda$ and in the right column in the plane $y_3 = -10.44\lambda$. The axes are in units of $\lambda$.[]{data-label="fi:4"}](One_350aper075Aniso22_range.eps "fig:"){width="6cm"}]{} [![Reverse time migration images of an anisotropic point-like reflector located at $(6.95,4.73,-10.44)\lambda$, using reverse time migration. We use the first 350 arriving modes and 75% aperture. 
We display the absolute value of (\[eq:TR5\]) for $l = 1$ in the first line, $2$ in the second and $3$ in the third. The images in the left column are in the plane $y_1 = 6.95\lambda$ and in the right column in the plane $y_3 = -10.44\lambda$. The axes are in units of $\lambda$.[]{data-label="fi:4"}](One_350aper075Aniso22_cross.eps "fig:"){width="6cm"}]{}\ [![Reverse time migration images of an anisotropic point-like reflector located at $(6.95,4.73,-10.44)\lambda$, using reverse time migration. We use the first 350 arriving modes and 75% aperture. We display the absolute value of (\[eq:TR5\]) for $l = 1$ in the first line, $2$ in the second and $3$ in the third. The images in the left column are in the plane $y_1 = 6.95\lambda$ and in the right column in the plane $y_3 = -10.44\lambda$. The axes are in units of $\lambda$.[]{data-label="fi:4"}](One_350aper075Aniso33_range.eps "fig:"){width="6cm"}]{} [![Reverse time migration images of an anisotropic point-like reflector located at $(6.95,4.73,-10.44)\lambda$, using reverse time migration. We use the first 350 arriving modes and 75% aperture. We display the absolute value of (\[eq:TR5\]) for $l = 1$ in the first line, $2$ in the second and $3$ in the third. The images in the left column are in the plane $y_1 = 6.95\lambda$ and in the right column in the plane $y_3 = -10.44\lambda$. The axes are in units of $\lambda$.[]{data-label="fi:4"}](One_350aper075Aniso33_cross.eps "fig:"){width="6cm"}]{}\ Summary {#sect:sum} ======= We study imaging with electromagnetic waves in terminating waveguides, using measurements of the electric field at an array of sensors. The goal of imaging is to localize compactly supported reflectors that lie between the array and the end wall. We derive the data model using Maxwell’s equations. We define the scattered electric field due to an incident wave from one sensor in the array and show that it satisfies a Lippmann-Schwinger type equation.
We analyze the solvability of this equation and write explicitly the data model using a modal decomposition of the wave field in the waveguide. This model is based on the single scattering approximation at the unknown reflectors. We use it to formulate two imaging methods. The first forms an image by calculating the action of the adjoint of the forward operator on the data. It has a time reversal interpretation. The second uses $\ell_1$ (sparsity enhancing) optimization. We present numerical results with both imaging methods for point-like and extended reflectors. Acknowledgements {#acknowledgements .unnumbered} ================ This work was partially supported by AFOSR Grant FA9550-12-1-0117 (DLN) and AFOSR Grant FA9550-15-1-0118 (LB). LB also acknowledges support from the Simons Foundation and ONR Grant N000141410077. Vectorial eigenvalue problem {#sect:VEP} ============================ Spectral decomposition of the Laplacian --------------------------------------- Let $\vbf = (\bm f, f_3)^\top\in (L^2(\Omega))^3$, and consider the linear differential operator associated with the vectorial Laplacian problem $$\begin{aligned} -\Delta \vec{\bm{u}}(\bx) = \vbf(\bx) \quad &\bx \in \Omega, \nonumber \\ \bm{n}^\bot(\bx) \cdot \bm{u}(\bx) = \nabla\cdot \bm{u}(\bx) = 0 \quad &\bx \in \pa\Omega, \\ u_3(\bx) = 0 \quad &\bx \in \pa\Omega, \label{eq:A1}\end{aligned}$$ for $\vec{\bm{u}} = (\bm{u},u_3)$. Since $\Delta \vec{\bm{u}} = (\Delta \bm{u}, \Delta u_3)^\top $, we have two decoupled problems.
One is the standard Poisson problem for the longitudinal component $u_3$, $$\begin{aligned} -\Delta u_3(\bx) &= f_3(\bx) \quad \bx \in \Omega, \nonumber \\ u_3(\bx) & =0 \quad \bx \in \pa \Omega,\label{eq:u3}\end{aligned}$$ whose weak solution is in $H^1_0(\Omega)$ and satisfies $$b(u_3,v) = \int_\Omega \nabla u_3(\bx)\cdot \nabla \ol{v}(\bx) \,\d \bx = (f_3,v)_{L^2}, \quad \text{for all } v\in H^1_{0}(\Omega), \label{eq:weak01}$$ where $(\cdot,\cdot)_{L^2}$ denotes the inner product in $L^2(\Omega)$. The other problem is for the two-dimensional transverse vector $\bm{u}$, $$\begin{aligned} -\Delta \bm u(\bx) &= \bm f(\bx) \quad \bx \in \Omega, \nonumber \\ \bm{n}^\bot(\bx) \cdot \bm{u}(\bx) &= 0 \quad \bx \in \pa\Omega, \nonumber \\ \nabla \cdot \bm u(\bx) &= 0 \quad \bx \in \pa\Omega. \label{eq:ub}\end{aligned}$$ It is studied in [@Kangr1999] for a more general $\Omega$ than the rectangle considered here. The results there establish the existence and uniqueness of weak solutions in the space $$\bm{H}^1_{0t}(\Omega) = \{\bm u \in \big(H^1(\Omega)\big)^2: \bm{n}^\bot \cdot \bm{u} =0 \text{ on } \pa\Omega \},$$ with the standard $H^1$ inner product $(\bm u,\bm v)_{H^1}$. These solutions satisfy the variational problem $$\begin{aligned} \label{eq:weak1} a(\bm u,\bm v) = \int_\Omega (\nabla^\perp\cdot \bm u (\bx) \, \nabla^\perp \cdot \ol{\bm v}(\bx)+ \nabla\cdot\bm u(\bx) \,\nabla\cdot \ol{\bm v}(\bx)) \d \bx = (\bm f, \bm v)_{L^2}, \end{aligned}$$ for all $\bm v\in \bm{H}^1_{0t}(\Omega)$, where $\nabla^\perp$ is the rotated gradient operator, playing the role of the curl in two dimensions, and $(\cdot,\cdot)_{L^2}$ is the inner product in $\big(L^2(\Omega)\big)^2$. The results in [@Kangr1999] also give a proper interpretation of $\nabla \cdot \bm u|_{\pa \Omega}$ in terms of the curvature of the boundary. In our case the boundary is the union of four line segments $\partial \Omega_j$, for $j = 1, \ldots, 4$, so the curvature is zero on each segment.
It is shown in [@Kangr1999] that $\nabla \cdot \bm u|_{\pa \Omega}$ exists and belongs to $H^{-1/2}(\pa \Omega_j)$ on each piece of the boundary, and the weak solution satisfies the estimate $$\begin{aligned} \label{eq:estimate} \|\bm u\|_{H^1} \leq C\|\bm f\|_{L^2}.\end{aligned}$$ To arrive at the spectral decomposition of the vectorial Laplacian in (\[eq:A1\]), we study its “inverse”, i.e., the solution operator $\mathcal{L}: (L^2(\Omega))^3 \to (L^2(\Omega))^3$ defined by $\mathcal{L}(\vec{\bm f}) = (\bm u, u_3)$, where $\bm u$ solves (\[eq:weak1\]) and $u_3$ solves (\[eq:weak01\]). Obviously, $\mathcal{L}$ is a linear operator. It is also injective, bounded, self-adjoint and compact. The injectivity follows from the uniqueness of solutions of (\[eq:weak01\]) and (\[eq:weak1\]). The boundedness and compactness follow from the estimate (\[eq:estimate\]) on $\bm u$ and a similar one on $u_3$, together with the embedding of $H_{0t}^1(\Omega)$ in $(L^2(\Omega))^2$ and of $H_0^1(\Omega)$ in $L^2(\Omega)$. To see that $\mathcal L$ is self-adjoint, let $\vbf$ and $\vbg$ be arbitrary in $(L^2(\Omega))^3$ and denote by $(\bm u,u_3)$ and $(\bm v,v_3)$ their images in $\Im(\mathcal{L}) \subset (L^2(\Omega))^3$, such that $\mathcal{L}(\vbf) = (\bm u,u_3)$ and $\mathcal{L}(\vbg) = (\bm v,v_3)$. Then equations (\[eq:weak01\]) and (\[eq:weak1\]) give $$\begin{aligned} (\mathcal{L}(\vbf),\vbg)_{L^2} &= (\bm u, \bm g)_{L^2} + (u_3, g_3)_{L^2} = \ol{a(\bm v, \bm u)} + \ol{b(v_3,u_3)}\\ &= a(\bm u, \bm v) + b(u_3,v_3) = (\bm f, \bm v)_{L^2} +(f_3,v_3)_{L^2}\\ &= (\vbf, \mathcal{L}(\vbg))_{L^2}. \end{aligned}$$ We conclude from the spectral theorem for self-adjoint, compact operators [@evans appendix D] that there is an orthogonal basis of $(L^2(\Omega))^3$ consisting of the eigenfunctions $\vec{\bm u}_j$ of $\mathcal{L}$, for eigenvalues $\gamma_j$ that tend to $0$ as $j \to \infty$.
The eigenvalues cannot be zero, because $\mathcal{L}$ is injective, so we can divide by them and get $$\vec{\bm u}_j = \gamma_j^{-1} \mathcal{L}(\vec{\bm u}_j).$$ Consequently, by estimate (\[eq:estimate\]) and a similar one for the standard problem (\[eq:weak01\]), we obtain that $\vec{\bm u}_j = (\bm u_j,u_{3,j}) \in \mathcal{H}$, the space of three dimensional vectors with components $\bm u_j \in H_{0t}^1(\Omega)$ and $u_{3,j} \in H_0^1(\Omega)$. To finish the argument, let $\vec{\bm v} = (\bm v, v_3) \in \mathcal{H}$ and consider the bilinear form $A: \mathcal{H} \times \mathcal{H} \to \mathbb{C}$ defined in the obvious way $$A(\vec{\bm u},\vec{\bm v}) = a(\bm u, \bm v) + b(u_3,v_3).$$ By letting $\vec{\bm u} = \vec{\bm u}_j$ we get $$\begin{aligned} \gamma_j A(\vec{\bm u}_j,\vec{\bm v}) = A \big( \mathcal{L}(\vec{\bm u}_j),\vec{\bm v} \big) = ({\bm u}_j,\bm v)_{L^2} + (u_{3,j},v_3)_{L^2} = \lin\vec{\bm u}_j,\vec{\bm v}\rin,\end{aligned}$$ where we used equations (\[eq:weak01\]) and (\[eq:weak1\]) and recall that $\lin \cdot, \cdot \rin$ is the inner product in $\big(L^2(\Omega)\big)^3$. This relation states that $\vec{\bm u}_j$ are weak eigenfunctions of the vectorial Laplacian, for eigenvalues $\lambda_j = \gamma_j^{-1}$. Finally, the expressions (\[eq:eigenvals\]) of the eigenvalues and (\[eq:eigvect1\])–(\[eq:eigvect4\]) of the eigenfunctions follow by direct calculation, i.e., by the method of separation of variables. See [@alonso2015electromagnetic section 3]. The eigenfunctions $\vec{\bm u}_j$ are denoted in the paper by $\vec \Phi_j^{(s)}(\bx)$, with index $s = 1, \ldots, m_j,$ the multiplicity of the eigenvalue $\lambda_j$.
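As a concrete check, for the rectangular cross-section the separation-of-variables eigenvalues take the form $\lambda_j = \pi^2\big(j_1^2/L_1^2 + j_2^2/L_2^2\big)$, and a mode propagates when $\lambda_j < k^2$. The short count below (a sketch that assumes this form, counts index pairs $j\in\N^2_0$ including $(0,0)$, and ignores the multiplicities $m_j$) reproduces the $648$ propagating modes quoted in section \[sect:num\] for $L_1 = 13.9\lambda$, $L_2 = 14.2\lambda$:

```python
import math

lam = 1.0                      # wavelength (all lengths in units of lambda)
L1, L2 = 13.9 * lam, 14.2 * lam
k = 2 * math.pi / lam          # wavenumber

# Count index pairs j = (j1, j2) in N_0^2 (including (0,0), an assumed
# convention) with eigenvalue lambda_j = pi^2 (j1^2/L1^2 + j2^2/L2^2)
# below k^2, i.e. whose mode propagates (beta_j real and nonzero).
count = 0
j1 = 0
while math.pi ** 2 * j1 ** 2 / L1 ** 2 < k ** 2:
    j2 = 0
    while math.pi ** 2 * (j1 ** 2 / L1 ** 2 + j2 ** 2 / L2 ** 2) < k ** 2:
        count += 1
        j2 += 1
    j1 += 1

print(count)  # number of propagating mode indices
```

Geometrically, the count is the number of lattice points in a quarter ellipse with semi-axes $2L_1/\lambda$ and $2L_2/\lambda$, which explains why it grows quadratically with the cross-section size.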
The reference field {#sect:REF} =================== Because the eigenfunctions $\vec \Phi_j^{(s)}(\bx)$ are an orthogonal basis, we can seek the solution $\vec{\bm E}^o$ of equations (\[eq:forward4\]) in the form $$\begin{aligned} \label{eq:reference} \vec{\bm{E}}^{o}(\vx) = \sum_{ j\in\N^2_0}\sum_{s=1}^{m_j} g^{(s)}_{j}(x_3) \vec\Phi^{(s)}_j(\bx),\end{aligned}$$ for each given $x_3< 0$. It remains to determine the coefficients $g_{j}^{(s)}(x_3).$ We substitute (\[eq:reference\]) in (\[eq:forward4\]) and, using the identities $$\begin{aligned} \curl\curl\left[g^{(1)}_{j}(x_3)\vec\Phi^{(1)}_j(\bm x)\right] &= [\lambda_j g^{(1)}_{j}(x_3) - \pa^2_{x_3}g^{(1)}_{j}(x_3)]\vec\Phi^{(1)}_j(\bx), \nonumber \\ \label{eq:formula} \curl\curl\left[g^{(2)}_{j}(x_3)\vec\Phi^{(2)}_j(\bm x)\right] &= -\pa^2_{x_3}g^{(2)}_{j}(x_3) \vec\Phi^{(2)}_j(\bx)- \lambda_j \pa_{x_3}g^{(2)}_{j}(x_3) \vec \Phi^{(3)}_j(\bx), \\ \curl\curl\left[g^{(3)}_{j}(x_3)\vec\Phi^{(3)}_j(\bm x)\right]&= \lambda_j g^{(3)}_{j}(x_3) \vec\Phi^{(3)}_j(\bx) + \pa_{x_3} g^{(3)}_{j}(x_3)\vec\Phi^{(2)}_j(\bx), \nonumber\end{aligned}$$ we get $$\begin{aligned} i \om \mu_o \vbJ(\bx) \delta(x_3+L) = \sum_{ j\in\N^2_0}\sum_{s=1}^{m_j} \Big\{ \big[ (\lambda_j-k^2) g_j^{(1)}(x_3) - \pa_{x_3}^2 g_j^{(1)}(x_3) \big] \vec \Phi_j^{(1)}(\bx) \delta_{s,1} \nonumber \\ +\big[ -(k^2 + \pa_{x_3}^2 ) g_j^{(2)}(x_3) + \pa_{x_3}g_j^{(3)}(x_3) \big] \vec \Phi_j^{(2)}(\bx) \delta_{s,2} \nonumber \\ + \big[ (\lambda_j - k^2) g_j^{(3)}(x_3) -\lambda_j \pa_{x_3}g_j^{(2)}(x_3) \big] \vec \Phi_j^{(3)}(\bx) \delta_{s,3} \Big\}.
\label{eq:NEW}\end{aligned}$$ The equations for $g_j^{(s)}(x_3)$ follow from (\[eq:NEW\]) and the orthogonality of the eigenfunctions, $$\begin{aligned} \pa_{x_3}^2 g^{(1)}_{j}(x_3) &= -(k^2-\lambda_j)g^{(1)}_{j}(x_3), \\ \pa_{x_3}^2 g^{(2)}_{j}(x_3) &= -(k^2-\lambda_j)g^{(2)}_{j}(x_3), \\ g^{(3)}_{j}(x_3) &= \frac{\lambda_j}{\lambda_j-k^2}\pa_{x_3} g^{(2)}_{j}(x_3), \quad x_3 \ne -L.\end{aligned}$$ The solution of these equations is $$\begin{aligned} \label{eq:g12} g^{(s)}_{j}(x_3) = a^{\pm(s)}_{j} e^{i\beta_jx_3} + b^{\pm(s)}_{j} e^{-i\beta_jx_3}, \quad \text{for } s=1,2, \end{aligned}$$ and $$\begin{aligned} \label{eq:g3} g^{(3)}_{j}(x_3) = \frac{\lambda_j}{\lambda_j-k^2}\left[i\beta_ja^{\pm(2)}_{j} e^{i\beta_jx_3} -i\beta_j b^{\pm(2)}_{j} e^{-i\beta_jx_3}\right],\end{aligned}$$ where $\pm$ stands for the right and left of the source. The amplitudes $a_j^{\pm (s)}$ and $b^{\pm (s)}_j$ have the expressions given in (\[eq:ampA1\])-(\[eq:ampB2\]). They are derived from the jump conditions at the source, $$\begin{aligned} -\left[\pa_{x_3}g^{(1)}_{j}\right]_{-L} &= \frac{i\omega\mu_o (\vec\Phi^{(1)}_j,\vbJ)}{\|\vec\Phi^{(1)}_j\|^2},\quad \left[g^{(1)}_{j}\right]_{-L} = 0,\\ -\left[\pa_{x_3}g^{(2)}_{j}\right]_{-L} + \left[g^{(3)}_{j}\right]_{-L} &= \frac{i\omega\mu_o(\vec\Phi^{(2)}_j,\vbJ)}{\|\vec\Phi^{(2)}_j\|^2}, \\ -\lambda_j \left[g^{(2)}_{j}\right]_{-L} &= \frac{i\omega\mu_o(\vec\Phi^{(3)}_j,\vbJ)}{\|\vec\Phi^{(3)}_j\|^2},\end{aligned}$$ the boundary conditions $ \vec{\bm e}_3 \times \vec{\bm{E}}^{o}|_{x_3 = 0} = 0$, which imply $$\begin{aligned} a^{+(1)}_{j} + b^{+(1)}_{j} = 0, \\ a^{+(2)}_{j} + b^{+(2)}_{j} = 0,\end{aligned}$$ and the radiation conditions $ a^{-(1)}_{j} = a^{-(2)}_j = 0$ for $x_3 < -L$. Derivation of the dyadic Green’s function {#sect:proofLem2} ========================================= It is straightforward to check that $\mathbb{G}$ given in satisfies equation (\[eq:Tensor2\]), provided that $\vec{G}_j$ satisfies .
To calculate $\vec{G}_j$, we make the following observations. On $\pa\Omega$, where $\vbn = (\bm n, 0)$, $$\begin{aligned} \vbn \times \vec\Phi^{(s)}_j = -\big[(\bm n^\perp,0)\cdot \vec\Phi^{(s)}_j\big] \vec{\bm{e}}_3 = 0, \text{ for } s=1,2, \text{ and } \vbn \times \vec\Phi^{(3)}_j = 0.\end{aligned}$$ Moreover, for a regular function $g(x_3)$ we have $$\begin{aligned} \vec\nabla[\vec\nabla \cdot(g(x_3)\vec \Phi^{(1)}_j(\bm x))] &= 0, \nonumber \\ \vec\nabla[\vec\nabla \cdot(g(x_3)\vec \Phi^{(2)}_j(\bm x))] &= -\lambda_j g(x_3) \vec \Phi^{(2)}_j(\bm x) - \lambda_j \pa_{x_3} g(x_3) \vec \Phi^{(3)}_j(\bm x), \\ \vec\nabla[\vec\nabla \cdot(g(x_3)\vec \Phi^{(3)}_j(\bm x))] &= \pa_{x_3} g(x_3) \vec \Phi^{(2)}_j(\bm x) +\pa_{x_3}^2 g(x_3) \vec \Phi^{(3)}_j(\bm x). \nonumber\end{aligned}$$ These observations imply that for all $s$ and $\vx = (\bx,x_3)$, with $\bx \in \pa \Omega$, $$\begin{aligned} \vbn(\vx) \times \vec\nabla[\vec\nabla \cdot \big(g(x_3)\vec\Phi^{(s)}_j(\bm x)\big)] = \vbn(\vx) \times \big[g(x_3)\vec \Phi^{(s)}_j(\bm x)\big] = 0.\end{aligned}$$ This allows us to seek $ \vec{G}_j(\cdot,\vby)$ as an expansion in the orthogonal basis $\{\vec\Phi^{(s)}_n(\bx)\}$ of eigenfunctions of the vectorial Laplacian $$\label{eq:G1} \vec{G}_j(\vx, \vby) = \sum_{ n\in\N^2_0}\sum_{s=1}^{m_n} g^{j(s)}_n(x_3,\vby) \vec\Phi^{(s)}_n(\bx),$$ because each term satisfies the required boundary conditions at $\pa \Omega$.
Substituting this expansion in the equation for $\vec{G}_j$ gives $$\sum_{ n\in\N^2_0}\sum_{s=1}^{m_n}[\pa_{x_3}^2 g^{j(s)}_n(x_3,\vby) + (k^2-\lambda_n) g^{j(s)}_n(x_3,\vby)] \vec\Phi^{(s)}_n(\bx) = \delta(\bx -\bm{y})\vec{\bm{e}}_j \delta(x_3-y_3),$$ and using the orthogonality of the eigenfunctions we obtain the following ordinary differential equations $$\pa_{x_3}^2 g^{j(s)}_n(x_3,\vby) + (k^2-\lambda_n) g^{j(s)}_n(x_3,\vby) = \frac{ \vec{\bm{e}}_j\cdot \vec\Phi^{(s)}_{n}(\bm{y})}{\|\vec\Phi^{(s)}_{n}\|^2}\delta(x_3-y_3).$$ The solutions of these equations, which satisfy the radiation condition for $x_3 < y_3$, are $$\begin{aligned} \label{eq:g} g^{j(s)}_n(x_3,\vby) = \begin{cases} (a^{(s)}_n e^{i\beta_nx_3} + b^{(s)}_ne^{-i\beta_nx_3})\vec{\bm{e}}_j\cdot \vec\Phi^{(s)}_{n}(\bm{y})/ \| \vec\Phi^{(s)}_{n}\|^2,\quad x_3>y_3,\\ c^{(s)}_ne^{-i\beta_nx_3} \vec{\bm{e}}_j\cdot \vec\Phi^{(s)}_{n}(\bm{y}) / \| \vec\Phi^{(s)}_{n}\|^2,\quad x_3<y_3. \end{cases}\end{aligned}$$ The coefficients $a^{(s)}_n, b^{(s)}_n$ and $c^{(s)}_n$ are determined by jump conditions at $y_3$ $$\begin{aligned} \nonumber g^{j(s)}_n(y_3^+,\vby) - g^{j(s)}_n(y_3^-,\vby) &= 0, \\ \label{eq:condition2} \pa_{x_3}g^{j(s)}_n(y_3^+,\vby)- \pa_{x_3}g^{j(s)}_n(y_3^-,\vby) &= \frac{\vec{\bm{e}}_j\cdot \vec\Phi^{(s)}_{n}(\bm{y})}{\|\vec\Phi^{(s)}_{n}\|^2}, \end{aligned}$$ and at $x_3 = 0$, $$\label{eq:condition3} \vec{\bm{e}}_3 \times (k^2+ \vec\nabla \divv)\vec{\bm G}_j(\vx,\vby) = 0.$$ The jump conditions (\[eq:condition2\]) imply $$\begin{aligned} \nonumber a^{(s)}_n e^{i\beta_ny_3} + b^{(s)}_n e^{-i\beta_ny_3} - c^{(s)}_n e^{-i\beta_ny_3} =0, \\ \label{eq:eq2} a^{(s)}_n e^{i\beta_ny_3} - b^{(s)}_n e^{-i\beta_ny_3} + c^{(s)}_n e^{-i\beta_ny_3} = \frac{1}{i\beta_n}.\end{aligned}$$ For the boundary condition (\[eq:condition3\]), we need the formulae $$\begin{aligned} \vec{\bm{e}}_3 \times \vec\Phi^{(1)}_n &= \left( \begin{matrix} \frac{\pi n_1}{L_1} \sin \left(\frac{\pi n_1x_1}{L_1}\right) \cos \left(\frac{\pi n_2x_2}{L_2}\right) \\ \frac{\pi n_2}{L_2}\cos \left(
\frac{\pi n_1x_1}{L_1}\right) \sin \left( \frac{\pi n_2x_2}{L_2}\right) \\ 0 \end{matrix}\right), \\ \vec{\bm{e}}_3 \times \vec\Phi^{(2)}_n &= \left( \begin{matrix} -\frac{\pi n_2}{L_2} \sin \left(\frac{\pi n_1x_1}{L_1}\right) \cos \left(\frac{\pi n_2x_2}{L_2}\right) \\ \frac{\pi n_1}{L_1}\cos \left( \frac{\pi n_1x_1}{L_1}\right) \sin \left( \frac{\pi n_2x_2}{L_2}\right) \\ 0 \end{matrix}\right), \\ \vec{\bm{e}}_3 \times \vec\Phi^{(3)}_n &= 0,\end{aligned}$$ and $$\begin{aligned} \vec{\bm{e}}_3 \times\vec\nabla[\vec\nabla \cdot(g^{j(1)}_n(x_3)\vec\Phi^{(1)}_n(\bm x))] &= 0, \\ \vec{\bm{e}}_3 \times \vec\nabla[\vec\nabla \cdot(g^{j(2)}_n(x_3)\vec\Phi^{(2)}_n(\bm x))] &= -\lambda_n g^{j(2)}_n(x_3) \left( \begin{matrix} -\frac{\pi n_2}{L_2} \sin \left(\frac{\pi n_1x_1}{L_1}\right) \cos \left(\frac{\pi n_2x_2}{L_2}\right) \\ \frac{\pi n_1}{L_1}\cos \left( \frac{\pi n_1x_1}{L_1}\right) \sin \left( \frac{\pi n_2x_2}{L_2}\right) \\ 0 \end{matrix}\right), \\ \vec{\bm{e}}_3 \times\vec\nabla[\vec\nabla \cdot(g^{j(3)}_n(x_3)\vec\Phi^{(3)}_n(\bm x))] &= \pa_{x_3} g^{j(3)}_n(x_3) \left( \begin{matrix} -\frac{\pi n_2}{L_2} \sin \left(\frac{\pi n_1x_1}{L_1}\right) \cos \left(\frac{\pi n_2x_2}{L_2}\right) \\ \frac{\pi n_1}{L_1}\cos \left( \frac{\pi n_1x_1}{L_1}\right) \sin \left( \frac{\pi n_2x_2}{L_2}\right) \\ 0 \end{matrix}\right).\end{aligned}$$ Substituting these in (\[eq:condition3\]) we get $$\begin{aligned} g_n^{j(1)}(0,\vby) = 0 \text{ and } (k^2-\lambda_n)g^{j(2)}_n(0,\vby) +\pa_{x_3}g^{j(3)}_n(0,\vby) = 0,\end{aligned}$$ or, equivalently, $$\begin{aligned} \label{eq:eq3p} a_n^{(1)} + b_n^{(1)} = 0,\end{aligned}$$ and $$\begin{aligned} (k^2-\lambda_n) \big(a_n^{(2)} + b_n^{(2)} \big) \frac{\vec{\bm e}_j \cdot \vec \Phi_n^{(2)}(\vby)}{\|\vec \Phi_n^{(2)}\|^2} + i \beta_n \big(a_n^{(3)} - b_n^{(3)} \big) \frac{\vec{\bm e}_j \cdot \vec \Phi_n^{(3)}(\vby)}{\|\vec \Phi_n^{(3)}\|^2} &= 0.\label{eq:eq3}\end{aligned}$$ We now have a linear system of eight equations (\[eq:eq2\]), (\[eq:eq3p\]) and
(\[eq:eq3\]) for the nine unknowns $a_n^{(s)}$, $b_n^{(s)}$ and $c_n^{(s)}$. The system is underdetermined, so $\vec{\bm G}_j$ is not uniquely defined. However, $\mathbb{G}(\cdot, \vby)$ given by  is unique, because a straightforward computation shows that the coefficients with $s = 2$ or $3$ appear only in the combinations $$(b_n^{(3)} + i\beta_nb_n^{(2)}) \left( \vec\Phi^{(2)}_n + \frac{i\lambda_n}{\beta_n} \vec\Phi^{(3)}_n(\bm x) \right) e^{-i\beta_nx_3}$$ and $$(a_n^{(3)} - i\beta_n a_n^{(2)}) \left(\vec\Phi^{(2)}_n - \frac{i\lambda_n}{\beta_n} \vec\Phi^{(3)}_n(\bm x) \right) e^{i\beta_nx_3}.$$ Thus, we can calculate the most convenient solution of the underdetermined system (\[eq:eq2\]), (\[eq:eq3p\]) and (\[eq:eq3\]), corresponding to $a_n^{(3)} = b_n^{(3)}$. This gives $\pa_{x_3} g_n^{j(3)}(0) = 0.$ The expression of $\vec{\bm G}_j$ in Lemma \[lem.2\] follows. [^1]: Department of Mathematics, University of Michigan, Ann Arbor, MI, 48109, USA; `[email protected] and [email protected]`
--- abstract: 'A hydrodynamic model for the energy transport between the components of a contact binary is presented. Energy is transported by a large-scale, steady circulation carrying high entropy matter from the primary to the secondary component. The circulation is driven by the baroclinic structure of the common envelope, which is a direct consequence of the nonuniform heating at the inner critical Roche lobes due to unequal emergent energy fluxes of the components. The mass stream flowing around the secondary is bound to the equatorial region by the Coriolis force and its width is determined primarily by the flow velocity. Its bottom is separated from the underlying secondary’s convection zone by a radiative transition layer acting as an insulator. For a typically observed degree of contact the heat capacity of the stream matter is much larger than the radiative losses during its flow around the secondary. As a result, its effective temperature and entropy decrease very little before it returns to the primary. The existence of the stream changes only insignificantly the specific entropies of both convective envelopes and the sizes of the components. The substantial oversize of the secondaries, required by the Roche geometry, cannot be explained in this way. The situation can, however, be explained by assuming that the primary is a main sequence star whereas the secondary is in an advanced evolutionary stage with hydrogen depleted in its core. Such a configuration is reached past mass transfer with mass ratio reversal. Good agreement with observations is demonstrated by model calculations applied to actual W UMa-type binaries. In particular, the presence of an equatorial bulge moving with a relative velocity of 10-30 km s$^{-1}$ around both components of AW UMa is accounted for.' author: - | K. Stȩpień$^{1}$[^1]\ $^{1}$Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland date: 'Accepted –.
Received – ; in original form –' title: Large scale circulations and energy transport in contact binaries --- \[firstpage\] stars: contact – stars: eclipsing – stars: binary – stars: evolution Introduction {#sect:intro} ============ Contact binaries have been defined by @kui41 as binaries with components surrounded by a common envelope. Within the Roche approximation for the total potential in a rotating frame of reference, the envelope lies between the inner and outer critical equipotential surfaces, defined respectively by the Lagrangian points L$_1$ and L$_2$. Cool contact binaries with spectral types later than F0 are called W UMa type binaries [@moch81]. W UMa type stars are fairly common in space. @ruc02 estimates that one such binary occurs per 500 main sequence stars in the solar vicinity, but these stars do not appear among members of young and intermediate age clusters. Their occurrence rapidly increases in old open and globular clusters [@kr93; @ruc98; @ruc00]. This result indicates that cool contact configurations are not formed at, or near, the zero age main sequence (ZAMS) but at a substantially later age. As is now commonly assumed, they are formed from initially detached binaries which lose orbital angular momentum (AM) at such a rate that contact is reached after several Gyr. Kinematically, field W UMa stars belong to the old disk, which supports their advanced age [@gb88; @bil05]. The most promising mechanism for AM loss is related to the chromospheric-coronal activity of the binary components [@vil82; @moch85; @ste95], although a high incidence of companions to W UMa stars [@ruc07] suggests that a third body may also play a role in orbit tightening [@egg01]. @kui41 noted that a contact binary is stable only when both components are identical.
Otherwise, a binary consisting of two stars with the same composition is unstable and mass will be transferred inside the common envelope from the more massive (primary) to the less massive (secondary) component. The transfer is driven by a nonuniform heating of the base of the common envelope due to unequal emergent luminosities at the inner critical surface. By continuity, temperature differences exist on each equipotential surface above the Roche lobes. As a result, the common envelope must be treated as baroclinic rather than barotropic [@shu79; @tas92]. Different vertical (i.e. perpendicular to the local equipotential surface) pressure stratifications in both components produce the horizontal pressure gradient driving mass motions on a dynamical time scale. Even if the pressure is constant over a given equipotential surface, a difference appears above and below that surface [@kah04]. In the absence of the Coriolis force the resulting large scale flows are symmetric around the axis joining the centers of both stars [@web76]. However, when the flow velocity is a significant fraction of the sound velocity, the Coriolis force cannot be neglected. The force acts outwards in the equatorial plane, deflects the flow towards one side of the neck between the components and makes it go around the other star in the direction of its orbital motion. Figure 1 shows the geometry of both critical Roche lobes and of the flow in the equatorial plane (see also Fig. 4 in @kui41). Subsequent computations of the streamlines in semidetached binaries confirmed a strong influence of the Coriolis force (see e.g. @ls75, @ls76, @oka02). When the flow returns to the parent star, it is directed to the other side of the neck (Fig. 1). Together with mass, thermal energy is carried to the other component, so we should expect a significant modification of the apparent surface brightness of both stars, compared to a detached binary.
Indeed, observations of W UMa type stars show eclipse minima of nearly equal depth, indicating almost the same surface brightness of both components. This means that, unless the mass ratio is close to one, a substantial fraction of the flux radiated by the secondary component comes from the primary. A very elegant model of a W UMa type binary consisting of two main sequence stars was developed by @lucy68 [@lucy76] and @flan76 (see also @ye05 and references therein). The model assumes that each component is out of thermal equilibrium, with a radius oscillating around its critical Roche lobe (inner critical surface). The model requires that matter transported from the primary fully covers the secondary like a hot blanket. The blanket blocks completely the radiation energy flux produced in the core of the secondary. The blocked flux is converted into the thermal energy of the secondary, which expands on the stellar thermal time scale. The specific entropy in its convection envelope increases, approaching the value corresponding to the primary’s convection zone, which, at the same time, decreases somewhat due to the energy transfer. When the expanding secondary overflows its Roche lobe, mass and energy transfer from the primary is stopped and the star shrinks, radiating away the excess thermal energy until the cycle of these thermal relaxation oscillations (TRO) repeats. The TRO model successfully explained two important properties of W UMa type stars: an abnormal radius ratio of the components (assumed to be MS stars), required by the Roche geometry, and the essentials of the observed light curves. In this paper, calculations are presented that reveal difficulties with the TRO model. In particular, it will be shown below that the Coriolis force is strong enough to balance the meridional pressure gradient of the circulation current. As a result, the stream of matter from the primary is bound to the equatorial belt of the secondary.
The polar regions are not covered by the stream so the star can freely radiate away its nuclear energy. As a result, the specific entropy in its convection zone increases insignificantly and remains considerably below the value characteristic of the primary convection zone. Its radius also hardly increases, which excludes MS secondaries oversized by a factor of 2-3, as observed in many W UMa stars [@ste06a]. Substantially oversized secondaries can, however, be naturally explained by assuming that, instead of being MS objects, they are highly evolved stars with hydrogen depleted in the center or even possessing small helium cores [@pacz07]. The possibility of cool contact binaries with secondaries in a more advanced evolutionary stage than primaries has been mentioned in a number of papers [@tw75; @sar89; @egg96], but such systems were considered to be exceptions from the general rule stating that W UMa stars do not have reversed mass ratios. An evolutionary model with mass ratio reversal as a [*necessary*]{} condition for forming a W UMa type binary was proposed by @ste04 [@ste06a; @ste06b]. The starting configuration for such a model is a close detached binary with an initial orbital period of a couple of days. The binary loses AM via a magnetized wind from both components. The AM loss rate varies approximately as $M^{-3}$, where $M$ is the stellar mass [@gs08]. This dependence is very similar to the mass dependence of the MS evolutionary time scale. Because of this coincidence the primary is expected to be close to, or just beyond, the terminal age MS (TAMS) at the time when the Roche lobe descends onto its surface and the mass transfer to the secondary begins. Conservative, or nearly conservative, mass transfer makes the orbit shrink, which accelerates the transfer rate. The process stops after the majority of the primary’s mass has been transferred and the mass ratio is reversed (i.e. the former secondary component becomes now the primary).
Depending on the detailed values of the binary parameters and on the mass transfer process, a contact binary or a very short period Algol-type star emerges [@ste06a]. In the latter case an additional AM loss via the magnetized wind is needed to turn the Algol into a contact binary. After receiving hydrogen-rich material from the secondary, the primary moves upward toward the ZAMS (corresponding to its larger mass), whereas the secondary becomes substantially oversized for its (lower) mass. Each star separately is in thermal equilibrium while filling its critical Roche lobe. Further evolution of the binary is governed by a self-regulating mechanism with two processes acting in opposite directions: the evolutionary expansion of the secondary, followed by mass transfer to the primary, which widens the orbit, and the orbital AM loss, which tightens it. As a result, a contact configuration is maintained with a slow (on a nuclear evolution time scale) net mass transfer from the secondary to the primary [@gs08] until coalescence occurs when the mass ratio reaches a critical value [@ras95]. The model explains the Kuiper paradox, i. e. the existence of an equilibrium configuration with component sizes required by the Roche geometry. It does not, however, explain the observed properties of the light curves. The energy transport between the components is not a part of that model. It is an independent process which will be considered in detail in the present paper. The paper is organized as follows: basic assumptions and equations governing the mass and energy flow are given in Sect. 2. The stream structure across and along the flow is discussed in Sect. 3, together with an application of the derived formulas to two actual contact systems. Section 4 describes the global reaction of both components to the circulation and Sect. 5 discusses and summarizes the main results of the paper.
Equations and assumptions
=========================

The basic Eulerian equations of fluid flow in a frame of reference rotating with the binary are: the continuity equation $$\frac{\partial\rho}{\partial t} + \mathbf{\nabla\cdot}(\rho\mathbf{v}) = 0\,,$$ the momentum equation $$\frac{\partial \mathbf{v}}{\partial t} + \mathbf{(v\cdot\nabla)v} = \frac{1}{\rho}\mathbf{\nabla}p + \nu\nabla^2\mathbf{v} - \mathbf{\nabla}\varphi - 2\mathbf{\Omega\times v}\,,$$ and the entropy equation $$\rho T\left[\frac{\partial s}{\partial t} + (\mathbf{v\cdot\nabla)}s\right] = - \mathbf{\nabla\cdot F} + d_{\mathrm{visc}}\,.$$ Here $\rho, p, s$ and $\mathbf{v}$ denote the gas density, pressure, specific entropy and velocity, $\varphi$ is the gravitational (plus centrifugal) potential, $\nu$ is the kinematic viscosity coefficient, $d_{\mathrm{visc}}$ is the viscous dissipation term, $\mathbf{F}$ is the radiative energy flux and $\Omega$ is the orbital angular velocity. In the momentum equation the viscous term with $\mathbf{\nabla(\nabla\cdot v)}$ has been left out as unimportant (see below). Assuming that the life-time of the cool contact configuration is of the order of one or more Gyr, we consider a stationary solution of the above set of equations on a time scale longer than the thermal time scale. We put, then, $\partial/\partial t \equiv 0$. We assume that a steady mass flow from the hotter primary to the secondary exists in the common envelope. The assumption of steady motion eliminates any dynamical instabilities, generation of acoustic waves or any other processes taking place on a very short time scale. After leaving the neck between the components the mass stream quickly assumes the azimuthal motion, with the hydrostatic equilibrium condition satisfied in the vertical and meridional directions. Let us first consider an inviscid motion. The inertial term on the left hand side of Eq.
(2) is of the order of $v^2/l = v^2/(2\pi R_{\mathrm{sec}})$, as the flow driving force results from the azimuthal pressure gradient. It can be compared to the Coriolis term, which is of the order of $2\Omega v$. Their ratio is called the Rossby number $Ro$ $$Ro = \frac{v^2/(2\pi R_{\mathrm{sec}})}{2\Omega v} \approx \frac{v}{4\pi v_{\mathrm{eq}}}\,,$$ where $v_{\mathrm{eq}}$ is the equatorial velocity of the rotating star. In a typical W UMa type star $v_{\mathrm{eq}} \approx 100-150$ kms$^{-1}$, so as long as the flow is not highly supersonic the Rossby number is small, i.e. $Ro \ll 1$, and we can safely neglect the inertial term. The equation of motion then becomes $$\frac{1}{\rho}\mathbf{\nabla}p = 2\mathbf{\Omega\times v} + \mathbf{\nabla}\varphi\,,$$ where ${\bf v}$ has only one non-vanishing (azimuthal) component $v$. The above equation describes the so-called geostrophic flow, in which the meridional component of the Coriolis force balances the lateral pressure gradient, perpendicular to the direction of motion. Such flows are often observed in the terrestrial atmosphere when air circulates around a high or low pressure center. In our case, the Coriolis force confines the mass flow to an equatorial region whose width depends primarily on the flow velocity (see Sect. 3.4 below).

The stream structure
====================

Lateral equilibrium
-------------------

We introduce a spherical coordinate system ($r, \vartheta, \phi$) with the origin at the center of the secondary star and we neglect the deviation of the equipotential surface shapes from spheres (Fig. 2). Because the stream is symmetric relative to the equator, we consider only the hemisphere $0 \le \vartheta \le \pi/2$. Far from the neck matter flows along the equator of the secondary in a belt with a half-width $\Delta x = R_{\mathrm{sec}}(\pi/2 - \vartheta_{\mathrm{f}}(r))$, where $\vartheta_{\mathrm{f}}(r)$ is the depth-dependent polar angle of the flow boundary.
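As a quick numerical illustration of the Rossby-number estimate in Eq. (4), the sketch below (with illustrative parameter values that are assumptions, not fitted data) shows that even a sonic flow around a typical W UMa secondary remains firmly in the geostrophic regime:

```python
import math

# Illustrative values for a typical W UMa secondary (assumptions, not a model):
v_eq = 120e5             # equatorial rotation velocity, cm/s (120 km/s)
c_s = 30e5               # sound velocity at the stream depth, cm/s
R_sec = 0.78 * 6.96e10   # secondary radius, cm

Omega = v_eq / R_sec     # angular velocity for synchronous rotation, 1/s

def rossby(v):
    """Ro = (v^2 / (2*pi*R_sec)) / (2*Omega*v) = v / (4*pi*v_eq), Eq. (4)."""
    return v**2 / (2 * math.pi * R_sec) / (2 * Omega * v)

# Even a sonic flow (k = 1) gives a small Rossby number:
Ro_sonic = rossby(c_s)       # ~0.02
Ro_sub = rossby(0.3 * c_s)   # ~0.006 for k = 0.3
```

The inertial term is thus one to two orders of magnitude below the Coriolis term for any subsonic flow.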
In general, pressure inside the flow is a function of all three coordinates, but within our assumptions it is a slowly varying function of the azimuthal coordinate, so it can be assumed constant when considering equilibrium in the radial and meridional directions. The equilibrium conditions result from Eq. (5): the meridional component $$\frac{1}{r}\frac{\partial p}{\partial\vartheta} = 2\rho kc_s\Omega \cos\vartheta\,,$$ and the radial component $$\frac{\partial p}{\partial r} = 2\rho kc_s\Omega\sin\vartheta +\rho\frac{\mathrm{d}\varphi}{\mathrm{d}r}\,.$$ Here $c_s$ and $k$ are the sound velocity and the Mach number of the flow, respectively. We assume $k \le 1$ throughout the paper. Eq. (6) can be integrated to give $$p(\vartheta) = p_o -2r\rho kc_s\Omega(1 - \sin\vartheta)\,,$$ where $p_o$ is the pressure at the equator. The ambient pressure under the secondary’s photosphere is equal to $p_{\mathrm{sec}}$, so the boundary will be reached when $p(\vartheta_{\mathrm{f}}) = p_{\mathrm{sec}}$. This gives a condition for $\vartheta_{\mathrm{f}}$ $$\sin\vartheta_{\mathrm{f}}= 1 - \frac{p_o - p_{\mathrm{sec}}} {2\rho kc_sv_{\mathrm{eq}}}\,.$$ For parameter values characteristic of W UMa stars, typical stream half-widths are around 15$^{\mathrm o}$ - 60$^{\mathrm o}$ (see Sect. 3.4). Above the photosphere $p_{\mathrm{sec}} \equiv 0$. With a total thickness $\Delta r \ll R_{\mathrm{sec}}$ the stream layer is thin, so we can replace the coordinate $r$ appearing explicitly in Eq. (8) by $R_{\mathrm{sec}}$. If we now include the effects of viscosity, they will produce a drift of the stream matter towards high latitudes. Molecular viscosity is negligibly small, but the convective viscosity may play a role. We can estimate a possible drift resulting from the convective viscosity $$\nu = \frac{1}{3}l_{\mathrm{conv}}v_{\mathrm{conv}}\,,$$ where $l_{\mathrm{conv}}$ and $v_{\mathrm{conv}}$ are the mixing length and convective velocity near the bottom of the stream.
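Equation (9) can be evaluated directly. The snippet below is a minimal sketch: the density and the assumed 10% lateral pressure excess are illustrative stand-ins, not values from a stellar model, but they show how a slower flow spreads to higher latitudes:

```python
import math

def halfwidth_deg(dp, rho, k, c_s, v_eq):
    """Angular half-width of the stream from Eq. (9):
    sin(theta_f) = 1 - (p_o - p_sec) / (2 rho k c_s v_eq)."""
    s = 1.0 - dp / (2.0 * rho * k * c_s * v_eq)
    if s <= 0.0:      # pressure excess too large: the flow reaches the pole
        return 90.0
    return 90.0 - math.degrees(math.asin(s))

# Illustrative numbers (assumed, not from a convection-zone model):
c_s, v_eq = 30e5, 120e5               # cm/s
rho = 1e-6                            # g/cm^3 at the chosen depth
k = 0.3                               # Mach number of the flow
dp = 0.1 * 2 * rho * k * c_s * v_eq   # assume a 10% lateral pressure excess

w = halfwidth_deg(dp, rho, k, c_s, v_eq)   # ~26 degrees
```

Lowering $k$ at a fixed pressure excess widens the belt, in line with the limit of a zero-velocity flow covering the whole star discussed in Sect. 3.4.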
Assuming that $l_{\mathrm{conv}}$ is of the order of the pressure scale height near the bottom of the flow, one obtains from a model of the convection zone $H_p \approx 10^9$ cm and $v_{\mathrm{conv}} \approx 2\times 10^4$ cms$^{-1}$, so $\nu \approx 7\times 10^{12}$ in cgs units [@bt66; @spr74]. The resulting viscous term in the momentum equation is of the order of $\nu v/\Delta x^2$. For $v \approx 0.3c_s$ and $\Delta x \approx R_{\mathrm{sec}}/2$ this term is equal to $2\times 10^{-2}$ cms$^{-2}$. It takes about 3 d for a given flow element to encircle the secondary, assuming a typical flow velocity about 10 times slower than $v_{\mathrm{eq}}$ (see below). The total drift distance of the stream in the meridional direction is then equal to $\sim 2\times 10^3$ km, which is negligible compared to the stream width. We conclude that viscosity can be neglected altogether when considering the dynamics of the mass flow around the secondary.

Vertical equilibrium
--------------------

A typical W UMa type binary consists of two unequal mass components possessing convective envelopes. Except for stars with spectral type around F0, where the convection zone is very thin and mostly super-adiabatic, its dominant part is stratified nearly adiabatically, i. e. according to the pressure-temperature relation $$p=KT^{\alpha}\,,$$ where $T$ is temperature, $\alpha$ = 2.5 for fully ionized gas and $K$ = const. throughout the considered part of the convection zone. The parameter $K$ is called the adiabatic constant and its value depends on the specific entropy of the matter in the convective zone. Realistic models of the convective envelope take into account all important effects influencing its stratification, like the variability with depth of $K, \alpha$, and the specific heats, as well as deviations from strict adiabaticity [@bt66; @spr74]. However, most of these effects take place in the uppermost layers of the convection zone where the pressure is low.
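The order-of-magnitude viscous-drift estimate above can be reproduced in a few lines. The inputs are the values quoted in the text, and the drift is the simple kinematic estimate $\frac{1}{2}at^2$, so only the orders of magnitude are meaningful:

```python
import math

# Values quoted in the text (cgs units):
H_p = 1e9                  # mixing length ~ pressure scale height, cm
v_conv = 2e4               # convective velocity, cm/s
nu = H_p * v_conv / 3.0    # Eq. (10): ~7e12 cm^2/s

c_s = 30e5                 # sound velocity at the stream depth, cm/s
v = 0.3 * c_s              # flow velocity for k = 0.3
R_sec = 0.78 * 6.96e10     # secondary radius, cm
dx = R_sec / 2.0           # lateral stream scale

a_visc = nu * v / dx**2    # viscous acceleration, ~1e-2 cm/s^2

t_circ = 2 * math.pi * R_sec / v   # time to encircle the secondary, a few days
drift = 0.5 * a_visc * t_circ**2   # meridional drift over one circuit, cm
```

The drift comes out at a few thousand kilometers, two orders of magnitude below the stream half-width, confirming that viscosity is dynamically negligible.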
Because we are interested in the pressure equilibrium at some depth below the stellar surface, the detailed structure of these shallow layers is unimportant. As accurate models of the convection zones of solar-type stars indicate, the stratification becomes very close to adiabatic already for temperatures higher than about $10^4$ K. We adopt the adiabatic stratification for the convective envelopes of both components, with different values of the adiabatic constant. Matter flowing from the primary is characterized by the same adiabatic constant $K_{\mathrm{pr}}$ as the convection layer of the primary. The convection layer of the secondary is described by $K_{\mathrm{sec}}$. In general, $K_{\mathrm{sec}} > K_{\mathrm{pr}}$, since a higher specific entropy corresponds to a lower adiabatic constant (see Eq. 22). Figure 3 gives an example of two pressure-temperature relations corresponding to two adiabatic constants differing by 0.5 dex. The upper line in each diagram gives the $p-T$ relation for the stream and the lower line for the ambient convection layer of the secondary. The vertical hydrostatic equilibrium condition requires continuity of the pressure at the boundary between the stream and the underlying convective layer not affected by the flow. Because the secondary convection zone has a lower specific entropy than the stream, a discontinuity in entropy and temperature must occur at this boundary (a vertical segment of the solid line in Fig. 3, top). A steady state model with a contact discontinuity of this kind was considered by @shu76 [@shu79]. They assumed that matter streaming from the primary fully covers the secondary component of a contact binary. They tried to build a model in which the nuclear energy of the secondary can flow outwards in spite of the discontinuity. @haz78 argued, however, that such a model contradicts the second law of thermodynamics, because the nuclear energy of the secondary cannot flow from the low to the high entropy medium above.
Instead, the nuclear energy will heat the convection zone on a thermal time scale, raising its specific entropy to the value characteristic of the blanketing matter, just as assumed in the TRO model. The situation considered in the present paper is, however, entirely different. The streaming matter now covers only a part of the secondary. There is no need to introduce a contact discontinuity. The hot stream matter heats the layer lying underneath and lowers the temperature gradient there. As a result, the convective energy flow from below is inhibited. In steady state a radiative layer is formed with a low temperature gradient and a specific entropy increasing outwards from the value characteristic of the secondary convection zone up to the value characteristic of the stream gas. We will call it a transition layer. Such a layer, with the temperature gradient equal to zero, is shown in Fig. 3 (bottom) as a horizontal segment of the solid line. In real stars the gradient will probably not be exactly equal to zero, so the transition layer will be thicker than shown. The heat transport is completely, or nearly completely, blocked in the transition layer, which acts as an insulator. The energy blocked by the stream will be redistributed over the whole convection zone of the secondary and ultimately radiated away from the polar regions not covered by the stream matter (see Section 4). To consider the vertical structure of the stream in more detail we need to calculate $p_o - p_{\mathrm{sec}}$ as a function of depth. Because the depth of the layers in question is much smaller than the stellar radius, we can use the plane parallel approximation with the coordinate $z$ replacing $r$ (Fig. 2). Let us assume that the secondary convection zone remains unaffected by the stream below the bottom of the transition layer, and let us put the reference level $z = 0$ there. The pressure is constant over the whole equipotential surface lying at that level. We denote it by $p_b$.
Within the isothermal transition layer the pressure varies as $$p(z)=p_b\exp(-z/H_p),\quad H_p = \frac{k_{\mathrm{B}}T_o}{\mu\mathrm{H}g_{\mathrm{eff}}},\quad 0\le z\le z_o\,,$$ where $\mu$ is the mean molecular weight, H – the mass of the hydrogen atom, $k_{\mathrm{B}}$ – the Boltzmann constant, $T_o$ – the temperature of the layer, $g_{\mathrm{eff}}$ – the effective gravity, and $z_o$ corresponds to the bottom of the stream, which coincides with the top of the transition layer. The pressure is continuous at the interface between the transition layer and the adiabatically stratified stream, where the temperature varies as $$T(z) = T_o - \frac{g_{\mathrm{eff}}}{c_p}(z - z_o),\quad z\ge z_o\,,$$ where $c_p$ is the specific heat at constant pressure. When calculating the $T(z)$ relation we neglect the variability of the effective gravity. With $T(z)$ known, the vertical pressure stratification is obtained from Eq. (11) with $K = K_{\mathrm{str}}$. As will be shown below, the entropy of the stream matter varies very little during the flow around the secondary, so we can put $K_{\mathrm{str}} \approx K_{\mathrm{pr}}$ everywhere. It is reasonable to assume that in steady state the transition layer will be partially dragged by the stream, in such a manner that the gas layer lying immediately below the stream moves with the stream velocity and deeper layers move more slowly, until zero velocity is attained at the bottom of the transition layer. We arbitrarily assume that the velocity inside the transition layer varies as $$v(z) = kc_s\frac{\rho_b - \rho(z)}{\rho_b - \rho_t}\,,$$ where $\rho_b$ and $\rho_t$ are the densities at the bottom and at the top of the transition layer. The velocity thus defined vanishes at the bottom and reaches the stream velocity at the top. Far from the stream the convection zone of the secondary remains unperturbed, with the adiabatic stratification.
Neglecting the difference between $c_p$ in the stream and in the ambient convection zone we have $$T(z) = T_o - \frac{g_{\mathrm{eff}}}{c_p}z\,.$$ The pressure stratification is obtained again from Eq. (11), except that now $K = K_{\mathrm{sec}}$. Figure 4 shows an example of the vertical pressure distribution inside the transition layer and the stream with a depth equal to 1 % of the primary’s radius (broken line), in the unperturbed secondary convection zone (dotted line), and their difference (solid line). As seen from the figure, the stream extends above the surface of the secondary and produces an equatorial bulge held by the meridional component of the Coriolis force (see also Fig. 5).

The thermal stream structure along the flow
-------------------------------------------

With the depth and width of the stream specified, the mass transfer rate $F_{\rho}$ can be calculated $$F_{\rho} = 2\Delta x\int_{0}^{\Delta r}kc_s\rho{\mathrm d}r = 2\Delta xkc_s\overline{M}_{\Delta r}\,,$$ where $\overline{M}_{\Delta r}$ is the column mass above the level $\Delta r$. The stream also carries thermal energy, which is partly radiated away during its flow around the secondary. We will now discuss the variation of the stream thermal structure along the flow. Let us consider a slice of matter perpendicular to the flow with thickness $R_{\mathrm{sec}}{\mathrm{d}}\phi$. Its total heat content d$Q$ (the enthalpy, i.e. internal energy plus pressure work) is equal to $${\mathrm d}Q=2\Delta xR_{\mathrm{sec}}{\mathrm d}\phi\int_{0}^{\Delta r} \rho c_pT{\mathrm d}r = 2\Delta xR_{\mathrm{sec}}c_p\overline{MT}_{\Delta r}{\mathrm d}\phi\,,$$ where $$\overline{MT}_{\Delta r} = \int_{0}^{\Delta r}\rho T{\mathrm d}r\,.$$ The matter flowing around the secondary radiates energy at a rate $\sigma T_{\mathrm{str}}^4$ per unit time and area, where $T_{\mathrm{str}}$ is the effective temperature of the stream.
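The vertical structure defined by Eqs. (11)-(15) is easy to sketch numerically. The parameter values below (base temperature, gravity, transition-layer thickness, normalization of $K_{\mathrm{sec}}$) are illustrative assumptions rather than the model behind Fig. 4, but they reproduce its qualitative result: a pressure excess of the stream column over the ambient convection zone, which supports the equatorial bulge:

```python
import math

# Illustrative parameters (assumed, not the model values behind Fig. 4):
alpha = 2.5                  # p = K T^alpha, fully ionized gas (Eq. 11)
T_o = 2.0e5                  # temperature at the base z = 0, K
g_eff = 2.7e4                # effective gravity, cm/s^2
mu, mH, kB = 0.6, 1.6726e-24, 1.3807e-16
cp = 2.5 * kB / (mu * mH)    # specific heat at constant pressure, ideal gas

H_p = kB * T_o / (mu * mH * g_eff)   # isothermal scale height (Eq. 12)
z_o = 0.3 * H_p                      # assumed transition-layer thickness

K_sec = 1.0                          # arbitrary normalization
p_b = K_sec * T_o**alpha             # pressure at z = 0

# Stream adiabatic constant fixed by pressure continuity at z = z_o:
K_str = K_sec * math.exp(-z_o / H_p)   # K_str < K_sec: higher-entropy stream

def p_ambient(z):
    """Unperturbed secondary convection zone: adiabat (Eqs. 11, 15)."""
    return K_sec * (T_o - g_eff / cp * z) ** alpha

def p_stream_column(z):
    """Isothermal transition layer (Eq. 12), then stream adiabat (Eqs. 11, 13)."""
    if z <= z_o:
        return p_b * math.exp(-z / H_p)
    return K_str * (T_o - g_eff / cp * (z - z_o)) ** alpha

# Above the transition layer the stream column keeps a pressure excess:
z = 2.0 * H_p
excess = p_stream_column(z) - p_ambient(z)   # positive: the equatorial bulge
```

The excess arises because the isothermal transition layer loses pressure more slowly with height than the cooler ambient adiabat.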
In general, $T_{\mathrm{str}}$ will decrease along the flow from its initial value, assumed to be close to the effective temperature of the primary, $T_{\mathrm{pr}}$; its precise variation could be determined by solving the transfer equation within the stream. To avoid cumbersome calculations we discuss two limiting cases: first, when the total heat content of the stream is much higher than the total energy radiated away, and then, when the heat content is comparable to the radiated energy. The slice has a radiating area $2\Delta xR_{\mathrm{sec}}{\mathrm{d}}\phi$ and emits the energy d$F_{\mathrm{rad}} = 2\sigma T_{\mathrm{str}}^4\Delta xR_{\mathrm{sec}}{\mathrm{d}}\phi$ per unit time. It takes a time equal to $2\pi R_{\mathrm{sec}}/kc_s$ to encircle the secondary. The total energy radiated away during this time is $${\mathrm{d}}L = \frac{4\pi\sigma T_{\mathrm{str}}^4\Delta xR_{\mathrm{sec}}^2{\mathrm{d}}\phi}{kc_s}\,.$$ From Eqs. (17) and (19), the limiting case of large heat content, $\mathrm{d}Q \gg \mathrm{d}L$, corresponds to the condition $$c_p\overline{MT}_{\Delta r} \gg \frac{2\pi\sigma T_{\mathrm{pr}}^4R_{\mathrm{sec}}}{kc_s}\,.$$ If the radiated energy is small compared to the total heat content of the stream, its effective temperature will change very little from the initial value $T_{\mathrm{pr}}$, so we can put $T_{\mathrm{str}} \approx T_{\mathrm{pr}}$. In the case of low heat content the effective temperature of the stream will approach the effective temperature of the secondary and the stream matter will mix with its convection zone. Assuming that the energy outflow from the stream affects its entropy uniformly, i. e. that the stream specific entropy remains constant with depth, we can calculate its decrease from Eq. (3). In the case of large heat content we obtain $$\Delta s = - \frac{4\pi R_{\mathrm{sec}}\sigma T_{\mathrm{pr}}^4}{kc_s\overline{MT}_{\Delta r}}\,.$$ In the case of low heat content, $T_{\mathrm{pr}}$ in Eq.
(21) should be replaced by a properly defined average stream effective temperature. The entropy decrease can be compared with the entropy of the primary convection zone $$s_{\mathrm{pr}} = - \frac{k_{\mathrm{B}}}{\mu\mathrm{H}}\ln K_{\mathrm{pr}}\,.$$ After its return to the primary, the stream matter sinks in its convection zone.

Numerical example: AB And
-------------------------

To obtain a numerical estimate of the parameters describing an actual stream in a cool contact binary, we apply the above derived equations to a typical W-type contact binary. As an example, we take AB And. It has an orbital period $P_{\mathrm{orb}} = 0.33$ d, component masses $M_{\mathrm{pr}} = 1.04 M_{\odot}$ and $M_{\mathrm{sec}} = 0.60 M_{\odot}$, and volume radii $R_{\mathrm{pr}} = 1.02 R_{\odot}$ and $R_{\mathrm{sec}} = 0.78 R_{\odot}$ [@bar04]. These data give $v_{\mathrm{eq}}$ = 120 kms$^{-1}$ for the secondary component. The star has a relatively low fill-out factor $f = 0.05$ [@bar04], which corresponds to a depth of about 1 % of the primary star radius. Assuming that the equal pressure equipotential surface is close to the inner critical Roche lobe, we obtain $\Delta r \approx 0.01$ (in units of the primary’s radius) for the depth of the stream. The sound velocity is equal to 30 kms$^{-1}$ at this depth. The width of the stream can be calculated from Eq. (9). Figure 5 shows the results. Here the angular half-width is plotted along the abscissa for different heights above the bottom of the transition layer. The broken heavy line on the right indicates the approximate photospheric level of the secondary. The diagrams give the stream width for three values of $\Delta r/R_{\mathrm{pr}}$, equal to 0.005 (top), 0.01 (middle) and 0.03 (bottom), and three values of $k$, as indicated. The width was calculated up to the level where the temperature reaches 10 000 K, because for lower temperatures the adiabatic model breaks down.
As we see, the stream is widest close to the photosphere of the secondary, and the Coriolis force produces a bulge above the stellar equator with a height of the order of the stream depth. As expected, a lower stream velocity results in an increased width, and in the limit of zero velocity the stream would cover the whole secondary. For flow velocities comparable to the sound velocity, i. e. for $k$ between 1 and 0.3, the stream has a half-width between 15$^{\mathrm o}$ and 30$^{\mathrm o}$ (Fig. 5), and covers between 1/4 and 1/2 of the stellar surface, respectively. A schematic side view of AB And with a stream covering 50 % of the secondary’s surface is shown in Fig. 6. The stream with a depth of $0.01$ carries enough mass and energy to be in the regime of large heat content. To see this, we substitute the following values into Eqs. (17) and (20): $\Delta r = 7\times 10^{8}$ cm, $\overline c_p = 10^9$ (in cgs units), $T_{\mathrm{pr}}$ = 5590 K (see next section) and $k$ = 0.3. The column mass and the integral $\overline{MT}_{\Delta r}$ are calculated from the convection zone model. We obtain $\overline{M}_{\Delta r} = 3.5\times 10^5$ gcm$^{-2}$ and $\overline{MT}_{\Delta r} = 1.7\times 10^{10}$ gKcm$^{-2}$. The resulting mass transfer rate is $F_{\rho} = 5\times 10^{-4} M_{\odot}$/year. The left hand side of Eq. (20) gives $1.7\times 10^{19}$ (in cgs units) and the right hand side gives $4\times 10^{16}$ in the same units. Their ratio, about $2.4\times 10^{-3}$, shows that the inequality given by Eq. (20) is very well fulfilled. The two sides become comparable only for stream depths less than 0.5 % of the stellar radius. Figure 5 (top) shows the calculated width of the stream with a depth of 0.5 %. It is somewhat surprising that its width is not much different from the widths of the much more massive streams shown in Fig. 5 (middle) and (bottom).
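The two sides of the inequality (20) can be checked directly. The sketch below uses the cgs values quoted in the text; the exact numerical prefactor depends on the adopted geometry, so only the orders of magnitude are meaningful:

```python
import math

# AB And values quoted in the text (cgs units):
sigma = 5.6704e-5        # Stefan-Boltzmann constant
R_sec = 0.78 * 6.96e10   # secondary radius, cm
T_pr = 5590.0            # stream (= primary) effective temperature, K
k, c_s = 0.3, 30e5       # Mach number and sound velocity
cp_bar = 1e9             # mean specific heat, cgs
MT = 1.7e10              # integral of rho*T over the stream depth, g K cm^-2

lhs = cp_bar * MT                                        # ~2e19
rhs = 2 * math.pi * sigma * T_pr**4 * R_sec / (k * c_s)  # ~1e16

# The stream radiates only a tiny fraction of its heat content per circuit:
ratio = rhs / lhs
```

The ratio comes out at roughly $10^{-3}$, confirming the large-heat-content regime.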
It is clear that only extremely marginal contact binaries, with fill-out factors less than a couple of hundredths, will have streams with a surface temperature approaching the temperature of the uncovered part of the secondary component. The change of the specific entropy of the stream in AB And can be calculated from Eq. (21). With the adopted parameters we obtain $\Delta s = - 1.2\times 10^6$ ergg$^{-1}$K$^{-1}$. According to Eq. (22) the initial value of the specific entropy is equal to $4.59\times 10^8$ in the same units, so the decrease of entropy during the flow around the secondary is small indeed. It is possible then that the stream returning to the primary component will not sink immediately after passing the neck but may move over a substantial fraction of the primary’s circumference before it plunges into the convection zone. To summarize, we see that the Coriolis force confines the near-sonic stream to an equatorial belt of half-width $15^{\mathrm{o}}-30^{\mathrm{o}}$ around the secondary. The stream covers 25-50 % of the stellar surface. For a primary overflowing its Roche lobe by $\sim 1 \%$ or more, the stream radiates a very small fraction of its total heat content, which results in a nearly constant surface temperature equal to the primary’s temperature. Substantially shallower streams, possible in contact binaries with an extremely marginal contact, will show a temperature decrease from an initial value close to the primary’s temperature on the trailing hemisphere of the secondary, down to a value close to the secondary’s temperature on the leading hemisphere of the star. After its return to the primary, the stream will sink in its convection zone due to its lower entropy. The Coriolis force deflects the returning stream to the opposite side of the neck, compared to the stream flowing towards the secondary. The deflection should reduce the interaction of both flows, although it will not separate them completely.
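Similarly, the entropy budget of Eqs. (21)-(22) can be verified to within the quoted order of magnitude (input values from the text; the exact prefactor again depends on the adopted geometry):

```python
import math

# AB And stream parameters quoted in the text (cgs units):
sigma = 5.6704e-5
R_sec = 0.78 * 6.96e10   # cm
T_pr = 5590.0            # K
k, c_s = 0.3, 30e5
MT = 1.7e10              # g K cm^-2

# Entropy lost by the stream during one circuit, Eq. (21):
ds = -4 * math.pi * R_sec * sigma * T_pr**4 / (k * c_s * MT)

s_pr = 4.59e8    # initial specific entropy of the stream matter, erg/(g K)

# The relative entropy decrease is well below 1%:
rel = abs(ds) / s_pr
```

This supports the conclusion that the stream arrives back at the primary with an almost unchanged specific entropy.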
The returning flow will collide with matter streaming towards the neck (see Figs. 1 and 2 in @oka02). However, the collision will not be head-on, as obtained by @mar95, who simulated a two-dimensional motion in the equatorial plane of a contact binary. The geometry applied by these authors prevents matter from crossing the plane containing the centers of both stars and the rotation axis, so the stream cannot move to one side of the neck, as shown by three-dimensional calculations [@oka02].

Global reaction of the binary to the mass and energy flow
=========================================================

The influence of the stream
---------------------------

As mentioned earlier, the stream acts like a hot blanket covering part of the secondary and blocking the energy flow from below by raising the temperature of the layer immediately beneath. As a result, a transition radiative layer is formed with a decreased temperature gradient, in which the specific entropy increases from the bottom value characteristic of the secondary convection zone to the top value characteristic of the stream. What is the global reaction of a star when a fraction of its surface cannot radiate away the stellar core energy due to an obstacle blocking the energy outflow? The physical nature of the obstacle is not so important as long as it occurs in the outermost stellar layers. One example of such an obstacle, partly blocking the energy flow from below, is a dark starspot appearing on the surface of an active star. Another example can be seen in close binaries, when a hot companion irradiates a part of the surface of a cool component. Both effects have been discussed at length in the literature (e. g. @rit00, @spr86 and references therein). The blocked energy flux is redistributed inside the convection zone and re-radiated by the unperturbed part of the stellar surface. The detailed models show that the specific entropy of the convection zone is somewhat increased, and so is the stellar radius.
For stars with deep convection zones, or fully convective ones, the nuclear energy production may also be affected. Assuming a steady state in which a constant fraction of the stellar surface is covered by spots over a time long compared to the thermal time scale of the convection zone, @spr86 obtained the following expression for the equilibrium stellar radius $$R_e(f_{ps}) = R_o(1-f_{ps})^d\,,$$ where $R_e$ and $R_o$ denote the equilibrium radii of the spotted and unspotted star, $f_{ps}$ is the fraction of the stellar surface permanently covered by spots and $d \approx -0.1$. An essentially identical expression was found by @rit00 for stars irradiated by hot companions. For $f_{ps}$ = 0.25 and 0.5 the equation gives $R_e = 1.03R_o$ and $1.07R_o$, respectively. In stars with not too deep convection zones the nuclear energy production remains unaffected. The reduction of the effective radiating surface is exactly compensated by an increase in the surface temperature of the undisturbed part of the star $$T^e_{\mathrm{eff}} = (1-f_{ps})^{-\frac{1+d}{4}}T_{\mathrm{eff}}\,,$$ where $T^e_{\mathrm{eff}}$ and $T_{\mathrm{eff}}$ are the equilibrium and undisturbed effective temperatures. More accurate values of the equilibrium radius and temperature depend on the stellar mass [@spr86]. The primary component suffers an additional cooling of its surface by the stream encircling the secondary. The stream feeds the primary with lower entropy matter, which is mixed with the rest of its convection layer. As a result, the specific entropy of the convection zone is somewhat lowered, and so is the stellar radius, but the expected changes should be similar in size to those occurring in the secondary component, i.e. they should not exceed one or, at most, a few percent. The primary, slightly undersized for its mass, will appear closer to the ZAMS in the HR diagram.
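The fits of Eqs. (23)-(24) are easy to evaluate; the following sketch (with $d = -0.1$ as in the text) reproduces the quoted radius ratios:

```python
# Equilibrium radius and temperature of a star with a fraction f_ps of its
# surface permanently blocked, Eqs. (23)-(24) with d = -0.1:
d = -0.1

def radius_ratio(f_ps):
    """R_e / R_o, Eq. (23)."""
    return (1.0 - f_ps) ** d

def temp_ratio(f_ps):
    """T_eff^e / T_eff, Eq. (24): the smaller radiating area is compensated
    by a hotter undisturbed surface."""
    return (1.0 - f_ps) ** (-(1.0 + d) / 4.0)

r25, r50 = radius_ratio(0.25), radius_ratio(0.5)   # ~1.03 and ~1.07
t50 = temp_ratio(0.5)                              # ~1.17
```

For half the surface blocked, the star thus swells by about 7% while its undisturbed surface heats up by about 17%.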
Neglecting its radius change, we can obtain the modified effective temperature of the primary $$T^m_{\mathrm{pr}} = \left(1-\frac{\Delta L_{\mathrm{pr}}}{L_{\mathrm{pr}}}\right)^{1/4}T_{\mathrm{pr}}\,,$$ where $L_{\mathrm{pr}}$ is the luminosity of the primary component and $\Delta L_{\mathrm{pr}}$ is the fraction of this luminosity radiated away by the stream flowing around the secondary component (Fig. 7). Now let us calculate the expected values of the thermal parameters of AB And. The effective temperature of a single star of one solar mass is about 5800 K. That should be close to the temperature of the undisturbed primary. Assuming that the secondaries of W UMa type stars are evolved stars with hydrogen depleted in their cores [@ste04; @ste06a], we obtain the following values of the global parameters for the unperturbed secondary of AB And: $T_{\mathrm{eff}}$ = 4570 K and $L/L_{\odot}$ = 0.24. The data were taken from the evolutionary models of stars with masses equal to the present masses of the secondaries. This is not fully consistent with the present considerations because, according to the accepted model, the secondaries were originally more massive and their present masses result from mass exchange. It is hoped, however, that the differences between the adopted and fully consistent models are of secondary importance. Let us assume, as an example, that the stream covers 50% of the secondary and that its nuclear energy production rate is unaffected by this. According to Eq. (24) the steady state effective temperature of the surface fraction not covered by the stream rises to 5340 K. The primary transfers about 16 % of its luminosity (see Fig. 7) to the secondary, so its actual effective temperature, resulting from Eq. (25), drops to 5590 K. The effective temperature of the stream is of course the same. The “final” surface-averaged temperature of the secondary is equal to about 5480 K, which is nearly identical to the value obtained from observations by @bar04, who give $T_{\mathrm{sec}}$ = 5500 K.
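The temperature budget described above can be reassembled step by step. The sketch below follows Eqs. (24)-(25) with the input values quoted in the text; small differences with respect to the quoted 5590 K and 5480 K come from rounding of the inputs:

```python
# AB And temperature budget: the stream covers half of the secondary and
# the primary loses ~16% of its luminosity to it (values from the text).
d = -0.1
T_pr0 = 5800.0    # undisturbed primary (~ single 1 M_sun star), K
T_sec0 = 4570.0   # unperturbed evolved secondary, K
f_cov = 0.5       # fraction of the secondary covered by the stream
dL_frac = 0.16    # fraction of the primary luminosity fed to the stream

# The uncovered part of the secondary heats up (Eq. 24):
T_unc = T_sec0 * (1.0 - f_cov) ** (-(1.0 + d) / 4.0)       # ~5340 K

# The primary cools by the luminosity the stream radiates away (Eq. 25):
T_pr = T_pr0 * (1.0 - dL_frac) ** 0.25                     # ~5550 K

# Flux-weighted surface average over covered and uncovered parts:
T_sec = (f_cov * T_pr**4 + (1.0 - f_cov) * T_unc**4) ** 0.25   # ~5450 K
```

The flux-weighted average is an assumption of this sketch; the text quotes only the final surface-averaged value.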
Such a small difference between the predicted and observed temperature is very likely fortuitous, given the large observational uncertainty of the observed value (see below). However, the observed primary’s temperature, equal to 5140 K, is [*substantially*]{} lower than that resulting from the present model. A possible explanation of this discrepancy is discussed in the next Section. The W-type phenomenon and photospheric starspots ------------------------------------------------ The observed temperatures of the components of a W UMa type binary are among the least accurately determined stellar parameters [@ye05]. The primary’s temperature is usually determined from the spectral type, which is uncertain due to very broad spectral lines, or from the photometric index. It is kept constant during the light curve modeling. The secondary’s temperature results from modeling of the light curve, in particular from the relative depth of both minima, which gives a ratio of both temperatures. Such an analysis gives only a rough estimate of the surface averaged temperatures and is insensitive to their possible variations across the stellar surface. Regarding the relative temperatures of the components, cool contact binaries are divided into two types: W-type, when secondaries look hotter than primaries, and A-type, when the opposite takes place [@bin70]. A typical temperature difference between the components of genuine contact binaries is of the order of a few hundred kelvin [@ye05]. A few stars alternating between W-type and A-type are known, with a temperature difference always staying close to zero. The present model gives $\Delta T = T_{\mathrm{pr}} - T_{\mathrm{sec}}$ equal to 110 K for AB And. Several A-type stars are known with $\Delta T$ close to this value. However, AB And is of W-type and this phenomenon cannot be explained by the energy transfer mechanism alone.
No mechanism obeying the basic laws of physics can transfer energy from a cooler to a hotter medium in an isolated system. So, the most popular explanation of the W-type phenomenon assumes that dark, cool spots cover a substantial fraction of the primary’s surface [@ruc93]. The temperature of its spot-free part may be higher than the secondary’s temperature but the apparent surface averaged temperature is lower. W UMa-type stars have a very high level of chromospheric-coronal activity [@cd84; @ruc85; @vil87; @ste01] and variations of the light curve, observed on time scales of decades, indicate that they also possess dark starspots covering a variable fraction of the stellar surface [@gz06; @cou08]. Perhaps the most extreme example of light curve variability due to a variable coverage by spots can be seen in @rp02. We can estimate how large a fraction of the primary’s surface must be covered by dark spots to lower its average temperature to the required value. Assuming, for simplicity, that umbrae have a surface brightness of 1/4 of that of the undisturbed photosphere (as in the solar case) and that they dominate the spots, we have $$T_{\mathrm{pr}}^{\mathrm{f}} = (1-0.75f_s)^{1/4}T_{\mathrm{pr}}\,,$$ where $T_{\mathrm{pr}}^{\mathrm{f}}$ is the “final” primary’s temperature modified by spots and $f_s$ is the spot coverage. It is assumed here that spots appear and disappear on a time scale short compared to the thermal time scale of the convection zone. When the spots appear, they simply block the respective energy flux, which does not reappear elsewhere. In other words, we consider now only the variable component of the spot coverage. If there also exists a permanent spot component covering a fraction $f_{ps}$ of the primary’s surface, it will influence the effective temperature as discussed in Sect. 4.1. For $f_s$ = 0.25 we obtain $T_{\mathrm{pr}}^{\mathrm{f}}$ = 5310 K, hence $\Delta T$ = -170 K.
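The spot-dimming relation can be verified directly. A small sketch (the function name is ours), using the transfer-modified primary temperature of 5590 K quoted in the previous subsection:

```python
# Check of T_f = (1 - 0.75*f_s)**(1/4) * T_pr, with umbrae radiating at
# 1/4 of the photospheric surface brightness (the solar value assumed
# in the text). T_pr = 5590 K is the primary's temperature after the
# energy transfer to the secondary.

def spotted_teff(t_pr, f_s, dimming=0.75):
    """Surface-averaged temperature of a star with spot coverage f_s."""
    return (1.0 - dimming * f_s)**0.25 * t_pr

print(round(spotted_teff(5590.0, 0.25)))  # ~5310 K, as quoted
print(round(spotted_teff(5590.0, 0.50)))  # ~4970 K, as quoted
```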
When $f_s$ is increased to 0.5, we obtain $T_{\mathrm{pr}}^{\mathrm{f}}$ = 4970 K, hence $\Delta T$ = -510 K. This should be compared with the observed value $\Delta T$ = -340 K. As we see, the required spot coverage is quite high but reasonable. There are a number of single, rapidly rotating stars with a variable spot coverage within this range, e. g. AB Dor [@inn08] or BO Mic [@wol08]. Cool detached binaries with periods shorter than one day, like XY UMa [@pri01] or BI Cet [@cut03], are also heavily spotted. The absolute magnitude of AB And, obtained from the visual magnitude and the distance, is equal to 4.05 mag [@rd97]. With BC = -0.08 mag [@all73] the binary has the luminosity $L \approx 1.6L_{\odot}$. Adopting $L \approx 1.16L_{\odot}$ for the MS primary of AB And and $L \approx 0.25L_{\odot}$ (see above) for the evolved secondary we obtain $1.4L_{\odot}$ for the total luminosity of the binary. This is less than observed but the difference is not large and can be due to observational uncertainties. Note that if the secondary were a normal MS star, its core luminosity would be close to $0.1L_{\odot}$, which would increase the discrepancy with observations. The case of AW UMa ------------------ Until recently, the spectroscopic observations of contact binaries were not accurate enough to resolve an additional flow on a rapidly rotating component. High-precision observations obtained in the last years by S. M. Rucinski and his group changed the situation. @pruc08 analyzed very accurate profiles of spectral lines of AW UMa – an A-type contact binary with an extreme mass ratio. The observations indicate the existence of an equatorial bulge on both components, moving with a velocity of 20-30 km/s relative to the rotating frame of reference. The present model predicts such a bulge on the secondary star.
Although the dynamics of the returning stream was not considered in the present paper, it was argued that the returning flow may travel a significant fraction of the circumference of the primary before it disappears beneath the stellar surface. The returning flow, together with the stream formed on the leading hemisphere of the primary, will produce a spectral feature similar to the signature of an equatorial bulge. The observations by @pruc08 showed in addition that parts of the line profiles formed away from the equatorial regions of AW UMa are too narrow for rigidly rotating components filling their Roche lobes. The authors suggest that the components may be undersized by about 15 percent. However, another explanation is also possible. The flow pattern obtained by @oka02 from hydrodynamic simulations of a semi-detached binary shows the presence of giant eddies surrounding the poles of the donor and rotating contrary to the orbital motion (i. e. contrary to the direction of the stream). Such eddies [*reduce*]{} the rotational broadening of lines formed in the polar regions. In a contact binary similar eddies may also occur on the secondary. If so, the observed profiles mimic rigid rotation of undersized stars. @pruc08 stress, however, that such profiles are not common and are not observed e. g. in another contact binary, V556 Oph, in which the spectroscopic observations are in agreement with a conventional contact model. The difference between the line profiles observed in AW UMa and V556 Oph may result from a different evolutionary status of the two stars and a different size of the stream. The secondary component of AW UMa is very likely to be a highly evolved star with a massive helium core and a very tenuous envelope [@pacz07]. The evolutionary computations suggest that the size of the secondary may be close to its maximum, or even past it, so the secondary may presently undergo evolutionary shrinking.
In such a case, the stream can be more massive and move faster than in other binaries in which secondaries still expand due to evolutionary effects. A contact configuration of AW UMa may still be sustained by a slow (net) mass transfer from the primary to the secondary. This transfer results in a shortening of the orbital period, i. e. it acts in the same direction as AML. The situation when two operating mechanisms tighten the orbit cannot last for long. If so, AW UMa is in an exceptional evolutionary state just prior to the final merging of both components. It would be very interesting to identify other stars with properties similar to AW UMa. Secondaries with little or no helium core expand as they evolve. A self-regulating mechanism works in such binaries [@gs08]. The expanding secondary transfers mass to the primary on an evolutionary time scale. Mass transfer from a less massive to a more massive component widens the orbit. At the same time, AML tightens the orbit on approximately the same time scale. The whole process contains a negative feedback: too fast an expansion of the secondary results in an increased mass transfer, which leads in turn to an appropriate widening of the orbit (in spite of AML) and an increase of the Roche lobe. The Roche lobe of the secondary approaches its surface and the mass transfer is cut down. If, for some reason, the expansion rate of the secondary becomes low, the orbit tightens due to AML, the Roche lobe shrinks and the higher mass transfer rate is restored. Similarly, if AML significantly increases (decreases), the Roche lobe varies correspondingly so that the mass transfer rate increases (decreases), resulting in an orbit which is almost insensitive to such fluctuations. V556 Oph and many other W UMa type stars are very likely in such an evolutionary state. Their streams will be less violent and not as contrasted as in the case of AW UMa.
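The sense of the orbital response invoked in this feedback loop can be checked with the standard conservative-transfer relation $a\,(M_1M_2)^2 = \mathrm{const}$ (total mass and orbital AM conserved). The masses below are illustrative solar-unit values, not taken from the paper:

```python
# Toy check: with total mass and orbital angular momentum conserved,
# a * (M1*M2)**2 = const, so moving mass from the LESS massive secondary
# to the MORE massive primary decreases the product M1*M2 and therefore
# WIDENS the orbit, as invoked in the feedback loop above.

def new_separation(a, m1, m2, dm):
    """Separation after conservatively transferring dm from star 2 to star 1."""
    m1n, m2n = m1 + dm, m2 - dm
    return a * (m1 * m2 / (m1n * m2n))**2

print(new_separation(1.0, 1.0, 0.5, 0.01))  # > 1: the orbit widens
```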
Summary and discussion ====================== The essentials of the circulation model --------------------------------------- It was shown by @ste04 [@ste06a; @ste06b] that an evolutionary model of a cool contact binary, in which the primary component is a MS star and the secondary component is in an advanced evolutionary stage with a hydrogen depleted core, simultaneously fulfills two conditions: both components are in thermal equilibrium and their sizes conform to the geometrical requirement of the Roche model. Energy exchange between the components was not a part of that model. The present paper considers the problem of the energy transport. A common envelope of a contact binary with unequal components cannot achieve hydrostatic equilibrium. Unequal heating from below of the base of the common envelope by the emergent energy fluxes of the components produces a temperature difference on each equipotential surface. The resulting baroclinic structure in the common envelope drives large scale mass motions between the components on a dynamical time scale. Because the stream transports thermal energy together with mass, the stellar luminosities are redistributed over the common surface. Three dimensional hydrodynamic simulations of the mass loss in a semi-detached binary demonstrate that a stream leaving the donor star is deflected by the Coriolis force in the direction of the orbital motion, so that matter flows at an angle of about 10$^{\mathrm o}$ to the line joining the star centers [@oka02]. The speed of the stream reaches the sound speed already in the vicinity of the L$_1$ point [@ls75; @oka02]. After leaving the L$_1$ region the stream moves down the potential well. If the companion is compact enough, the stream misses its surface, encircles it and forms a disc. If the radius of the companion is larger than a certain critical value, the stream hits its surface. The subsequent stream motion is restricted to two dimensions along the stellar surface.
A contact configuration is an extreme case of a large companion filling the same equipotential surface as the donor. The stream is then forced to move from the beginning along the common equipotential surface, to encircle the other component and to return to the donor. The same Coriolis force which deflects the stream now prevents it from spreading up to the stellar poles. Even in the presence of turbulent viscosity the spreading is insignificant. The stream flows in an equatorial belt with a width determined by the stream velocity. Part of it is raised above the photosphere and forms an equatorial bulge with a height comparable to the stream depth. Assuming that the bottom of the stream is close to the inner critical Roche surface, we can calculate the global parameters of the flow. Typical overfill factors of the W UMa type binaries correspond to a thickness of the common envelope of the order of one or a few percent of the stellar radius [@ye05]. For a stream depth of one percent and a velocity close to the sound speed, the stream has a total width of 30$^{\mathrm o}$-60$^{\mathrm o}$ and carries about $5\times 10^{-4} M_{\odot}$/year with a velocity of 10-30 km/s. Its heat capacity is a few orders of magnitude higher than the energy radiated away during the flow around the secondary. As a result, the effective temperature of the stream and its specific entropy decrease very little when it returns to the primary. The returning flow may travel a significant fraction of the stellar circumference before it sinks. Only binaries with an extremely marginal contact, in which the common envelope is thinner than about 0.5 percent of the stellar radius, have streams with a heat capacity comparable to the energy radiated away. The effective temperature and specific entropy of such a stream approach the ambient temperature and entropy of the secondary’s convection zone. The stream covering a fraction of the secondary modifies its thermal structure.
It heats the matter immediately beneath it, reducing the temperature gradient. The reduced gradient effectively blocks the energy flow from below. A similar situation occurs in stars covered by dark spots or irradiated by hot companions. The blocked energy is redistributed inside the convection zone and radiated away in the polar regions not covered by the stream. The effective temperature of these regions rises accordingly but the stellar radius hardly changes. Numerical examples show that the surface averaged temperatures of both components are close to one another, as observed in contact binaries, but the calculated primary’s temperature cannot be lower than the secondary’s. This is a known result; mass and energy transfer between the components cannot by itself result in a secondary’s temperature higher than the primary’s. The existence of A-type binaries can thus be explained solely by large scale circulations, but that of W-type binaries cannot. As has been suggested many times in the past, large dark spots appearing temporarily on the primary can explain its low surface averaged temperature. Numerical estimates obtained in the present paper indicate that a substantial fraction of the star (25-50 percent) must be covered with spots to lower the temperature by the several hundred degrees required by observations. Discussion ---------- The dynamics of the mass transfer between the components of a contact binary, considered in the present paper, is similar to that investigated by @tas92. The driving force of the circulations discussed by @tas92 also comes from the baroclinic structure of the common envelope and the flow is influenced by the Coriolis force. The author applied, however, a different boundary condition at the stellar surface. Based on the observations available at that time he adopted zero velocity and a strictly uniform temperature on the stellar surface of the secondary. Yet more recent observations show the existence of massive flows in some of the W UMa type stars [@pruc08].
On the other hand, very little is known about the temperature distribution over the surfaces of both components, although nonuniformities in their surface brightness are notoriously invoked to explain the observed light curves (e. g. @gaz06). @ye05 called attention to a possible role of lateral energy transfer connected with the differential rotation observed on the Sun and solar-type stars. Differential rotation results from a coupling between rotation and turbulent convection in an axially symmetric star. AM is transported in the meridional direction by Reynolds stresses, meridional circulations and viscous diffusion [@bro08]. The mechanism considered in the present paper is different from that. The differential rotation was not included. Nevertheless, it may play a role in forming the stream which transfers mass and energy between the components, although a more elaborate model of the meridional AM transport in a contact binary is needed. W UMa type stars often show period variations [@krei01]. Systematic changes with values up to $\dot P = \pm (10^{-6} - 10^{-7})$ day/year have been detected in several stars and interpreted as a signature of TRO. However, recent observations of several hundred contact binaries monitored in the framework of the program OGLE showed that the distribution of period variations of W UMa stars can be approximated by a normal distribution with an average equal to zero and a dispersion equal to $2.3\times 10^{-7}$ day/year [@kub06]. The observations are of very high accuracy but they extend over only 13 seasons. The analysis of the observed period variations suggests that they have a random character with predominantly low values of $\dot P$, resulting in time scales substantially longer than the stellar thermal time scale. Such a distribution excludes TRO as a primary source of the variations. A much more probable reason for the observed variations is connected with possible fluctuations of the mass transfer between the components.
The stream flowing from the primary to the secondary and back carries $10^{-3} - 10^{-4} M_{\odot}$/year. A relative fluctuation of this flux at the level of $10^{-3} - 10^{-4}$ can produce the observed period variations. Fluctuations of that order are expected as a result of magnetic activity cycles operating in W UMa stars [@app92]. The existence of activity cycles has been suggested from the analysis of the long-term photometric behavior of heavily spotted stars. The average magnitude of those stars varies on time scales of several decades. This would be the expected time scale of period variations in contact binaries. The present model of large scale circulations supplements the evolutionary model of W UMa type stars in which contact binaries are past mass exchange with mass ratio reversal [@ste04; @ste06a; @ste06b], i. e. they are in an evolutionary state similar to low mass Algols. The basic difference between W UMa type stars and low mass Algols is that the latter stars have more orbital AM and longer periods, so only the secondary fills its Roche lobe. Binaries with lower orbital AM form contact binaries in which the overflow of the critical equipotential surface by both components drives large scale circulations encircling the whole secondary and a large fraction of the primary. Matter transported by the circulations to the secondary (with a mass flux of the order of $10^{-3}-10^{-4} M_{\odot}$/year) returns to the primary, but a fraction of the thermal energy associated with that matter is radiated away during the flow around the secondary. It was argued that the expected flow velocities attain a substantial fraction of the sound velocity. Such values were assumed when considering the details of the flow. To determine an accurate value of the flow velocity, numerical hydrodynamic simulations are needed. Particularly important in this respect is the fate of the returning flow.
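An order-of-magnitude sketch of this argument can be made with the standard conservative-transfer relation $\dot P/P = 3\dot M (M_1-M_2)/(M_1M_2)$, where $\dot M$ is the net rate gained by star 1. The masses and period below are illustrative W UMa values, not taken from the paper:

```python
# Can fluctuations of the circulating mass flux reproduce the observed
# period-change dispersion (~2e-7 day/yr)? Conservative transfer with
# conserved orbital AM gives Pdot/P = 3 * Mdot * (M1 - M2) / (M1 * M2).

def pdot(period_days, m1, m2, mdot_net):
    """Period derivative [day/yr] for a net rate mdot_net [Msun/yr] onto star 1."""
    return period_days * 3.0 * mdot_net * (m1 - m2) / (m1 * m2)

# stream flux ~1e-3 Msun/yr with a relative fluctuation ~1e-4 -> net ~1e-7
print(abs(pdot(0.33, 1.0, 0.5, 1e-7)))  # ~1e-7 day/yr, the observed order
```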
If it behaves like a waterfall, the expected flow velocities can be close to the sound velocity. If, instead, it collides with the primary’s matter so that the high pressure wave moves back along the flow, the driving force will be reduced and the flow velocity lowered. Acknowledgments =============== I thank Dr. Ryszard Sienkiewicz for calculating a set of evolutionary models of low mass stars. The description of his program can be found in @pacz07. The remarks of an anonymous referee, which helped to significantly improve the presentation of the present paper, are highly appreciated. Allen C.W., 1973, Astrophysical Quantities, 3rd ed., Athlone Press, London Applegate J.H., 1992, ApJ, 385, 621 Baker N.H., Temesvary S., 1966, Tables of Convective Stellar Envelopes, 2nd edition, NASA, Washington Baran A., Zola S., Rucinski S.M., Kreiner J.M., Siwak M., Drozdz M., 2004, AcA, 54, 195 Bilir S., Karataş Y., Demircan O., Eker Z., 2005, MNRAS, 357, 497 Binnendijk L., 1970, Vistas in Astron., 12, 217 Brown B.P., Browning M.K., Brun A.S., Miesch M.S., Toomre J., 2008, ApJ, 689, 1354 Coughlin J.L., Dale III H.A., Williamon R.M., 2008, AJ, 136, 1089 Cruddace R.G., Dupree A.K., 1984, ApJ, 277, 263 Cutispoto G., Messina S., Rodono M., 2003, A&A, 400, 659 Eggleton P.P., 1996, in The Origins, Evolutions, and Destinies of Binary Stars in Clusters, eds. E.F. Milone, J.-C. Mermilliod, ASP Conf. Ser., Vol. 90, p. 257 Eggleton P.P., Kiseleva L., 2001, ApJ, 562, 1012 Flannery B.P., 1976, ApJ, 205, 217 Gazeas K.D., Niarchos P.G., Zola S., Kreiner J.M., Rucinski S.M., 2006, AcA, 56, 127 Gazeas K.D., Niarchos P.G., Gradoula G.-P., 2006, Ap&SS, 304, 181 Gazeas K., Stȩpień K., 2008, MNRAS, 390, 1577 Guinan E.F., Bradstreet D.H., 1988, in Formation and Evolution of Low Mass Stars, eds. A.K. Dupree, M.T.V.T. Lago, Dordrecht, Kluwer, p. 345 Hazlehurst J., Refsdal S., 1978, A&A, 62, L9 Innis J.L., Budding E., Oláh K., Järvinen S.P., Coates D.W., Messina S., Kaye T.G., 2008, IBVS, No.
5832 Kähler H., 2004, A&A, 414, 317 Kaluzny J., Rucinski S.M., 1993, in Blue Stragglers, ed. R.A. Saffer, ASP Conf. Ser., Vol. 53, p. 164 Kreiner J.M., Kim C.W., Nha J.L., 2001, An Atlas of $O-C$ diagrams of eclipsing binary stars, Wyd. Nauk. AP, Kraków Kubiak M., Udalski A., Szymański M.K., 2006, AcA, 56, 253 Kuiper G.P., 1941, ApJ, 93, 133 Lipari S.L., Sistero R.F., 1988, PASP, 100, 377 Lubow S.H., Shu F.H., 1975, ApJ, 198, 383 Lubow S.H., Shu F.H., 1976, ApJ, 207, L53 Lucy L.B., 1968, ApJ, 151, 1123 Lucy L.B., 1976, ApJ, 205, 208 Martin T.J., Davey S.C., 1995, MNRAS, 275, 31 Mochnacki S.W., 1981, ApJ, 245, 65 Mochnacki S.W., 1985, in Interacting Binaries, eds. P.P. Eggleton, J.E. Pringle, Dordrecht, Reidel, p. 51 Oka K., Nagae T., Matsuda T., Fujiwara H., Boffin H.M.J., 2002, A&A, 394, 115 Paczyński B., Sienkiewicz R., Szczygieł D.M., 2007, MNRAS, 378, 961 Pribulla T., Chochol D., Heckert P.A., Errico L., Vittone A.A., Parimucha S., Teodorani M., 2001, A&A, 371, 997 Pribulla T., Kreiner J.M., Tremko J., 2003, Contr. Astr. Obs. Skalnate Pleso, 33, 38 Pribulla T., Rucinski S.M., 2008, MNRAS, 386, 377 Rasio F.A., 1995, ApJ, 444, L41 Ritter H., Zhang Z.-Y., Kolb U., 2000, A&A, 360, 969 Rucinski S.M., 1985, MNRAS, 215, 615 Rucinski S.M., 1993, in The Realm of Interacting Binary Stars, eds. J. Sahade et al., Dordrecht, Kluwer, p. 111 Rucinski S.M., 1998, AJ, 116, 2998 Rucinski S.M., 2000, AJ, 120, 319 Rucinski S.M., 2002, PASP, 114, 1124 Rucinski S.M., Duerbeck H.W., 1997, PASP, 109, 1340 Rucinski S.M., Paczynski B., 2002, IBVS, No. 5321 Rucinski S.M., Pribulla T., van Kerkwijk M.H., 2007, AJ, 134, 2353 Sarna M.J., Fedorova A.V., 1989, A&A, 208, 111 Shu F.H., Lubow S.H., Anderson L., 1976, ApJ, 209, 536 Shu F.H., Lubow S.H., Anderson L., 1979, ApJ, 229, 223 Spruit H.C., 1974, Solar Physics, 34, 277 Spruit H.C., Weiss A., 1986, A&A, 166, 167 Stȩpień K., 1995, MNRAS, 274, 1019 Stȩpień K., 2004, in Stars as Suns: Activity, Evolution and Planets, IAU Symp. No.
219, eds. A.K. Dupree, A.O. Benz, Astr. Soc. of Pacific, p. 967 Stȩpień K., 2006a, AcA, 56, 199 Stȩpień K., 2006b, AcA, 56, 347 Stȩpień K., Schmitt J.H.M.M., Voges W., 2001, A&A, 370, 157 Tapia S., Whelan J.A., 1975, ApJ, 200, 98 Tassoul J.-L., 1992, ApJ, 389, 375 Vilhu O., 1982, A&A, 109, 17 Vilhu O., Walter F.M., 1987, ApJ, 321, 958 Webbink R.F., 1976, ApJS, 32, 583 Wolter U., Robrade J., Schmitt J.H.M.M., Ness J.U., 2008, A&A, 478, L11 Yakut K., Eggleton P.P., 2005, ApJ, 629, 1055 [^1]: e-mail: [email protected]
--- abstract: 'This article is a continuation of [@Gu]. The positive Dehn twist expressions for the generalization of the involutions described in [@Gu] are presented. The homeomorphism types of the Lefschetz fibrations they define are determined for several examples.' address: 'Department of Mathematics, Suffolk CCC, Selden, NY, USA' author: - 'Yusuf Z. Gurtas' title: 'Positive Dehn Twist Expressions for some New Involutions in the Mapping Class Group II\' --- [^1] Introduction {#introduction .unnumbered} ============ In [@Gu] the author presented the positive Dehn twist expression for a new set of involutions that are obtained by combining two well-known involutions in the mapping class group $M_g$ of a $2$-dimensional, closed, compact, oriented surface $\Sigma_{g}$ of genus $g>0$, one of which is the hyperelliptic involution, Figure \[twoinvolutions.fig\]. One can extend these new involutions by gluing them together. It is the purpose of this article to find the positive Dehn twist expressions for these extended involutions and compute the signatures of the symplectic Lefschetz fibrations that they describe. Review of the Simple Case ========================= Let $i$ represent the hyperelliptic (horizontal) involution and $s$ represent the vertical involution, as shown in Figure \[twoinvolutions.fig\]. If $i$ is the horizontal involution on a surface $\Sigma_{h}$ and $s$ is the vertical involution on a surface $\Sigma_{k}$, $k-$even, then let $\theta$ be the horizontal involution on the surface $\Sigma_{g}$, where $g=h+k$, obtained as in Figure \[gluingtwoinvolutions.fig\]. Figure \[simplecase.fig\] shows the cycles that are used in expressing $\theta$ as a product of positive Dehn twists, which is stated in the next theorem.
\[simplecase.thm\] The positive Dehn twist expression for the involution $\theta$ shown in Figure \[simplecase.fig\] is given by $$\theta =c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}b_{0}c_{2h+1}c_{2h}\cdots c_{2i+2}c_{1}c_{2}\cdots c_{2i}b_{1}b_{2}\cdots b_{k-1}b_{k}c_{2i+1}.$$ See [@Gu] for the proof. Main Theorem ============ The Involution $\theta$ on a bounded surface ------------------------------------------ Consider the bounded surface $\Sigma_{h+k,2}$ in Figure \[basecycleswithboundary.fig\], which is obtained from the surface in Figure \[simplecase.fig\] by removing a disk from each end. Figure \[simplecasewithboundary.fig\] is obtained from Figure \[basecycleswithboundary.fig\] by gluing a torus with two boundary components on each end. The cycles shown in Figure \[simplecasewithboundary.fig\] realize the involution $\theta$ on the bounded surface in Figure \[basecycleswithboundary.fig\], as stated in Proposition \[boundedcase.prop\]. The boundary components of the chosen pants decomposition shown in Figure \[basecycleswithboundary.fig\] will constitute the set of cycles that will be mapped in order to prove Proposition \[boundedcase.prop\]. Since the mapping of many of those boundary components will, in the process, create cycles that contain a piece of arc similar to the two shown in the first column of Figure \[connectionlemma.fig\], we will show the mappings of these segments separately once and, whenever necessary in the proof of the proposition, use their images in the last column of the same figure to avoid repetition. Each row in the following lemma shows the mapping of one of the two types of segments that will occur several times in the proof of Proposition \[boundedcase.prop\], as mentioned above. \[connectionlemma.lem\] The action of the Dehn twists $x_1c_1$ and $x_2c_{2h+1}$ on the arcs shown in the first column of Figure \[connectionlemma.fig\] is as shown in the last column of the same figure.
\[boundedcase.prop\] The positive Dehn twist expression for the involution $\theta$ defined on the bounded surface $\Sigma_{h+k,2}$ shown in Figure \[basecycleswithboundary.fig\] is given by $$\theta =c_{2i+2}\cdots c_{2h}c_{2i}\cdots c_{2}x_{2}c_{2h+1}x_{1}c_{1}b_{0}c_{2h}\cdots c_{2i+2}c_{2}\cdots c_{2i}b_{1}b_{2}\cdots b_{k-1}b_{k}c_{2i+1},$$ where the cycles in the expression are as shown in Figure \[simplecasewithboundary.fig\]. [**Proof:**]{} Figure \[basecycleswithboundary.fig\] shows a pants decomposition for the bounded surface on which $\theta$ is defined. We will show that the Dehn twist expression given in the proposition maps the boundary components of each pair of pants to their images under $\theta$. This guarantees the corresponding mapping of the interior points of each pair of pants, due to the fact that each twist in the expression is a homeomorphism of the surface onto itself. The same idea was used in proving Theorem \[simplecase.thm\] in [@Gu] for the closed surface $\Sigma_{h+k}$, and the mapping of each boundary cycle was shown there in detail, up to symmetry. Even though the surface subject to this proposition is not closed, there are several figures that are identical for both cases. Therefore, for a given boundary component, instead of repeating verbatim copies of the figures for its mapping from [@Gu], we will skip a few from the beginning and continue from where the different cycles begin to appear. The reader is referred to that article for the details of the mappings that are skipped here. The boundary components of the chosen pants decomposition in Figure \[basecycleswithboundary.fig\] can be summarized as $c_i, i=1,\ldots,2h+1, d_i,i=2,\ldots,h-1, e_i, i=1,\ldots,2k+1, f_i,i=2,\ldots,k-1, a_1,a_2,\delta_1$ and $\delta_2$ along with some additional cycles. We will begin with the mapping of $c_j$ for $j-$ odd and $2i+3\leq j< 2h$, Figure \[mappingofcjwb.fig\].
The proof for $j-$ even, including $j=2$ and $j=2h$, is similar and was shown in [@Gu]. The mappings of $c_{2i+1}$ and $c_{2i+2}$ will be shown separately. The long expressions in Figure \[mappingofcjwb.fig\] are due to the fact that all the twists they contain miss the cycle that appears in the previous step. The figure shows all the steps there are. We see the mapping of $d_j$ in Figure \[mappingofdjwb.fig\], which is the same for $j=i+1,\ldots,h$. The twist about $b_0$ leaves the curve it is applied to unchanged because their intersection number is 0, as seen at the end of the second line. The result of the application of the twists $x_2c_{2h+1}$ is obtained according to Lemma \[connectionlemma.lem\]; therefore only the right end portion of the cycle to which they are applied is modified in the third line. The cycle at the end of the third line is isotopic to the previous one because it is obtained simply by retracting the portion that falls under the surface. The mapping of $d_j$ for $j=1,\ldots,i$ is similar due to symmetry and is omitted. Figure \[c2iplus2wb.fig\] shows the mapping of $ c_{2i+2}$. The details of the applications of the twists $c_2\cdots c_{2i}b_{1}\cdots b_{k}c_{2i+1}$ in the first line are skipped and can be found in [@Gu]. Note the use of Lemma \[connectionlemma.lem\] in the second line from the bottom. The first cycle of the last line is isotopic to the one that appears just before. The only curves that are effective in the mappings of $e_j$ are $b_j$ and $b_{j-1},j=1,\ldots,k$. Figures \[mappingofe1wb.fig\] and \[mappingofejwb.fig\] show the mapping of $e_j$ for $j-$ odd. The twists in the long expressions all miss the curves that come before them. The mapping of $e_k$ is a typical example of the mapping of $e_j$ for $j-$ even, which is shown in Figure \[mappingofekwb.fig\]. The mapping of $f_{k/2}$ is shown in Figure \[mappingoffjwb.fig\].
The details of the applications of the twists $c_{2h}\cdots c_{2i+2}c_2\cdots c_{2i}b_{1}\cdots b_{k}c_{2i+1}$ in the first line are skipped and can be found in [@Gu]. The mappings of $f_j$ for $j=2,\ldots,k/2-1$ are similar to that of $f_{k/2}$. Note that $f_1$ is the same as $e_1$. Figure \[mappingofa2wb.fig\] shows the mapping of $a_2$, and the mapping of $a_1$ is symmetrical to it. In this figure the details of the applications of the twists $c_{2h}\cdots c_{2i+2}c_2\cdots c_{2i}b_{1}\cdots b_{k}c_{2i+1}$ are also skipped in the first line. Lemma \[connectionlemma.lem\] is used in the second line and the resulting curve is isotopic to the curve in the beginning of the third line. In Figure \[c2iplus1wb.fig\] we see the mapping of $c_{2i+1}$. In this figure, too, the details of the applications of $b_{1}\cdots b_{k}c_{2i+1}$ and $c_{2h}\cdots c_{2i+2}c_2\cdots c_{2i}$ are skipped in the first line. Lemma \[connectionlemma.lem\] is used twice in the third line and the last figure in that line is isotopic to the one resulting from the application of the lemma. Note that $b_0$ has intersection number 2 with the curve it is applied to; therefore, the result of the twist about $b_0$ is found by taking their product twice. Finally, we will show the mapping of $\delta_1$, which is essentially the same as that of $\delta_2$ due to symmetry. The only cycles that take part in the mapping of $\delta_1$ are $c_1$ and $x_1$, as shown in Figure \[delta1\]. All the cycles that come before $c_1$ miss $\delta_1$, as do the ones that come after $x_1$, and $x_1c_1$ fixes $\delta_1$ point-wise. This is shown in Figure \[mappingofdelta1.fig\]. The intersection number of $c_1$ and $\delta_1$ is 2; therefore $c_1(\delta_1)=c_1^2\delta_1$, namely the product of $c_1$ and $\delta_1$ twice. $c_1(\delta_1)$ is the second cycle in the second row of Figure \[mappingofdelta1.fig\].
The intersection number of $c_1(\delta_1)$ and $x_1$ is also 2; therefore $x_1(c_1(\delta_1))=x_1^2c_1(\delta_1)$, which is $\delta_1$ as seen in the last row of the same figure. Therefore $x_1c_1(\delta_1)=\delta_1$, as claimed. This concludes the proof of Proposition \[boundedcase.prop\]. Now we can prove the main theorem, which is the generalization of Proposition \[boundedcase.prop\] to a surface that is obtained by gluing $n$ copies of bounded surfaces as in Figure \[basecycleswithboundary.fig\] together along four-holed spheres in a sequence. Each copy in that sequence will then have two boundary components except for the first and the last copies, which will have only one boundary component each as shown in Figure \[multiple.fig\]. In order to simplify the Dehn twist expression for the general case it will be necessary to group the twists in each copy and give suggestive names to them. We will also pay attention to the direction in which the horizontal twists are progressing. The label of each twist group will carry the following information: the copy the twist group is in (upper index); whether the group is on the right-hand side, on the left-hand side, or in the middle section of the respective copy (name of the group); and the direction in which the twists are multiplied when the group consists of horizontal twists (lower index). The twists along the cycle $b_0$ will not be included in any group.
The following is the list of the identifications, except for the first and the last copies: $$\begin{aligned} l^j_i&=&c^j_{2i}\cdots c^j_2,\\ l^j_o&=&c^j_{2}\cdots c^j_{2i},\\ r^j_i&=&c^j_{2i+2}\cdots c^j_{2h}, \\ r^j_o&=&c^j_{2h}\cdots c^j_{2i+2}, \\ m^j&=&b^j_1\cdots b^j_{k}c^j_{2i+1}.\end{aligned}$$ The first two lines would be different in the first copy and the third and the fourth lines would be different in the last copy: $$\begin{aligned} l^1_i&=&c^1_{2i}\cdots c^1_1,\\ l^1_o&=&c^1_{1}\cdots c^1_{2i},\\ r^n_i&=&c^n_{2i+2}\cdots c^n_{2h+1}, \\ r^n_o&=&c^n_{2h+1}\cdots c^n_{2i+2}.\end{aligned}$$ Basically $l^j_i$ is the *inward* product of the horizontal twists on the *left* hand side of the $j^{th}$ copy, namely their product taken towards the center of the $j^{th}$ copy. Similarly $l^j_o$ is the *outward* product of the twists on the *left* hand side of the $j^{th}$ copy, namely their product taken away from the center of the $j^{th}$ copy. The definitions of $r^j_i$ and $r^j_o$ use the same idea. $m^j$ represents the product of the twists in the *middle* section of the $j^{th}$ copy as it appears in Theorem \[simplecase.thm\]. The twists $c^j_{2i},c^1_{2i},c^j_{2i+2}$ and $c^n_{2i+2}$ should actually be written as $c^j_{2i_j},c^1_{2i_1},c^j_{2i_j+2}$ and $c^n_{2i_n+2}$, as the subindex $i$ will be different for each copy, but we are not showing this dependence on $j$ to keep the notation simple. Using the notation described above we can write the positive Dehn twist product for the involution shown in Figure \[multiple.fig\]. \[main.thm\] The positive Dehn twist product for the involution $\theta$ shown in Figure \[multiple.fig\] is $$r^n_il^n_ir^{n-1}_il^{n-1}_ix_{n-1}t_{n-1}b^n_0r^n_ol^n_om^n\cdots m^4r^2_il^2_ix_2t_2b^3_0r^3_ol^3_om^3r^1_il^1_ix_1t_1b^2_0r^2_ol^2_om^2b^1_0r^1_ol^1_om^1.$$ To reduce the notation in the above expression for $\theta$ further, let $$\begin{aligned} Y^j_o&=&r^j_ol^j_om^j,\\ Y^j_i&=&r^j_il^j_i,\\ X_j&=&x_jt_j.
\end{aligned}$$ Then $\theta$ can be rewritten as $$Y^n_iY^{n-1}_iX_{n-1}b^n_0Y^n_o\cdots b^4_0Y^4_oY^2_iX_2b^3_0Y^3_oY^1_iX_1b^2_0Y^2_ob^1_0Y^1_o.$$ Using the product notation we obtain: $$\theta =Y^n_i\prod_{j=2}^{n}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o.$$ The product sign here will mean multiplication from right to left, contrary to its usual meaning, in agreement with the earlier expressions.\ [**Proof:**]{} The proof is by induction. To show the effect of $\theta$ on the first copy we set $n=2$ in the product sign and get the expression $$Y^{1}_iX_{1}b^2_0Y^2_ob^1_0Y^1_o,$$ which is equal to $$Y^{1}_iX_{1}b^1_0Y^1_ob^2_0Y^2_o,$$ because none of the cycles in $b^1_0Y^1_o$ intersects any cycle in $b^2_0Y^2_o$. Therefore what we have is $$r^1_il^1_ix_1t_1b^1_0r^1_ol^1_om^1b^2_0r^2_ol^2_om^2,$$ which can be reduced to $$r^1_il^1_ix_1t_1b^1_0r^1_ol^1_om^1$$ because the expression $b^2_0r^2_ol^2_om^2$ has no effect on the bounded surface in Figure \[nequals1.fig\]. The explicit version of $r^1_il^1_ix_1t_1b^1_0r^1_ol^1_om^1$ is $$c_{2i+2}^1\cdots c_{2h}^1c_{2i}^1\cdots c_{1}^1x_{1}^1t_{1}^1b_{0}^1c_{2h}^1\cdots c_{2i+2}^1c_1^1c_{2}^1\cdots c_{2i}^1b_{1}^1b_{2}^1\cdots b_{k-1}^1b_{k}^1c_{2i+1}^1.$$ This is a special case of the expression in Proposition \[boundedcase.prop\] for the surface with one boundary component. Therefore the effect of the above expression on the bounded surface in Figure \[nequals1.fig\] is that of $\theta$ in Proposition \[boundedcase.prop\]. Now suppose that $$\prod_{u=2}^{j}\left(Y^{u-1}_iX_{u-1}b^u_0Y^u_o\right)b^1_0Y^1_o$$ realizes the involution $\theta$ on the first $j-1$ copies of the surface in Figure \[multiple.fig\].
Consider now $$\prod_{u=2}^{j+1}\left(Y^{u-1}_iX_{u-1}b^u_0Y^u_o\right)b^1_0Y^1_o,$$ which is equal to $$Y^{j}_iX_{j}b^{j+1}_0Y^{j+1}_o\prod_{u=2}^{j}\left(Y^{u-1}_iX_{u-1}b^u_0Y^u_o\right)b^1_0Y^1_o.$$ The first observation we have to make is that the expression $Y^{j}_iX_{j}b^{j+1}_0Y^{j+1}_o$ leaves the first $j-1$ copies with boundary $\delta_{j-1}$ unaltered, because all of the twists it contains are about cycles that lie completely to the right of $\delta_{j-1}$. Now, in order to see the effect of the inductive step on the surface in Figure \[nequalsj.fig\] let's release the last term in the product to get $$Y^{j}_iX_{j}b^{j+1}_0Y^{j+1}_oY^{j-1}_iX_{j-1}b^{j}_0Y^{j}_o \prod_{u=2}^{j-1}\left(Y^{u-1}_iX_{u-1}b^u_0Y^u_o\right)b^1_0Y^1_o.$$ The part of this expression that will be effective in the mapping of the $j^{th}$ copy is contained in $Y^{j}_iX_{j}b^{j+1}_0Y^{j+1}_oY^{j-1}_iX_{j-1}b^{j}_0Y^{j}_o$. Using the commutativity relation between the terms that do not intersect we can rewrite this as $Y^{j-1}_iY^{j}_iX_{j}X_{j-1}b^{j}_0Y^{j}_ob^{j+1}_0Y^{j+1}_o$, just to bring the terms that we need together. To be precise, the twists contained in $Y^{j}_iX_{j}X_{j-1}b^{j}_0Y^{j}_o$ are the ones that will realize the effect of $\theta$ on the $j^{th}$ copy. Writing them explicitly, we get $$c^j_{2i+2}\cdots c^j_{2h}c^j_{2i}\cdots c^j_{2}x_{j}t_{j}x_{j-1}t_{j-1}b^j_{0}c^j_{2h}\cdots c^j_{2i+2}c^j_{2}\cdots c^j_{2i}b^j_{1}b^j_{2}\cdots b^j_{k-1}b^j_{k}c^j_{2i+1},$$ which is exactly the expression in Proposition \[boundedcase.prop\] adapted for the $j^{th}$ copy, with the identifications $c_1=t_{j-1},c_{2h+1}=t_j,x_1=x_{j-1},x_2=x_j$. This proves the inductive step. To complete the proof we need to point out the mapping of the last copy.
Recall the expression for $\theta$ $$\theta =Y^n_i\prod_{j=2}^{n}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o.$$ Releasing the last term in the product sign we get $$Y^n_iY^{n-1}_iX_{n-1}b^n_0Y^n_o\prod_{j=2}^{n-1}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o.$$ Since $Y^{n-1}_i$ has no effect on the $n^{th}$ copy we have only $Y^n_iX_{n-1}b^n_0Y^n_o$ realizing $\theta$ on the last copy. Writing them explicitly, we get $$c^n_{2i+2}\cdots c^n_{2h}c^n_{2h+1}c^n_{2i}\cdots c^n_{2}x_{n-1}t_{n-1}b^n_{0}c^n_{2h+1}c^n_{2h}\cdots c^n_{2i+2}c^n_{2}\cdots c^n_{2i}b^n_{1}b^n_{2}\cdots b^n_{k-1}b^n_{k}c^n_{2i+1}.$$ This is, again, a special case of the formula in Proposition \[boundedcase.prop\] adapted for the surface with one boundary component seen in Figure \[nequalsn.fig\]. Although it is not needed, we will also include the mapping of the cycle $t_j$ in the proof. Figure \[mappingoftjwb.fig\] shows the mapping of the curve $t_j$, $j=1,\ldots,n-1$. To understand the steps in that figure let's write $$\theta=\cdots Y^{j+1}_iX_{j+1}b^{j+2}_0Y^{j+2}_oY^{j}_iX_{j}b^{j+1}_0Y^{j+1}_oY^{j-1}_iX_{j-1}b^j_0Y^j_o\cdots.$$ The first term in the expression for $\theta$ that will not miss $t_j$ is $Y^j_o=r^j_ol^j_om^j$. In fact all the twists in $Y^j_o$ will miss $t_j$ except for the last twist in $r^j_o,$ which is $c^j_{2h}$. The next twist is $b_0^j$ and it will leave the result of the previous twist unaltered as shown in the first line of Figure \[mappingoftjwb.fig\]. So does the expression $Y^{j-1}_iX_{j-1}$, because all the twists it contains miss the same result. The effect of the next term $Y^{j+1}_o=r^{j+1}_ol^{j+1}_om^{j+1}$ on the current cycle is performed by the twist $c^{j+1}_2$ which is contained in $l^{j+1}_o$. The result from this twist is seen in the second line of the figure. This movement causes the next twist to miss the current result, namely $b^{j+1}_0$ leaves it unaltered as shown in the second line.
The first twist $t_j$ in $X_j=x_jt_j$ has two intersection points with the current cycle, and the result from its application is seen in the first half of the third line. The cycle $x_j$ doesn't intersect the result from twisting about $t_j$, therefore it has no effect on it as indicated in the end of the third line. The following term $Y^j_i=r^j_il^j_i$ has only one cycle that will intersect the cycle that is missed by $x_j$ in the third line, i.e., $c^j_{2h}$ that lies in $r^j_i$. Its effect on the current cycle is seen in the beginning of the last line. All the twists contained in the sequence of terms $X_{j+1}b^{j+2}_0Y^{j+2}_o$ following $Y^{j}_i$ miss the first cycle in the last line. The next cycle that will not miss it is $c_2^{j+1}$, which is contained in $l^{j+1}_i$ of $Y^{j+1}_i=r^{j+1}_il^{j+1}_i$. The rest of the twists miss the last cycle in Figure \[mappingoftjwb.fig\], therefore $t_j$ is fixed point-wise under the action of the expression for $\theta$, as expected. \[hyperelliptic.cor1\] Let $\theta$ be expressed as in Theorem \[simplecase.thm\]. By setting $k=0$ we obtain the positive Dehn twist expression $$i =c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}b_{0}c_{2h+1}c_{2h}\cdots c_{2i+2}c_{1}c_{2}\cdots c_{2i}c_{2i+1}$$ for the hyperelliptic involution. [**Proof:**]{} We will give an algebraic proof for this fact. First, observe that $b_0=c_1\cdots c_{2h}(c_{2h+1})$. Here we abuse notation and use the same symbols for the cycles and the twists. $b_0=c_1\cdots c_{2h}(c_{2h+1})$ means that the sequence of twists $c_1\cdots c_{2h}$ is applied to the cycle $c_{2h+1}$ and the cycle $b_0$ is obtained as the result of that.
Therefore, by a well-known fact [@Gu], the twist about $b_0$ is obtained from the twist about $c_{2h+1}$ by conjugation by $c_1\cdots c_{2h}$, i.e., $$b_0=c_1\cdots c_{2h}c_{2h+1} (c_1\cdots c_{2h})^{-1}.$$ Substituting this in the expression for $i$ stated in Corollary \[hyperelliptic.cor1\] we obtain $$c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}c_1\cdots c_{2h}c_{2h+1} (c_1\cdots c_{2h})^{-1}c_{2h+1}c_{2h}\cdots c_{2i+2}c_{1}c_{2}\cdots c_{2i}c_{2i+1},$$ which equals $$c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}c_1\cdots c_{2h}c_{2h+1} c_{2h}^{-1}\cdots c_1^{-1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{1}c_{2}\cdots c_{2i}c_{2i+1}.$$ Recall that $c_i$ and $c_j$ commute if $|i-j|>1$. Using this we can write $$c_{2h}^{-1}\cdots c_1^{-1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{1}c_{2}\cdots c_{2i}c_{2i+1}$$ as $$c_{2h}^{-1}\cdots c_{2i+1}^{-1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i}^{-1}\cdots c_1^{-1}c_{1}c_{2}\cdots c_{2i}c_{2i+1},$$ in which the middle terms cancel to give $$c_{2h}^{-1}\cdots c_{2i+1}^{-1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i+1}.$$ Therefore what we had originally is equal to $$c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}c_1\cdots c_{2h}c_{2h+1}c_{2h}^{-1}\cdots c_{2i+1}^{-1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i+1}.$$ Another relation we have to remember is the braid relation $c_ic_{i+1}c_i=c_{i+1}c_ic_{i+1}$, from which we can obtain $c_ic_{i+1}c_i^{-1}=c_{i+1}^{-1}c_ic_{i+1}$.
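Both relations used in these manipulations — far commutativity ($c_ic_j=c_jc_i$ for $|i-j|>1$) and the braid relation — can be sanity-checked in the symmetric group, where the twists $c_i$ map to adjacent transpositions (a standard quotient in which both relations hold). This is only a consistency check, not part of the proof; the Python sketch and its helper names are our own illustration:

```python
def compose(p, q):
    # permutation composition: (p * q)(x) = p(q(x)); permutations as tuples
    return tuple(p[q[x]] for x in range(len(p)))

def s(i, n=5):
    # adjacent transposition swapping positions i-1 and i (the image of c_i)
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

# braid relation: c_1 c_2 c_1 = c_2 c_1 c_2
assert compose(compose(s(1), s(2)), s(1)) == compose(compose(s(2), s(1)), s(2))
# far commutativity: c_1 c_3 = c_3 c_1, since |1 - 3| > 1
assert compose(s(1), s(3)) == compose(s(3), s(1))
```

Note that the check only confirms the relations in a quotient; the relations themselves hold in the mapping class group by the cited facts.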
Using this multiple times on the expression $$c_1\cdots c_{2h}c_{2h+1}c_{2h}^{-1}\cdots c_{2i+1}^{-1}$$ along with the commutativity relation mentioned above, we obtain $$c_{2h+1}^{-1}\cdots c_{2i+2}^{-1}c_1\cdots c_{2h}c_{2h+1}.$$ Substituting this into what we had for the original expression, namely $$c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}c_1\cdots c_{2h}c_{2h+1}c_{2h}^{-1}\cdots c_{2i+1}^{-1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i+1},$$ we now get $$c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2i}\cdots c_{2}c_{1}c_{2h+1}^{-1}\cdots c_{2i+2}^{-1}c_1\cdots c_{2h}c_{2h+1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i+1}.$$ Using the commutativity relation between the terms $c_{2i}\cdots c_{2}c_{1}$ and $c_{2h+1}^{-1}\cdots c_{2i+2}^{-1}$ we can write the above expression as $$c_{2i+2}\cdots c_{2h}c_{2h+1}c_{2h+1}^{-1}\cdots c_{2i+2}^{-1}c_{2i}\cdots c_{2}c_{1}c_1\cdots c_{2h}c_{2h+1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i+1},$$ which simplifies to $$c_{2i}\cdots c_{2}c_{1}c_1\cdots c_{2h}c_{2h+1} c_{2h+1}c_{2h}\cdots c_{2i+2}c_{2i+1}.$$ If we square this we get $$c_{2i}\cdots c_{1}\underline{c_1\cdots c_{2h+1} c_{2h+1}\cdots c_{2i+1}c_{2i}\cdots c_{1}}c_1\cdots c_{2h+1} c_{2h+1}\cdots c_{2i+1}.$$ The underlined portion is the well-known expression for $i$.
Also using the fact that $i$ commutes with $c_i,$ the above expression becomes $$ic_{2i}\cdots c_{1}c_1\cdots c_{2h+1} c_{2h+1}\cdots c_{2i+1}.$$ Now the question reduces to showing $$ic_{2i}\cdots c_{1}c_1\cdots c_{2h+1} c_{2h+1}\cdots c_{2i+1}=1.$$ We will obtain that result by going backwards from the relation $i^2=1$, by first writing it as $$ic_1\cdots c_{2h+1}c_{2h+1}\cdots c_{2i+1}c_{2i}\cdots c_{1}=1,$$ then multiplying by $c_1^{-1}$ on the right, $$ic_1\cdots c_{2h+1}c_{2h+1}\cdots c_{2i+1}c_{2i}\cdots c_2=c_{1}^{-1},$$ and then multiplying by $c_1$ on the left $$ic_1c_1\cdots c_{2h+1}c_{2h+1}\cdots c_{2i+1}c_{2i}\cdots c_2=1$$ and repeating the same procedure $2i$ times.\ An alternate expression for $\theta$ using a slightly different set of cycles is obtained by gluing $n$ copies of bounded surfaces in Figure \[basecycleswithboundary.fig\] together along tori with two boundary components. Figure \[altmultiple.fig\] demonstrates the set of cycles that are used in that expression. The need for this expression emerges from the fact that it is necessary to have at least two holes between two copies when they are glued along four-holed spheres, as seen in Figures \[inputex1.fig\] and \[inputex2.fig\]. The alternate expression allows us to have only one hole between two adjacent copies and it is very similar to the one given in Theorem \[main.thm\]: $$\theta =Y^n_i\prod_{j=2}^{n}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o,$$ where $$\begin{aligned} Y^j_o&=&r^j_ol^j_om^j,\\ Y^j_i&=&r^j_il^j_i,\\ X_j&=&x_jt_j, \end{aligned}$$ and $$\begin{aligned} l^j_i&=&c^j_{2i}\cdots c^j_1,\\ l^j_o&=&c^j_{1}\cdots c^j_{2i},\\ r^j_i&=&c^j_{2i+2}\cdots c^j_{2h+1}, \\ r^j_o&=&c^j_{2h+1}\cdots c^j_{2i+2}, \\ m^j&=&b^j_1\cdots b^j_{k}c^j_{2i+1}.\end{aligned}$$ The proof of this fact also uses induction and it is essentially based on the simple bounded case that is similar to the one in Proposition \[boundedcase.prop\]. 
After modifying Figure \[basecycleswithboundary.fig\] slightly, we obtain Figure \[altsimplecasewithboundary.fig\] and hence the expression $$\theta =c_{2i+2}\cdots c_{2h+1}c_{2i}\cdots c_{1}x_{2}t_{2}x_{1}t_{1}b_{0}c_{2h+1}\cdots c_{2i+2}c_{1}\cdots c_{2i}b_{1}b_{2}\cdots b_{k-1}b_{k}c_{2i+1}$$ that replaces the one in Proposition \[boundedcase.prop\]. We will not give a detailed proof for this last expression; instead, we just provide the mapping of the boundary component $\delta_1$ in Figure \[basecycleswithboundary.fig\]. One has to mimic the steps that are involved in the mapping of the other cycles in the proof of Proposition \[boundedcase.prop\] by accommodating the slight modifications as needed.\ In the first line of Figure \[altmappingofdelta1.fig\] we see the effect of $c_1$ on $\delta_1$ first because all the cycles that come before $c_1$ miss $\delta_1$. The next cycle, $b_0,$ misses the result from that as seen in the end of the first and the beginning of the second line. After that, the twist about $t_1$ takes place, which is not demonstrated in two steps, even though it intersects the cycle it twists twice. The result from that has intersection number 2 with $x_1$, and the twist about $x_1$ is shown in two steps in the end of the second line and all of the third line. The following twists completely miss the cycle at the end of the third line, and the twist about $c_1$ brings that cycle back to $\delta_1$.\ A corollary to the expression for $\theta$ in Theorem \[main.thm\] and its alternate form would be setting $k=0$ to obtain some new expressions for the hyperelliptic involution. All we have to do is redefine $Y^j_o$ as $Y^j_o=r^j_ol^j_oc^j_{2i+1}$ without changing the expression $$Y^n_i\prod_{j=2}^{n}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o,$$ for $\theta$.
Applications ============ In this section we will determine the homeomorphism type of the genus $g$ Lefschetz fibration $$X\longrightarrow S^{2}$$ described by the word $\theta^2 =1$ in the mapping class group $M_g$, where $\theta$ is as defined in Theorem \[main.thm\]. Consider the surface in Figure \[multiple.fig\]. Let $k_j$ be the $j^{th}$ *vertical genus*, the total genus of the central part of the $j^{th}$ copy, and let $h_j=l_j+r_j$ be the $j^{th}$ *horizontal genus*, namely the sum of the $j^{th}$ *left genus* $l_j$ and the $j^{th}$ *right genus* $r_j$. Let $k=\sum k_j$ be *the vertical genus* and $h=\sum h_j$ be *the horizontal genus*. If we denote the total genus by $g$ then $g=h+k$. To find the total number of cycles contained in $\theta$ let’s recall that $$\theta =Y^n_i\prod_{j=2}^{n}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o,$$ where $$\begin{aligned} Y^j_o&=&r^j_ol^j_om^j,\\ Y^j_i&=&r^j_il^j_i,\\ X_j&=&x_jt_j, \end{aligned}$$ and $$\begin{aligned} l^j_i&=&c^j_{2i}\cdots c^j_2,\\ l^j_o&=&c^j_{2}\cdots c^j_{2i},\\ r^j_i&=&c^j_{2i+2}\cdots c^j_{2h}, \\ r^j_o&=&c^j_{2h}\cdots c^j_{2i+2}, \\ m^j&=&b^j_1\cdots b^j_{k}c^j_{2i+1}.\end{aligned}$$ The first and the last copies would differ in the first two and the following two lines of definitions above, respectively: $$\begin{aligned} l^1_i&=&c^1_{2i}\cdots c^1_1,\\ l^1_o&=&c^1_{1}\cdots c^1_{2i},\\ r^n_i&=&c^n_{2i+2}\cdots c^n_{2h+1}, \\ r^n_o&=&c^n_{2h+1}\cdots c^n_{2i+2}.\end{aligned}$$ $m^j$ consists of $k_j+1$ cycles. Both $l^j_i$ and $l^j_o$ consist of $2i-2+1=2i-1$ cycles for $j\neq1$ and $l^1_i$ and $l^1_o$ consist of $2i-1+1=2i$ cycles. Likewise, both $r^j_i$ and $r^j_o$ consist of $2h_j-(2i+2)+1=2h_j-2i-1$ cycles for $j\neq n$ and $r^n_i$ and $r^n_o$ consist of $2h_n+1-(2i+2)+1=2h_n-2i$ cycles. 
Therefore $Y^j_o$ consists of $y^j_o=2h_j-2i-1+2i-1+k_j+1=k_j+2h_j-1$ cycles for $j\neq1,n$, $Y^1_o$ consists of $y^1_o=2h_1-2i-1+2i+k_1+1=k_1+2h_1$ cycles, and $Y^n_o$ consists of $y^n_o=2h_n-2i+2i-1+k_n+1=k_n+2h_n$ cycles. $Y^j_i$ has $y^j_i=2h_j-2i-1+2i-1=2h_j-2$ cycles for $j\neq1,n$, $Y^1_i$ has $y^1_i=2h_1-2i-1+2i=2h_1-1$ cycles, and $Y^n_i$ has $y^n_i=2h_n-2i+2i-1=2h_n-1$ cycles. Clearly $X_j$ consists of 2 cycles. In the above computations, too, we ignored the dependence of $i$ on $j$ and wrote $2i$ instead of $2i_j$, in order not to complicate the computations; these terms cancel out anyway. Now, using the lengths of each group of twists computed above, we determine that $$Y^n_i\prod_{j=2}^{n}\left(Y^{j-1}_iX_{j-1}b^j_0Y^j_o\right)b^1_0Y^1_o,$$ consists of $$y^n_i+\sum_{j=2}^{n}\left(y^{j-1}_i+2+1+y^j_o\right)+1+y^1_o$$ $$=y^n_i+\sum_{j=2}^{n} y^{j-1}_i+3(n-2+1)+\sum_{j=2}^{n}y^j_o+1+y^1_o$$ cycles. Rearranging the indices and simplifying, we get $$y^n_i+\sum_{j=1}^{n-1}y^{j}_i+3(n-1)+\sum_{j=2}^{n}y^j_o+1+y^1_o.$$ Releasing the first term of the first sum and the last term of the second sum gives $$y^n_i+y^1_i+\sum_{j=2}^{n-1}y^{j}_i+3(n-1)+y^n_o+\sum_{j=2}^{n-1}y^j_o+1+y^1_o,$$ which is equal to $$y^1_i+y^1_o+y^n_i+y^n_o+\sum_{j=2}^{n-1}\left(y^{j}_i+y^j_o\right)+3(n-1)+1.$$ Now substituting the value of each term and simplifying we obtain $$2h_1-1+k_1+2h_1+2h_n-1+k_n+2h_n+\sum_{j=2}^{n-1}\left(2h_j-2+k_j+2h_j-1\right)+3(n-1)+1$$ $$=4h_1+k_1+4h_n+k_n+\sum_{j=2}^{n-1}\left(4h_j+k_j\right)-3(n-1-2+1)+3(n-1)-1$$ $$=\sum_{j=1}^{n}\left(4h_j+k_j\right)-3(n-2)+3(n-1)-1$$ $$=4\sum_{j=1}^{n}h_j+\sum_{j=1}^{n}k_j+6-3n+3n-3-1$$ $$=4h+k+2.$$ Therefore the word $\theta^2 =1$ consists of $2(4h+k+2)=8h+2k+4$ twists. Since all the twists are about non-separating cycles, the Lefschetz fibration defined by the word $\theta^2 =1$ has $8h+2k+4$ irreducible singular fibers.
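As a sanity check on this count, the per-copy group lengths derived above can be tabulated programmatically and compared against the closed form $4h+k+2$. The following Python sketch is our own illustration (not the authors' program), and the per-copy data $(h_j,k_j)$ used in the example are hypothetical:

```python
def group_lengths(h_j, k_j, first=False, last=False):
    # Lengths of the twist groups in the j-th copy, from the definitions above.
    # Any valid split 1 <= i <= h_j - 1 gives the same totals (h_j >= 2 assumed),
    # so we fix i = 1.
    i = 1
    l = 2 * i if first else 2 * i - 1                      # |l^j_i| = |l^j_o|
    r = 2 * h_j - 2 * i if last else 2 * h_j - 2 * i - 1   # |r^j_i| = |r^j_o|
    y_i = r + l                 # Y^j_i = r^j_i l^j_i
    y_o = r + l + (k_j + 1)     # Y^j_o = r^j_o l^j_o m^j
    return y_i, y_o

def twists_in_theta(copies):
    # copies: list of (h_j, k_j); counts the twists in
    # theta = Y^n_i prod_{j=2}^{n} (Y^{j-1}_i X_{j-1} b^j_0 Y^j_o) b^1_0 Y^1_o
    n = len(copies)
    total = sum(sum(group_lengths(h, k, j == 0, j == n - 1))
                for j, (h, k) in enumerate(copies))
    return total + 2 * (n - 1) + n   # each X_j has 2 twists; one b^j_0 per copy

copies = [(3, 2), (2, 0), (2, 4)]    # hypothetical (h_j, k_j) data
h, k = sum(c[0] for c in copies), sum(c[1] for c in copies)
assert twists_in_theta(copies) == 4 * h + k + 2
```

The word $\theta^2=1$ then has twice this many twists, matching the $8h+2k+4$ singular fibers counted in the text.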
This allows us to compute the Euler characteristic of the $4$-manifold $X$ using the formula $$\chi(X)=2(2-2g)+\mbox{number of singular fibers}$$ for Lefschetz fibrations, which is $$2(2-2g)+8h+2k+4=4-4g+8h+2k+4=4-4(h+k)+8h+2k+4$$ $$=8+4h-2k$$ in our case. The other homeomorphism invariant that we will compute is the signature $\sigma(X)$ of the $4$-manifold $X$. Using the algorithm described in [@Oz], we wrote a Matlab program that computes the signature of the Lefschetz fibration described by the word $\theta^2 =1$. The input for the program is the left, right, and the vertical genus of each copy that is glued together to form the surface $\Sigma$ on which $\theta$ is defined. The following examples demonstrate how the shape of the surface is coded into a sequence of numbers, which are used as the inputs for the program.

(0 2 1,1 2 0): $0+0+0+0-1+0+0+0+0-1-1-1-1-1-1-1+0-1+0-1-1+0-1+0+0+0+0+0=-12$

(0 4 1,1 2 0): $0+0+0+0+0+0-1+0+0+0+0-1-1-1-1-1-1-1+0+0+0-1+0-1-1+0-1+0+0+0+0+0=-12$

(0 2 1,1 4 0): $0+0+0+0-1+0+0+0+0+0+0-1-1-1-1-1-1-1+0-1+0-1-1+0+0+0-1+0+0+0+0+0=-12$

(0 4 1,1 4 0): $0+0+0+0+0+0-1+0+0+0+0+0+0-1-1-1-1-1-1-1+0+0+0-1+0-1-1+0+0+0-1+0+0+0+0+0=-12$

(1 2 1,1 4 0): $0+0+0+0+0+0-1+0+0+0+0+0+0-1-1-1-1-1-1-1-1-1+0-1-1-1+0-1-1+0+0+0-1+ 0+0+0+0+0+0+0=-16$

(0 2 1,1 4 1): $0+0+0+0-1+0+0+0+0+0+0+0+0-1-1-1-1-1-1-1-1-1+0-1+0-1-1+0+0+0-1-1-1+ 0+0+0+0+0+0+0=-16$

(0 2 2,1 4 0): $0+0+0+0+0+0-1+0+0+0+0+0+0-1-1-1-1-1-1-1-1-1+0-1-1-1+0-1-1+0+0+0-1+ 0+0+0+0+0+0+0=-16$

(0 2 1,1 2 1,1 2 0): $0+0+0+0-1+0+0+0+0+0+0-1-1-1+0+0+0+0-1-1-1-1-1-1-1-1+0-1+0-1-1+0+ -1-1+0+0+0+0-1-1+0-1+0+0+0+0+0+0=-20$

(0 2 1,1 2 1,1 2 2,1 2 1): $0+0+0+0-1+0+0+0+0+0+0-1-1-1+0+0+0+0+0+0+0+0-1-1-1-1+0+0+0+0+0+0-1 -1-1-1-1-1-1-1-1-1-1-1+0-1+0-1-1+0-1-1+0+0+0+0-1-1+0-1-1-1-1+0+0+0+0+0-1-1 +0-1-1-1+0+0+0+0+0+0+0+0+0+0=-36$

(2 2 1,1 2 2,1 4 1): $0+0+0+0+0+0+0+0-1+0+0+0+0+0+0+0+0-1-1-1-1-1-1-1+0+0+0+0+0+0+0+0-1-1-1-1-1-1-1
-1-1-1-1-1+0-1-1-1-1-1+0-1-1+0-1-1-1-1+0+0+0+0+0+0+0+0-1-1+0+0+0-1-1-1+0+0+0+0+0+0+0+0+0+0=-36$

(3 4 2,1 4 2): $0+0+0+0+0+0+0+0+0+0+0+0+ 0+0-1+0+0+0+0+0+0+0+0+0+0-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1+0+0+0-1-1-1-1-1-1-1-1-1+0-1-1+0+0+0-1-1-1-1-1 +0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0=-36$

(1 4 1,1 2 1,1 6 2): -32

The above computations, along with many others that we do not include here, point to the fact that the signature depends only on $h$, i.e., it is independent of $k$. A quick check suggests that $\sigma(X)=-4(h+1)$ for the above computations. We conjecture that this is true in general, namely the signature of the Lefschetz fibration given by the word $\theta^2 =1$, where $\theta$ is as defined in Theorem \[main.thm\], is $-4(h+1)$. For $\chi(X)=8+4h-2k$ and $\sigma(X)=-4(h+1)$, we obtain $$\begin{aligned} c_1^2(X)&=&3\sigma(X)+2\chi(X) \\ &=&3(-4h-4)+2(8+4h-2k) \\ &=& -4h-4k+4 \\ &=& -4(g-1)\end{aligned}$$ and $$\begin{aligned} \chi_h(X)&=&\frac{1}{4}(\sigma(X)+\chi(X)) \\ &=&\frac{1}{4}(8+4h-2k-4h-4) \\ &=& \frac{1}{4}(4-2k)=1-k/2\end{aligned}$$ Recall that $k$ is even. $\chi_h(X)$ makes sense here because $X$ has an almost complex structure.
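Assuming the conjectured signature formula $\sigma(X)=-4(h+1)$, the invariant computations above can be packaged into a short sketch; `invariants` is our own hypothetical helper, and the check simply confirms the closed forms $c_1^2(X)=-4(g-1)$ and $\chi_h(X)=1-k/2$:

```python
def invariants(h, k):
    # chi(X) from the singular-fiber count, sigma(X) from the CONJECTURED
    # formula sigma = -4(h + 1); the remaining invariants follow.
    chi = 8 + 4 * h - 2 * k
    sigma = -4 * (h + 1)
    c1_sq = 3 * sigma + 2 * chi     # c_1^2(X) = 3 sigma + 2 chi
    chi_h = (sigma + chi) // 4      # holomorphic Euler characteristic
    return chi, sigma, c1_sq, chi_h

for h in range(1, 6):
    for k in range(0, 6, 2):        # k is even, as noted above
        chi, sigma, c1_sq, chi_h = invariants(h, k)
        g = h + k
        assert c1_sq == -4 * (g - 1) and chi_h == 1 - k // 2
```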
--- abstract: 'Many of the strongest game playing programs use a combination of Monte Carlo tree search (MCTS) and deep neural networks (DNN), where the DNNs are used as policy or value evaluators. Given a limited budget, such as online playing or during the self-play phase of AlphaZero (AZ) training, a balance needs to be reached between accurate state estimation and more MCTS simulations, both of which are critical for a strong game playing agent. Typically, larger DNNs are better at generalization and accurate evaluation, while smaller DNNs are less costly, and therefore can lead to more MCTS simulations and bigger search trees with the same budget. This paper introduces a new method called the multiple policy value MCTS (MPV-MCTS), which combines multiple policy value neural networks (PV-NNs) of various sizes to retain advantages of each network, where two PV-NNs $f_{S}$ and $f_{L}$ are used in this paper. We show through experiments on the game NoGo that a combined $f_{S}$ and $f_{L}$ MPV-MCTS outperforms a single PV-NN with policy value MCTS, called PV-MCTS. Additionally, MPV-MCTS also outperforms PV-MCTS for AZ training.' author: - 'Li-Cheng Lan$^{1,2}$' - 'Wei Li$^{1}$' - | Ting-Han Wei$^{1,2}$, I-Chen Wu$^{1,2}$ $^1$National Chiao Tung University, Taiwan\ $^2$Pervasive Artificial Intelligence Research (PAIR) Labs, Taiwan {sb710031, fm.bigballon, tinghan.wei}@gmail.com, [email protected] bibliography: - 'ijcai19.bib' title: Multiple Policy Value Monte Carlo Tree Search --- Introduction {#Intro} ============ Many of the state-of-the-art game playing programs in games such as Go, chess, shogi, and Hex use a combination of Monte Carlo tree search (MCTS) and deep neural networks (DNN) [@silver2016mastering; @silver2018general; @gao2018three].
MCTS is a heuristic best-first search algorithm and has been a major breakthrough for many games since 2006 [@kocsis2006bandit; @coulom2006efficient; @browne2012survey], especially for Go [@gelly2007combining; @enzenberger2010fuego]. Starting in 2015, DNNs were investigated for move prediction [@clark2015training; @tian2015better]. For a problem with large state spaces like Go, the power of DNNs to generalize for previously unseen states is critical. In addition, DNNs were found to be more accurate at evaluating positions than Monte Carlo rollout [@silver2016mastering]. MCTS, however, remains a critical component of strong game playing programs due to its ability to search ahead while balancing exploration and exploitation. The choice of network size in an MCTS and DNN combined algorithm is a non-trivial decision. Common current practice tends to settle on a network size based on empirical experience; for the example of Go, most teams tend to settle on 256 filters with a varying number of layers [@tian2019elfopengo; @Leela-zero], following the first example by AlphaGo [@silver2016mastering]. Empirical data shows that overall, the larger the network, the stronger the program tends to be under the same number of MCTS simulations [@Leela-zero]. Similarly, outside the context of Go, there is abundant evidence showing that in general, the larger the network, the better it will be at generalization, and the increased capacity of the network also leads to better learned representations [@hinton2015distilling; @he2016deep]. However, when creating a program that combines MCTS and DNN, given the same amount of computing resources, it is not as simple as training with the largest allowable network, since the number of MCTS simulations depends on the size of the network.
A smaller network can be much faster and therefore search more states; given the asymptotic convergence guarantee of MCTS, it may be more favorable to spend the finite budget on performing more simulations, rather than using a larger DNN for a more accurate evaluation. When considering training following the AZ paradigm, this problem is critical, since millions of self-play records need to be generated. To solve this dilemma, we consider taking advantage of both large and small networks by combining them together with MCTS, which we call multiple policy value MCTS (MPV-MCTS). Two pre-trained DNNs are used in this paper, typically of significantly different network sizes (i.e., consisting of different numbers of filters and layers). MPV-MCTS is a general method that does not depend on how the two DNNs are trained. In the method, each network grows its own best-first search tree. The two trees share the same action value, so intuitively, the small net helps avoid blind spots via its lookahead from a higher simulation count, while the large net provides more accurate values and policies. We use a simplified variant of Go called NoGo [@nogo] to demonstrate the idea. Experiments show that by combining supervised learning networks of different sizes, MPV-MCTS can reach an average of 418 Elo rating against the baseline (HaHaNoGo), whereas a small network and a large network alone can only reach 277 and 309 Elo ratings, respectively. Compared with intermediate-sized single networks, MPV-MCTS is also stronger against the common baseline. MPV-MCTS can also improve the playing strength of separately trained AZ networks. Lastly, using equivalent training budgets, MPV-MCTS can accelerate AZ training by at least a factor of 2. Matchups between the MPV-MCTS method and those without achieves win rates of 56.6% and 51.2% for 2 times and 2.5 times the training budget, respectively. 
With the same training budget, AZ training with MPV-MCTS can be up to 252 Elo ratings stronger than those trained without it. We list two major contributions of this paper as follows:

1. The MPV-MCTS search tree generated with two differently sized DNNs $f_S$ and $f_L$ is stronger in playing strength than either net alone, given the same amount of computing resource usage.

2. With MPV-MCTS, AZ training can be performed more efficiently.

Background {#Background} ========== Monte Carlo Tree Search {#MCTS} ----------------------- MCTS is a best-first tree search algorithm, typically consisting of the iterative process of: 1) traversing the search tree according to a specified selection algorithm to *select* a yet-to-be-evaluated leaf state, 2) *evaluating* the leaf, and 3) *updating* the tree correspondingly with the evaluation result. Instead of following the minimax paradigm, MCTS averages the evaluation results in the subtree rooted at state $s$ to decide on the state value $V(s)$. Monte Carlo sampling (often referred to as rollout) is a standard evaluation method; modern programs may use DNNs as state value approximators instead of rollout, or combine the two evaluation methods in some way. Most selection algorithms are designed to minimize the expected regret of sampling the state $s$ by balancing exploration and exploitation. Examples of selection algorithms include UCB1 [@kocsis2006bandit] and PUCT [@rosin2011multi]. Both of these algorithms follow the general form: $$a_t=\mathop{\arg\max}_{a} (Q(s,a)+u(s,a)),$$ where $Q(s,a)$ is the state action value of taking action $a$ at $s$, and $u(s,a)$ is a bonus value to regulate exploration.
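A minimal sketch of this general selection form, instantiated with a PUCT-style bonus as used by AlphaGo; the exploration constant `c_puct`, the `+1` in the denominator, and the statistics layout are our own illustrative assumptions:

```python
import math

def puct_bonus(prior, n_parent, n_action, c_puct=1.5):
    # u(s,a) proportional to P(s,a) * sqrt(N(s)) / N(s,a); the +1 avoids
    # division by zero for unvisited actions (c_puct is an assumed constant)
    return c_puct * prior * math.sqrt(n_parent) / (1 + n_action)

def select_action(stats):
    # stats: {action: (Q, P, N_sa)}; pick argmax_a Q(s,a) + u(s,a)
    n_parent = max(1, sum(n for _, _, n in stats.values()))
    return max(stats, key=lambda a: stats[a][0]
               + puct_bonus(stats[a][1], n_parent, stats[a][2]))
```

For example, with two actions of similar prior and value, the one with the far lower visit count receives the larger bonus and is selected, which is exactly the exploration behavior the bonus term is designed to produce.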
For example, the bonus value of PUCT (used by AlphaGo) is: $$\label{equ:PUCT} u_{\text{PUCT}}(s,a)\propto P(s,a)\times\frac {\sqrt{N(s)}}{N(s,a)},$$ where $N(s)$ and $N(s,a)$ are the simulation counts of $s$ and of taking the action $a$ at $s$, respectively; and $P(s,a)$ is the probability of $a$ being the best action of $s$, which is referred to as the *prior probability*. Policy Value MCTS {#PV-MCTS} ----------------- Policy Value MCTS (PV-MCTS) uses DNNs to provide a policy $p(a|s)$ and a value $v(s)$ for any given state $s$. The policy $p$ is used for PUCT as $P(s,a)$ in Equation \[equ:PUCT\] during the selection phase, while $v(s)$ is used as the evaluation result to update the state value $V$ of ancestor states of $s$. The particular implementation of PV-MCTS as used by AlphaGo consists of two separate networks: the policy network (used when a node becomes a branch node of the search tree) and the value network (used when a leaf node is selected for evaluation). AlphaGo Zero combines the two as a policy value network (PV-NN) that outputs the prior probabilities and state value simultaneously. Data from Silver et al. hinted that the advantage of search cannot be easily replaced by spending more effort on training better policy networks. More specifically, their experiments show that the output policy of PV-MCTS is about 2000 Elo ratings [@elo1978rating] stronger than simply using the policy output of the same PV-NN without search. This indicates that even with a strong DNN policy function, performance can be further improved significantly through search. Combining Multiple Strategies in MCTS {#APV-MCTS} ------------------------------------- There are several methods that have been proposed to combine multiple strategies during MCTS action value evaluation, such as Rapid Action Value Evaluation (RAVE) [@gelly2011monte], implicit minimax backups [@lanctot2014monte], and asynchronous policy value MCTS (APV-MCTS) [@silver2016mastering].
Both RAVE and implicit minimax backups combine MCTS evaluation (rollout) with another heuristic; for the scope of this paper, we omit the details. Of these three algorithms, MPV-MCTS is most related to APV-MCTS. APV-MCTS is designed to combine the value network and rollout as the MCTS evaluation. The combined strength of the value network and rollout can achieve more than a 95% winning rate against using either one alone [@silver2016mastering], demonstrating that there is potential for better performance by combining different strategies. Method {#Method} ====== Multiple Policy Value MCTS {#MPV-MCTS} -------------------------- In this subsection, we focus on explaining the scenario where the overall system consists of two PV-NNs $f_S$ and $f_L$ (small and large networks, respectively). Let $b_S$ ($b_L$) be an assigned number of simulations, or budget, for which our method uses the network $f_S$ ($f_L$) to grow its own search tree $T_S$ ($T_L$). Our problem is then: Given $s$, $f_S$, $f_L$, $b_S$, and $b_L$, find a stronger policy: $$\pi(s,(f_S,b_S),(f_L,b_L)),$$ such that $b_S \geq b_L$. When a state is in both search trees, the two networks collaborate by sharing the same state value $V(s)$ and the same prior probability $P(s,a)$. For the scope of this paper, we use the following method, similar to APV-MCTS: $$\begin{aligned} V(s) &= \alpha V_S(s)+(1-\alpha) V_L(s), \text{ and} \\ P(s,a) &= \beta p_S(a|s)+(1-\beta) p_L(a|s),\end{aligned}$$ where $\alpha, \beta \in [0,1]$ are weight coefficients. Note that $\alpha$ and $\beta$ can be set according to the accuracy of the values and prior probabilities. For example, the more accurate $V_L$ and $p_L$ are, the smaller $\alpha$ and $\beta$ should be. In the experiments, we set $\alpha=0.5$, $\beta=0$, following the settings of APV-MCTS. For each simulation during the search, we first choose either $f_S$ or $f_L$, say $f_S$, then select a leaf state of $T_S$ to evaluate, and then use the results to update the search tree.
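The shared-statistics rule above can be sketched in a few lines (a minimal illustration; the function names are ours):

```python
def shared_value(V_S, V_L, alpha=0.5):
    """V(s) = alpha * V_S(s) + (1 - alpha) * V_L(s)."""
    return alpha * V_S + (1 - alpha) * V_L

def shared_prior(p_S, p_L, beta=0.0):
    """P(s,a) = beta * p_S(a|s) + (1 - beta) * p_L(a|s);
    beta = 0 (the experimental setting) uses only the large net's policy."""
    return [beta * a + (1 - beta) * b for a, b in zip(p_S, p_L)]

assert abs(shared_value(0.2, 0.6) - 0.4) < 1e-12
assert shared_prior([0.9, 0.1], [0.5, 0.5]) == [0.5, 0.5]
```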
It is up to the user to design how the two networks take turns, as long as the budgets $b_S$ and $b_L$ are satisfied. We suggest several ways to take turns in Section \[Discussion\]. In our experiments, for a given set of $b_S$ and $b_L$, we simply randomly select $b_S$ distinct numbers between 1 and $b_S+b_L$, and perform a small net simulation for those iterations; for all other iterations, we use the larger net. This corresponds to line 1 in Algorithm \[alg:MPV-MCTS\]. We now consider how $f_S$ and $f_L$ contribute to the overall MPV-MCTS method, starting with $f_S$. Conceptually, the goal is to allow $T_S$ to provide the overall MPV-MCTS with the benefits of lookahead search, where the tree balances between exploration and exploitation as it grows. This is the role of $f_S$ because it is the faster of the two networks, and can therefore perform more simulations with the same amount of resource usage as $f_L$. This is also the reason why we assign $b_S$ to be larger than $b_L$. During MPV-MCTS, for each simulation using $f_S$, a leaf state is selected following the PUCT algorithm, using the simulation count of $f_S$ as $N$. This corresponds to lines 4-6 in Algorithm \[alg:MPV-MCTS\]. Now we consider the role of $f_L$. For every simulation of $f_L$, we wish to identify and simulate the most critical states. While there are many ways to do so, we simply assume that a yet-to-be-evaluated leaf with a higher visit count in $T_S$ (the larger tree) is more important, and thus the leaf with the highest visit count in $T_S$ is selected in each simulation. This corresponds to line 8 in Algorithm \[alg:MPV-MCTS\]. There is a very rare special case, wherein the selected leaf may not yet have been visited by $f_S$. In this case (line 9), we reselect an unevaluated leaf state (line 10) using PUCT for $f_L$ instead (following Equation \[equ:PUCT\], but with $N_L(s)$ and $N_L(s,a)$).
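The random turn-taking scheme described above can be sketched as follows (the `seed` argument is our addition, for reproducibility):

```python
import random

def make_schedule(b_S, b_L, seed=0):
    """Randomly assign b_S of the b_S + b_L simulation slots to the
    small net f_S; all remaining slots go to the large net f_L."""
    rng = random.Random(seed)
    small_iters = set(rng.sample(range(b_S + b_L), b_S))
    return ['S' if i in small_iters else 'L' for i in range(b_S + b_L)]

sched = make_schedule(800, 100)
assert sched.count('S') == 800 and sched.count('L') == 100
```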
(Algorithm \[alg:MPV-MCTS\], line 1: $list = \text{RandomlySelect}(b_S, b_S+b_L)$; the remaining lines of the listing are omitted.) Training AlphaZero with MPV-MCTS {#Training AlphaZero with MPV-MCTS} -------------------------------- We first briefly review the AZ training method, in which an agent improves by playing against itself repeatedly. The agent generates its playing policy by using a PV-NN in a PV-MCTS algorithm. The weights in the PV-NN are first initialized randomly. We now describe the training process by listing three major components. - [By using the PV-NN in a PV-MCTS algorithm, *self-play workers* generate game records by letting two identical instances of the agent play against each other. At each turn in a game, a worker computes PV-MCTS and follows the playing policy as follows. The probability of $a$ being played at $s$ is $\pi_a \propto N(s,a)^{1/\tau}$, where $N(s,a)$ is the action’s simulation count and $\tau$ is a temperature parameter. When a game ends, the self-play record is saved to the replay buffer.]{} - [The *replay buffer* is a fixed-size queue containing a collection of self-play game records. Each record $(s, \pi, z)$ includes a state $s$ (of which there will be many in a single game), the playing policy $\pi$ used to select the action at state $s$, and the outcome $z$ of the game.]{} - [The *training worker* continually samples records from the replay buffer and trains the current PV-NN.]{} Following the above AZ training, and from the intuition that larger DNNs tend to learn better, it is reasonable to expect that the trained agent using $f_L$ will outperform the one using $f_S$. That said, if our hypothesis for MPV-MCTS holds, it is also possible that replacing PV-MCTS with MPV-MCTS in AZ training will lead to better performance. Figure \[fig:workflow\] shows the workflow of how MPV-MCTS can be applied to the AZ training algorithm. We start from two PV-NNs of different sizes with random weights. The replay buffer does not need to be modified.
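The self-play move-selection rule $\pi_a \propto N(s,a)^{1/\tau}$ used by the self-play workers can be sketched as (a minimal illustration; numpy assumed):

```python
import numpy as np

def play_policy(counts, tau=1.0):
    """Playing policy pi_a proportional to N(s,a)^(1/tau);
    smaller tau sharpens the policy toward the most-visited action."""
    p = np.asarray(counts, dtype=float) ** (1.0 / tau)
    return p / p.sum()

pi = play_policy([10, 30, 60], tau=1.0)
assert np.allclose(pi, [0.1, 0.3, 0.6])
```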
The self-play workers use the latest PV-NNs of both sizes with MPV-MCTS to generate the self-play records. After performing MPV-MCTS, we simply use the simulation count of the small net tree $T_S$ as our playing policy $\pi$ to select actions. When a game ends, we store the records to the replay buffer. The training worker repeatedly uses records sampled from the replay buffer to train both $f_S$ and $f_L$ with the same loss function as AlphaGo Zero [@silver2017mastering]. ![ Workflow of training AZ with MPV-MCTS. []{data-label="fig:workflow"}](mpv_mcts_az_training.png){width="7cm"} Experiments {#Experiments} =========== In this section, we empirically demonstrate our method on NoGo, a two-player game on a $9 \times 9$ Go board. NoGo was selected as the feature game of the 2011 Combinatorial Games Workshop at BIRS, and subsequently became a tournament item in the Computer Olympiad [@nogo]. The game has the same rules as Go, except that moves that lead to capturing or suicide are not allowed. The player who cannot make any moves during her turn loses. NoGo was chosen for our experiments since the complexity of the game is relatively low compared with Go, while it maintains many similar characteristics [@chou2011revisiting]. Experiment Settings ------------------- In our experiments, the state-of-the-art NoGo program HaHaNoGo [@HaHaNoGo] served as the benchmark. HaHaNoGo is an MCTS-based program that defeated the reigning champion HappyNoGo (which placed first in the 2013 and 2015 Computer Olympiads) in a competition held at TAAI 2016. In all experiments, we used HaHaNoGo with 100,000 simulations as the baseline, and each program played 1,000 games (the standard deviation of the win rate is about 1%) against the baseline to obtain a win rate. For better visualization and analysis, we use the Elo rating [@elo1978rating] instead of the win rate to judge playing strength, setting the Elo rating of HaHaNoGo to $0$.
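The exact conversion from a measured win rate to an Elo difference against the zero-rated baseline is not spelled out in the text; assuming the standard logistic Elo model, it would be:

```python
import math

def elo_vs_baseline(win_rate):
    """Elo difference implied by a head-to-head win rate, with the
    baseline fixed at 0 Elo (standard logistic Elo model assumed)."""
    return 400.0 * math.log10(win_rate / (1.0 - win_rate))

assert elo_vs_baseline(0.5) == 0.0              # even score -> 0 Elo
assert round(elo_vs_baseline(0.954)) == 527     # matches the 95.40% / 527 Elo pair quoted later
```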
In this paper, all experiments are performed on eight Intel Xeon(R) Gold 6154 CPUs and 64 Nvidia Tesla V100 GPUs. The architecture of our PV-NNs is the same as that used in AlphaGo Zero [@silver2017mastering], except for the number of filters, the number of residual blocks [@he2016deep], and the inputs. Let $f_{x,y}$ denote a PV-NN with $x$ filters and $y$ residual blocks. The input to the network is a $9 \times 9 \times 4$ image stack comprised of four binary feature planes for the board information (player’s stones, opponent’s stones, player’s legal moves, opponent’s legal moves) representing game states. For fairness of comparison, let one unit of normalized budget indicate the amount of computing resources consumed by one single forward pass on the network $f_{128,10}$ in the environment described above. Note that one forward pass on $f_{a\times x,b\times y}$ ideally runs about $a^2\times b$ times slower than one on $f_{x,y}$. Thus, for computing resource analysis, one single forward pass on $f_{64,5}$ is said to consume $1/8$ of a normalized budget unit (ignoring the cost of selection and update). Thus, given the same amount of normalized budget $B$, $B$ forward passes can be performed on $f_{128,10}$, and $8B$ on $f_{64,5}$. Evaluating MPV-MCTS {#sec:evaluating MPV-MCTS} ------------------- #### Combining Supervised Learning Networks. First, we investigate using PV-NNs that were trained with supervised learning in MPV-MCTS. We trained both $f_{64,5}$ and $f_{128,10}$ on a dataset of 200,000 games (about $10^7$ positions) generated via self-play by HaHaNoGo with 50,000 simulations per move. We present the performance of MPV-MCTS with different budget allocation schemes, compared to single networks using PV-MCTS. With a normalized budget of $B$, we combine $f_{64,5}$ and $f_{128,10}$ with the MPV-MCTS algorithm under different allocation schemes according to a budget ratio $r \in [0,1]$, where the normalized budget is $rB$ for $f_{128,10}$ and $(1-r)B$ for $f_{64,5}$.
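The normalized-budget bookkeeping above can be sketched in a few lines (the function names are ours):

```python
def normalized_cost(filters, blocks, ref=(128, 10)):
    """Cost of one forward pass on f_{filters,blocks} relative to
    f_{128,10}: scales as (filters/128)^2 * (blocks/10)."""
    return (filters / ref[0]) ** 2 * (blocks / ref[1])

assert normalized_cost(64, 5) == 0.125  # 1/8, as stated in the text

def forward_passes(B, r, small=(64, 5), large=(128, 10)):
    """Split a normalized budget B by ratio r: r*B goes to the large
    net, (1-r)*B to the small net; return forward-pass counts."""
    return ((1 - r) * B / normalized_cost(*small),
            r * B / normalized_cost(*large))

# With B = 1600 and r = 1/2: 800 large-net and 6400 small-net passes.
n_small, n_large = forward_passes(B=1600, r=0.5)
assert (n_small, n_large) == (6400.0, 800.0)
```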
Figure \[fig:sl-haha-allocation\] shows the results of the different resource allocation schemes, showing that combining two PV-NNs leads to better performance. The strongest version in these experiments is $r=2/4$, which achieved a $527$ Elo rating (about a $95.40\%$ win rate against the baseline) with a normalized budget of 1600, while the versions that use only the small or only the large network achieve at best $323$ and $472$ Elo ratings, respectively. With the same normalized budget, both PV-NNs $f_{64,5}$ and $f_{128,10}$ alone are weaker than all three allocation ratios of MPV-MCTS. In the rest of the experiments, we set $r=2/4$. (Figure \[fig:sl-haha-allocation\]: Elo rating versus normalized budget for the different allocation schemes; plot data omitted.) We also trained intermediate-sized PV-NNs, $f_{128,5}$, $f_{90,10}$, $f_{90,5}$, and $f_{64,10}$, which are expected to be more accurate than smaller PV-NNs (e.g., $f_{64,5}$) and faster than larger PV-NNs (e.g., $f_{128,10}$). We compared these mid-sized PV-NNs using PV-MCTS with the best performing set in Figure \[fig:sl-haha-allocation\] (i.e., MPV-MCTS with $f_{64,5}+f_{128,10}, r=2/4$) under the same normalized budget. Figure \[fig: mid vs combine\] shows that MPV-MCTS outperforms all of the mid-sized PV-NNs with PV-MCTS.
(Figure \[fig: mid vs combine\]: Elo rating versus normalized budget for the mid-sized PV-NNs with PV-MCTS and for MPV-MCTS with $f_{64,5}+f_{128,10}$, $r=2/4$; plot data omitted.) (Figure \[fig: MPV-MCTS use PV-NN trained by AlphaZero\]: Elo rating versus accumulated self-play games; subfigure \[fig:1x simulation\] uses a normalized budget of 200 and subfigure \[fig:4x simulation\] a normalized budget of 800; plot data omitted.) #### Combining AlphaZero Trained Networks. {#sec:az training} Since MPV-MCTS is composed of pre-trained PV-NNs, we now investigate using various AZ-trained PV-NNs in pairs. We trained both $f_{64,5}$ and $f_{128,10}$ using the following settings: - self-play simulations: $800$ - PUCT constant $c_{\text{PUCT}}$: $1.5$ - replay buffer size: $100,000$ We used 60 GPUs as self-play workers and 4 GPUs as a training worker. Each network was trained for about 500,000 steps, with 2,000,000 generated self-play games. After training, $f_{64,5}$ and $f_{128,10}$ each have a progression of PV-NNs as the accumulated number of self-play games approaches 2,000,000. We select eight pairs of large and small PV-NNs, at the following total numbers of accumulated self-play games: 250k, 500k, ... , 2000k.
That is, in Figure \[fig: MPV-MCTS use PV-NN trained by AlphaZero\], the red data points refer to the MPV-MCTS consisting of $f_{64,5}$ and $f_{128,10}$, each trained following AZ with 250k accumulated self-play game records. Figure \[fig: MPV-MCTS use PV-NN trained by AlphaZero\] also presents the strength of each pair of large and small PV-NNs using the same normalized budget for testing. For Figure \[fig:1x simulation\], a normalized budget of 200 was used, while for Figure \[fig:4x simulation\], a normalized budget of 800 was used. Both results show that MPV-MCTS outperforms both the large and small nets at all stages throughout the AZ training process. This also shows the robustness of MPV-MCTS. AlphaZero Training with MPV-MCTS -------------------------------- In this subsection, we investigate performing AZ training with MPV-MCTS, as illustrated in Figure \[fig:workflow\]. Let $f_{S}$ be $f_{64,5}$, and $f_{L}$ be $f_{128,10}$, for clarity of presentation. Following the workflow in Figure \[fig:workflow\], both $f_L$ and $f_S$ are trained together; self-play workers use both PV-NNs to generate self-play games, using the playing policy $\pi(s,(f_S,800),(f_L,100))$ (following the notation of subsection \[MPV-MCTS\]). Next, we follow AZ training for three separate large networks, where the difference is that during self-play, each move is generated with simulation counts of 200, 400, and 800. These three are denoted as $f_{L_{200}}$, $f_{L_{400}}$, and $f_{L_{800}}$, respectively. Since we wish to fix the total training resource usage, we need to define a *normalized game generation count*, similar to the way we defined the normalized budget. Since the version with 800 simulations per self-play move theoretically spends approximately four times as much resources as the version with 200 simulations per move, the former will generate $1/4$ as many self-play game records given the same training budget.
For this experiment, we define 1 normalized generated game to use the same amount of training budget as generating 1 self-play game record for $f_{128,10}$ using 200 simulations per move. Note that in the MPV-MCTS case (where $f_S$ and $f_L$ are trained with 800 and 100 simulations per move, respectively), generating 1 actual self-play game record is equivalent in cost to 1 normalized generated game. Figure \[fig: Az training with MPV-MCTS\] presents the results of training AZ with MPV-MCTS compared with the three other large nets. Since our main goal is to demonstrate the benefit of using MPV-MCTS for training, the testing conditions should be as equal across all versions as possible. The Elo ratings were obtained by playing against the same baseline (HaHaNoGo with 100,000 simulations per move), where each agent with the trained large net uses 200 (Figure \[fig: Az training with MPV-MCTS 200\]) or 800 (Figure \[fig: Az training with MPV-MCTS 800\]) simulations per move for testing. For the MPV-MCTS case, we use only $f_L$ during testing for comparison; $f_S$ was omitted from testing so that the comparison is strictly between large PV-NNs.
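The normalized game-generation accounting can be verified in a few lines (assuming, as above, that a small-net forward pass costs $1/8$ of a large-net pass and ignoring selection/update overhead):

```python
SMALL_COST, LARGE_COST = 1 / 8, 1.0  # cost per forward pass, normalized units

def cost_per_move(sims_S, sims_L):
    """Normalized cost of generating one self-play move."""
    return sims_S * SMALL_COST + sims_L * LARGE_COST

baseline = cost_per_move(0, 200)  # f_{128,10} with 200 simulations per move

# MPV-MCTS self-play (800 small + 100 large sims) costs exactly 1 normalized game per game.
assert cost_per_move(800, 100) / baseline == 1.0
# f_{L_800} costs 4 normalized games per actual game, hence 1/4 as many records.
assert cost_per_move(0, 800) / baseline == 4.0
```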
(Figure \[fig: Az training with MPV-MCTS\]: Elo rating versus normalized generated self-play games for AZ training with MPV-MCTS and for $f_{L_{200}}$, $f_{L_{400}}$, and $f_{L_{800}}$; the two panels, \[fig: Az training with MPV-MCTS 200\] and \[fig: Az training with MPV-MCTS 800\], use 200 and 800 testing simulations per move; plot data omitted.) The results show that our method outperforms the best large net following AZ training with PV-MCTS (i.e., $f_{L_{800}}$) by 226 and 252 Elo ratings (for 200 and 800 testing simulations) with a total of 2000k normalized generated self-play games. Our interpretation is that this is because, with a fixed training budget, the trade-off is again between more simulations and more accurate networks. In the case of $f_{L_{200}}$, the smaller simulation count leads to poor playing policies, consequently making it difficult to improve the network. On the other hand, for $f_{L_{800}}$, the total number of self-play records is $1/4$ that of $f_{L_{200}}$, so the overall learning progress is slow. The results for $f_{L_{800}}$ in Figure \[fig: Az training with MPV-MCTS\] correspond to the blue dataset in Figure \[fig: MPV-MCTS use PV-NN trained by AlphaZero\], where the self-play simulation count (for training) was also set to 800.
If we look at the MPV-MCTS training case (the green dataset in Figure \[fig: Az training with MPV-MCTS\]), the result at 2000k normalized generated games is 279 and 370 Elo ratings stronger than the baseline, for 200 and 800 testing simulations respectively (Figures \[fig: Az training with MPV-MCTS 200\] and \[fig: Az training with MPV-MCTS 800\]). Comparing the data for MPV-MCTS in Figure \[fig: Az training with MPV-MCTS 800\] with $f_{128,10}$ in Figure \[fig:4x simulation\], the performance of MPV-MCTS still exceeds that of $f_{128,10}$ at 1000k (4000k normalized) and 1250k (5000k normalized) generated games. In fact, matches between $f_{S_{800},L_{100}}$ at 2000k normalized generated games and $f_{L_{800}}$ at 1000k generated games yield a win rate of 56.6%, while playing against $f_{L_{800}}$ at 1250k generated games yields a win rate of 51.2%. This implies that MPV-MCTS accelerates AZ training by at least a factor of 2. Discussion and Future Work {#Discussion} ========================== In this section, we discuss some settings on which we only performed partial experiments, and share preliminary results. First, instead of two separate trees, we can think of our method as having just one single tree, with the small net’s tree as the “main” tree, because it is usually the larger one. Nonetheless, we describe MPV-MCTS as two trees for generality (thereby allowing the algorithm to work with more than two nets) and simplicity. Second, in line 8 of Algorithm \[alg:MPV-MCTS\], we used the simulation count to select the most important yet-to-be-evaluated states for $f_L$. Other types of priority functions can be used instead when selecting states for $f_L$, such as following its own PUCT, or adding a discounted bonus to states whose parent has a higher visit count.
For situations with limited budgets, the former tends to be weaker than our current priority function (but still stronger than a single net), while the latter is complicated but yields no improvement. For multiple simultaneous networks, only the smallest network (with the largest tree) uses PUCT for selection; all other networks would use a user-defined priority function instead. Third, instead of randomly selecting the order in which $f_S$ and $f_L$ take turns simulating, as described in subsection \[MPV-MCTS\] and line 1 of Algorithm \[alg:MPV-MCTS\], we have also tried some other settings. Assume that the budget allocation is $b_S=800, b_L=100$; one alternative method is to force MPV-MCTS to start with the larger net (say, the first 50 simulations all use $f_L$) so that the small net has more information to guide the search when it begins. Results show that this alternative method of taking turns does not yield improvement in testing, but it seems to have beneficial effects when used in AZ training (as in subsection \[Training AlphaZero with MPV-MCTS\]). We also tried round-robin, but no significant difference was observed. Fourth, our method provides an opportunity for ensemble learning. For example, we tried increasing the coefficient of the mean square error in the small net’s loss function, with the aim of improving the accuracy of its value function. Results show that training AZ with MPV-MCTS in this way accelerates training. Fifth, we trained a mini-net (even weaker than the small net) to replace the small net. The results of 800 mini network simulations + 100 large network simulations were comparable to those of 800 small network simulations + 100 large network simulations, despite the mini network being obviously worse than the small network. This seems to imply that MPV-MCTS benefits more from the lookahead search provided by the smaller “partner” than from the quality of that partner.
Finally, we believe that a better way to train the AZ algorithm with MPV-MCTS is to train with the help of several sizes of networks. In this scenario, the largest network is the primary network. Training begins by using the smallest network as the support network, following the training process described in subsection \[Training AlphaZero with MPV-MCTS\]. As the large network improves and the representation learned by the small network is no longer sufficient to master the training data, we can replace it with a larger support network. This process is iterative; while the primary network is persistent, the support network should increase in capacity whenever it is unable to keep up. The choice of support networks, and of parameters such as $\alpha$, $\beta$, and the simulation count, can be controlled by meta-learning. We leave this as an open problem for future work. Acknowledgments {#acknowledgments .unnumbered} =============== This research is partially supported by the Ministry of Science and Technology (MOST) under Grant Numbers MOST 107-2634-F-009-011 and MOST 108-2634-F-009-011 through the Pervasive Artificial Intelligence Research (PAIR) Labs, Taiwan. The computing resource is partially supported by the National Center for High-performance Computing (NCHC).
--- abstract: 'We reply to the comment of Becker, Nelissen, Cleuren, Partoens, and Van den Broeck [@Com] on our article [@we_14] about transport properties of a class of generalized exclusion processes.' author: - Chikashi Arita - 'P. L. Krapivsky' - Kirone Mallick title: 'Reply to “Comment on Generalized Exclusion Processes: Transport Coefficients”' --- Stochastic lattice gases with symmetric hopping are described, on a coarse-grained level, by a diffusion equation with a density-dependent diffusion coefficient. Density fluctuations additionally depend on the local conductivity (which also describes the response to an infinitesimal applied field). A hydrodynamic description therefore requires the determination of these two transport coefficients. Generally, for lattice gases even with rather simple hopping rules, analytic results are unattainable; however, when an additional feature, known as the [*gradient condition*]{}, is satisfied, the Green-Kubo formula takes a simple form [@Spohn] and computations of the transport coefficients become feasible. For a number of lattice gases of gradient type, e.g., for the Katz-Lebowitz-Spohn model with symmetric hopping [@KLS], for repulsion processes [@Krapivsky], and for a lattice gas of leap-frogging particles [@CCGS; @GK], the diffusion coefficient has been rigorously computed. The gradient property also holds for the misanthrope process, a class of generalized exclusion processes [@C-T; @AM]. For gradient-type lattice gases, an exact expression for the diffusion coefficient can also be obtained by a perturbation approach: one writes the formula for the current at the discrete lattice level and then performs a continuous limit, assuming that the density field is slowly varying. Generalized exclusion processes with multiple occupancies [@KLO94; @KLO95; @Timo; @BNCPV13], in general, do not obey the gradient condition.
However, we argued in [@we_14] that the perturbation approach should, nevertheless, lead to an exact prediction for the diffusion coefficient. For the class of generalized exclusion processes that we studied [@we_14], simulation results were indeed very close to the predictions of the perturbative calculation. The comment [@Com] by Becker [*et al.*]{} prompted us to perform more simulations and to analyze our results more carefully. ![Stationary current multiplied by the system size: simulation results (dots) and the prediction from our previous approach. The latter holds for $L=\infty$, but is shown as a line. []{data-label="fig:RP"}](current.pdf){width="85mm"} Becker [*et al.*]{} computed numerically the diffusion coefficient $D(\rho)$. They performed simulations for various system sizes $L$ and various density differences $\delta \rho$ between the boundary reservoirs. In order to extract $D(\rho)$ from the simulations, they needed to take [@Com] two limits: $L\to\infty$ and $\delta \rho\to 0$. We considered a system with a large density difference and measured the stationary current through the system: the advantage is that we have to take only one limit, $L\to\infty$. We analyzed the generalized exclusion process GEP(2) with maximal occupancy $k=2$ particles per site and extreme densities at the boundaries: $\rho(0)=2$ and $\rho(L)=0$. According to our expectations [@we_14], the average current should vanish as $(1+\frac{\pi}{2})/L$ when $L\gg 1$. Simulation results (Fig. \[fig:RP\]) demonstrate that the error is smaller than $0.9\%$, but this discrepancy does not seem to disappear in the $L\to\infty$ limit. The numerical results of Ref. [@Com] and our simulations (Fig. \[fig:RP\]) show that the perturbation approach does not lead to the correct analytical results for the GEP(2).
We emphasize that the perturbation approach is [*not*]{} a naive mean-field theory where correlations are obviously neglected as argued by Becker [*et al.*]{} In dense lattice gases, the equilibrium state itself is usually highly correlated; e.g., in the repulsion process $\langle \tau_i \tau_{i+1}\rangle =0 \ne \rho^2$ for $0\leq \rho\leq \frac{1}{2}$, where $ \tau_i \in \{1,0\} $ denotes the occupation number of site $i$: the mean-field assumption is completely wrong. Yet, a careful use of the perturbation approach leads to the correct result [@Krapivsky]. The gradient condition is thus crucial for the applicability of the perturbation approach. For GEP($k$) with maximal occupancy $k$, the gradient condition is obeyed in extreme cases of $k=1$ which reduces to the simple exclusion process and $k=\infty$ which reduces to random walks. Presumably because GEP($k$) is sandwiched between two extreme cases in which the perturbation approach works, this method provides a very good approximation when $1< k<\infty$. We now clarify the underlying assumptions behind the perturbation approach and suggest some tracks to improve our results. For the GEP(2), the current reads $$\begin{aligned} J_i = \langle \tau_i f (\tau_{i+1}) - f (\tau_i ) \tau_{i+1} \rangle , \end{aligned}$$ where $ \tau_i \in \{0,1,2\} $ and $ f(n) =1 - \frac 1 2 n(n-1) $. In our computation of the diffusion coefficient [@we_14], we used two assumptions. The first one concerns one-point functions. Let $\mathbb P [ \tau_i = m ]$ be the probability of finding $m$ particles at site $i$. 
The density at $ i $ is $$\begin{aligned} \rho_i = \langle \tau_i \rangle = \mathbb P [ \tau_i = 1 ] + 2 \mathbb P [ \tau_i = 2] .\end{aligned}$$ We assumed that one-site probabilities satisfy $$\begin{aligned} \label{eq:P=X} \mathbb P [ \tau_i = m ] \simeq X_m (\rho_i) \quad \end{aligned}$$ where the $ X_m $’s represent the single-site weights in an infinite lattice or on a ring: $$\begin{aligned} X_0 ( \rho ) = \frac{1}{Z}, \ X_1 ( \rho ) = \frac{\lambda }{Z}, \ X_2 ( \rho ) = \frac{\lambda^2}{2Z} \end{aligned}$$ with the fugacity $ \lambda $ and the normalization $ Z$ $$\begin{aligned} \lambda ( \rho ) = \frac{ \sqrt{1+2\rho-\rho^2} + \rho -1 }{ 2-\rho }, \ Z = 1 + \lambda + \frac 1 2 \lambda^2 . \end{aligned}$$ The second assumption was to rewrite the current as $$\begin{aligned} \label{eq:<>=<><>} J_i \simeq \langle \tau_i \rangle \langle f (\tau_{i+1}) \rangle - \langle f (\tau_i ) \rangle \langle \tau_{i+1} \rangle .\end{aligned}$$ This, indeed, is a mean-field type assumption [@Com]. The assumptions \[eq:P=X\] and \[eq:<>=<><>\] are asymptotically *true* in the stationary state of a large system ($ L\to \infty $): we have checked these facts by performing additional simulations. Our numerical results suggest more precise expressions for \[eq:P=X\] and \[eq:<>=<><>\], with some scaling functions $\kappa $ and $ \mu $: $$\begin{aligned} \label{eq:P=X+kappa} \mathbb P [ \tau_i = m ] = X_m (\rho_i) + \frac 1 L \kappa_m \Big( \frac i L \Big)\,, \end{aligned}$$ $$\begin{aligned} \label{eq:<>=<><>+mu} J_i =\langle \tau_i \rangle \langle f (\tau_{i+1}) \rangle - \langle f (\tau_i ) \rangle \langle \tau_{i+1}\rangle + \frac 1 L \mu \Big( \frac i L \Big) , \end{aligned}$$ where we omitted $o(L^{-1})$ terms. Performing the perturbation approach with the refined expressions \[eq:P=X+kappa\] and \[eq:<>=<><>+mu\], we obtain $$\begin{aligned} \label{J:exact} J = - \frac{1}{L} \frac{d\rho}{dx} \left(1-X_2(\rho) + \rho \frac{ dX_2 ( \rho ) }{ d\rho } \right) + \frac 1 L \mu(x )\end{aligned}$$ where we have switched from the discrete variable $ i $ to $ x= i/L $.
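As a numerical consistency check of the formulas above (a sketch; numpy assumed), one can verify that the single-site weights reproduce the density, $X_1 + 2X_2 = \rho$, and that the bracket $D(\rho) = 1 - X_2 + \rho\, dX_2/d\rho$ appearing in \[J:exact\] integrates over $[0,2]$ to $1+\pi/2$, the coefficient quoted earlier for the boundary-driven current:

```python
import numpy as np

def weights(rho):
    """Single-site weights (X_0, X_1, X_2) of GEP(2), from the
    closed-form fugacity lambda(rho) quoted above."""
    rho = np.asarray(rho, dtype=float)
    lam = (np.sqrt(1 + 2*rho - rho**2) + rho - 1) / (2 - rho)
    Z = 1 + lam + lam**2 / 2
    return 1/Z, lam/Z, lam**2/(2*Z)

# X_1 + 2 X_2 must reproduce the density rho.
for r in (0.3, 1.0, 1.7):
    X0, X1, X2 = weights(r)
    assert abs(X1 + 2*X2 - r) < 1e-9

# Integrate D(rho) = 1 - X_2 + rho * dX_2/drho over [0, 2] by trapezoids.
rho = np.linspace(1e-6, 2 - 1e-6, 400001)
X2 = weights(rho)[2]
D = 1 - X2 + rho * np.gradient(X2, rho)
I = np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(rho))
assert abs(I - (1 + np.pi/2)) < 1e-3  # recovers the quoted 1 + pi/2
```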
The functions $ \kappa_m$ do not appear in \[J:exact\], but $\mu(x)$ does, and it was missing in our paper [@we_14], leading to the wrong expressions for the current and for the stationary density profile. In order to calculate $\mu(x)$, we are presently examining nearest-neighbor correlation functions for the GEP(2). Numerically at least, these nearest-neighbor correlations exhibit a neat scaling behavior and simple patterns; detailed results will be reported in [@we_future]. [99]{} T. Becker, K. Nelissen, B. Cleuren, B. Partoens, and C. Van den Broeck, Phys. Rev. E **93**, 046101 (2016). C. Arita, P. L. Krapivsky, and K. Mallick, Phys. Rev. E **90**, 052108 (2014). H. Spohn, [*Large Scale Dynamics of Interacting Particles*]{} (New York: Springer-Verlag, 1991). S. Katz, J. L. Lebowitz, and H. Spohn, J. Stat. Phys. **34**, 497 (1984). P. L. Krapivsky, J. Stat. Mech. P06012 (2013). J. M. Carlson, J. T. Chayes, E. R. Grannan, and G. H. Swindle, Phys. Rev. Lett. **65**, 2547 (1990). D. Gabrielli and P. L. Krapivsky, in preparation. C. Cocozza-Thivent, Z. Wahrscheinlichkeitstheorie verw. Gebiete **70**, 509 (1985). C. Arita and C. Matsui, arXiv:1605.00917. C. Kipnis, C. Landim, and S. Olla, Commun. Pure Appl. Math. **47**, 1475 (1994). C. Kipnis, C. Landim, and S. Olla, Ann. Inst. H. Poincaré **31**, 191 (1995). T. Seppäläinen, Ann. Prob. **27**, 361 (1999). T. Becker, K. Nelissen, B. Cleuren, B. Partoens, and C. Van den Broeck, Phys. Rev. Lett. **111**, 110601 (2013). C. Arita, P. L. Krapivsky, and K. Mallick, in preparation.
--- abstract: 'The surface melting temperature is well known to be significantly lower than the bulk melting point. However, we find that in ultrathin nanowires the interior melting temperature is lower than the surface melting temperature. The thermal stability of helical multi-walled cylindrical gold nanowires is studied using molecular dynamics simulations. The melting temperature of gold nanowires is lower than the bulk value, but higher than that of gold nanoclusters. An interesting interior melting is revealed in the gold nanowires, and the thermodynamics is closely related to the interior structures. The melting starts from the interior atoms, while the surface melting occurs at a relatively higher temperature. We propose that the surface melting represents the overall melting in ultrathin metallic nanowires.' address: | $^1$[*National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, P.R. China*]{}\ $^2$[*National Laboratory for Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, P.R. China*]{}\ $^3$[*Department of Physics and Astronomy, University of North Carolina at Chapel Hill, North Carolina 27599-3255*]{} author: - 'Jinlan Wang$^{1,2}$, Xiaoshuang Chen$^2$, Guanghou Wang$^1$[^1]' - 'Baolin Wang$^1$, Wei Lu$^2$ and Jijun Zhao$^3$' title: 'Does the melting behavior in ultrathin metallic nanowires begin from the surface ?' --- The melting behavior of nanoparticles and nanorods has been demonstrated to be dramatically different from that of the bulk, both experimentally and theoretically [@labasite; @ercolessi; @berry; @gulseren; @goldstein; @wangzl; @link; @schmidt; @nayak; @hu; @liu; @lee; @wu]. The melting process of a crystalline solid starts from the surface layer and propagates into the interior. Thus, the surface melting temperature is significantly lower than the bulk melting point.
Similarly, one may ask whether the surface melting temperature is lower than the overall melting temperature in clusters and nanowires. Berry suggested that “dynamic coexistence”, or surface melting, occurs in the melting process of small clusters before the total melting[@berry]. For crystalline nanowires, Tosatti found that the surface melting temperature of Pb wires is also lower than the total melting temperature[@gulseren]. Experimentally, surface melting is involved in the melting process of nanoparticles and nanorods [@goldstein; @wangzl; @link]. In particular, Schmidt found a broad peak in the heat capacity of Na$_{139}^{+}$ clusters, possibly implying the coexistence of solid-like and liquid-like phases before the total melting [@schmidt]. Two major effects are responsible for these different melting behaviors in nanoparticles and nanorods. One is the large surface-to-volume ratio of these nanostructures; the other is the quantum confinement effect in finite-size systems. Surface atoms have fewer nearest neighbors and weaker binding, which may induce earlier surface melting. On the other hand, a close relationship between melting and structural features has been found in clusters[@kusche; @litx]. Recent studies demonstrated that ultrathin metallic nanowires have structures and properties quite different from those of the bulk, clusters and crystalline nanowires [@kondo; @kondo1; @tosatti; @wang; @bilalb1; @bilalb2; @bao; @bao1]. Helical multi-walled cylindrical structures have been widely found in the 1-3nm size range of metallic nanowires, both experimentally and theoretically [@kondo; @kondo1; @tosatti; @wang; @bao; @bao1]. This kind of novel structure brings melting features different from those of the above-mentioned systems. To our knowledge, few efforts have focused on their thermodynamics so far, although ultrathin metallic nanowires have attracted great interest [@bilalb1; @bilalb2].
Furthermore, the ultrathin nanowire shares some characteristics with the cluster, the crystalline wire and the bulk, and may thus provide an opportunity to comprehensively understand these states of matter and their relations. For example, the surface-to-volume ratio is non-zero in these nanostructures, while it approaches zero in the bulk. Surface and core atoms are expected to play different roles during the melting process. However, it is difficult to definitively distinguish the overall melting from the surface melting of clusters experimentally, because the cluster’s signal spectrum disappears once surface melting occurs. Therefore, we employ the ultrathin gold nanowire as a representative system to explore these problems. In this letter, the thermal stability of gold nanowires with helical multi-walled cylindrical structures is studied using molecular dynamics (MD) simulations. We start from the optimized structures of previous work [@wang], which were imaged by electron microscopy[@kondo1]. The interaction between gold atoms is described by a glue potential [@glue], and periodic boundary conditions are applied along the wire axis to model a wire of sufficient length. The supercell length is chosen to be the same as in Ref. [@wang], which is a reasonable scheme to obtain the helical structures in the nanowire.
To characterize the thermal behavior of nanowires, we monitor the root-mean-square (rms) fluctuation of the interatomic bond distances $\delta $ defined by $$\delta =\frac 2{n(n-1)}\sum\limits_{i<j}^n\frac{\left( \left\langle r_{ij}^2\right\rangle _t-\left\langle r_{ij}\right\rangle _t^2\right) ^{1/2} }{\left\langle r_{ij}\right\rangle _t}$$ and the heat capacity $C$ per atom, which is related to the fluctuation of the energy $E$ by the relation $$C=\frac{(\langle E^2\rangle -\langle E\rangle ^2)}{nk_BT^2}$$ where $r_{ij}$ denotes the distance between the nuclei $i$ and $j$, $n$ is the total number of atoms in the nanowire, $k_B$ is the Boltzmann constant, and $\langle ...\rangle $ indicates the thermal statistical average in the canonical ensemble after equilibration. The constant-temperature molecular dynamics (MD) method of Nosé [@nose] is employed to explore the thermal properties of gold nanowires. The MD time step is chosen as 2.15$\times 10^{-15}$s. The initial 10$^5$ steps are used to bring the system into equilibration and a further 10$^6$ steps are used to record the thermal averages of the physical quantities. We study five representative helical structures of gold nanowires with up to three shells: single-atom-centered, double-chain, trigonal, tetragonal and six-atom-parallelogram-centered, denoted S1, S2, S3, S4 and S6 in Fig.1. Table I presents the starting melting temperature $T_{ini}$ and the overall melting temperature $T_m$ of these structures. The overall melting temperature $T_m$ reflects the stability of the whole system, while the starting melting temperature $T_{ini}$ describes the early stage of melting and depends sensitively on the wire structure. The overall melting temperature $T_m$ is estimated from the curves of the rms bond length fluctuation $\delta $, heat capacity $C$ and binding energy $E$ as functions of temperature.
As shown in Table I, the overall melting temperatures $T_m$ are almost the same for all the wires and are slightly lower than the bulk melting point. Experimentally, Liu [*et al.* ]{} have reported that the melting point of Pt nanowires is about 400$^o$C[@liu]. Lee [*et al.*]{} have also found that the melting temperature of a 4.6nm Pd nanowire is just 300$^o$C, much lower than the bulk value (1445$^o$C)[@lee]. A similar phenomenon is also found in the case of metal clusters. The depression of the melting point can be attributed to the low dimensionality and large surface-to-volume ratio of these nanostructures. However, the melting temperatures obtained for gold nanowires are much higher than those of gold nanoclusters [@Garz; @Buffat; @Castro; @Jellinek]. This may be understood from the tightened helical structures of the nanowires. On the other hand, the starting melting temperatures $T_{ini}$ differ for wires with different interior structures (see Table I). The wire S1 (described by the multi-walled structural index 18-12-6-1) starts to melt at a much lower temperature than the other ones, while the wire S4 (21-15-9-4) has a relatively higher starting melting temperature. These facts indicate that the different starting temperatures $T_{ini}$ of different nanowires are related to their different interior structures. The similar overall stability of all the wires may come from their common multi-walled helical packing, despite the different interior structures. Moreover, since $T_{ini}$ is related to the stability of the interior atoms, the much lower starting melting temperature $T_{ini}$ implies that interior melting happens during the melting process of the helical gold nanowires. We now discuss the structural evolution of gold nanowires during the melting process. Fig.2 gives several snapshots taken from the structural trajectories of the wire S1 at different temperatures.
It is interesting to note that the interior atoms diffuse along the wire axis at a rather low temperature. As shown in Fig.2, the center atoms are the first to move along the wire, at 300K (Fig.2a). The helical structure of the outmost shell is almost invariant. As the temperature rises, the center atoms continue moving away from the wire. The atoms in the first shell (counting from the interior outward) then have fewer nearest neighbors and begin to diffuse along the axis direction (Fig.2b). Similarly, the atoms in the second shell become involved in the migration at a higher temperature, $T=900$K (Fig.2c). However, the helical structure of the surface shell still persists. When the temperature is high enough, the surface atoms in the outmost shell also start to migrate and the helical structure of the surface is broken at 1100K, leading to the overall melting (Fig.2d). Therefore, we conclude that interior atoms diffuse prior to surface atoms and that no surface melting takes place before the overall melting in gold nanowires.

Table I. Melting temperatures for Au nanowires with different interior structures.

  melting temperature (K)   S1     S2     S3     S4     S6
  ------------------------- ------ ------ ------ ------ ------
  T$_{ini}$                 300    550    650    700    650
  T$_{m}$                   1100   1100   1050   1100   1100

To further illustrate the argument of interior melting and to distinguish the roles of surface and core atoms in the melting behavior, Fig.3 plots the rms bond length fluctuations of the surface atoms ($\delta _s$), the core atoms ($\delta _c$) and all atoms ($\delta $) of the S1 wire as functions of temperature. Obviously, the rms bond length fluctuation of the core atoms follows a trend similar to that of the whole wire, but dramatically different from that of the surface. In the temperature range $350-1000$K, the $\delta _s$ of the surface atoms is very small and almost invariable, while $\delta _c$ and $\delta $ show substantial fluctuations.
In other words, the core atoms begin to diffuse along the axis and become ‘wet’, while the surface atoms remain ‘solid-like’ in this melting region. These results indicate that the melting mainly comes from the diffusion of core atoms and that no surface melting happens at the beginning of the melting. Moreover, $\delta _c$ fluctuates around 0.12 at low temperature, consistent with the Lindemann criterion for equilibrium melting of simple crystals[@11], which shows that interior melting takes place. For all three cases, there is a rapid rise of the rms bond length fluctuation in a narrow temperature region (1000-1150K), indicating that the surface atoms become involved in the melting process. Afterwards, the surface atoms play an important role in the melting of the wire in the high-temperature region. Above 1150K, all three quantities are large, smooth and nearly constant, corresponding to the completely molten state. Together with Fig.2d, this supports the view that surface melting signifies the overall melting in multi-walled ultrathin helical nanowires. Interior melting occurs in preference to surface melting, and the diffusion of core atoms has a dominant effect on the melting at low temperature. The surface atoms affect the melting at high temperature, and surface melting represents the overall melting. The above interior melting behavior is obviously different from that of the bulk, clusters and crystalline nanowires, where surface melting usually occurs before the total melting. In the bulk, after the melting of the surface layers, the remaining core atoms can still be regarded as an analogous bulk; much more energy is needed to melt this ‘rest bulk’. Therefore, the surface melting temperature can be lower than the overall melting point. In the cases of clusters and crystalline nanowires, surface atoms have fewer nearest neighbors and are thus more weakly bound and less constrained than the core atoms.
Thus, surface atoms diffuse easily and become liquid-like. However, the present gold nanowire structures are helical multi-walled cylinders. The interaction among atoms within the same shell is stronger than that between atoms of neighboring shells. Furthermore, to reach a well-tightened helical multi-walled structure, the helical match may require fewer atoms in the interior shells, especially in the center chain. Thus, interior atoms may have lower coordination numbers and larger interatomic distances than surface atoms. Therefore, core atoms in nanowires can break away from their binding sites prior to surface atoms and diffuse at a rather low temperature. In addition, in the optimization of the ground-state structure, we have found that the helical structure forms earlier in the outer shell than in the interior shells. This implies that the surface is dynamically more stable than the interior part. Experimentally, Wu [*et al.*]{} have found that the melting of a Ge nanowire starts from the two ends of the wire and moves toward the middle[@wu]. These observations imply that the melting behavior of nanowires differs dramatically from that of nanorods and the bulk[@link]. To further clarify our ideas, we separate the functions of surface and core atoms by artificially fixing the surface atoms and allowing the interior atoms to move, or vice versa. The rms bond length fluctuations $\delta $, $\delta _{fc}$ and $\delta _{fs}$ are all calculated for the entire system, corresponding to no constraint, fixed core atoms and fixed surface atoms, respectively. We again take the wire S1 as an example. As shown in Fig.4, despite the large differences among the $\delta $, $\delta _{fc}$ and $\delta_{fs}$ curves, the full melting temperatures (corresponding to a large, smooth and constant rms bond length fluctuation) in the three cases are all about 1150K.
The $\delta _{fs}$ shows a considerable jump at low temperature (400K), while the $\delta _{fc}$ is almost invariable up to 800K. This indicates that the surface atoms with the helical structure are thermodynamically more stable. For the cases of fixed surface atoms and no constraint, although $\delta_{fs}$ is only about half of $\delta $, their general trends are similar. The small absolute value of $\delta_{fs}$ comes from the fixed surface atoms, which contribute nearly zero to the rms bond length fluctuation. Therefore, we propose that there is no essential difference between these two cases. These results also support the conclusions that the melting comes from the interior atoms at low temperature and that the surface melting represents the overall melting. To check the validity of the current results, we have used the same MD code to study the melting of clusters and obtained results similar to previous Monte Carlo (MC) simulations[@wang1; @wang2]. We have also explored the melting process of gold nanowires by MC and observed similar interior diffusion behavior. Moreover, to examine the effect of the periodic boundary condition on the melting behavior, we rescaled the supercell length to 2 and 3 times the original one. The melting temperatures are again about 1100K and the observed melting process is similar to the above results. This proves that the periodic boundary condition has little effect on the simulation results. It should be further pointed out that we limit our discussion to the helical structures in this paper. As mentioned above, this kind of helical structure is prevalent in metal nanowires in the small-diameter range [@kondo; @kondo1; @tosatti; @wang; @bilalb1; @bilalb2; @bao]. Therefore, the current results on the interior melting behavior of ultrathin gold nanowires are significant and might represent a common feature of this kind of ultrathin metallic nanowire.
In summary, the thermal behavior of helical multi-walled gold nanowires has been studied and the main points are as follows. (1) The melting process starts from the interior region and no surface melting happens at lower temperature. We further argued that interior melting happens prior to surface melting and that surface melting represents the overall melting in ultrathin metallic nanowires. (2) The overall melting temperature of gold nanowires is lower than that of the bulk, but higher than that of gold nanoclusters. (3) The surface and core atoms play different roles in the melting behavior of these nanowires. The core atoms have a dominant effect on the melting at the beginning stage, and the surface atoms become involved in the melting in the higher-temperature region. The core melting is closely related to the interior atomic structural characteristics. (4) The novel interior melting behavior is ultimately attributed to the helical multi-walled structure. This work is financially supported by the National Natural Science Foundation of China (No.29890210, 10023001) and the One-Hundred-Person Project of the Chinese Academy of Sciences (2000). P. Labastie and R. L. Whetten, Phys. Rev. Lett. [**65**]{}, 1567(1990). F. Ercolessi, [*et al*]{}., Phys. Rev. Lett. [**66**]{}, 911(1991). R.S. Berry, in: T.P. Martin (Ed.), Large Clusters of Atoms and Molecules, Kluwer Academic Publishers, Dordrecht, 1996, pp. 281-297. O. Gulseren [*et al*]{}., Phys. Rev. B [**51**]{}, 7377(1995). A.N. Goldstein, [*et al*]{}., Science [**256**]{}, 1425(1992). Z.L. Wang, [*et al*]{}., J. Phys. Chem. B [**102**]{}, 6154(1998). S. Link, [*et al*]{}., J. Phys. Chem. B [**104**]{}, 7867(2000). M. Schmidt, [*et al*]{}., Phys. Rev. Lett. [**79**]{}, 99(1997). S. K. Nayak, [*et al*]{}., Phys. Rev. Lett. [**74**]{}, 4181(1995). J. Hu, [*et al*]{}., Acc. Chem. Res. [**32**]{}, 435(1999). Z. Liu, [*et al*]{}., Angew. Chem. Int. Ed. [**39**]{}, 3107(2000). K. Lee, [*et al*]{}., Adv. Mater. [**13**]{}, 517(2001). Y.
Wu, [*et al*]{}., Adv. Mater. [**13**]{}, 520(2001). R. Kusche, [*et al*]{}., Eur. Phys. J. D [**9**]{}, 1(1999). T.X. Li, [*et al*]{}., Solid State Comm. [**116**]{}, 547(2000). Y. Kondo and K. Takayanagi, Phys. Rev. Lett. [**79**]{}, 3455 (1997). Y. Kondo and K. Takayanagi, Science [**289**]{}, 606 (2000). E. Tosatti [*et al*]{}., Science [**291**]{}, 288 (2001). B.L. Wang [*et al*]{}., Phys. Rev. Lett. [**86**]{}, 2046(2001). G. Bilalbegovic, Phys. Rev. B [**58**]{}, 15412 (1998). G. Bilalbegovic, Solid State Comm. [**115**]{}, 73(2000). B.L. Wang [*et al*]{}., J. Phys. Condens. Matter [**13**]{}, L403(2001). B.L. Wang [*et al*]{}., Submitted to Phys. Rev. B. F. Ercolessi, [*et al*]{}., Phys. Rev. Lett. [**57**]{}, 719(1986). S. Nose, Phys. Rev. A [**31**]{}, 1695(1985). I. L. Garzon [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 1600(1998). Ph. Buffat, [*et al*]{}., Phys. Rev. A [**13**]{}, 2287(1976). T. Castro, [*et al*]{}., Phys. Rev. B [**42**]{}, 8548(1990). J. Jellinek, [*et al*]{}., Z. Phys. D [**20**]{}, 239(1991). F.A. Lindemann, Z. Phys. [**11**]{}, 609(1910); J.J. Gilvarry, Phys. Rev. [**102**]{}, 308(1956); A. Voronel et al., Phys. Rev. Lett. [**60**]{}, 3402(1998). J.L. Wang [*et al*]{}., Solid State Comm. [**119**]{}, 13(2001). J.L. Wang [*et al*]{}., Chem. Phys. Lett. [**341**]{}, 529(2001). [^1]: To whom the correspondence should be addressed. E-mail: [email protected]
--- author: - 'Mike Tyson[^1]' bibliography: - 'shiftedmoments.bib' title: A theorem on shifted Hankel determinants --- Let $(\mu(n))_{n\ge 0}$ be a sequence with $\mu(0)=1$, and assume none of the Hankel determinants $H_n=\det(\mu(i+j))_{i,j=0}^{n-1}$ vanish. Then one can define the family of polynomials $$p_n(x)=\sum_{j=0}^n(-1)^{n-j}x^jp(n,j)=\frac{1}{H_n}\det\begin{pmatrix} \mu(0) & \mu(1) & \cdots & \mu(n) \\ \vdots & \vdots & \ddots & \vdots \\ \mu(n-1) & \mu(n) & \cdots & \mu(2n-1) \\ 1 & x & \cdots & x^n \end{pmatrix},$$ which are orthogonal with respect to the weight with $n$th moment $\mu(n)$. In [@2019arXiv190210468C], Johann Cigler conjectured that $$\det(\mu(i+j))_{i,j=0}^{m-1}\cdot \det(p(i+m,j))_{i,j=0}^{n-1}=\det(\mu(i+j+n))_{i,j= 0}^{m-1}.$$ The purpose of this note is to prove this conjecture. For $0\le a\le j\le b$, let $D_j(\mu;a,b;\lambda)$ be the determinant of the $(b-a)$-by-$(b-a)$ matrix $$\begin{pmatrix} \mu(a) & \cdots & \mu(j-1) & \mu(j+1) & \cdots & \mu(b) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \mu(b-1) & \cdots & \mu(j+b-a-2) & \mu(j+b-a) & \cdots & \mu(2b-a-1)+\lambda \end{pmatrix},$$ where the $\lambda$ is added only to the bottom right entry. For $j<a$ or $b<j$ let $D_j(\mu;a,b;\lambda)=0$. We will also denote $D_j(\mu;a,b;0)$ by $D_j(\mu;a,b)$ and typically suppress the $\mu$. Note that when the $\mu(i)$ are left as variables and $a\le j\le b$, none of these determinants are identically zero. We will divide by finitely many of these determinants over the course of the proof, but the end result is a polynomial identity which therefore applies even when some $D_j(a,b;0)$ vanish. 
Since $H_n=D_n(0,n)$ and $p(n,j)=D_j(0,n)/D_n(0,n)$, the conjecture is that $\det\left(D_j(0,i+m)\right)_{i,j=0}^{n-1}$ equals $$D_{m+n}(n,m+n)\cdot D_{m+1}(0,m+1)\cdots D_{m+n-1}(0,m+n-1).$$ For instance, when $m=2$ and $n=3$ the conjecture is that $$\begin{pmatrix} \begin{vmatrix}\mu(1) & \mu(2)\\\mu(2) & \mu(3)\end{vmatrix} & \begin{vmatrix}\mu(0) & \mu(2)\\\mu(1) & \mu(3)\end{vmatrix} & \begin{vmatrix}\mu(0) & \mu(1)\\\mu(1) & \mu(2)\end{vmatrix}\\ \begin{vmatrix}\mu(1) & \mu(2) & \mu(3)\\\mu(2) & \mu(3) & \mu(4)\\\mu(3) & \mu(4) & \mu(5)\end{vmatrix} & \begin{vmatrix}\mu(0) & \mu(2) & \mu(3)\\\mu(1) & \mu(3) & \mu(4)\\\mu(2) & \mu(4) & \mu(5)\end{vmatrix} & \begin{vmatrix}\mu(0) & \mu(1) & \mu(3)\\\mu(1) & \mu(2) & \mu(4)\\\mu(2) & \mu(3) & \mu(5)\end{vmatrix}\\ \begin{vmatrix}\mu(1) & \mu(2) & \mu(3) & \mu(4)\\\mu(2) & \mu(3) & \mu(4) & \mu(5)\\\mu(3) & \mu(4) & \mu(5) & \mu(6)\\\mu(4) & \mu(5) & \mu(6) & \mu(7)\end{vmatrix} & \begin{vmatrix}\mu(0) & \mu(2) & \mu(3) & \mu(4)\\\mu(1) & \mu(3) & \mu(4) & \mu(5)\\\mu(2) & \mu(4) & \mu(5) & \mu(6)\\\mu(3) & \mu(5) & \mu(6) & \mu(7)\end{vmatrix} & \begin{vmatrix}\mu(0) & \mu(1) & \mu(3) & \mu(4)\\\mu(1) & \mu(2) & \mu(4) & \mu(5)\\\mu(2) & \mu(3) & \mu(5) & \mu(6)\\\mu(3) & \mu(4) & \mu(6) & \mu(7)\end{vmatrix} \end{pmatrix}$$ has determinant $$\begin{vmatrix}\mu(3)& \mu(4)\\\mu(4) & \mu(5)\end{vmatrix}\begin{vmatrix}\mu(0)& \mu(1) & \mu(2)\\\mu(1) & \mu(2) & \mu(3)\\\mu(2) & \mu(3) & \mu(4)\end{vmatrix}\begin{vmatrix}\mu(0)& \mu(1) & \mu(2) & \mu(3)\\\mu(1) & \mu(2) & \mu(3) &\mu(4)\\\mu(2) & \mu(3) & \mu(4) & \mu(5)\\\mu(3) & \mu(4) & \mu(5) & \mu(6)\end{vmatrix}.$$ The conjecture can be proven by induction on $n$ via the following theorem. Let $\nu(i)=\mu(i+1)$ for all $i\ge 0$. Then $\det\left(D_j(\mu;0,i+m)\right)_{i,j=0}^{n-1}$ equals $$D_0(\mu;0,m)\cdot \det\left(D_j(\nu;0,i+m)\right)_{i,j=0}^{n-2}\cdot\prod_{i=1}^{n-1}\frac{D_{i+m}(\mu;0,i+m)}{D_{i+m}(\mu;1,i+m)}.$$ The $n=1$ case is trivial. 
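Before turning to the proof, the $m=2$, $n=3$ instance above is easy to check numerically. The sketch below encodes $D_j(\mu;a,b)$ (with $\lambda=0$) exactly as defined, expanding determinants over permutations since the matrices are tiny, and verifies the identity for the Catalan moments $\mu(n)=C_n$ used at the end of this note; the helper names are our own.

```python
from itertools import permutations
from math import comb

def det(M):
    """Leibniz expansion of the determinant; adequate for these tiny matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        # sign of the permutation via its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += (-1) ** inv * prod
    return total

def D(mu, j, a, b):
    """D_j(mu; a, b): columns mu(a..b) with column j omitted, lambda = 0."""
    cols = [c for c in range(a, b + 1) if c != j]
    return det([[mu(c + r) for c in cols] for r in range(b - a)])

mu = lambda k: comb(2 * k, k) // (k + 1)   # Catalan numbers as test moments
m, n = 2, 3
lhs = det([[D(mu, j, 0, i + m) for j in range(n)] for i in range(n)])
rhs = D(mu, m + n, n, m + n) * D(mu, m + 1, 0, m + 1) * D(mu, m + 2, 0, m + 2)
print(lhs, rhs)   # 14 14
```

Replacing `mu` by any moment sequence with nonvanishing $H_n$ gives the same agreement, consistent with the polynomial identity being proven below.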
Assume $n>1$ and let $Q$ be the matrix $\left(D_j(0,i+m)\right)_{i,j=0}^{n-1}$. Note that for $i\ge 1$ and $j<i+m$, the entry $Q_{i-1,j}$ appears as a cofactor in the expansion of $Q_{i,j}$. For $j\ge i+m$, $Q_{i-1,j}=0$. This means that adding $\lambda_i$ times the $(i-1)$th row to the $i$th row has the effect of replacing $Q_{i,j}=D_j(0,i+m)$ with $Q'_{i,j}=D_j(0,i+m;\lambda_i)$ for $j<i+m$. Since the entry $Q_{i-1,j}$ is generically nonzero, $\lambda_i$ can be chosen to make $D_0(0,i+m;\lambda_i)=0$. Perform this row operation on row $n-1$, then row $n-2$, and so on up to row $1$. Call the new matrix $Q'$. For $i\ge 1$, $Q'_{i,0}=0$, which (together with the nonvanishing of $Q_{i-1,0}$) implies that the last column of the underlying matrix of $D_0(0,i+m;\lambda_i)$ can be written as a linear combination of the first $i+m-1$ columns. That is, there are $a_1,\dots,a_{i+m-1}$ such that $$a_1\begin{pmatrix}\mu(1)\\ \vdots \\ \mu(i+m)\end{pmatrix}+\cdots+a_{i+m-1}\begin{pmatrix}\mu(i+m-1)\\ \vdots \\ \mu(2i+2m-2)\end{pmatrix}=\begin{pmatrix}\mu(i+m)\\ \vdots \\ \mu(2i+2m-1)+\lambda_i\end{pmatrix}.$$ By restricting to the first $i+m-1$ coordinates of this equation and applying Cramer’s rule, one finds that $$a_k=(-1)^{k+i+m-1}D_k(1,i+m)/D_{i+m}(1,i+m).$$ For $j<i+m$, the solved-for column $(\mu(i+m),\dots,\mu(2i+2m-1)+\lambda_i)^\top$ appears in the underlying matrix of $D_j(0,i+m;\lambda_i)=Q'_{i,j}$. Substitute in the linear combination and perform column operations to remove summands of repeated columns. One is left with $$Q'_{i,j}=(-1)^{j+i+m-1}a_jD_{i+m}(0,i+m)=D_j(1,i+m)\frac{D_{i+m}(0,i+m)}{D_{i+m}(1,i+m)}.$$ For $j=i+m$, $Q'_{i,j}$ was originally $D_{i+m}(0,i+m)$, so the above equation still holds. For $j>i+m$, both $Q'_{i,j}=D_j(0,i+m)$ and $D_j(1,i+m)$ are $0$, so the above equation again holds. Factor out $D_{i+m}(0,i+m)/D_{i+m}(1,i+m)$ from row $i$ of $Q'$ for each $i\ge 1$ to get a new matrix $Q''$. 
Like $Q'$, all entries in the leftmost column of $Q''$ are $0$ except for the top entry, which is $D_0(0,m)$. The bottom right $(n-1)$-by-$(n-1)$ submatrix of $Q''$ is $\left(D_{j}(1,i+m)\right)_{i,j=1}^{n-1}$, or $\left(D_j(\nu;0,i+m)\right)_{i,j=0}^{n-2}$. Therefore the determinant is as claimed. As an example, consider when $\mu(n)=C_n=\frac{1}{n+1}\binom{2n}{n}$ is a Catalan number. It can be verified via orthogonality that $$p_n(x)=\sum_{j=0}^n(-1)^{n-j}\binom{n+j}{n-j}x^j.$$ As a result, $$\det\left(\binom{i+j+m}{i-j+m}\right)_{i,j=0}^{n-1}=\frac{\det(C_{n+i+j})_{i,j=0}^{m-1}}{\det(C_{i+j})_{i,j=0}^{m-1}}.$$ The quotient on the right can be calculated with Dodgson condensation [@1999math......2004K] to be $$\prod_{1\le i\le j\le n-1}\frac{2m+i+j}{i+j}.$$ [^1]: Email: [email protected]
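The binomial determinant and the Dodgson-condensation product can likewise be compared by brute force for small parameters. A sketch with our own helper names, using exact arithmetic via `Fraction` and treating binomial coefficients as zero outside $0\le k\le a$:

```python
from fractions import Fraction
from itertools import permutations
from math import comb

def det(M):
    """Leibniz expansion of the determinant; fine for small matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += (-1) ** inv * prod
    return total

def binom(a, k):
    # binomial coefficient, zero outside 0 <= k <= a
    return comb(a, k) if 0 <= k <= a else 0

def lhs(m, n):
    return det([[binom(i + j + m, i - j + m) for j in range(n)] for i in range(n)])

def rhs(m, n):
    prod = Fraction(1)
    for i in range(1, n):
        for j in range(i, n):
            prod *= Fraction(2 * m + i + j, i + j)
    return prod

# the two sides agree for all small parameter choices
for m in range(1, 4):
    for n in range(1, 5):
        assert lhs(m, n) == rhs(m, n)
```

For $m=1$ the quotient reduces to the single Catalan number $C_n$, e.g. `lhs(1, 3) == 5`.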
--- author: - 'Kezhou Yang, Akul Malhotra, Sen Lu, and Abhronil Sengupta, [^1][^2]' title: 'All-Spin Bayesian Neural Networks' --- Neuromorphic Computing, Bayesian Neural Networks, Magnetic Tunnel Junction. Introduction ============ Probabilistic inference is at the core of decision-making in the brain. While the past few years have witnessed unprecedented success of deep learning in a plethora of pattern recognition tasks (complemented by advancements in dedicated hardware designs for these workloads), these problem spaces are usually characterized by the availability of large amounts of data and by networks that do not explicitly represent any uncertainty in the network structure or parameters. However, as we strive to deploy Artificial Intelligence platforms in autonomous systems like self-driving cars, decision-making based on uncertainty is crucial. Standard supervised backpropagation-based learning techniques are unable to deal with such issues since they do not overtly represent uncertainty in the modelling process. To circumvent these problems, Bayesian deep learning has recently been gaining attention, wherein deep neural networks are trained in a probabilistic framework following the classic rules of probability, i.e. Bayes’ theorem. In the Bayesian formulation, the network is visualized as a set of plausible models (assuming *prior* probability distributions on its parameters, for instance, synaptic weights). Given observed data, the *posterior* probability distributions that best explain the observed data are learnt. The key distinction between standard deep networks and Bayesian deep networks is the fact that network parameters in the latter case are modelled as probability distributions. It is worth noting here that the probability distributions are usually modelled by Gaussian processes characterized by a mean and variance [@gal2016uncertainty].
Utilizing probability distributions to model network parameters allows us to characterize the network outputs by an uncertainty measure (the variance of the distribution), instead of just the point estimates of a standard network. These uncertainty measures can therefore be used by autonomous agents for decision-making and self-assessment in the presence of continuously streaming data. This paper explores a hardware-software co-design approach to accelerate Bayesian deep learning platforms through the usage of spintronic technologies. Recent research has demonstrated the possibility of mimicking the primitives of standard deep learning frameworks – synapses and neurons – by single magnetic device structures that can be operated at very low terminal voltages [@sengupta2017encoding; @sengupta2018stochastic; @grollier2016spintronic; @romera2018vowel]. Further, being non-volatile in nature, spintronic devices can be arranged in crossbar architectures to realize “In-Memory" dot-product computing kernels – thereby alleviating the memory access and memory leakage bottlenecks prevalent in CMOS-based implementations [@sengupta2016proposal; @sengupta2017performance]. As mentioned earlier, the key distinction between Bayesian and standard deep learning is the requirement of sampling from probability distributions and of inference based on the sampled values. Interestingly, scaled nanomagnetic devices operating at room temperature are characterized by thermal noise resulting in probabilistic switching. We propose to leverage the inherent stochasticity of spintronic devices to generate samples from Gaussian probability distributions by drawing insights from the statistical Central Limit Theorem. Further, our paper also elaborates on a cohesive design of a spintronic Bayesian processor that leverages the benefits of spin-based Gaussian random number generators and spintronic “In-Memory" crossbar architectures to realize high-performance, energy-efficient hardware platforms.
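The Central Limit Theorem idea invoked above can be illustrated in software: summing many independent biased coin flips (standing in for probabilistic switching events) yields an approximately Gaussian variate, which an affine transform maps to any desired mean and standard deviation. The sketch below is purely an illustration of the statistical argument, not a device model; all names and parameter values are our own.

```python
import random

def clt_gaussian(mean, std, k=64, p=0.5, rng=random):
    """Approximate Gaussian sample built from k Bernoulli(p) switching events.

    The sum of k independent bits has mean k*p and variance k*p*(1-p); by the
    Central Limit Theorem it is close to Gaussian for moderately large k.
    """
    s = sum(rng.random() < p for _ in range(k))
    z = (s - k * p) / (k * p * (1 - p)) ** 0.5   # standardize the bit sum
    return mean + std * z

# sanity check: sample statistics should approach the target moments
_rng = random.Random(0)
samples = [clt_gaussian(0.1, 0.05, rng=_rng) for _ in range(20000)]
m = sum(samples) / len(samples)
v = sum((x - m) ** 2 for x in samples) / len(samples)
```

Larger `k` reduces the discreteness of the generated values at the cost of more switching events per sample.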
We believe the drastic reductions in circuit complexity (single devices emulating synaptic scaling operations, crossbar architectures implementing “In-Memory" dot-product computing kernels and leveraging device stochasticity to sample from probability distributions) and low operating voltages of spintronic devices make them a promising path toward the realization of Probabilistic Machine Learning enabled by the Bayesian formulation. Preliminaries: Bayesian Neural Networks ======================================= ![In a Bayesian framework, each synaptic weight is represented by a Gaussian probability distribution. The core computing kernel for a particular layer during inference is a dot-product between the inputs and a synaptic weight matrix sample drawn from the individual probability distributions. Learning involves the determination of the means and variances of the probability distributions using Bayes’ formulation.[]{data-label="fig1"}](fig1.pdf){width="2in"} Before going into the technical details of the work, we would like to first discuss the preliminaries of Bayesian Neural Networks and the main computationally expensive operations pertaining to their hardware implementation. As shown in Fig. \[fig1\], a particular layer of a neural network consists of a set of neurons receiving inputs (sensory information or the outputs of the previous layer of neurons) through synaptic weights, $\textbf{W}$. Bayesian Neural Networks consider the weights of the network, $\textbf{W}$, to be latent variables characterized by a probability distribution, instead of point estimates. More specifically, each weight in such a framework is a random number drawn from a *posterior probability distribution* (characterized by a mean and variance) that is conditioned on a *prior probability distribution* and *the observed datapoints*, $D$ (incoming patterns to the network).
Hence, during inference, each incoming data pattern is propagated through the synaptic weights, each of which is characterized by a probability distribution. Therefore, as shown in Fig. \[fig1\], the final output of the neurons of a particular layer will also be described by a probability distribution characterized by a mean and variance (the uncertainty measure). Bayesian Neural Networks correspond to the family of deep learning networks where the weights are ‘learnt’ using Bayes’ rule. The learning process here involves the estimation of the mean and variance of the weight *posterior* distribution. Following Bayes’ rule, the *posterior probability* can be written as $$P(\textbf{W}|D) = \frac{P(D|\textbf{W})P(\textbf{W})}{P(D)}$$ where $P(\textbf{W})$ denotes the *prior probability* (the probability of the latent variables before any data input to the network) and $P(D|\textbf{W})$ is the *likelihood*, corresponding to the feedforward pass of the network. In order to make the above *posterior probability* density estimation tractable, two popular approaches are Variational Inference methods [@houthooft2016vime] and Markov Chain Monte Carlo methods [@andrieu2003introduction]. However, in this paper we focus on Variational Inference methods due to their scalability to large-scale problems [@cai2018vibnn]. Variational Inference methods usually approximate the *posterior* distribution by a Gaussian distribution, $q(\textbf{W}; \theta)$, characterized by parameters $\theta=(\mu,\sigma)$, where $\mu$ and $\sigma$ represent the mean and standard deviation vectors for the probability distributions representing $P(\textbf{W}|D)$ [@ghahramani2001propagation]. To summarize, the main hardware design space concerns in Bayesian Neural Networks can be categorized as follows: **Gaussian Random Number Generation:** Central to the entire framework, in both the learning and the inference process, is the random number generation corresponding to the synaptic weights.
Given current large model sizes, characterized by over a million synapses, coupled with the fact that random draws need to be performed multiple times for each synaptic weight, random number generator circuits would contribute significantly to the total area and power consumption of the hardware. Further, the random numbers need to be sampled from a Gaussian distribution, which increases the complexity of the circuit. We will discuss the hardware costs of CMOS implementations of such Gaussian random number generators in the following sections along with their limitations, followed by our proposal of nanomagnetic random number generators that can serve as the basic building blocks of such Bayesian Neural Networks. **Dot-Product Operation Between Inputs and Sampled Synaptic Weights:** A common aspect of any standard deep learning framework is that forward propagation of information through the network involves a significant amount of memory-intensive operations. The dot-product operation between the synaptic weights and inputs during inference incurs compute energy along with memory-access and memory-leakage costs. For large-scale problems and correspondingly large-scale models, CMOS memory access and memory leakage can account for almost $\sim 50\%$ of the total energy consumption profile [@ankit2017resparc]. The situation is further worsened in a Bayesian deep network since each synaptic weight is characterized by two parameters (the mean and variance of the probability distribution), thereby requiring double the memory storage. However, the dot-product operation does not occur directly between the inputs and these parameters. In fact, for each inference operation the synaptic weights (typically assumed constant during inference for non-probabilistic networks and implemented by memory elements in hardware) are repeatedly updated with values sampled from the Gaussian probability distribution.
Hence, direct utilization of crossbar based “In-Memory" computing platforms enabled by non-volatile memory technologies (discussed in detail later) for alleviating the memory access and memory fetch bottlenecks is not possible and therefore requires a significant rethinking. In the following sections, we sequentially expand on each of these points and propose a spin-based neural processor that merges deterministic and stochastic devices as a potential pathway to enable Bayesian deep learning that can be orders of magnitude more efficient than state-of-the-art CMOS implementations. Spintronic Device Design ======================== Magnetic Tunnel Junction - True Random Number Generator Design -------------------------------------------------------------- The basic device structure under consideration is the Magnetic Tunnel Junction (MTJ), which consists of two nanomagnets sandwiching a spacer layer (typically an oxide such as MgO). The magnetization of one of the layers is magnetostatically “pinned" in a particular direction while the magnetization of the other layer can be manipulated by a spin current or an external magnetic field. The two layers are denoted as the “Pinned" layer (PL) and the “Free" layer (FL). Depending on the relative orientation of the two magnets, the device exhibits a high-resistance anti-parallel (AP) state (when the magnetizations of the two layers point in opposite directions) and a low-resistance parallel (P) state (when the magnetizations of the two layers point in the same direction). These two states are stabilized by an energy barrier determined by the anisotropy and volume of the magnet. Let us now consider the switching of the magnet from one state to another by the application of an external current. The switching process is inherently stochastic at non-zero temperatures due to thermal noise [@scholz2001micromagnetic].
In the presence of an external current, the probability of switching from one state to the other is modulated by the magnitude and duration of the current. A true random number generator (TRNG) can be designed using such a device by biasing the magnet at the “write" current corresponding to a switching probability of 50%. Note that CMOS based TRNGs suffer from high energy consumption and circuit design complexity [@yang201416]. Proposals and experimental demonstrations of MTJ based TRNGs have been reported in Refs. [@vodenicarevic2017low; @fukushima2014spin]. MTJ based TRNGs are characterized by a low area footprint and compatibility with CMOS technology. In this paper, we consider a spin-orbit coupling enabled device structure (Fig. \[fig2\]). It consists of the MTJ stack lying on top of a heavy-metal (HM) underlayer. The device “read" is performed through the MTJ stack between terminals T1 and T3, while the device “write" is performed by passing current through the heavy-metal underlayer between terminals T2 and T3. Input current flowing through the heavy-metal results in spin-injection at the interface of the magnet and heavy-metal due to the spin-Hall effect (SHE) [@hirsch1999spin] and thereby causes switching of the MTJ “free layer" [@liu2012spin]. The device has the following advantages: The decoupled “write" and “read" current paths are advantageous from the perspective of peripheral circuit design, avoiding “read"-“write" conflicts since the associated circuits can be optimized independently. Such devices offer 1-2 orders of magnitude higher energy efficiency than other memristive technologies as well as standard spin-transfer torque MRAMs. This is due to the fact that in such spin-orbit coupling based systems, every incoming electron in the “write" current path repeatedly scatters at the interface of the magnet and heavy metal and transfers multiple units of spin angular momentum to the ferromagnet lying on top. ![The TRNG device structure is shown.
Reset current ($I_Q$) flowing through the heavy-metal (HM) results in in-plane spin current ($I_S$) injection into the MTJ “free layer" (FL). After switching to the in-plane meta-stable position, the magnet relaxes to either of the two stable states with 50% probability.[]{data-label="fig2"}](fig2.pdf){width="45.00000%"} Usage of SHE based switching enables us to use an alternative TRNG design [@kim2015spin] that has the potential to produce high quality random numbers in the presence of process, voltage and temperature (PVT) variations. In the earlier scenario of a standard MTJ, device-to-device variations can result in deviations of the bias current required for 50% switching probability, thereby degrading the quality of the random number generation process. Our scheme is depicted in Fig. \[fig2\], where a magnet with Perpendicular Magnetic Anisotropy (PMA) lies on top of the heavy-metal. The device operation is divided into three stages. During an initial “Reset" stage, a current flowing through the heavy metal results in in-plane spin injection in the magnet and orients it along the hard-axis for a sufficient magnitude of the “reset" current. The magnet is then allowed to relax to either of the two stable states in the presence of thermal noise – the switching probability being 50% since the hard-axis is a meta-stable orientation point for the magnet. In this case, device-to-device variations only cause a change in the critical current required for biasing the magnet close to the meta-stable orientation and do not skew the probability distribution toward a particular direction (as in the standard MTJ case). Hence, by maintaining a worst-case critical value of the heavy-metal “reset" current, the quality of the random number generation process can be preserved even in the presence of PVT variations.
Further, the “reset" current does not flow through the tunneling oxide layer (unlike the standard MTJ case) and therefore the reliability of the oxide layer is not a concern in this scenario [@kim2015spin]. Note that our device operation is validated by recent experiments that hold the magnet in its meta-stable hard-axis orientation for performing Bennett clocking in the context of nanomagnetic logic [@bhowmik2014spin]. SHE based energy-efficient switching also reduces the energy consumption involved in the random number generation process. The probabilistic switching characteristics of the MTJ can be analyzed by the Landau-Lifshitz-Gilbert (LLG) equation with an additional term to account for the spin-orbit torque generated by the spin-Hall effect at the ferromagnet-heavy metal interface [@slonczewski1989conductance], $$\label{llg} \frac {d\widehat {\textbf {m}}} {dt} = -\gamma(\widehat {\textbf {m}} \times \textbf {H}_{eff})+ \alpha (\widehat {\textbf {m}} \times \frac {d\widehat {\textbf {m}}} {dt})+\frac{1}{qN_{s}} (\widehat {\textbf {m}} \times \textbf {I}_s \times \widehat {\textbf {m}})$$ where $\widehat {\textbf {m}}$ is the unit vector of FL magnetization, $\gamma= \frac {2 \mu _B \mu_0} {\hbar}$ is the gyromagnetic ratio for an electron, $\alpha$ is Gilbert's damping factor, $\textbf{H}_{eff}$ is the effective magnetic field including the shape anisotropy field for elliptic disks, $N_s=\frac{M_{s}V}{\mu_B}$ is the number of spins in the free layer of volume $V$ ($M_{s}$ is the saturation magnetization and $\mu_{B}$ is the Bohr magneton), and $\textbf{I}_{s}=\theta_{SH}(A_{MTJ}/A_{HM})\textbf{I}_{q}$ is the input spin current ($A_{MTJ}$ and $A_{HM}$ are the MTJ and HM cross-sectional areas, $\theta_{SH}$ is the spin-Hall angle and $\textbf{I}_q$ is the charge current flowing through the HM underlayer).
Thermal noise is included via an additional thermal field [@scholz2001micromagnetic], $\textbf{H}_{thermal}=\sqrt{\frac{\alpha}{1+\alpha^{2}}\frac{2K_{B}T_{K}}{\gamma\mu_{0}M_{s}V\delta_{t}}}G_{0,1}$, where $G_{0,1}$ is a Gaussian distribution with zero mean and unit standard deviation, $K_{B}$ is the Boltzmann constant, $T_{K}$ is the temperature and $\delta_{t}$ is the simulation time-step. ![(a) DW-MTJ: Magnitude of current flowing through the HM, $J$, causes a proportionate displacement, $\Delta x$, in the DW position, which causes a change, $\Delta G$, in the device conductance between terminals T1 and T3. (b) The same device can be used as a neuron by interfacing with a Reference MTJ. The current provided by the output transistor, $I_{out}$, is a saturated linear function of the input current, $I_{in}$.[]{data-label="fig3"}](fig3.pdf){width="48.00000%"} \[table1\] TABLE I. MTJ Device Simulation Parameters

  Parameter                            Value
  ------------------------------------ ---------------------------------------------
  Free layer area                      $\frac{\pi}{4} \times 100 \times 40\ nm^2$
  Free layer thickness                 $1.2\ nm$
  Heavy-metal thickness, $t_{HM}$      $2\ nm$
  Saturation magnetization, $M_{S}$    $1000\ kA/m$ [@pai2012spin]
  Spin-Hall angle, $\theta_{SH}$       $0.3$ [@pai2012spin]
  Gilbert damping factor, $\alpha$     $0.0122$ [@pai2012spin]
  Energy barrier, $E_{B}$              $20\ K_{B}T$
  Temperature, $T_{K}$                 $300\ K$
  ------------------------------------ ---------------------------------------------

Considering a worst-case “reset" current of $140\mu A$ for a duration of $1ns$, the energy consumption involved in using a $20k_{B}T$ barrier magnet (calibrated to experimental measurements reported in [@pai2012spin]) as a TRNG is $\sim 57fJ$/bit ($I^2Rt$ energy consumption) [@kim2015spin], which is almost $2\times$ lower than a standard MTJ based TRNG.
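The reset-and-relax mechanism described above can be illustrated with a minimal stochastic macrospin simulation of the LLG dynamics. This is a sketch, not a reproduction of our micromagnetic setup: the effective anisotropy field `Hk` and the damping value are assumptions chosen so the magnet settles quickly, and only the uniaxial easy-axis field plus the thermal field are modeled.

```python
import numpy as np

# Illustrative macrospin sketch of the stochastic LLG dynamics above for the
# reset-and-relax TRNG: starting on the hard axis (the post-"reset" state),
# the magnet relaxes to +z or -z with ~50% probability under thermal noise.
# Hk and alpha below are assumed values for fast settling, NOT Table I values.

kB, mu0 = 1.380649e-23, 4e-7 * np.pi
gamma = 2.211e5          # gyromagnetic ratio, m/(A s)
alpha = 0.1              # Gilbert damping (assumed)
Ms    = 1.0e6            # saturation magnetization, A/m
V     = (np.pi / 4) * 100e-9 * 40e-9 * 1.2e-9   # free-layer volume
TK    = 300.0            # temperature, K
Hk    = 8.0e4            # effective easy-axis (z) anisotropy field, A/m (assumed)
dt    = 2e-12            # integration time step, s

def relax(rng, t_total=4e-9):
    m = np.array([1.0, 0.0, 0.0])   # hard-axis orientation after "reset"
    sig = np.sqrt(alpha / (1 + alpha**2) * 2 * kB * TK / (gamma * mu0 * Ms * V * dt))
    pre = gamma / (1 + alpha**2)
    for _ in range(int(t_total / dt)):
        H = np.array([0.0, 0.0, Hk * m[2]]) + sig * rng.standard_normal(3)
        dm = -pre * np.cross(m, H) - pre * alpha * np.cross(m, np.cross(m, H))
        m = m + dm * dt
        m /= np.linalg.norm(m)      # renormalize |m| = 1 (controls Euler drift)
    return m

rng = np.random.default_rng(0)
finals = [relax(rng)[2] for _ in range(20)]
ups = sum(mz > 0 for mz in finals)
print(f"{ups}/20 relaxations ended in the +z state")   # close to 50/50
```

Because the hard-axis start is symmetric, only the thermal field breaks the tie, which is why device-to-device variation in the reset current does not bias the generated bits.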
Domain Wall Motion Based Magnetic Devices - Multi-Level Non-Volatile Memory Design ---------------------------------------------------------------------------------- The mono-domain magnet discussed above is characterized by only two stable states. For a magnet with an elongated shape, multiple domains can be stabilized in the FL, thereby leading to the realization of multiple stable resistive states. Such a domain-wall (DW) MTJ consists of a domain wall separating two oppositely magnetized regions, and the domain wall position is programmed to modulate the MTJ resistance (due to the variation in the relative proportion of P and AP domains in the device) [@sengupta2016proposal]. ![Device characteristics are shown for a $20 \times 0.6 nm^3$ magnet calibrated to experimental measurements [@emori2014spin]. The device characteristics illustrate that the programming current magnitude is directly proportional to the amount of conductance change [@sengupta2016proposal]. ](fig4.pdf){width="38.00000%"} \[fig4\] We again consider SHE driven domain wall motion dynamics in magnet-heavy metal bilayers. In magnetic heterostructures with high perpendicular magnetocrystalline anisotropy, spin-orbit coupling and broken inversion symmetry stabilize chiral domain walls through the Dzyaloshinskii-Moriya interaction (DMI) [@emori2013current; @martinez2014current]. Such an interfacial DMI at the magnet-heavy metal interface results in the formation of a Néel domain wall. When an in-plane charge current is injected through the heavy metal, the accumulated spins at the magnet-heavy metal interface result in Néel domain-wall motion. The device structure is shown in Fig. \[fig3\](a), where a current of magnitude $J$ flowing through the HM layer results in a conductance change, $\Delta G$, between terminals T1 and T3. As shown in Fig. \[fig4\](a), for a given programming time duration, the current flowing through the HM underlayer causes DW displacement proportional to its magnitude.
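The resulting position-to-conductance mapping can be sketched as a simple linear interpolation between the fully-AP and fully-P conductances. The helper below is illustrative; the $2.5K\Omega$ parallel resistance, $300\%$ TMR, device length and $20nm$ programming granularity follow the values quoted in the Results section.

```python
# Sketch of the DW-MTJ as a multi-level conductance: the domain-wall position x
# splits the free layer into P and AP domains, so the T1-T3 conductance varies
# linearly between G_AP and G_P. Parameter defaults follow values quoted in
# the Results section; the helper itself is illustrative.

def dw_mtj_conductance(x, length=320e-9, r_p=2.5e3, tmr=3.0):
    """Conductance (S) for DW position x: x=0 -> fully AP, x=length -> fully P."""
    g_p = 1.0 / r_p                     # parallel-domain conductance
    g_ap = 1.0 / (r_p * (1.0 + tmr))    # R_AP = R_P * (1 + TMR)
    f = x / length                      # fraction of the free layer in the P state
    return f * g_p + (1.0 - f) * g_ap

levels = round(320e-9 / 20e-9)          # 20 nm minimum programmable displacement
print(levels, "distinct conductance levels")   # 16 levels, i.e. 4-bit storage
```

The $20nm$ granularity over a $320nm$ device yields 16 programmable levels, which is consistent with the 4-bit weight representation used later and with the 15-20 experimentally demonstrated states.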
Note that the device characteristics are obtained by performing micromagnetic LLG simulations by dividing the magnet into multiple grids. The domain wall position determines the magnitude of the MTJ conductance. The MTJ conductance varies linearly with the domain wall position since it determines the relative proportion of the areas of the Parallel and Anti-Parallel domains of the MTJ (Fig. \[fig4\](b)). Since such devices can be programmed to multi-level resistive states and are characterized by low switching current requirements and linear device behavior (the device conductance change varies in proportion to the magnitude of the programming current), they are an ideal fit for implementing crossbar based “In-Memory" computing platforms (discussed in the next section). We will refer to this device as a DW-MTJ for the remainder of this text. Experimentally, a multi-level DW motion based resistive device was recently shown to exhibit 15-20 intermediate resistive states [@lequeux2016magnetic]. It is worth noting here that the device structure in Fig. \[fig3\](a) can be used as a neuron by interfacing with a Reference MTJ (Fig. \[fig3\](b)) [@sengupta2016proposal]. The resistive divider can drive a CMOS transistor whose output drive current is a linear function of the input current flowing through the heavy metal layer of the device, thereby mimicking a saturated linear transfer function by ensuring that the transistor operates in the saturation regime [@sengupta2016proposal]. The simulation parameters provided in Table II are used for the DW-MTJ throughout the rest of this text unless otherwise stated. The parameters were obtained from magnetometric measurements of CoFe-Pt nanostrips [@emori2013current; @martinez2014current]. \[table2\] TABLE II.
DW-MTJ Device Simulation Parameters

  Parameter                                     Value
  --------------------------------------------- --------------------------------
  Ferromagnet thickness                         $0.6\ nm$
  Grid size                                     $4 \times 1 \times 0.6\ nm^3$
  Heavy-metal thickness                         $3\ nm$
  Domain wall width                             $7.6\ nm$
  Saturation magnetization, $M_s$               $700\ kA/m$
  Spin-Hall angle, $\theta$                     $0.07$
  Gilbert damping factor, $\alpha$              $0.3$
  Exchange correlation constant, $A$            $1 \times 10^{-11}\ J/m$
  Perpendicular magnetic anisotropy, $K_{u2}$   $4.8 \times 10^{5}\ J/m^{3}$
  Effective DMI constant, $D$                   $-1.2 \times 10^{-3}\ J/m^{2}$
  --------------------------------------------- --------------------------------

![image](fig5.pdf){width="\textwidth"} All-Spin Bayesian Neural Networks ================================= Spin-Based Gaussian Random Number Generator ------------------------------------------- Gaussian random number generation is a hardware-expensive process. CMOS based designs for Gaussian random number generators usually require a large number of registers, linear feedback circuits, etc. For instance, a recent CMOS based Gaussian RNG implementation reports $1780$ registers and $528.69mW$ power consumption for a $64$-parallel Gaussian random number generation task [@cai2017hardware]. Let us now discuss our proposal for a spin-based Gaussian random number generator. In the previous section, we discussed the design of a spintronic TRNG. An array of TRNGs can be used for sampling from a uniform probability distribution. In order to generate a Gaussian probability distribution from a uniform one, we draw inspiration from the Central Limit Theorem, discussed in Box 1. The key result of the Central Limit Theorem that we utilize is that the sum of a large number of independent and identically distributed (i.i.d) random variables is approximately Normal. Our proposed design is illustrated in Fig.
\[fig5\] which depicts a possible array implementation [@kim2015spin] of our spin-based TRNGs. Each spin device is interfaced with an access transistor. Rows sharing a Reset-Line can be driven simultaneously. Hence, random numbers can be generated in the entire array in parallel. The timing diagram is shown in Fig. \[fig5\]. Each row can be read by asserting a particular word-line (WL) and sensing the bit-line (BL) voltage. For an $m \times n$ array, each row-read produces an $n$-bit number generated from a uniform probability distribution. By interfacing the array with an accumulator that averages the generated random numbers, we can produce random numbers drawn from a Normal distribution. Note that the hardware overhead of this process would be high for applications that require precise sampling from Gaussian distributions, since convergence takes place only in the limit of infinitely many samples. However, the performance of the machine learning workloads considered herein is usually resilient to approximations in the underlying computations. For instance, Fig. \[fig5\] shows that even with an 8-bit representation and 3 random variables drawn from a uniform probability distribution, we are able to achieve an approximate Gaussian distribution. While Gaussian probability distributions are primarily used in such algorithms, non-Gaussian weight distributions can also be constructed by using the Gaussian function as a basis. Note that, while the Box 1 discussion is equally valid for a CMOS based TRNG, such a design would consume an order of magnitude more area and power than our proposed spin-based TRNG (as explained in Section III).
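The uniform-to-Gaussian conversion can be sketched in a few lines, assuming ideal i.i.d. TRNG bits; the function name and the software normalization constants below are illustrative, not part of the circuit.

```python
import numpy as np

# Central-Limit-Theorem sketch of the Gaussian RNG: average a few 8-bit
# uniform words (one per TRNG-array row-read) and normalize to zero mean and
# unit variance. The TRNG rows are modeled as ideal uniform integers.

def gaussian_from_trng(rng, n_words=3, bits=8, size=100000):
    words = rng.integers(0, 2**bits, size=(n_words, size))  # row-read outputs
    s = words.mean(axis=0)                                  # accumulator output
    # A b-bit uniform word has mean (2^b - 1)/2 and variance (2^(2b) - 1)/12;
    # averaging n_words i.i.d. words divides the variance by n_words.
    mu = (2**bits - 1) / 2
    var = (2**(2 * bits) - 1) / 12 / n_words
    return (s - mu) / np.sqrt(var)        # approximately N(0, 1)

rng = np.random.default_rng(1)
z = gaussian_from_trng(rng)
print(round(float(z.mean()), 2), round(float(z.std()), 2))  # close to 0.0, 1.0
```

Even with only 3 words, the averaged distribution (an Irwin-Hall-like shape) has exactly the target mean and variance; only the tails deviate from a true Gaussian, which the workloads considered here tolerate.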
Dot-Product Operation Between Inputs and Sampled Synaptic Weights ----------------------------------------------------------------- ![ “In-Memory" computing primitive where an array of spin synapses implements the dot-product kernel.[]{data-label="fig6"}](fig6.pdf){width="42.00000%"} Let us first discuss the operation of DW-MTJ enabled spintronic crossbar arrays as an energy-efficient mechanism to realize the dot-product computing kernel. Assuming each synapse is represented by a DW-MTJ, as shown in Fig. \[fig6\], the devices can be arranged in a crossbar structure. Each row of the array is driven by an analog voltage (the output of a Digital-to-Analog converter – DAC) that corresponds to the magnitude of the input. The current flowing through each synapse is scaled by the conductance of the device and, by Kirchhoff’s current law, these currents sum along the column, thereby realizing the dot-product kernel. Note that negative synaptic weights can also be mapped by using two horizontal lines per input (driven by ‘positive’ and ‘negative’ supply voltages). If a particular synaptic weight is ‘positive’ (‘negative’), the corresponding conductance on the ‘positive’ (‘negative’) line is set in accordance with the weight. The resultant currents get summed along the column and pass as the input “write" current through the spin-neuron. Consecutive “write" and “read" cycles of the spin-neurons implement multiple iterations of the Bayesian network. The analog output current provided by the spin-neuron is then converted to a digital format using Analog-to-Digital Converters (ADCs). The digital outputs can be latched to provide inputs for the fan-out crossbar arrays. The energy-efficiency of the system stems mainly from two factors: the input write resistance of the spintronic neurons is low (they are magneto-metallic devices), and they inherently require very low currents for switching.
This enables the crossbar arrays of spintronic synapses to be operated at low terminal voltages (typically $100mV$). Further, spintronic neurons are inherently current-driven and thereby do not require costly current-to-voltage converters, unlike CMOS and other emerging technology (Resistive Random Access Memory, Phase Change Memory, among others) based implementations. Since spin devices are inherently non-volatile, the ability to perform the costly Multiply-Accumulate operations in the memory array itself enables us to address the von-Neumann bottleneck. However, in the context of Bayesian deep networks, even during the inference stage, the synaptic weights are not constant but are updated with values sampled from a Gaussian distribution. Assuming we are able to generate samples from a Normal distribution by using the device-circuit primitives proposed earlier, the computations in a Bayesian network can be partitioned in an appropriate fashion such that the benefits of spin-based “In-Memory" computing can still be utilized. This is explained in Box 2. Realizing that a Normal distribution with a particular mean and variance is equivalent to a scaled and shifted version of a Normal distribution with zero mean and unit variance, we partition the inference equation as shown in (3). The constant parameters $\mu_{jk}$ and $\sigma_{jk}$ (highlighted in red) represent the mean and standard deviation of the probability distribution of the corresponding synaptic weight and can therefore be implemented by DW-MTJ based memory devices from a hardware implementation perspective. The resultant system consists of two crossbar arrays for storing the mean and variance parameters respectively.
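The partition can be checked numerically. In the sketch below (shapes and values are illustrative, not trained parameters), a one-time read of the mean crossbar plus a per-sample read of the $\sigma$ crossbar with $\epsilon$-scaled inputs reproduces the explicit sampled-weight product $x \cdot (\mu + \sigma\epsilon)$:

```python
import numpy as np

# Numerical check of the computation partition: the Bayesian pre-activation
# x @ W with W = mu + sigma * eps (eps ~ N(0,1)) equals a one-time read of the
# mean crossbar plus a per-sample read of the sigma crossbar with eps-scaled
# inputs. Shapes and values are illustrative, not trained parameters.

rng = np.random.default_rng(2)
n_in, n_out = 784, 200
x     = rng.random(n_in)                        # DAC-driven input vector
mu    = rng.normal(0.0, 0.1, (n_in, n_out))     # mean parameters (crossbar 1)
sigma = rng.uniform(0.01, 0.05, (n_in, n_out))  # std-dev parameters (crossbar 2)

mean_term = x @ mu                              # evaluated once per input

eps = rng.standard_normal((n_in, n_out))        # normalized RNG-unit samples
direct      = x @ (mu + sigma * eps)            # explicit sampled-weight product
partitioned = mean_term + ((x[:, None] * eps) * sigma).sum(axis=0)

print(np.allclose(direct, partitioned))         # True
```

Since $\epsilon_{jk}$ differs per column, a hardware realization scales the inputs column by column, which is why the column outputs are read sequentially.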
While the inputs of a particular layer are directly applied to the crossbar array storing the mean values, they are scaled by the random numbers generated from the RNG unit (with outputs normalized to provide random numbers with zero mean and unit variance) described previously for the crossbar array storing the variance values. Note that the crossbar array storing the mean values needs to be evaluated only once during the inference process for a particular input, while repeated evaluations are performed on the variance crossbar array for each set of sampled random numbers. Typical CMOS neuromorphic architectures are characterized by much higher movement of weight data than input data to compute the inference operation [@chen2014dadiannao]. Our proposal of computation partitioning, explained in Box 2, enables us to leverage the “In-Memory" computing primitives for storing the probability distribution parameters while computing energy-efficient dot-products in situ between inputs and stochastic weights in parallel. It is worth noting here that the crossbar column outputs are read sequentially in order to ensure that the random numbers sampled for the synaptic weights of each column are independent. Such a sequential column read is currently a common practice in crossbar based deep learning architectures [@ankit2019puma; @shafiee2016isaac]. Results and Discussion ====================== A hybrid device-circuit-algorithm co-simulation framework was developed to evaluate the performance of the proposed All-Spin Bayesian hardware. The magnetization switching characteristics of the mono-domain and multi-domain MTJs were simulated in MuMax3, a GPU accelerated micro-magnetic simulation framework [@vansteenkiste2014design]. A Non-Equilibrium Green’s Function (NEGF) based transport simulation framework [@fong2011knack] was used for modelling the MTJ resistance variation with oxide thickness and applied voltage.
The device characteristics obtained from MuMax3 and SPICE simulation tools were used in an algorithm-level simulator, PyTorch, to evaluate the functionality of the circuit. The performance of this design was tested on a standard digit recognition problem on the MNIST dataset [@lecun1998gradient] (60,000 training samples and 10,000 test samples of handwritten digits (0-9)). A fully connected neural network with two hidden layers of 200 neurons each ($784\times200\times200\times10$) was used. The probability distributions were learnt using the ‘Bayes by Backprop’ algorithm [@blundell2015weight][^3], which learns the optimal Gaussian distribution by minimizing the KL divergence from the true probability distribution. The prior distribution on the weights used for training was a scaled mixture of two Gaussian functions. The network was trained offline to obtain the values of the mean and standard deviation of the probability distributions of the weights. Subsequently, they were mapped to the conductances of the DW-MTJ devices. The baseline idealized software network was trained to an accuracy of $98.63\%$ over the training set and $97.51\%$ over the testing set (averaged over 10 sampled networks). The device parameters used in this work have been tabulated in the previous section. A $20k_{B}T$ barrier height magnet was used in the Gaussian RNG unit. We considered a 4-bit representation for the DW-MTJ weights and 3-bit discretization for the neuron output. Note that, as explained in the previous section, our neuronal devices mimic a saturating linear functionality and our network was trained with such a transfer function itself. Considering a minimum sensing and programming displacement of $20nm$ for the DW location, we consider our cross-point and neuronal devices to be $320nm$ and $160nm$ in length.
From our micromagnetic simulations, we observe that the critical current required to switch the neuronal device from one edge to the other is $40\mu A$ for a time duration of $1ns$. Assuming a crossbar supply voltage of $100mV$, the synaptic weight corresponding to unity value is mapped to $2.5K\Omega$. Note that the lower supply voltage is enabled by the low current requirement of the magneto-metallic spin neurons. In addition to reducing the system-level power consumption, the lower supply voltage minimizes the variation of the MTJ AP resistance with applied voltage during the read operation. We consider $300\%$ TMR in the DW-MTJ weights of the crossbar array. Considering such device-level behavioral characteristics, non-idealities and constraints, the test accuracy of the network was $97.02\%$ (averaged over 10 samples). In order to estimate the system-level energy consumption, we consider the core RNG and crossbar energy consumption along with peripheral circuitry like ADCs and DACs[^4]. We evaluate the energy consumption for a single image inference and a particular network sample. The crossbar read latency was assumed to be $10ns$ (for each column read) while the ADC conversion time was $1ns$. For the RNG, DAC and ADC units, we considered 8-bit precision, and 3 variables were used for the accumulation process in the Normal distribution sampling. We would like to mention here that we assumed 8-bit precision for the energy calculations in order to achieve a fair comparison with the numbers reported in Ref. [@cai2018vibnn] for an iso-network CMOS architecture. However, from a functional viewpoint, a lower bit-precision of $\sim 4$ bits was observed to be sufficient. The total energy consumption of our proposed “All-Spin" design was evaluated to be $804.3nJ$ per classification, which is $23.6\times$ more energy-efficient than the baseline CMOS implementation [@cai2018vibnn].
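As a back-of-envelope check of the quoted figures, the implied heavy-metal path resistance and the implied CMOS baseline energy can be derived; both are inferred quantities, not numbers stated in the text.

```python
# Back-of-envelope checks on the energy figures quoted in this paper; both
# derived quantities below are inferred from stated numbers, not given directly.

# TRNG reset energy E = I^2 * R * t with I = 140 uA, t = 1 ns, E ~ 57 fJ/bit
# implies an effective heavy-metal path resistance of roughly 2.9 kOhm.
R_hm = 57e-15 / ((140e-6) ** 2 * 1e-9)
print(f"implied HM path resistance ~ {R_hm:.0f} Ohm")

# 804.3 nJ per classification at 23.6x efficiency implies a CMOS baseline of
# roughly 19 uJ per classification.
cmos_baseline = 804.3e-9 * 23.6
print(f"implied CMOS baseline ~ {cmos_baseline * 1e6:.1f} uJ")
```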
Note that while limited bit-precision and the device resistance ratio between ON and OFF states are concerns that might limit algorithm recognition accuracy, circuit-level solutions like mapping the computation across multiple devices have been explored [@boybat2018neuromorphic]. Further, resistive crossbars are usually characterized by limited fan-in – much smaller than the neuron fan-in in typical deep networks – due to non-idealities, parasitics and sneak-paths [@liang2010cross]. Hence, mapping a practically sized network requires mapping the synapses of a neuron across multiple crossbars [@ankit2019puma; @shafiee2016isaac]. Such architectural level innovations can be easily integrated with our current proposal. Summary ======= In summary, we proposed the vision of an “All-Spin" Bayesian neural processor that has the potential to enable orders-of-magnitude improvements in hardware efficiency (area, power, energy consumption) over state-of-the-art CMOS implementations. Our core proposal can be easily extended to incorporate innovations in the material stack through the exploration of novel device physics (like the Voltage-Controlled Magnetic Anisotropy (VCMA) effect [@amiri2012voltage] and the Magneto-electric effect [@franke2015reversible]) and spin textures (like skyrmions [@woo2016observation]). Computing frameworks, so far, have mainly segregated deterministic and stochastic computations. Standard deterministic deep learning frameworks enabled by spintronic devices and other post-CMOS technologies have been explored. In such scenarios, device-level non-idealities are usually treated as a disadvantage. More recently, the stochasticity inherent in such devices has been exploited for computing to implement stochastic versions of their deterministic counterparts [@sengupta2016probabilistic; @srinivasan2016magnetic] (driven by the motivation that devices can be scaled down to single-bit instead of multi-bit representations due to probabilistic domain encoding of information).
Device stochasticity has also been used in other unconventional computing platforms like Ising computing, combinatorial optimization problems, among others [@roy2018perspective]. Note that prior work using magnetic devices for Bayesian inference engines has been proposed [@faria2018implementing; @shim2017stochastic]; such engines are mainly used for implementing Bayes’ rule for simple prediction tasks in directed acyclic graphs and do not have relevance or overlap with Bayesian deep networks. Bayesian deep learning is a unique computing framework that necessitates the merger of both deterministic (dot-product evaluations of sampled weights and inputs) and stochastic computations (sampling weights from probability distributions) - thereby requiring a significant rethinking of the design space across the stack from devices to circuits and algorithms. Y. Gal, “Uncertainty in deep learning,” Ph.D. dissertation, University of Cambridge, 2016. A. Sengupta and K. Roy, “Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing,” *Applied Physics Reviews*, vol. 4, no. 4, p. 041105, 2017. A. Sengupta, G. Srinivasan, D. Roy, and K. Roy, “Stochastic inference and learning enabled by magnetic tunnel junctions,” in *2018 IEEE International Electron Devices Meeting (IEDM)*. IEEE, 2018, pp. 1–4. J. Grollier, D. Querlioz, and M. D. Stiles, “Spintronic nanodevices for bioinspired computing,” *Proceedings of the IEEE*, vol. 104, no. 10, pp. 2024–2039, 2016. M. Romera, P. Talatchian, S. Tsunegi, F. A. Araujo, V. Cros, P. Bortolotti, J. Trastoy, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. Ernoult, D. Vodenicarevic, T. Hirtzlin, N. Locatelli, D. Q. Querlioz, and J. Grollier, “Vowel recognition with four coupled spin-torque nano-oscillators,” *Nature*, vol. 563, no. 7730, p. 230, 2018. A. Sengupta, Y. Shim, and K.
Roy, “Proposal for an all-spin artificial neural network: Emulating neural and synaptic functionalities through domain wall motion in ferromagnets,” *IEEE Transactions on Biomedical Circuits and Systems*, vol. 10, no. 6, pp. 1152–1160, 2016. A. Sengupta, A. Ankit, and K. Roy, “Performance analysis and benchmarking of all-spin spiking neural networks (special session paper),” in *2017 International Joint Conference on Neural Networks (IJCNN)*. IEEE, 2017, pp. 4557–4563. R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. De Turck, and P. Abbeel, “[VIME]{}: Variational information maximizing exploration,” in *Advances in Neural Information Processing Systems*, 2016, pp. 1109–1117. C. Andrieu, N. De Freitas, A. Doucet, and M. I. Jordan, “An introduction to [MCMC]{} for machine learning,” *Machine Learning*, vol. 50, no. 1-2, pp. 5–43, 2003. R. Cai, A. Ren, N. Liu, C. Ding, L. Wang, X. Qian, M. Pedram, and Y. Wang, “[VIBNN]{}: Hardware acceleration of Bayesian neural networks,” in *Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems*. ACM, 2018, pp. 476–488. Z. Ghahramani and M. J. Beal, “Propagation algorithms for variational [Bayesian]{} learning,” in *Advances in Neural Information Processing Systems*, 2001, pp. 507–513. A. Ankit, A. Sengupta, P. Panda, and K. Roy, “[RESPARC]{}: A reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks,” in *Proceedings of the 54th Annual Design Automation Conference 2017*. ACM, 2017, p. 27. J. Slonczewski, “Currents, torques, and polarization factors in magnetic tunnel junctions,” *Physical Review B*, vol. 71, no. 2, p. 024411, 2005. W. Scholz, T. Schrefl, and J. Fidler, “Micromagnetic simulation of thermally activated switching in fine particles,” *Journal of Magnetism and Magnetic Materials*, vol. 233, no. 3, pp. 296–304, 2001. K.
Yang, D. Fick, M. B. Henry, Y. Lee, D. Blaauw, and D. Sylvester, “16.3 A 23Mb/s 23pJ/b fully synthesized true-random-number generator in 28nm and 65nm CMOS,” in *2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC)*. IEEE, 2014, pp. 280–281. D. Vodenicarevic, N. Locatelli, A. Mizrahi, J. S. Friedman, A. F. Vincent, M. Romera, A. Fukushima, K. Yakushiji, H. Kubota, S. Yuasa, S. Tiwari, J. Grollier, and D. Querlioz, “Low-energy truly random number generation with superparamagnetic tunnel junctions for unconventional computing,” *Physical Review Applied*, vol. 8, no. 5, p. 054045, 2017. A. Fukushima, T. Seki, K. Yakushiji, H. Kubota, H. Imamura, S. Yuasa, and K. Ando, “Spin dice: A scalable truly random number generator based on spintronics,” *Applied Physics Express*, vol. 7, no. 8, p. 083001, 2014. J. Hirsch, “Spin Hall effect,” *Physical Review Letters*, vol. 83, no. 9, p. 1834, 1999. L. Liu, C.-F. Pai, Y. Li, H. Tseng, D. Ralph, and R. Buhrman, “Spin-torque switching with the giant spin Hall effect of tantalum,” *Science*, vol. 336, no. 6081, pp. 555–558, 2012. Y. Kim, X. Fong, and K. Roy, “Spin-orbit-torque-based spin-dice: A true random-number generator,” *IEEE Magnetics Letters*, vol. 6, pp. 1–4, 2015. D. Bhowmik, L. You, and S. Salahuddin, “Spin Hall effect clocking of nanomagnetic logic without a magnetic field,” *Nature Nanotechnology*, vol. 9, no. 1, p. 59, 2014. J. C. Slonczewski, “Conductance and exchange coupling of two ferromagnets separated by a tunneling barrier,” *Physical Review B*, vol. 39, no. 10, p. 6995, 1989. C.-F. Pai, L. Liu, Y. Li, H. Tseng, D. Ralph, and R. Buhrman, “Spin transfer torque devices utilizing the giant spin Hall effect of tungsten,” *Applied Physics Letters*, vol. 101, no. 12, p. 122404, 2012. S. Emori, E. Martinez, K.-J. Lee, H.-W. Lee, U. Bauer, S.-M. Ahn, P. Agrawal, D. C. Bono, and G. S.
Beach, “Spin Hall torque magnetometry of Dzyaloshinskii domain walls,” *Physical Review B*, vol. 90, no. 18, p. 184427, 2014. S. Emori, U. Bauer, S.-M. Ahn, E. Martinez, and G. S. Beach, “Current-driven dynamics of chiral ferromagnetic domain walls,” *Nature Materials*, vol. 12, no. 7, pp. 611–616, 2013. E. Martinez, S. Emori, N. Perez, L. Torres, and G. S. Beach, “Current-driven dynamics of Dzyaloshinskii domain walls in the presence of in-plane fields: Full micromagnetic and one-dimensional analysis,” *Journal of Applied Physics*, vol. 115, no. 21, p. 213909, 2014. S. Lequeux, J. Sampaio, V. Cros, K. Yakushiji, A. Fukushima, R. Matsumoto, H. Kubota, S. Yuasa, and J. Grollier, “A magnetic synapse: Multilevel spin-torque memristor with perpendicular anisotropy,” *Scientific Reports*, vol. 6, 2016. R. Cai, A. Ren, L. Wang, M. Pedram, and Y. Wang, “Hardware acceleration of Bayesian neural networks using RAM based linear feedback Gaussian random number generators,” in *2017 IEEE International Conference on Computer Design (ICCD)*. IEEE, 2017, pp. 289–296. Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, and O. Temam, “DaDianNao: A machine-learning supercomputer,” in *Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture*. IEEE Computer Society, 2014, pp. 609–622. A. Ankit, I. E. Hajj, S. R. Chalamalasetti, G. Ndu, M. Foltin, R. S. Williams, P. Faraboschi, W.-m. W. Hwu, J. P. Strachan, K. Roy *et al.*, “PUMA: A programmable ultra-efficient memristor-based accelerator for machine learning inference,” in *Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems*. ACM, 2019, pp. 715–731. A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, and V.
Srikumar, “ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars,” *ACM SIGARCH Computer Architecture News*, vol. 44, no. 3, pp. 14–26, 2016. A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. Van Waeyenberge, “The design and verification of MuMax3,” *AIP Advances*, vol. 4, no. 10, p. 107133, 2014. X. Fong, S. K. Gupta, N. N. Mojumder, S. H. Choday, C. Augustine, and K. Roy, “KNACK: A hybrid spin-charge mixed-mode simulator for evaluating different genres of spin-transfer torque MRAM bit-cells,” in *Simulation of Semiconductor Processes and Devices (SISPAD), 2011 International Conference on*. IEEE, 2011, pp. 51–54. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner *et al.*, “Gradient-based learning applied to document recognition,” *Proceedings of the IEEE*, vol. 86, no. 11, pp. 2278–2324, 1998. C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, “Weight uncertainty in neural networks,” *arXiv preprint arXiv:1505.05424*, 2015. I. Boybat, M. Le Gallo, S. Nandakumar, T. Moraitis, T. Parnell, T. Tuma, B. Rajendran, Y. Leblebici, A. Sebastian, and E. Eleftheriou, “Neuromorphic computing with multi-memristive synapses,” *Nature Communications*, vol. 9, no. 1, p. 2514, 2018. J. Liang and H.-S. P. Wong, “Cross-point memory array without cell selectors – Device characteristics and data storage pattern dependencies,” *IEEE Transactions on Electron Devices*, vol. 57, no. 10, pp. 2531–2538, 2010. P. K. Amiri and K. L. Wang, “Voltage-controlled magnetic anisotropy in spintronic devices,” in *Spin*, vol. 2, no. 03. World Scientific, 2012, p. 1240002. K. J. Franke, B. Van de Wiele, Y. Shirahata, S. J. Hämäläinen, T. Taniyama, and S. van Dijken, “Reversible electric-field-driven magnetic domain-wall motion,” *Physical Review X*, vol. 5, no. 1, p. 011010, 2015. S. Woo, K. Litzius, B. Krüger, M.-Y. Im, L. Caretta, K. Richter, M.
Mann, A. Krone, R. M. Reeve, M. Weigand, P. Agrawal, I. Lemesh, M.-A. Mawass, P. Fischer, M. Kläui, and G. S. D. Beach, “Observation of room-temperature magnetic skyrmions and their current-driven dynamics in ultrathin metallic ferromagnets,” *Nature Materials*, 2016. A. Sengupta, M. Parsa, B. Han, and K. Roy, “Probabilistic deep spiking neural systems enabled by magnetic tunnel junction,” *IEEE Transactions on Electron Devices*, vol. 63, no. 7, pp. 2963–2970, 2016. G. Srinivasan, A. Sengupta, and K. Roy, “Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip STDP learning,” *Scientific Reports*, vol. 6, p. 29545, 2016. K. Roy, A. Sengupta, and Y. Shim, “Perspective: Stochastic magnetic devices for cognitive computing,” *Journal of Applied Physics*, vol. 123, no. 21, p. 210901, 2018. R. Faria, K. Y. Camsari, and S. Datta, “Implementing Bayesian networks with embedded stochastic MRAM,” *AIP Advances*, vol. 8, no. 4, p. 045101, 2018. Y. Shim, S. Chen, A. Sengupta, and K. Roy, “Stochastic spin-orbit torque devices as elements for Bayesian inference,” *Scientific Reports*, vol. 7, no. 1, p. 14101, 2017. [^1]: Manuscript received November, 2019. [^2]: All authors contributed equally to this work. The authors are with the School of Electrical Engineering and Computer Science, Department of Materials Science and Engineering, The Pennsylvania State University, University Park, PA 16802, USA. A. Malhotra is also affiliated with Birla Institute of Technology and Science, Pilani, Rajasthan 333031, India. E-mail: [email protected]. [^3]: The related code can be found at <https://github.com/nitarshan/bayes-by-backprop>. [^4]: The energy consumption of the peripheral circuitry was included based on typical numbers from the literature [@ankit2019puma; @shafiee2016isaac] and can be found at <https://github.com/Aayush-Ankit/puma-simulator/blob/training/include/constants.py>.
--- author: - | [**Hugo Richard ([email protected])**]{}\ PARIETAL Team, INRIA, 1 Rue Honoré d’Estienne d’Orves, 91120 Palaiseau, France\ PARIETAL Team, INRIA, 1 Rue Honoré d’Estienne d’Orves, 91120 Palaiseau, France\ \ PARIETAL Team, INRIA, 1 Rue Honoré d’Estienne d’Orves, 91120 Palaiseau, France\ TAU team, INRIA, LRI, Paris-Sud University, France\ bibliography: - 'biblio.bib' title: Optimizing deep video representation to match brain activity ---

Abstract
========

[ The comparison of observed brain activity with the statistics generated by artificial intelligence systems is useful to probe brain functional organization under ecological conditions. Here we study fMRI activity in ten subjects watching color natural movies and compute deep representations of these movies with an architecture that relies on optical flow and image content. The association of activity in visual areas with the different layers of the deep architecture displays complexity-related contrasts across visual areas and reveals a striking foveal/peripheral dichotomy. ]{}

> **Keywords:** deep learning; video encoding; brain mapping

Introduction
============

The understanding of brain functional architecture has long been driven by subtractive reasoning approaches, in which the activation patterns associated with different experimental conditions presented in event-related or block designs are contrasted in order to yield condition-specific maps [@poline2012]. A more ecological way of stimulating subjects consists in presenting complex continuous stimuli that are much more similar to everyday cognitive experiences. The analysis of the ensuing complex stimulation streams proceeds by extracting relevant features from the stimuli and correlating the occurrence of these features with brain activity recorded simultaneously with the presentation of the stimuli.
The analysis of video streams has been carried out in [@eickenberg2017seeing] or [@gucclu2015deep] using a deep convolutional network trained for image classification. More recently, [@gucclu2017increasingly] used a deep neural network trained for action recognition to analyze video streams. Like [@gucclu2017increasingly], we use a deep neural network trained for action recognition to extract video features and train a linear model to predict brain activity from these features. In contrast, our study is not restricted to dorsal-stream visual areas but involves the whole brain, and the deep neural network we use is pretrained on the largest action recognition dataset available [@kay2017kinetics]. From the different layers of the deep neural network, we build video representations that allow us to segregate (1) occipital and lateral areas of the visual cortex (reproducing the results of [@gucclu2015deep]) and (2) foveal and peripheral areas of the visual cortex. We also introduce an efficient spatial compression scheme for deep video features that allows us to speed up the training of our predictive algorithm. We show that our compression scheme outperforms PCA by a large margin.

Methods
=======

Deep video representation
-------------------------

We use a deep neural network trained for action recognition to build deep representations of the Berkeley Video stimuli [@nishimoto2011reconstructing]. This material consists of more than four hours of color natural movies built by mixing video blocks of $5$-$15$ seconds in a random fashion. The deep network we use is called Temporal Segment Network (TSN) [@wang2016temporal]. Following an idea introduced in 2014 [@simonyan2014two], it was intended to mimic the dorsal and ventral streams by separately processing raw frames and optical flow fields.
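The two-stream idea can be made concrete with a small late-fusion sketch. This is schematic only, not TSN's actual code: the 400-class count matches Kinetics, but the equal fusion weights and the random logits are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_stream_predict(rgb_logits, flow_logits, w_rgb=0.5, w_flow=0.5):
    # Late fusion: average the per-class probabilities produced by the
    # raw-frames stream and the flow-fields stream.
    return w_rgb * softmax(rgb_logits) + w_flow * softmax(flow_logits)

rng = np.random.default_rng(0)
rgb_logits = rng.normal(size=400)   # 400 action classes, as in Kinetics
flow_logits = rng.normal(size=400)
probs = two_stream_predict(rgb_logits, flow_logits)  # sums to 1
```

Each stream sees a different view of the same clip (a single frame vs. a stack of flow fields), and only their class probabilities are combined.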
We chose TSN for our experiments because it uses a much larger number of layers than the original network (which results in higher accuracy in action recognition) and because a version of TSN pretrained on Kinetics – a massive video dataset (300,000 unique video clips, 400 different classes all describing a human action, and at least 400 videos per class) – is publicly available. The network is trained to recognize human actions such as slack-lining, skateboarding, massaging feet, dancing zumba and dining. The version of TSN we use in our experiments is based on Inception v3 [@szegedy2016rethinking] for both streams, where small networks are used as building blocks of the main large network [@lin2013network]. Each stream in the TSN network is composed of more than 40 convolution layers and a fully connected layer. The activities after the last layer represent the probability of belonging to each action class.

Feature extraction
------------------

The raw frames encode information about pixels, and flow fields encode information about pixel displacements. Although the flow fields and raw frames streams do not precisely disentangle spatial content and motion information in videos, we may expect that the raw frames stream better represents local spatial features while the flow fields stream more efficiently conveys dynamic information. Following [@eickenberg2017seeing] we consider that the activation statistics in the first layers (the ones closest to the input) have a low level of abstraction, whereas the last layers (closer to the labels) represent high-level information. Therefore each activity in both streams can be considered as a specific feature or representation of the video. If we were to extract all network activities of the Berkeley Video Dataset we would need to store more than 6 million floats per frame in the dataset. Such a representation would be highly redundant.
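To see why storing everything is impractical, here is a back-of-the-envelope estimate. Only the 6-million-floats-per-frame figure and the roughly four-hour duration come from the text; the frame rate and float width are our assumptions.

```python
# Illustrative storage estimate for keeping every activation of the
# Berkeley Video Dataset. Frame rate and float width are assumptions.
FLOATS_PER_FRAME = 6_000_000
HOURS, FPS = 4, 24            # assumed 24 frames per second
BYTES_PER_FLOAT = 4           # assumed float32

n_frames = HOURS * 3600 * FPS
total_tb = n_frames * FLOATS_PER_FRAME * BYTES_PER_FLOAT / 1e12
print(f"{total_tb:.1f} TB")   # prints "8.3 TB"
```

Even under these conservative assumptions the raw activations run to several terabytes, which motivates the spatial and temporal down-sampling described next.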
In order to keep the volume of data reasonable, in each stream we only focus on four convolutional layers $L_1, L_2, L_3, L_4$ ranked by complexity. We further compress the data using spatial smoothing, and use temporal smoothing so that we get one representation every two seconds of video, which allows us to match the acquisition rate of fMRI scanners.

Regression
----------

Ten subjects were scanned while watching the color natural movies of the Berkeley Video Dataset. The fMRI images were acquired at high spatial resolution (1.5mm), on a Prisma scanner, using multi-band and IPAT accelerations (mb factor=3, ipat=2). These data are part of a large-scale mapping project on a limited number of participants, called Human Brain Charting. Data acquisition procedures and initial experiments run in this project are described in [@ibc]. In order to link extracted deep video features to the internal representation of videos in each subject we use a simple linear model to fit their brain activity in each voxel. The use of a very simple model allows us to posit that the performance of the predictive model from a particular video representation is mostly linked to the suitability of the video representation. Hence the performance of the algorithm can be seen as a measure of the biological suitability of the video representation. We use a kernel ridge regression with a hyper-parameter setting the magnitude of the l2-penalization on the weights. The resulting prediction is obtained using a cross-validation procedure (11 sessions are used for training, 1 for testing, and at least 5 different splits are considered). To set the value of the hyper-parameter, we use a 5-fold cross-validation on the training set and consider 20 different values. During hyper-parameter selection, we only focus on the visual cortex to make this computation efficient. The chosen measure of performance of our prediction algorithm is the coefficient of determination $m_{cv}$.
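The per-voxel fit and score can be sketched as follows. This is a minimal sketch assuming plain linear ridge regression rather than the kernelized version, with illustrative array sizes and a single hyper-parameter value.

```python
import numpy as np

def ridge_fit_predict(X_tr, Y_tr, X_te, alpha):
    # Closed-form ridge regression, fitted jointly for all voxels.
    # X: (time points, features); Y: (time points, voxels).
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
    return X_te @ W

def m_cv(y_pred, y_real):
    # Coefficient of determination, computed independently per voxel.
    ss_res = ((y_pred - y_real) ** 2).sum(axis=0)
    ss_tot = ((y_real - y_real.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

# Synthetic data: 200 time points, 50 video features, 30 "voxels".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
Y = X @ rng.normal(size=(50, 30)) + 0.1 * rng.normal(size=(200, 30))
scores = m_cv(ridge_fit_predict(X[:150], Y[:150], X[150:], alpha=1.0), Y[150:])
n_good = int((scores > 0.1).sum())   # selection metric used in the paper
```

Counting the voxels with a score above 0.1 is the criterion used below to choose the hyper-parameter.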
Let $\mathbf{y}_{pred}$ and $\mathbf{y}_{real}$ be, respectively, the predicted voxel activity and the real voxel activity. Then $$m_{cv}(\mathbf{y}_{pred}, \mathbf{y}_{real}) \;=\; 1 - \frac{\sum_{t=1}^{n_b} (\mathbf{y}_{pred}[t] - \mathbf{y}_{real}[t])^2}{\sum_{t=1}^{n_b}(\mathbf{y}_{real}[t] - \overline{\mathbf{y}_{real}})^2}$$ The metric used to select the best parameter is the number of voxels having a coefficient of determination $m_{cv}$ greater than $0.1$. This procedure leads to different parameter values depending on the chosen layer activities. Figure \[fig:feature\_extraction\] gives an overview of the pipeline used to extract and process deep video features to estimate the brain activity of subjects. ![ Feature extraction and regression scheme: at each time frame we compute and extract the activities of four layers $L_1, \cdots, L_4$ of the Temporal Segment Network on a single frame and on a stack of 5 consecutive optical flow fields. The extracted activities are spatially and temporally down-sampled and then used to predict the brain activity of subjects exposed to the video stimuli.[]{data-label="fig:feature_extraction"}](conceptual_figure.pdf)

Results
=======

The extracted deep network features lead to different prediction performance depending on the down-sampling procedure, the stream used and the localization of target voxels.

An efficient spatial compression scheme
---------------------------------------

We show that preserving the channel structure of the network during the spatial compression procedure is key to an efficient compression scheme. We compare three spatial compression schemes for network activities: (1) standard principal component analysis (PCA) with $2000$ components; the transformation is learned on training sessions before it is applied to all sessions. (2) Average pooling inside channels (APIC), which computes local means of activities located in the same channel.
(3) Average pooling inside and between convolution layers (APBIC), which is used to get the same number of output features for all layers while minimizing the number of convolutions between channels. It allows us to check that the performance of the predictive algorithm is not merely driven by the number of features. The procedure for activity extraction, temporal down-sampling and brain activity prediction is not changed while the spatial compression scheme varies. The benchmark is performed using a leave-one-out cross-validation procedure with two splits in three subjects. Figure \[fig:spatial\_compression\] shows that both approaches preserving the channel organization outperform PCA by a large margin. ![ Comparison of the different neural network compression schemes. The APIC approach slightly outperforms APBIC, and both strongly outperform PCA. When using APIC or APBIC we predict correctly up to 850 times more voxels than when using PCA.[]{data-label="fig:spatial_compression"}](spatial_compression.pdf) These results suggest that data stored in the same channel are similar and that mixing data between channels tends to destroy valuable information. In our pipeline, we average only inside channels (APIC) because it yields the best performance. Choosing APBIC would be trading performance for computation speed since its high compression rate enables a much faster training of the prediction algorithm.

Data-based parcellation of the brain using deep video representation
--------------------------------------------------------------------

Depending on the considered region of the brain, the best fitting representation varies. We show that the compressed activities of different layers show contrasts between low-level (retinotopic) versus high-level (object-responsive) areas, but also between foveal and peripheral areas.
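The channel-preserving pooling (APIC) compared above amounts to block-averaging within each channel, never mixing channels. A minimal sketch, with an illustrative block size and tensor shape:

```python
import numpy as np

def apic_pool(activations, block=2):
    # Average pooling inside channels (APIC): local spatial means are
    # taken within each channel; the channel axis is never mixed.
    # activations: (channels, height, width), height/width divisible by block.
    c, h, w = activations.shape
    return activations.reshape(c, h // block, block, w // block, block).mean(axis=(2, 4))

a = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
pooled = apic_pool(a)   # shape (2, 2, 2): channel axis untouched
```

PCA, by contrast, learns components that mix all channels, which is what the benchmark suggests destroys valuable information.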
The difference between the prediction scores from high-level and low-level layer activity of both streams ($L_4^{flow} - L_2^{flow}$ and $L_4^{rgb} - L_2^{rgb}$) yields a clear contrast between occipital (low-level) and lateral (high-level) areas (see Fig \[fig:L4-L1\]). This highlights a gradient of complexity in neural representation along the ventral stream, which was also found in [@gucclu2015deep]. ![ High-level and low-level area contrasts: difference between prediction scores from high-level and low-level layer activity of the raw frames stream $L_4^{rgb} - L_2^{rgb}$ (top) and flow fields stream $L_4^{flow} - L_2^{flow}$ (bottom). The results show a clear contrast between occipital areas better predicted by lower-level layers (blue) and lateral areas better predicted from the highest-level layers (red), illustrating a gradient of complexity across areas.[]{data-label="fig:L4-L1"}](layer4-layer1-rgb.png "fig:") ![ High-level and low-level area contrasts: difference between prediction scores from high-level and low-level layer activity of the raw frames stream $L_4^{rgb} - L_2^{rgb}$ (top) and flow fields stream $L_4^{flow} - L_2^{flow}$ (bottom). The results show a clear contrast between occipital areas better predicted by lower-level layers (blue) and lateral areas better predicted from the highest-level layers (red), illustrating a gradient of complexity across areas.[]{data-label="fig:L4-L1"}](layer4-layer1-flow.png "fig:") The difference between prediction scores from low-level layer activity of the flow fields stream and high-level layer activity of the raw frames stream ($L_1^{flow} - L_4^{rgb}$) yields a contrast that does not match boundaries between visual areas; instead, it coincides with the retinotopic map displaying preferred eccentricity (see Figure \[fig:eccentricity-comparison\]).
Intuitively this means that regions where brain activity is better predicted from the highest layer of the optical flow fields stream than from the lowest layer of the raw frames stream are involved in peripheral vision, whereas regions where activity is better predicted from the lowest layer of the raw frames stream than from the highest layer of the optical flow fields stream are mainly foveal. ![ The difference between prediction scores from low-level layer activity of the flow fields stream and high-level layer activity of the raw frames stream $L_1^{flow} - L_4^{rgb}$ (top) resembles the preferred eccentricity map of the same subject (bottom). Areas that are better predicted from the low-level flow fields stream are mostly involved in peripheral vision whereas areas better predicted from the high-level raw frames stream are mainly foveal.[]{data-label="fig:eccentricity-comparison"}](layer1-flow-layer4-rgb2.pdf "fig:") ![ The difference between prediction scores from low-level layer activity of the flow fields stream and high-level layer activity of the raw frames stream $L_1^{flow} - L_4^{rgb}$ (top) resembles the preferred eccentricity map of the same subject (bottom).
Areas that are better predicted from the low-level flow fields stream are mostly involved in peripheral vision whereas areas better predicted from the high-level raw frames stream are mainly foveal.[]{data-label="fig:eccentricity-comparison"}](eccentricity.pdf "fig:") We use the contrasts between high-level and low-level layers, and the eccentricity-related contrast, to construct a parcellation of the brain based on these contrasts (see Figure \[fig:resulting-parcelisation.png\]). From the 8 possible resulting profiles, three major clusters stand out, allowing us to successfully depict a clustering of the voxels using contrasts from deep representations of the stimuli. ![Parcellation summarizing artificial-biological correspondences: the set of active voxels was split into subgroups according to their differential response to three contrasts: $L2^{flow} - L4^{flow}$, $L2^{rgb} - L4^{rgb}$, and $L1^{flow}-L4^{rgb}$.
From the 8 possible resulting profiles, 3 major clusters stand out: deep blue, $L2^{flow} > L4^{flow}$, $L2^{rgb} > L4^{rgb}$, and $L1^{flow} < L4^{rgb}$; it corresponds to a voxel set in primary visual areas that has low eccentricity (foveal regions); green, $L2^{flow} > L4^{flow}$, $L2^{rgb} > L4^{rgb}$, and $L1^{flow} > L4^{rgb}$; it corresponds to the same visual areas, but for voxels with higher eccentricity (peripheral voxels); yellow, $L2^{flow} < L4^{flow}$, $L2^{rgb} < L4^{rgb}$, and $L1^{flow} < L4^{rgb}$; it corresponds to lateral visual areas that encode more abstract representations of the objects. []{data-label="fig:resulting-parcelisation.png"}](areas_vertical.png)

Discussion
==========

Reproducing the results of [@gucclu2015deep], we have shown that lateral areas are best predicted by the last layers of both streams whereas occipital areas are best predicted by the first layers of both streams. We have also shown that foveal areas are best predicted by the last layers of the raw frames stream and that peripheral areas are best predicted by the first layers of the flow fields stream. We have introduced a compression procedure for video representations that largely preserves the channel structure of the network, yielding tremendous gains in performance compared to PCA. The linear prediction from deep video features yields prediction scores that are far better than chance. However the TVL1 algorithm [@zach2007duality] used in the TSN network does not produce high quality flow fields. Using more recent algorithms to compute optical flow, such as FlowNet 2 [@ilg2017flownet], our performance could be further improved. The TSN network would have to be retrained though. In contrast to [@gucclu2017increasingly], the data used to train the network are not the same as the data presented to the subjects. We rely in fact on transfer between computer vision datasets and the visual content used for visual stimulation.
This transfer is imperfect: the Berkeley Video Dataset contains videos of landscapes and animated pictures that are not present in the Kinetics dataset, which introduces some noise. In conclusion, our study provides evidence that the role visual areas play in action recognition is linked to their retinotopic organization. Future studies should focus on refining this result by using networks tuned for other tasks.

Acknowledgments
===============

This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
--- abstract: | A strictly empirical review is given of presently available data on the evolution of galaxy morphology. From HST observations of distant galaxies and ground-based observations of nearby ones observed at the same rest-frame wavelength it is found that late-type (Sbc-Sc) galaxies evolve more rapidly with increasing redshift than do early-type (E-Sa-Sab) galaxies. Furthermore the fraction of peculiar objects that cannot be shoehorned into the Hubble tuning-fork classification scheme increases rapidly with redshift. Unexpectedly it is found that, over a wide range of densities, the fraction of barred galaxies is independent of environment. However, this fraction of barred galaxies appears to decline rapidly with increasing redshift. [*Don’t assume anything - Look!*]{}\ Gen. Joe Stilwell\ author: - Sidney van den Bergh title: EVOLUTION OF GALAXY MORPHOLOGY ---

INTRODUCTION
============

Theories of galaxy evolution remain speculative and uncertain. However, strong constraints on such theories are becoming available from the marvelous new imaging data on galaxy morphology at various redshifts that have become available during the last seven years. Such observations show that most star formation in galaxies with $z < 1$ takes place in disks, whereas star formation in objects with $z > 2$ occurs mainly in luminous “blobs” and chaotic structures. Furthermore the typical galaxy at $z > 2$ is, at any given time, undergoing a merger, whereas such major mergers are seen to be relatively rare at $z < 1$. Additionally, late-type galaxies are seen to change their appearance rapidly with increasing redshift, whereas the morphological evolution of early-type galaxies seems to be much slower. Unexpectedly, the fraction of barred galaxies is observed to be a steeply decreasing function of redshift, while the fraction of nearby barred galaxies is found to be almost independent of their environment.

BARRED GALAXIES
===============

Some time ago van den Bergh et al.
(1996) noticed that barred galaxies appeared to be much rarer in the Hubble Deep Field than they are in nearby regions of the Universe. Making detailed corrections for band-shift effects, changes in resolution, and the increase of noise in observations of more distant galaxies, van den Bergh et al. (2002) were able to confirm the reality of this effect. As viewed in rest-frame blue light, the fraction of barred galaxies appears to decrease from 23% at $z = 0.0$ to $\sim4\%$ at $z \sim 0.7$. Possibly this decrease in the fraction of barred galaxies with increasing redshift is due to the fact that young, recently formed, spiral galaxies are still too chaotic (dynamically “hot”) to undergo global bar-like instabilities. Another unexpected effect \[see Table 1\] is that the frequency of bars in disk (S0-Im) galaxies appears to be almost independent of galaxy environment (van den Bergh 2002, and work in preparation).

  Environment       Barred fraction (%)
  ----------------- -------------------
  Nearby field      $25 \pm 3$
  Nearby groups     $19 \pm 4$
  Nearby clusters   $28 \pm 3$
  Coma              $32 \pm 5$

Among 1103 nearby disk galaxies that have been observed with large reflecting telescopes (Sandage & Tammann 1981), it is found that 26 $\pm$ 2% are barred. This value does not appear to differ significantly from the 32 $\pm$ 5% of barred objects among 107 Coma S0-Sc galaxies with $m < 17.0$. Since the fraction of barred objects depends slightly on Hubble type it is, perhaps, fairest to compare the fraction of barred galaxies of types S0 + S0/a in the entire Shapley-Ames catalog directly with the corresponding fraction in the Coma cluster. For the entire Shapley-Ames catalog 25 $\pm$ 4% of 190 S0 + S0/a galaxies are barred, compared to 24 $\pm$ 6% of 62.5 such objects among galaxies with $m < 17.0$ in the Coma cluster. This result suggests that the process that results in the formation of galactic bars is an internal one that is almost independent of galaxy environment.
PECULIAR GALAXIES
=================

A galaxy is defined as being “peculiar” if it differs in some significant way from the prototypes used to define the Hubble tuning-fork classification system. It is one of the beauties of Hubble’s system that most luminous nearby galaxies fit it so well, and do not need to be “shoehorned” into the system. One of the most striking results obtained from the imaging of the Hubble Deep Field (Ferguson, Dickinson & Williams 2000) was that such a large fraction of the HDF(N) galaxies had peculiar morphologies. From my own classifications I find that the overall fraction of peculiar galaxies (as viewed in the rest-frame blue band) increases from 12% at $z = 0.0$ to 46% at $z \sim 0.7$. However, these overall figures are a bit misleading because of the difference that is observed between the way in which early-type and late-type galaxies “age”. At $z \sim 0.7$ only $\sim$5% of E-S0-Sa galaxies appear to be peculiar. For comparison, 69% of Sbc-Sc galaxies are peculiar at $z \sim 0.7$. Taken at face value this result suggests that late-type galaxies have required a longer time to arrive at their present morphology than have systems of early type. Also the nature of the peculiarities seen in early-type galaxies is systematically different from that observed in late-type systems. In distant Sa-Sb spirals the arm structure is generally less well-developed than it is among nearby spirals with $z \sim 0$. On the other hand the peculiarity of late-type spirals at high redshifts is mainly due to the fact that their spiral structure is more chaotic than it is for nearby Sbc and Sc galaxies. A special kind of peculiarity occurs among spirals that are located in dense cluster environments (van den Bergh 1976). As a result of what is nowadays referred to as “galaxy harassment” (Moore et al. 1996) the spiral arms of tidally interacting early-type galaxies (and galaxies in rich clusters) have a “fuzzy” appearance.
\[In extreme cases the spiral structure becomes “anemic”.\] On the other hand the spiral arms of late-type interacting spirals take on a “knotty” morphology, which is presumably due to an enhanced formation rate of clusters and associations. THE MADAU PLOT ============== Perhaps the most striking feature of galaxy morphology and star formation is that most stars at $z < 1$ appear to be forming in disks. On the other hand the majority of stars in objects with $z > 2$ seem to form in luminous clumps or in chaotic structures. Possibly the change in slope of the Madau (1997) plot near $z \sim 1.5$ is due to this transition from chaotic/clumpy star formation at high redshifts to star creation in disks among the majority of nearby galaxies. MERGERS ======= On deep exposures with the HST the surface density of galaxies is high. As a result many galaxies are members of optical, i.e. nonphysical, pairs. One can try to circumvent this problem by only accepting as merger candidates those objects which either (1) exhibit (tidal) distortions, or (2) have physically overlapping main bodies. Adopting this definition it is found that only $\sim$5% of the galaxies in the HDF(N) + HDF(S) that have $z < 1.2$ are merger candidates. On the other hand it turns out that $\sim$57% of the objects with $z > 2.0$ are merger candidates. In other words most galaxies at $z > 2$ are, at any given time, involved in a merger with a luminous (massive) companion, whereas nearby galaxies are typically single. It is noted in passing that only 1.5% of the galaxies with $m < 17.0$ in the Coma cluster appear to be merging (or have double or multiple cores). Presumably this low merger frequency is a direct consequence of the very high (1038 $\pm$ 60 km/s, Colless & Dunn 1996) velocity dispersion in the Coma cluster.
CONCLUSIONS =========== Perhaps the most important insight that has been obtained in recent times is that galaxy morphology depends strongly on both the environment and on the redshifts of galaxies. The Hubble tuning fork classification system is strictly applicable only to nearby galaxies with $z < 0.5$, with bars apparently becoming ever less frequent with increasing redshift. Furthermore, the Hubble system becomes degenerate in very dense environments where the majority of galaxies are of Hubble types E, S0 and SB0. Colless, M., & Dunn, A. M. 1996, ApJ, 458, 435 Ferguson, H. C., Dickinson, M., & Williams, R. 2000, ARAA, 38, 667 Madau, P. 1997, in Structure and Evolution of the Intergalactic Medium From QSO Absorption Lines, Eds. P. Petitjean and S. Charlot, (Paris: Editions Frontières), p. 295 Moore, B., Katz, N., Lake, G., Dressler, A. & Oemler, A. 1996, Nature, 379, 613 Sandage, A. & Tammann, G. A. 1981, A Revised Shapley-Ames Catalog of Bright Galaxies (Washington: Carnegie Institution), p. 91 van den Bergh, S. 1976, ApJ, 206, 883 van den Bergh, S. 2002, AJ, 124, XXXX van den Bergh, S., Abraham, R. G., Ellis, R. S., Tanvir, N. R., & Santiago, B. X. 1996, AJ, 112, 359 van den Bergh, S., Abraham, R. G., Whyte, L. F., Merrifield, M. R., Frogel, J. A., Pogge, R. & Eskridge, P. 2002, AJ, 124, 782
--- abstract: 'We present the detailed bifurcation structure and associated flow patterns near the onset of zero Prandtl number Rayleigh Bénard convection. We employ both direct numerical simulation and a low-dimensional model ensuring qualitative agreement between the two. Various flow patterns originate from a stationary square observed at a higher Rayleigh number through a series of bifurcations starting from a pitchfork followed by a Hopf and finally a homoclinic bifurcation as the Rayleigh number is reduced to the critical value. Global chaos, intermittency, and crises are observed near the onset.' author: - Pinaki Pal - Pankaj Wahi - 'Mahendra K. Verma' - Supriyo Paul - Krishna Kumar - 'Pankaj K. Mishra' title: Bifurcation and chaos in zero Prandtl number convection --- Thermal convection is observed almost everywhere in the universe: industrial appliances, liquid metals, atmosphere, oceans, interiors of planets and stars, galaxies, etc. An idealized version of convection called Rayleigh Bénard convection (RBC) has been studied for almost a century and it is still an area of intense research [@rbc_etc]. The two most important parameters characterizing convection in RBC are the Rayleigh number, describing the vigour of buoyancy, and the Prandtl number, the ratio of kinematic viscosity to thermal diffusivity. Solar [@solar] and geological flows [@geo] are considered to have very low Prandtl numbers, as do flows of liquid metals [@metal]. RBC exhibits a wide range of phenomena including instabilities, patterns, chaos, spatio-temporal chaos, and turbulence for different ranges of Rayleigh number and Prandtl number [@rbc_etc]. The origin of instabilities, chaos, and turbulence is one of the major research topics in convection. Direct numerical simulation (DNS), due to its high dimensionality, generates realistic but excessively voluminous numerical outputs which obscure the underlying dynamics.
Lower dimensional projections lead to models which, if done improperly, lose the overall physics. In this letter, our aim is to unfold and discover the underlying physics of low Prandtl number flows [@lowp] by examining the natural limit of zero Prandtl number (zero-P) [@thual; @spiegel; @kumar1; @busse; @herring; @knobloch; @pal]. This offers a dramatic simplification without sacrificing significant physics, and displays a fascinatingly rich dynamical behaviour. In particular, since zero-P flows are chaotic immediately upon initiation of convection, we adopt a nonstandard strategy of approaching this system from the post-bifurcation direction. Moreover, we attack the problem simultaneously with DNS (to ensure accuracy) as well as a low dimensional model (to aid physical interpretation); and we stringently refine both the model and DNS until satisfactory agreement is obtained at all levels of observed behaviour. Our results show a diverse variety of both new and previously observed flow patterns. These flow patterns emerge as a consequence of various bifurcations ranging from pitchfork, Hopf and homoclinic bifurcations to bifurcations involving double zero eigenvalues. Convection in an arbitrary geometry is quite complex, so researchers have focused on convective flow between two conducting parallel plates called Rayleigh Bénard convection [@rbc_etc]. The fluid has kinematic viscosity $\nu$, thermal diffusivity $\kappa$, and coefficient of volume expansion $\alpha$. The top and bottom plates are separated by distance $d$ and are maintained at temperatures $T_1$ and $T_2$ respectively with $T_1 > T_2$. Convective flow in RBC is characterized by the Rayleigh number $R = \alpha (T_1 - T_2)g d^3/\nu\kappa$, where $g$ is the acceleration due to gravity, and the Prandtl number $P = \nu/\kappa$. Various instabilities, patterns, and chaos, are observed for different ranges of $R$ and $P$ [@rbc_etc; @thual; @pattern].
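As a concrete illustration of these two control parameters (the fluid properties below are rough textbook order-of-magnitude values for mercury, a typical low-Prandtl-number liquid metal; they are not taken from the paper):

```python
# Rayleigh and Prandtl numbers for an illustrative liquid-metal layer.
# All property values are rough order-of-magnitude estimates for mercury.
alpha = 1.8e-4      # volume expansion coefficient [1/K]
nu    = 1.1e-7      # kinematic viscosity [m^2/s]
kappa = 4.4e-6      # thermal diffusivity [m^2/s]
g     = 9.81        # gravitational acceleration [m/s^2]
d     = 0.01        # plate separation [m]
dT    = 1.0         # temperature difference T1 - T2 [K]

R = alpha * dT * g * d**3 / (nu * kappa)   # Rayleigh number (dimensionless)
P = nu / kappa                             # Prandtl number (dimensionless)
print(f"R = {R:.3g}, P = {P:.3g}")         # P ~ 0.025: the low-Prandtl regime
```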
Transitions to chaotic states through various routes have been observed in convection [@expt; @dns]. In this letter, we focus on zero-P convection. The governing zero-P Boussinesq equations [@spiegel] are nondimensionalized using $d$ as the length scale, $d^2/\nu$ as the time scale, and $\nu (T_1 - T_2)/\kappa$ as the temperature scale, which yields $$\begin{aligned} \partial_t(\nabla^2 v_3) &=& \nabla^4 v_3 + R \nabla^2_H \theta \nonumber\\ & & - \hat{\bf e}_3\cdot\nabla\times \left[(\mbox{\boldmath $\omega$}{\cdot}\nabla){\bf v} -( {\bf v}{\cdot}\nabla)\mbox{\boldmath $\omega$} \right], \label{motion}\\ \partial_t \omega_3 &=& \nabla^2 \omega_3 +\left[(\mbox{\boldmath $\omega$}{\cdot}\nabla) v_3 -({\bf v}{\cdot}\nabla)\omega_3\right], \label{vorticity}\\ {\nabla}^2 \theta&=& - v_3,\label{energy}\\ \nabla{\cdot}{\bf v} &=& 0, \label{continuity}\end{aligned}$$ where ${\bf v} \equiv (v_1,v_2,v_3)$ is the velocity field, $\theta$ is the deviation in the temperature field from the steady conduction profile, $\mbox{\boldmath $\omega$} = \nabla\times{\bf v}$ is the vorticity field, $\hat{\bf e}_3 $ is the vertically directed unit vector, and $ \nabla_H^2 = \partial_{xx} + \partial_{yy}$ is the horizontal Laplacian. We consider [*perfectly conducting and free-slip boundary*]{} conditions at the top and bottom plates, and periodic boundary conditions along the horizontal directions [@thual; @dns]. In the following discussion we also use the reduced Rayleigh number $r=R/R_c$, where $R_c$ is the critical Rayleigh number. Straight two-dimensional (2D) rolls that have zero vorticity are a neutrally stable solution of zero-P convection at $r=1$. However, they become unstable for $r>1$. Busse [@busse], Thual [@thual] and Kumar et al. [@kumar1] showed that these 2D rolls saturate through generation of vorticity (wavy rolls) for $r>1$ both for low Prandtl number and zero-P fluids. Thus vorticity plays a critical role in zero-P convection.
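For the free-slip, perfectly conducting boundaries used here, $R_c$ is the classical value $27\pi^4/4 \approx 657.5$, attained at the critical wavenumber $k_c = \pi/\sqrt{2}$ by minimizing the marginal-stability curve $R(k) = (\pi^2 + k^2)^3/k^2$ over the horizontal wavenumber $k$. A quick numerical check of this standard result (not part of the paper):

```python
from math import pi

def R_marginal(k):
    """Marginal-stability Rayleigh number for free-slip plates."""
    return (pi**2 + k**2)**3 / k**2

# Minimize R(k) by a crude scan; the minimum sits at k_c = pi/sqrt(2).
ks = [0.5 + 1e-4 * i for i in range(40000)]
Rc, kc = min((R_marginal(k), k) for k in ks)
print(f"k_c = {kc:.4f} (pi/sqrt(2) = {pi / 2**0.5:.4f})")
print(f"R_c = {Rc:.2f} (27*pi^4/4 = {27 * pi**4 / 4:.2f})")
```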
Herring [@herring] was the first to simulate these equations under free-slip boundary conditions. However, he observed divergence of the solutions, possibly due to the instabilities described above. The first successful simulation of the zero-P equations with free-slip boundary conditions was done by Thual [@thual]. He reported many interesting flow patterns including relaxation oscillation of square patterns (SQOR) and stationary square patterns (SQ). Later Knobloch [@knobloch] explained the stability of the SQ patterns using the amplitude equations. Pal and Kumar [@pal] explained the mechanism of selection of square patterns using a 15-dimensional model. Note that asymmetric squares (referred to as ‘cross rolls’ in the literature) and other patterns have been observed in experiments on low-Prandtl number convection [@expt_pattern; @rbc_etc]. We performed around 100 DNS runs of zero-P convection (Eqs. (\[motion\]-\[continuity\])) using a pseudo-spectral code for various $r$ values on a $64^3$ grid. The aspect ratio of our simulation is $2\sqrt{2} : 2\sqrt{2} : 1$. In DNS we observe stationary squares (SQ), stationary asymmetric squares (ASQ), oscillatory asymmetric squares (OASQ), relaxation oscillations with squares (SQOR), and chaos. For our bifurcation analysis we construct a low-dimensional model using the energetic modes of the above-mentioned simulation in the range of $r = 1 - 1.4$. We pick 9 large-scale vertical velocity modes (real Fourier amplitudes): $W_{101}$, $W_{011}$, $W_{202}$, $W_{022}$, $W_{103}$, $W_{013}$, $W_{112}$, $W_{121}$, $W_{211}$, and 4 large-scale vertical vorticity modes (real Fourier amplitudes): $Z_{110}$, $Z_{112}$, $Z_{121}$, $Z_{211}$. The three subscripts are the indices of the wavenumber along the $x$, $y$, and $z$ directions. The cumulative energy contained in these modes ranges from 85% to 98% of the total energy of DNS, and each of these modes has 1% or more of the total energy. We derive the model equations by the Galerkin projection of Eqs.
(\[motion\]-\[continuity\]) on the subspace of these modes. This results in thirteen coupled first-order ordinary differential equations for the above variables. The low-dimensional model captures all the flow patterns of DNS mentioned above. The ranges of $r$ for these patterns in the model and DNS are shown in Table \[tab:range\_r\_patterns\], and they are reasonably close to each other. Interestingly, the stable steady values of the modes $W_{101}$, $W_{011}$, $W_{112}$, $W_{121}$, $W_{211}$ for SQ and ASQ patterns match the corresponding DNS values to within 10%.

  Flow patterns   r (Model)         r (DNS)
  --------------- ----------------- -----------------
  Chaotic         1 - 1.0045        1 - 1.0048
  SQOR            1.0045 - 1.0175   1.0048 - 1.0708
  OASQ            1.0175 - 1.0703   1.0709 - 1.1315
  ASQ             1.0703 - 1.2201   1.1316 - 1.2005
  SQ              1.2201 - 1.4373   1.2006 - 1.4297

  : Range of reduced Rayleigh number $r$ corresponding to various flow patterns observed in the model and the DNS. Here SQ, ASQ, OASQ, and SQOR represent stationary squares, stationary asymmetric squares, oscillatory asymmetric squares, and relaxation oscillation of squares respectively. []{data-label="tab:range_r_patterns"}

The origin of the above flow patterns can be nicely understood using the bifurcation diagram of the low-dimensional model. To generate the bifurcation diagram, we first evaluate a fixed point numerically using the Newton-Raphson method for a given $r$. The branch of fixed points is subsequently obtained using a fixed arc-length based continuation scheme [@wahi]. Stability of the fixed points is ascertained through an eigenvalue analysis of the Jacobian, and accordingly the bifurcation points are located. New branches of fixed points are born when the eigenvalue(s) become zero (pitchfork), and branches of periodic solutions appear when the eigenvalue(s) become purely imaginary (Hopf). Subsequent branches are generated by calculating and continuing the new steady solutions close to the bifurcation points.
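The Newton-plus-continuation machinery can be illustrated on a toy problem. The sketch below uses naive parameter stepping (the paper's fixed arc-length scheme is needed near folds, which this toy branch does not have) to track the stable branch of the pitchfork normal form $\dot{x} = rx - x^3$; it is purely illustrative and is not the 13-mode model:

```python
def newton_fixed_point(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for a root of f (a fixed point of x' = f(x))."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Pitchfork normal form x' = r*x - x**3: stable branch x* = sqrt(r) for r > 0.
branch = []
x_guess = 1.0                      # seed near the nontrivial branch
for i in range(21):                # naive parameter continuation in r
    r = 1.4 - 0.02 * i             # walk r downwards, as in the text
    x_star = newton_fixed_point(lambda x: r * x - x**3,
                                lambda x: r - 3 * x**2, x_guess)
    branch.append((r, x_star))
    x_guess = x_star               # previous solution seeds the next step

r_end, x_end = branch[-1]          # at r = 1.0 the branch reaches x* = 1.0
print(r_end, x_end)                # x_end ~ sqrt(r_end)
```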
Fixed points are the backbone of the bifurcation diagram. For $r<1$, the origin is the unique stable fixed point corresponding to the pure conduction state. There is a double zero eigenvalue at $r=1$ [@Guck_Holmes], and all the fixed points (13 in number) arising from $r=1$ are unstable for $r>1$. These fixed points are shown as dotted lines in Fig. \[fig:3d\_bifurcation\]. Four of these branches of fixed points bifurcate from the origin; these fixed points satisfy $|W_{101}| = |W_{011}|$. The other 8 branches of unstable fixed points emerge from nonzero $W_{101}$ or $W_{011}$, and they obey $|W_{101}| \ne |W_{011}|$ (see Fig. \[fig:3d\_bifurcation\]). With an increase of $r$, these 8 branches become stable and merge with the 4 branches that originate from the origin. In Fig. \[fig:3d\_bifurcation\], only the modes $W_{101}$ and $W_{011}$ are presented, even though all the other modes are also nonzero. ![Three dimensional view of the bifurcation diagram showing the fixed points with solid and dashed curves representing the stable and unstable fixed points respectively. Black, blue, and cyan curves represent stationary squares (SQ), asymmetric stationary squares (ASQ), and conduction state respectively. All the points on the axis (purple lines) are 2D roll solutions.[]{data-label="fig:3d_bifurcation"}](3dbifurcation.eps){height="7cm" width="8.5cm"} Having discussed the fixed points, we now focus on the complete bifurcation diagram shown in Fig. \[fig:bifurcation\]. Chaotic solutions are observed at the onset of convection itself, i.e., just above $r=1$. A better insight into the origin of the various solutions is facilitated by starting the analysis at a higher $r$ value and tracking the various bifurcations while approaching $r=1$. We start our analysis at $r=1.4$ where we observe stable symmetric squares ([SQ]{}) with $|W_{101}| = |W_{011}|$ (black curve in Fig. \[fig:3d\_bifurcation\]). In Fig. \[fig:bifurcation\] we represent only the $W_{101} = W_{011}$ solution.
As $r$ is reduced from $1.4$, the SQ branch of fixed points loses stability via a supercritical pitchfork bifurcation at $r \approx 1.2201$, after which we observe stationary solutions with $W_{101} \ne W_{011}$ (blue curves of Figs. \[fig:3d\_bifurcation\] and \[fig:bifurcation\]). These solutions correspond to asymmetric square patterns (ASQ), either dominant along the x axis ($|W_{101}| > |W_{011}|$) or dominant along the y axis ($|W_{101}| < |W_{011}|$). The SQ solution with $|W_{101}| = |W_{011}|$ continues as a saddle. With a further reduction of $r$, the ASQ branches lose stability through a supercritical Hopf bifurcation at $r \approx 1.0703$ and limit cycles are born. These limit cycles are represented by red curves in Fig. \[fig:bifurcation\]. Physically they represent oscillatory asymmetric square patterns (OASQ). Fig. \[fig:limit\_cycle\](a) illustrates the projection of two of these stable limit cycles (for $r=1.0494$) on the $W_{101}-W_{011}$ plane. The limit cycles grow in size as $r$ is lowered. A homoclinic orbit is formed at $r \approx 1.0175$. Afterwards, homoclinic chaos is observed in a narrow window. At lower $r$, the attractor becomes regular, resulting in a larger limit cycle that corresponds to the relaxation oscillations with an intermediate square pattern (SQOR). Fig. \[fig:limit\_cycle\](b) illustrates the projection of this limit cycle at $r=1.0099$. The flow pattern in this regime changes from an approximate pure roll in one direction to a symmetric square, and then to an approximate pure roll in the perpendicular direction. The SQOR solution is represented by the green curve in Fig. \[fig:bifurcation\]. The flow becomes chaotic as $r \rightarrow 1$. The chaotic flow manifests itself in three different forms: Ch1, Ch2, and Ch3, as shown in the inset of Fig. \[fig:bifurcation\] as (i), (ii), and (iii) respectively. The phase space projections for these three solutions are depicted in Fig.
\[fig:chaos\] for $r=1.0041$, 1.0038 and 1.0030 for the 13-mode model, and for $r=1.0045$, 1.0032 and 1.0023 in the DNS. Their chaotic nature is confirmed by the positivity of the largest Lyapunov exponents (0.0131, 0.0254 and 0.0389) calculated using the 13-mode model for $r=1.0041$, 1.0038 and 1.0030 respectively. The first chaotic solution, Ch1 (Figs. \[fig:chaos\](a) and \[fig:chaos\](b)), results from the broadening of the limit cycle attractor with chaotic switching between the two lobes of the attractor. This global chaos could probably be attributed to homoclinic tangles. The time-series of the solution shows intermittency. At $r \approx 1.004$, these four chaotic attractors of Ch1 merge in a ‘crisis’ to yield a single large chaotic attractor Ch2 (Figs. \[fig:chaos\](c) and \[fig:chaos\](d)). This chaotic solution persists until $r \approx 1.0035$, after which it splits into four different chaotic attractors Ch3 in another ‘crisis’, one of which is shown in Figs. \[fig:chaos\](e) and \[fig:chaos\](f). The time-series again shows intermittency, and the flow pattern switches from an approximately pure roll in one direction to an asymmetric square. With a further reduction in the Rayleigh number, the size of these chaotic attractors decreases and they ultimately merge with one of the branches of the unstable ASQ fixed points at $r=1$. In Fig. \[fig:bifurcation\], we exhibit the merger of one of these chaotic attractors with the unstable ASQ fixed point with $W_{101} \rightarrow 0$. In conclusion, we present for the first time a numerically obtained, DNS-validated, detailed bifurcation diagram and associated flow structures of zero-P convective flow near the onset of convection. The whole spectrum of phenomena observed in DNS near the onset of convection is replicated by the low-dimensional model. Hence, the bifurcation structure presented here explains the origin and dynamics of various patterns near the onset of convection.
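Largest Lyapunov exponents such as those quoted above are obtained by averaging the local expansion rate along a trajectory. A minimal sketch of the idea on the logistic map, whose exponent at $a = 4$ is known analytically to be $\ln 2$ (illustrative only; this is not the 13-mode model computation):

```python
from math import log

def largest_lyapunov_logistic(a=4.0, x0=0.2, n_transient=1000, n_iter=200000):
    """Average of log|f'(x)| along an orbit of the map f(x) = a*x*(1-x)."""
    x = x0
    for _ in range(n_transient):            # discard the transient
        x = a * x * (1.0 - x)
    s = 0.0
    for _ in range(n_iter):
        s += log(abs(a * (1.0 - 2.0 * x)))  # local stretching rate log|f'(x)|
        x = a * x * (1.0 - x)
    return s / n_iter

lam = largest_lyapunov_logistic()
print(lam)   # ~ ln 2 ~ 0.693; positive, hence chaotic
```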
Recent analysis of VKS (Von-Karman-Sodium) experimental results indicates a strong role of large-scale modes in the magnetic field reversal [@VKS]. A study of large-scale modes as outlined in this letter may provide useful insights into the mechanism behind the generation and reversal of the magnetic field. The dynamics of large-scale modes in other hydrodynamic systems like rotating turbulence, magneto-convection, etc. could also be captured by a similar approach. A careful analysis of the DNS results indicates the existence of fringe attractors apart from the main attractors presented in this letter. These new attractors are relatively insignificant and they exist only in localized regimes of $r$. A detailed investigation of all the attractors will be reported elsewhere. Bifurcation analysis for $r>1.4$ is reasonably complex as well. Also, preliminary results show a reasonable amount of similarity between low Prandtl number convection and zero-P convection. These issues are under investigation. We thank S. Fauve, A. Chatterjee, A. K. Mallik, and V. Subrahmanyam for useful discussions. We thank Computational Research Laboratory, India for providing us access to the supercomputer EKA where part of this work was done. This work was supported by the research grant of Department of Science and Technology, India. [99]{} S. Chandrasekhar, Hydrodynamic and Hydromagnetic Stability (Cambridge University Press, Cambridge, 1961); F. H. Busse, in [*Hydrodynamic Instabilities and the Transition to Turbulence*]{}, edited by H. L. Swinney and J. P. Gollub, [Topics in Appl. Phys.]{}, [**45**]{} (Springer, Berlin, 1985), pp. 97 - 137; P. Manneville, [*Instabilities, Chaos and Turbulence*]{}, (Imperial College Press, London, 2004); G. Ahlers, S. Grossmann, and D. Lohse, Rev. Mod. Phys. [**81**]{}, 503 (2009). R. F. Stein and A. Nordlund, Solar Physics [**192**]{}, 91 (2000); F. Cattaneo, T. Emonet, and N. Weiss, Astrophysical J. [**588**]{}, 1183 (2003). F. H.
Busse, in [*Fundamentals of thermal convection, Mantle Convection, Plate tectonics and Global Dynamics*]{}, edited by W. R. Peltier, (Gordon and Breach, 1989), pp. 23-95. P. A. Davidson, [*An Introduction to Magnetohydrodynamics*]{}, (Cambridge University Press, Cambridge, 2001). M. R. E. Proctor, J. Fluid Mech. [**82**]{}, 97 (1977); R. M. Clever and F. H. Busse, J. Fluid Mech. [**102**]{}, 61 (1981); I. G. Kevrekidis et al., Physica D [**71**]{}, 342 (1994); E. Bodenschatz, W. Pesch, and G. Ahlers, [Annu. Rev. Fluid Mech.]{}, [**32**]{}, 709 (2000). O. Thual, [J. Fluid Mech.]{} [**240**]{}, 229 (1992). E. A. Spiegel, [J. Geophys. Res.]{} [**67**]{}, 3063 (1962). K. Kumar, S. Fauve, and O. Thual, [J. Phys. II]{} (France) [**6**]{}, 945 (1996); K. Kumar, [*Woods Hole Oceanogr. Inst. Tech. Rep.*]{} WHOI-[**90-01**]{}, (1990) (unpublished). F. H. Busse, [J. Fluid Mech.]{} [**52**]{}, 97 (1972). J. R. Herring, [*Woods Hole Oceanogr. Inst. Tech. Rep.*]{} WHOI-[**70-01**]{}, (1970). E. Knobloch, [J. Phys. II France]{} [**2**]{}, 995 (1992). P. Pal, and K. Kumar, [Phys. Rev. E]{} [**65**]{}, 047302 (2002). M. C. Cross and P. C. Hohenberg, [Rev. Mod. Phys.]{} [**65**]{}, 851 (1993); Y. C. Hu, R. Ecke and G. Ahlers, [Phys. Rev. Lett.]{} [**72**]{}, 2191 (1994); J. Liu and G. Ahlers, [Phys. Rev. Lett.]{} [**77**]{}, 3126 (1996); D. A. Egolf [*et al.*]{}, Nature [**404**]{}, 733 (2000); K. Kumar, S. Chaudhuri, and A. Das, [Phys. Rev. E]{} [**65**]{}, 026311 (2002). J. Maurer and A. Libchaber, [J. Physique Lett.]{}, [**41**]{}, 515 (1980); P. Berge [*et al.*]{}, [J. Physique Lett.]{}, [**41**]{}, 341 (1980); J. P. Gollub and S. V. Benson, [J. Fluid Mech.]{}, [**100**]{}, 449 (1980); M. Giglio, S. Musazzi and U. Perini, [Phys. Rev. Lett.]{}, [**47**]{}, 243 (1981); A. Libchaber, C. Laroche and S. Fauve, [J. Physique Lett.]{}, [**43**]{}, 211 (1982). M. Meneguzzi [*et al.*]{}, [J. Fluid Mech.]{} [**182**]{}, 169 (1987). V. Croquette, Contemporary Physics [**30**]{}, 113 (1989); V.
Croquette, Contemporary Physics [**30**]{}, 153 (1989). K. Nandakumar, and A. Chatterjee, [Nonlin. Dynam.]{} [**40**]{}, 143 (2005); P. Wahi, and A. Chatterjee, [Int. J. Nonlin. Mech.]{} [**43**]{}(2), 111 (2008). J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (New York, Springer, 1983). F. Pétrélis et al., [Phys. Rev. Lett.]{} [**102**]{}, 144503 (2009).
--- author: - 'Ferrari, C.' - 'Arnaud, M.' - 'Ettori, S.' - 'Maurogordato, S.' - 'Rho, J.' date: 'Received 1 August 2005; accepted 8 September 2005' title: Chandra observation of the multiple merger cluster Abell 521 --- Introduction ============ In the concordance cosmological model ($\Lambda$CDM, $\Omega_{m}$=0.3 and $\Omega_{\Lambda}$=0.7), small structures are the first to form; they then merge, giving rise to more and more massive systems in a hierarchical way. Both numerical and observational results show that galaxy clusters form and evolve through the merging of sub-clusters and groups of galaxies along filamentary structures (e.g. West et al. 1995; Bertschinger 1998; Durret et al. 1998; Arnaud et al. 2000; Bardelli et al. 2000; Borgani et al. 2004; Adami et al. 2005). Combined optical and X-ray studies have been particularly successful in revealing the dynamics of merging clusters (Flores et al. 2000; Henriksen et al. 2000; Donnelly et al. 2001; Bardelli et al. 2002; Barrena et al. 2002; Czoske et al. 2002; Rose et al. 2002; Valtchanov et al. 2002; Boschin et al. 2004; Belsole et al. 2005; Demarco et al. 2005; Durret et al. 2005; Ferrari et al. 2005). This field is more and more active since precise spectro-imaging data in X-rays are now available with Chandra and XMM, allowing one to derive high resolution temperature and density maps, in which very typical signatures of merging events have been detected, such as strong temperature and density variations (Markevitch & Vikhlinin 2001; Belsole et al. 2004, 2005; Henry et al. 2004; Durret et al. 2005), bow shocks (Markevitch et al. 2002; Markevitch et al. 2005) and cold fronts (Markevitch et al. 2000; Vikhlinin et al. 2001; Mazzotta et al. 2001; Sun et al. 2002; Dupke et al. 2003). Detailed multi-wavelength studies of galaxy clusters are essential to determine the scenario of their formation, and to analyse the complex physical processes acting during their evolution.
Abell 521 (z=0.247) is a relatively rich (R=1) cluster of Bautz-Morgan Type III (Abell 1958; Abell et al. 1989). After its first detection in X-rays with HEAO1 (Johnson et al. 1983; Kowalski et al. 1984), the dynamical state of A521 has been investigated in detail through a combined X-ray and optical analysis (Arnaud et al. 2000; Maurogordato et al. 2000; Ferrari et al. 2003). A severe segregation between the gas and galaxy distributions was detected. ROSAT/HRI observations revealed the presence of two peaks of X-ray emission, associated with a diffuse main cluster and with a compact, less massive group (Arnaud et al. 2000). Unlike what is usually observed in relaxed systems, the Brightest Cluster Galaxy (BCG) is not located at the barycentre of A521, but in the compact sub-group, with a surprising offset from its X-ray peak. The galaxy isodensity map in the central 20’${\times}$20’ field of A521 has a very irregular and strongly sub-clustered morphology. Its general structure follows a NW/SE direction, crossed by a perpendicular high density ridge of galaxies in the core region (Arnaud et al. 2000; Ferrari et al. 2003). The analysis of the dynamical and kinematic properties of more than one hundred cluster members confirmed that A521 is far from dynamical equilibrium: its radial velocity distribution, significantly different from a Gaussian and characterised by a very high dispersion ($1325^{+145}_{-100}$ km/s), is typical of merging systems (Ferrari et al. 2003). A detailed dynamical analysis revealed at least two distinct and non-contemporaneous episodes of merging: a) a dynamically bound complex of galaxies, hosting the BCG and corresponding to the compact group detected in X-rays, is currently infalling in the plane of the sky toward the centre of the main cluster, and b) two or more sub-clusters have recently collided along the over-dense central [*ridge*]{}, with a collision axis nearly along the line of sight (Ferrari et al. 2003).
The recent analysis by Umeda et al. (2004) of the A521 ${\rm H}_{\alpha}$ luminosity function showed that this cluster contains more currently star-forming galaxies than local clusters, consistent with the observed Butcher-Oemler effect. The excess of star formation (SF) can be at least partly related to the particular dynamical state of A521, since an increase of SF has been observed in several merging systems (e.g. Gavazzi et al. 2003; Poggianti et al. 2004; Ferrari et al. 2005). The complex dynamical state of A521 and its unique morphological features motivated our Chandra observations with the aim of better characterising the physics of this exceptional cluster. In this paper the Chandra data are analysed. Sect. \[ObsData\] briefly describes the observations and the data reduction. In Sect. \[morphology\] we study the X-ray morphology and the temperature structure of A521. Results are discussed in Sect. \[disc\] and summarised in Sect. \[summ\]. As in Ferrari et al. (2003), all numbers are expressed as a function of $h_{75}$, the Hubble constant in units of 75 km ${\rm s^{-1}~Mpc^{-1}}$. We have used the $\Lambda$CDM model with $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$, thus 1 arcmin corresponds to $\sim$0.217 Mpc in the following. Observations and data reduction {#ObsData} =============================== A521 was observed with ACIS-I and ACIS-S in “VFAINT” mode. The datasets were processed and cleaned using the CIAO 3.2 software and calibration files in CALDB 3.0.0. The first exposure was done on Dec 23, 1999 with ACIS-I and a focal plane temperature of $-110^{\circ}$C, for an effective exposure time of 38.0 ksec after standard cleaning (88% of the nominal exposure time). On Oct 13, 2000, a second exposure of 41 ksec was done with ACIS-S and a focal plane temperature of $-120^{\circ}$C. After cleaning the light curve of the several flares present by requiring a mean count rate of 0.085 cts/s, an exposure of 18.4 ksec is obtained.
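The quoted scale of $\sim$0.217 Mpc per arcmin follows from the angular diameter distance at $z = 0.247$ in the adopted cosmology ($H_0 = 75$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$). A quick check with Simpson-rule integration of the comoving distance (not from the paper):

```python
from math import sqrt, pi

def angular_scale_mpc_per_arcmin(z, h0=75.0, om=0.3, ol=0.7, n=1000):
    """Proper length subtended by 1 arcmin at redshift z in flat LCDM."""
    c = 299792.458                       # speed of light [km/s]
    f = lambda zz: 1.0 / sqrt(om * (1.0 + zz)**3 + ol)   # 1/E(z)
    h = z / n                            # composite Simpson rule (n even)
    integral = f(0.0) + f(z)
    for i in range(1, n):
        integral += (4 if i % 2 else 2) * f(i * h)
    integral *= h / 3.0
    d_c = (c / h0) * integral            # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)                # angular diameter distance [Mpc]
    return d_a * (pi / 180.0 / 60.0)     # 1 arcmin expressed in radians

print(angular_scale_mpc_per_arcmin(0.247))   # -> ~0.217 Mpc per arcmin
```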
Results {#morphology} ======= X-ray morphology ---------------- The raw image of the A521 diffuse emission is presented in Fig. \[strucKT\]. The cluster shows two high-density regions, which we will call clumps [*A*]{} and [*B*]{} in the following. They correspond respectively to the Northern group and the central part of the main cluster identified in the ROSAT image (Arnaud et al. 2000). The new observations uncover the presence of several features inside and around each of the two clumps, as appears more clearly in Fig. \[morf1\], which represents a smoothed image of the cluster central field ($7{\times}7~{\rm arcmin}^2$) in the 0.5-5 keV energy band. In order to get a smoothed image of the diffuse emission of the cluster, the programme “mrp\_filter” of the package “MR/1 Multiresolution Analysis” (Stark, Murtagh & Bijaoui 1998) has been applied. The programme performs wavelet filtering of images with Poisson noise. We used a significance level of $10^{-4}$, corresponding to a 3.7 $\sigma$ Gaussian detection level. The image has been thresholded and reconstructed such that both point sources and the background are excluded, and it has been exposure corrected. In Fig. \[morf2\] the X-ray contours are overlaid on the X-ray (top) and I-band (bottom) images of the cluster. The observations reveal a structure of the A521 diffuse emission that is even more complicated and irregular than the morphology obtained through the ROSAT observations (Arnaud et al. 2000), confirming that this cluster is out of hydrostatic equilibrium. The general X-ray structure of A521 is elongated along the axis joining the two main X-ray peaks ([*SX2*]{} in Fig. \[morf2\]). The green line in the bottom panel of Fig. \[morf2\] shows the direction followed by the general structure of the cluster at optical wavelengths ([*S2*]{} in Ferrari et al. 2003).
A deeper optical analysis of the alignment effects in A521 revealed that the NW/SE direction indicated in green is the preferred one for the formation of the cluster, since it is the main elongation axis of a) the brightest cluster galaxies, b) the main sub-structures detected in the red-sequence iso-density map, and c) the general cluster structure out to $\sim$5 ${h}^{-1}$ Mpc (Plionis et al. 2003). A slight misalignment is present between the main axes of the X-ray and optical emission ([*SX2*]{} and [*S2*]{}). The X-ray peak of the main cluster (labelled [*X*]{} in Fig. \[morf2\]) is close to the second brightest cluster galaxy and its position corresponds to the barycentre of the optical emission of A521. South of [*SX1*]{}, the axis perpendicular to [*SX2*]{} (see Fig. \[morf2\]), the cluster appears rather relaxed with quite regular isophotes (Figs. \[morf1\] and \[morf2\]). The Northern part (North of [*SX1*]{}) is much more complex, as it shows evidence of several sub-structures and elongations in the ICM distribution. The most prominent structure is the clump [*A*]{} (see Figs. \[strucKT\] and \[morf2\]). It is brighter than the equivalent region centred on the main cluster and roughly elliptical in shape, with a main axis along the [*SX2*]{} direction. Two internal sub-structures are detected, a northern clump and a southern one, labelled respectively [*A\_N*]{} and [*A\_BCG*]{} in Fig. \[morf2\]. The brightest peak of clump [*A*]{}, located inside the [*A\_BCG*]{} substructure, is clearly centred on the BCG position. The substructure [*A\_BCG*]{} shows an elongation more toward the North-West with respect to the [*SX2*]{} direction, being in fact more aligned with the main axis of the optical distribution and of the BCG ([*S2*]{} in Ferrari et al. 2003, green line in Fig. \[morf2\]). A compression of the X-ray isophotes is also observed to the South of the BCG along the [*S2*]{} direction.
The clump [*A\_BCG*]{} shows a secondary X-ray peak centred on one of the blobs of the optical arc-like structure surrounding the BCG ([*A\_g*]{} in Fig. \[morf2\]). The observations of Ferrari et al. (2003) have shown that all of these blobs are at the same redshift as the cluster, and that they could be galaxies falling onto the brightest cluster object. [*A\_g*]{} is likely to be point-like emission from one of these galaxies. The [*A\_N*]{} structure is centred on a bright object (a star, based on the spectral analysis by Ferrari et al. 2003), but its X-ray emission is extended. The northern part of the whole substructure [*A*]{} ([*A\_N*]{} and surrounding regions) could therefore be a tail of gas of the clump [*A\_BCG*]{}. Two other less prominent substructures are present in the North. First, a North-East clump which appears as an excess of emission east of clump [*A*]{} ([*NE*]{} ellipse in Fig. \[morf2\]). Second, we observe an elongation of the X-ray isocontours on the West side of the main cluster X-ray peak towards a sub-structure (labelled [*W*]{} in Fig. \[morf2\]) at $\sim$2.5 arcmin from it in a North-West direction. Fig. \[morf2\] shows that several galaxies are concentrated in the [*W*]{} region, i.e. 8 quite bright objects (${\rm I}_{AB}$=18.5-19) and several faint ones. Spectroscopic observations reveal that 3 of them are confirmed cluster members, but note that in this region the spectroscopic observations are complete at no more than the 50-60% level (Ferrari et al. 2003). The X-ray morphology of A521 is therefore regular South of the [*SX1*]{} direction, but strongly substructured in the northern part. This is even clearer in Fig. \[residuals\], which shows the residuals obtained after subtracting in each region the emission from the symmetric region with respect to the X-ray peak of the main cluster (labelled [*X*]{} in Fig. \[morf2\]). In this way we subtract in the North the corresponding ’unperturbed’ part of the main cluster, as measured in the South.
Clumps [*A*]{}, [*W*]{} and [*NE*]{} emerge clearly North of [*SX1*]{} (with a possible tail of X-ray emission towards South-East for the substructure [*W*]{}). Finally, the observations revealed two sharp edges in the X-ray surface brightness in the region to the North of the BCG (labelled [*A1*]{} and [*A2*]{} in Fig. \[strucKT\]), where the brightness changes by a factor of 2 over scales shorter than $10$ arcsec (see Fig. \[A1A2\]). To emphasise the departure from a smooth, spherically symmetric emission, we modelled the surface brightness of the region with two $\beta$-models, one obtained as the best fit to the radial profile extracted from the Northern (P.A.=$[-90^o, 90^o]$) semicircle centred on the BCG at (RA, Dec)=(04:54:06.9, -10:13:20), the second one extracted from the Southern semicircle centred at (RA, Dec)=(04:54:07.9, -10:14:27). Then we computed the surface brightness profile in two strips crossing the two edges, both in the $(0.5-5)$ keV exposure-corrected image and in the mock two-dimensional emission obtained as the sum of the two $\beta$-models. The mock profile is indicated by dashed lines in Fig. \[A1A2\]. These edges are located perpendicular to the main X-ray axis [*SX2*]{} and trace the Northern boundary of the clump [*A*]{} (see Fig. \[strucKT\]). In summary, we distinguish the following main structures in the X-ray observation of A521:

- [*clumps A and B*]{}: the two main clumps in the X-ray emission of A521. Clump [*B*]{} is the central part of the main cluster, which seems to be nearly unperturbed in the southern region. Clump [*A*]{} is a northern sub-cluster centred on the BCG;

- [*SX1, SX2, S2*]{}: [*SX2*]{} is the axis connecting the X-ray peaks of the main cluster and of the northern sub-cluster (with a position angle of 163$^\circ$[^1]), [*SX1*]{} its perpendicular direction, [*S2*]{} the optical main elongation axis;

- [*A\_BCG*]{}, [*A\_N*]{} and [*A\_g*]{}: X-ray structures inside the Northern sub-cluster [*A*]{}.
[*A\_BCG*]{} is centred on the BCG galaxy and [*A\_g*]{} is associated with one of the optical knots surrounding the BCG;

- [*W*]{}: substructure present at 2.5 arcmin North-West of the X-ray peak of the main cluster;

- [*NE*]{}: substructure located North-East of clump [*A*]{};

- [*A1*]{} and [*A2*]{}: two arc-like edges detected in the X-ray brightness map North of the clump [*A*]{}.

The spherical symmetry and hydrostatic equilibrium assumptions are clearly not valid in the case of A521 due to its very disturbed morphology. No gas density and temperature profiles or total mass profiles will therefore be presented in the following.

Temperature analysis
--------------------

### Temperature map from hardness ratio

Fig. \[hr\] shows a temperature map of A521 obtained using the hardness ratio technique. Images of the region covered by the ACIS-S3 field of view have been extracted in the energy bands 0.5-2 keV and 2-5 keV from the ACIS-S and ACIS-I event files. Point sources have been detected and removed using the CIAO tools [wavdetect]{} and [dmfilth]{}. Each image has been background-subtracted using blank-field data and corrected for vignetting effects and exposure variations. The resulting ACIS-S and ACIS-I count rate images in each energy band have then been added and adaptively smoothed using the CIAO tool [csmooth]{}. The smoothing scales were defined from the raw ACIS-(S+I) image in the $2-5$ keV energy band with a minimum significance of $4~\sigma$ and a maximum significance of $5~\sigma$. The 2-5 keV smoothed image has been divided by the corresponding 0.5-2 keV image to obtain a hardness ratio map, which has been converted into a temperature map. The theoretical conversion factors have been computed using an absorbed thermal model ([tbabs (mekal)]{} in XSPEC 11.3.1) with a column density fixed to the Galactic value of $5.79{\times}10^{20}$ cm$^{-2}$, a redshift of 0.247 and an abundance fixed to 0.4, convolved with the instrument responses.
Since the images have been corrected for vignetting effects, we used the on-axis Auxiliary Response and Redistribution Matrix files, obtained according to the period and configuration of the observation. An inverse edge to account for the underestimate of the effective area around 2 keV is also applied (see Vikhlinin et al. 2005). A521 is clearly characterised by a highly sub-structured temperature map, which presents:

- a cold region ($T\leq$5 keV) in the North-East (labelled [*NE\_T*]{}) around the [*NE*]{} substructure;

- a cold substructure corresponding to the region of the BCG ([*A\_BCG*]{}). It is surrounded by an annulus of warmer gas ([*A\_NBCG*]{}), which shows higher temperatures in its northern and eastern parts;

- a central hot region ($\sim$6 to 8 keV) that, starting from the East, runs roughly parallel to the [*SX1*]{} direction ([*Ridge E*]{}) and reaches a maximum in a very hot central peak ([*Central*]{});

- a gradual decrease of temperature in the South-West sector ([*Sect 1*]{} and [*Sect 2*]{}). The temperature gradient is less pronounced in the South-East sector, with a possible cold substructure South-East of the cluster barycentre ([*SE*]{}).
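As an illustration of the hardness-ratio technique described above, the inversion from a band ratio to a temperature can be sketched as follows. This is only a schematic example: the conversion table is a made-up monotone placeholder, not a real [tbabs (mekal)]{} calibration folded through the ACIS responses.

```python
import numpy as np

# Placeholder conversion curve HR(T) for an absorbed thermal model.
# These numbers are illustrative only, NOT real ACIS calibration values;
# in practice they come from folding tbabs(mekal) through the responses.
T_GRID = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])   # keV
HR_GRID = np.array([0.18, 0.26, 0.33, 0.39, 0.44, 0.48, 0.51, 0.54, 0.56])

def temperature_map(img_soft, img_hard):
    """Turn smoothed 0.5-2 keV and 2-5 keV images into a temperature map
    by inverting the monotone HR(T) relation with linear interpolation."""
    hr = np.asarray(img_hard, dtype=float) / np.asarray(img_soft, dtype=float)
    return np.interp(hr, HR_GRID, T_GRID)
```

Note that `np.interp` clips outside the tabulated range, so pixels harder than the last grid point simply saturate at the hottest tabulated temperature.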
### Spectral analysis

--------------- ----------------- ------------------ ------------------ ------------------ ------------------- -------------------
Region          $T$               $Z$                $f_X$(0.5-2 keV)   $f_X$(bol)         $L_X$(0.5-2 keV)    $L_X$(bol)
                (keV)             ($Z_{\odot}$)      ($10^{-12}$        ($10^{-12}$        ($10^{44}$ erg/s)   ($10^{44}$ erg/s)
                                                     erg/s/${\rm        erg/s/${\rm
                                                     cm}^2$)            cm}^2$)
Whole cluster   $5.85{\pm}0.23$   $0.55{\pm}0.08$    $2.03{\pm}0.03$    $6.47{\pm}0.13$    $3.42{\pm}0.05$     $12.94{\pm}0.21$
Clump A         $5.40{\pm}0.50$   $0.50{\pm}0.22$    $0.35{\pm}0.02$    $1.06{\pm}0.07$    $0.60{\pm}0.04$     $2.16{\pm}0.13$
Clump B         $5.79{\pm}0.56$   $0.40{\pm}0.21$    $0.38{\pm}0.02$    $1.18{\pm}0.08$    $0.64{\pm}0.04$     $2.38{\pm}0.14$
--------------- ----------------- ------------------ ------------------ ------------------ ------------------- -------------------

: Temperatures, abundances, fluxes and rest-frame luminosities of the whole cluster and of the two main clumps [*A*]{} and [*B*]{}.[]{data-label="tab:spectral"}

Spectra in different regions of interest were extracted, together with the corresponding Auxiliary Response and Redistribution Matrix files. As above, an absorbed thermal model is used to fit the accumulated spectra in bins with a minimum of 20 counts, with the column density fixed to the Galactic value. An inverse edge to account for the underestimate of the effective area around 2 keV is also applied (see Vikhlinin et al. 2005). The temperatures, abundances, fluxes and rest-frame luminosities of the whole cluster (big circle in Fig. \[strucKT\][^2]) and of the two main clumps [*A*]{} and [*B*]{} (smaller circles in Fig. \[strucKT\]) are listed in Table \[tab:spectral\]. The quoted values have been derived using ACIS-I data, for which a local background can be estimated. They do not show significant variations when blank-field backgrounds are used, validating the use of blank-field data for the temperature map. The spectroscopic temperatures ($T_{\rm spec}$) of the different regions identified in the temperature map (see previous section) have also been estimated. The ACIS-I and ACIS-S spectra were fitted simultaneously[^3].
These temperatures are compared in Table \[tab:specTOT\] with the temperatures obtained using the hardness ratio technique. The latter ($T_{\rm HR}$) have been estimated using both the mean value from the temperature map and the ratio of the count rates in the smoothed images used to derive the temperature map, these two methods giving results which always differ by less than $0.1$ keV. The temperatures derived from the spectral analysis are in good agreement, within the $1\sigma$ errors, with the values derived from the hardness-ratio technique (see Table \[tab:specTOT\]), with no systematic differences. The largest discrepancy ($\sim 1.1\sigma$) is observed for the region around the BCG ([*A\_BCG*]{}), which is even colder in the spectroscopic analysis. The spectral analysis clearly confirms: a) the low temperature of the clump centred on the BCG, which is surrounded by a hot region, b) the presence of a hot central bar elongated in an East/West direction with a maximum in the central region, and c) the cold temperature of the North-East part of the cluster.

--------------- ----------------- --------------
Region          $T_{spec}$        $T_{\rm HR}$
                (keV)             (keV)
[*NE\_T*]{}     $4.10{\pm}0.66$   4.5
[*A\_BCG*]{}    $4.25{\pm}0.80$   5.1
[*A\_NBCG*]{}   $6.51{\pm}0.89$   6.0
[*Ridge E*]{}   $5.26{\pm}1.75$   6.6
[*Center*]{}    $7.44{\pm}1.19$   7.4
[*Sect 1*]{}    $5.77{\pm}0.68$   5.9
[*Sect 2*]{}    $4.77{\pm}0.66$   4.6
[*SE*]{}        $4.70{\pm}0.88$   5.3
--------------- ----------------- --------------

: Spectroscopically derived temperatures, $T_{spec}$, of the different structures detected in the temperature map of A521 (see Fig. \[hr\]). $T_{spec}$ has been obtained by fitting the ACIS-I and ACIS-S spectra simultaneously. All the quoted results are obtained with $N_{\rm H}$ fixed to the Galactic value and abundance $Z=0.4$ in the solar units of Anders & Grevesse (1989), and provide a reduced $\chi^2<1$.
The last column gives the temperature derived using the hardness ratio technique.[]{data-label="tab:specTOT"}

Discussion {#disc}
==========

In agreement with previous results by Arnaud et al. (2000), Maurogordato et al. (2000) and Ferrari et al. (2003), the analysis of the [*Chandra*]{} observations confirms that A521 is far from dynamical equilibrium and that it is a particularly complex system, made up of several subclusters in different phases of the merging process. A first indication that A521 is in a disturbed dynamical state is the slight misalignment between the main axes of the X-ray and optical emissions ([*SX2*]{} and [*S2*]{} in Fig. \[morf2\]). This could be due to on-going merging event(s), since we know that in clusters the collisional component (i.e. the ICM) and the non-collisional one (i.e. galaxies and DM) have significantly different dynamical time scales (Röttiger et al. 1993).

BCG group and surrounding regions
---------------------------------

While the Southern part of A521 has a quite regular morphology and shows a peak of X-ray emission very close to the second BCG, the Northern region has a much more structured X-ray morphology, rich in sub-clusters. The main feature that appears in its ICM density and temperature maps is a compact cold component (clump [*A\_BCG*]{}, Fig. \[morf2\]), centred on the BCG, which is the highest density region of the principal northern substructure (clump [*A*]{}). The very high angular resolution of [*Chandra*]{} has helped to solve one of the open questions of the previous X-ray analysis of A521 (Arnaud et al. 2000). In the ROSAT observations a compact group in the North of the cluster was also detected, hosting the BCG. Curiously, it was not centred on its brightest galaxy, but on a northern position. The [*Chandra*]{} observations show two main X-ray peaks in the subcluster [*A*]{} corresponding to the ROSAT group, one centred on a stellar object ([*A\_N*]{}) and one associated with the BCG ([*A\_BCG*]{}), which were not resolved by the previous X-ray observations.
The [*A\_BCG*]{} substructure is perfectly centred on the BCG and it corresponds spatially to the gravitationally bound system of galaxies detected by Ferrari et al. (2003). The gas of this clump, significantly hotter than the diffuse ISM of early-type galaxies ($\sim$0.5-1.5 keV, Forman et al. 1985), is associated with the whole system of galaxies surrounding the BCG. Its temperature, colder than in the rest of the cluster, is in agreement with the lower radial velocity dispersion of this region ($256 ^{+82}_{-133} {\rm km}{\rm s}^{-1}$ in a circle of 240 ${h_{75}}^{-1}$ kpc around the brightest galaxy, Ferrari et al. 2003). What we need to investigate is the origin of the high gas density in [*A\_BCG*]{}, $\sim$1.5 times higher than in the centre of the main cluster[^4]. Similar to 28% of the first-rank galaxies in rich clusters (Hoessel 1980), the brightest galaxy of A521 is clearly a case of a BCG with multiple nuclei (see Fig. 8 of Maurogordato et al. 2000). Since all of them are at the same redshift (Ferrari et al. 2003), they are very likely remnants of galaxies “cannibalised” by the most massive object, around which, however, the halo typical of cD galaxies has not been detected (Maurogordato et al. 2000). The formation of the halo in cD galaxies seems to result from the tidal disruption of a large fraction of dwarf galaxies during the early stages of cluster evolution (Merritt 1984; López-Cruz et al. 1997). Subsequently, violent relaxation redistributes the stars from the disrupted objects throughout the cluster’s potential, giving rise to the cD’s halo, while the gas originally confined in the cannibalised galaxies can contribute significantly to the ICM mass (López-Cruz et al. 1997). In A521 we are probably observing the initial phase of the formation of a cD at the centre of a low-mass subcluster, in which galaxy merging is efficient due to the low velocity dispersion of the system, while the extended halo has not yet had time to form.
The very high ICM density of the [*A\_BCG*]{} group could therefore be due to stripped material related to the cannibalism of the BCG. Galaxy cannibalism occurs very early in a cluster's lifetime. If it happened after cluster virialization, we would expect randomly oriented cD galaxies, while it has been shown that the shape of a cD aligns with its nearest neighbour, the cluster shape and the filaments of large-scale structure (West 1994). The scenario that the BCG of A521 is becoming a cD galaxy in a dynamically young group is therefore supported by the observed alignment of the BCG main axis with the cluster major axis and with the nearest cluster neighbour (Plionis et al. 2003). It is also in agreement with the idea that cDs form via the merging of galaxies in the centre of poor groups, which then fall into richer clusters (Merritt 1984; Zabludoff & Mulchaey 1998 and references therein). Notice that a similar case of a cold and very dense clump of gas, detected around a BCG with multiple nuclei and aligned along the merging axis of the cluster, has been observed with [*Chandra*]{} in A3266 (Henriksen & Tittley 2002). In that case, the BCG is probably cannibalising galaxies from a merging subcluster. In agreement with the previous optical analysis, we therefore conclude that the main northern clump [*A*]{} is a group of galaxies in interaction with the main cluster. The subcluster shows a higher density component centred on the BCG, a northern tail of gas, and a compression of the X-ray isophotes South of the BCG. All these results are in agreement with the merging scenario suggested by our optical analysis of the cluster (i.e. the group [*A*]{} is infalling toward the main cluster along a North/South direction).
By comparing the ICM temperature and density maps of A521 with the numerical simulations of Ricker & Sarazin (2001), we suggest that the group [*A*]{} and the main cluster are in a pre-merger phase ($\sim$–0.5 Gyr from the closest cores encounter), with a quite low impact parameter ($\lesssim 1-2 r_s$) and a merger axis nearly perpendicular to the line of sight. A higher ICM temperature has been observed in the region surrounding the clump [*A\_N*]{}, and in particular on its North and North-East sides (see Fig. \[hr\]). This higher temperature could be related to the presence of another substructure observed in the ICM density map of A521, i.e. the North-East clump [*NE*]{}. This substructure could be a dynamically separate group of galaxies, since it shows a low gas temperature and it hosts some faint galaxies (Fig. \[morf2\]). The clump might be: a) another infalling sub-structure at the redshift of A521, or b) a background gravitationally bound structure, seen by chance in projection near the central field of A521. In the first case, the low ICM temperature of this region suggests that the possible interaction would be in its very initial phase. Several galaxies are present in the [*NE*]{} clump, for which we have no redshift information due to their very faint magnitudes (${\rm B}_{AB}>$23.5, ${\rm I}_{AB}{\geq}$21.5). Their positions on the ${\rm (B-I)}_{AB}$ vs. ${\rm I}_{AB}$ diagram (circles in Fig. \[CMDNE\]) do not exclude that the [*NE*]{} clump could actually correspond to a group of galaxies at the distance of A521, since some objects (i.e. the brightest ones, ${\rm I}_{AB}{\simeq}21.5$) lie on the cluster red sequence. They could therefore be faint ellipticals at the cluster redshift, surrounded by bluer late-type galaxies. It is however not clear why all the galaxies of this group are fainter than the confirmed members of A521 if they are at the same redshift.
The low temperature of the gas in the [*NE*]{} region could also be in agreement with the second hypothesis, i.e. the [*NE*]{} clump could be a background group of galaxies. This seems however less probable, since the faint galaxies located in the clump lie on, or are even bluer than, the cluster red sequence (Fig. \[CMDNE\]), while they should be redder if they were massive ellipticals at higher redshift. They could be a grouping of late-type background galaxies, but in such a case the system would not be massive enough to produce such strong X-ray emission, and therefore it would not be associated with the [*NE*]{} clump detected by [*Chandra*]{}. Due to the faint magnitudes of these galaxies and the consequently larger errors in their colour and magnitude determinations, it is however not possible to exclude that they are elliptical galaxies at a higher, but not very different, redshift than A521 (${\Delta}z\sim$0.1). With the optical observations of A521 available at present it is therefore impossible to draw a definitive conclusion on the nature of the clump [*NE*]{}.

Central hot region
------------------

Between the main cluster and clump [*A*]{} we detect a very high temperature bar roughly parallel to [*SX1*]{} and extending from the East to the centre of the cluster, where the ICM temperature reaches its maximum value (Fig. \[hr\]). The hot bar corresponds to the eastern part of the over-dense ridge of galaxies detected by Arnaud et al. (2000) and Ferrari et al. (2003) on the galaxy iso-density maps (Fig. \[T\_IsoD\]). At optical wavelengths, the ridge bends towards the South-West on the western side of the cluster centre, while the hot bar does not extend into this western region. The dynamical properties of this part of the cluster were interpreted as the result of a recent merger in the ridge region with a significant component along the line of sight (Ferrari et al. 2003).
The high temperature of the eastern side of the ridge, due to gas compression and heating, could therefore be associated with either a) the on-going infall of the subcluster [*A*]{} toward the centre of the main cluster, or b) the merging event nearly along the line of sight detected in the optical. The first hypothesis is in agreement with the results of numerical simulations, which, in the case of low impact parameters, show a high temperature bar nearly perpendicular to the collision axis during the pre-merger phases (Schindler & Müller 1993; Takizawa 1999; Ricker & Sarazin 2001). Considering hypothesis b), we can exclude that we are observing the central phases of a subclusters’ collision: given the merging geometry of the substructures in the ridge reconstructed through the optical observations, we should not detect such a high temperature bar (a collision axis nearly along the line of sight prevents the observation of strong signatures of interaction in the X-ray temperature and density maps, e.g. Schindler & Müller 1993). The high temperature could be explained by hypothesis b) in the case of a post-merger, since we could then be observing in projection the shock fronts moving outwards. Of course, the hot ridge in the centre of the cluster could be due to a combination of the two merging events, i.e. the pre-merger phase of the clump [*A*]{} and the post-collision nearly along the line of sight along the ridge.

Other features in the density and temperature maps {#other}
--------------------------------------------------

Other features have been detected in the northern part of the cluster (i.e. North of the [*SX1*]{} direction). First of all, the western clump [*W*]{}. Optical observations reveal that it hosts some bright cluster members (bottom panel of Fig. \[morf2\]); it could therefore be a dynamically bound group of galaxies at some stage of interaction with the main cluster.
Since the ICM temperature is higher in the North than in the South of the group, and the map of the residuals (Fig. \[residuals\]) shows a tail of gas elongated in the SE direction, we could be witnessing a merging event between the main cluster and the clump [*W*]{}, the latter coming from somewhere in the S-SE direction. An off-axis collision could have prevented the total assimilation of the less massive group [*W*]{} into the main component of A521. Two edges, [*A1*]{} and [*A2*]{}, have been detected in the X-ray surface brightness of A521. They are located orthogonally to the merging axis [*SX2*]{}, on opposite sides of it, and trace the Northern boundary of the clump [*A*]{}. In the scenario in which the [*A\_BCG*]{} group is falling from the North onto the main cluster, the edges might be interpreted as residuals of the sloshing activity of the ICM during the merger that is taking place along the [*SX2*]{} axis. However, due to the very complex optical and X-ray properties of A521, we cannot exclude that [*A1*]{} and [*A2*]{} are instead related to other merging events in A521, either in its central field (e.g. the possible collisions of the clumps [*W*]{} or [*NE*]{} with the main cluster), or in its outer regions not covered by our observations. In this respect, several other substructures appear at larger scales in the North of the iso-density map of the projected distribution of the red-sequence galaxies (see Fig.\[LSS\]). New optical observations at ESO have recently revealed the presence of several galaxies at the cluster redshift in the two Northern clumps [*E1*]{} and [*E2*]{} (Fig.\[LSS\]), confirming that they are very likely other merging subclusters at 1.5-2 Mpc from the cluster centre. A radio relic in the South-East region of the cluster has also been discovered through new VLA observations of A521 (Ferrari 2003, see Appendix \[radio\] for more details).
Summary and conclusions {#summ}
=======================

Through our [*Chandra*]{} observations of A521 we have confirmed that this cluster is in a disturbed dynamical state, as shown by previous X-ray and optical analyses (Arnaud et al. 2000; Maurogordato et al. 2000; Ferrari et al. 2003). A sketch of the possible merging scenario in the central field of the cluster covered by our [*Chandra*]{} observations is shown in Fig. \[cartoon\], in which the following features emerge:

- a main cluster centred on the X-ray/optical barycentre of the system;

- a group of galaxies (clump [*A*]{}) with its ICM density peak centred on the BCG, which is infalling onto the main cluster along a NW/SE direction ($\sim$-0.5 Gyr from the closest cores encounter);

- two other structures possibly interacting with the main cluster ([*W*]{} and [*NE*]{}), the former in the central phases of an off-axis collision coming from somewhere in the S-SE, the latter at the beginning of the interaction and coming from the North-East. The nature of these two substructures, and in particular of [*NE*]{}, is however very uncertain;

- two edges in the ICM density ([*A1*]{} and [*A2*]{}), probably due to ongoing merging events either in the central field of the cluster observed by [*Chandra*]{}, or in its outer regions.

In conclusion, the [*Chandra*]{} observations confirm that A521 is made up of several sub-clusters and groups of galaxies converging towards the centre of the cluster and observed in different phases of their merging process. The higher resolution density and temperature maps allow us to corroborate and refine the merging scenario of the group hosting the BCG (i.e. clump [*A*]{}), and to identify new signatures of other possible interactions (i.e. the groups [*W*]{} and [*NE*]{}, the arcs [*A1*]{} and [*A2*]{}). A deeper and wider optical spectroscopic coverage is now necessary to understand the most puzzling regions of this system and to clarify its extremely complex multiple merging scenario.
Radio emission in A521 {#radio}
======================

New VLA observations at 1.4 GHz, with an angular resolution of 12”$\times$12” and a sensitivity of 0.025 mJy/beam (1$\sigma$), have revealed the presence of a faint radio relic in the South/East region of the cluster (Fig. \[fig:radio\] and Ferrari 2003), thus supporting the perturbed dynamics of A521. Such low-brightness extended radio sources are indeed only detected in cluster mergers (Feretti 2003). Detailed results on this radio source will be presented elsewhere (Ferrari et al. in preparation). A Wide Angle Tail (“WAT”) radio source has also been detected in the North of the cluster (Fig. \[fig:radio\] and Ferrari 2003). The WAT is located in the clump E1, which is probably merging with the main cluster (see Sect. \[other\] and Fig. \[LSS\]). Further spectroscopic observations could reveal whether the optical galaxy associated with the WAT is at the cluster redshift. The relative motion between the host galaxy and the ICM, due to the infall of the clump E1 towards the cluster centre, would then be responsible for the observed bending of the radio jets (Feretti & Venturi 2002). We warmly thank Wolfgang Kapferer, Magdalena Mair and Jean-Luc Sauvageot for intensive and fruitful discussions on the merging scenario of the cluster. We are very grateful to Luigina Feretti for her helpful contribution to the analysis of the radio properties of A521. The authors thank the anonymous referee for his/her suggestions that improved the presentation of the paper. This research was supported in part by Marie Curie individual fellowship MEIF-CT-2003-900773 (CF).

[^1]: The position angle is defined from North to East.

[^2]: The CCD gaps have been masked in doing the spectral analysis of the whole cluster.

[^3]: We checked that fully consistent results are obtained using the ACIS-I exposure only.

[^4]: The density ratio has been measured as the square root of the surface brightness ratio.
The 0.5-5 keV surface brightness maps (ACIS-I and ACIS-S) have been used, considering the [*A\_BCG*]{} region and an equivalent region around the X-ray centre of the main cluster.
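The edge analysis of Sect. \[morphology\] and the density estimate in footnote 4 rest on two standard relations: the isothermal $\beta$-model surface-brightness profile and the fact that X-ray emissivity scales as the square of the gas density. A minimal sketch follows; the parameter values in the comments are illustrative, not the fitted ones.

```python
import math

def beta_model(r, s0, r_c, beta):
    """Isothermal beta-model surface brightness at projected radius r:
    S(r) = S0 * (1 + (r/r_c)**2) ** (0.5 - 3*beta)."""
    return s0 * (1.0 + (r / r_c) ** 2) ** (0.5 - 3.0 * beta)

def density_contrast(sb_bright, sb_faint):
    """Emissivity scales as density squared, so a surface-brightness jump
    corresponds to the square root of that jump in gas density."""
    return math.sqrt(sb_bright / sb_faint)

# e.g. a factor-2 brightness change, as observed across the A1/A2 edges,
# translates into a density contrast of sqrt(2) ~ 1.4 across each edge.
```

Summing two such profiles, one per semicircle, gives the mock two-dimensional emission against which the observed strips crossing the edges are compared.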
--- abstract: | There has been much success in describing the limiting spatial fluctuations of growth models in the Kardar-Parisi-Zhang (KPZ) universality class. A proper rescaling of time should introduce a non-trivial temporal dimension to these limiting fluctuations. In one dimension, the KPZ class has the dynamical scaling exponent $z=3/2$, which means that one should find a universal space-time limiting process under the scaling of time as $t\,T$, space like $t^{2/3} X$ and fluctuations like $t^{1/3}$ as $t\to\infty$. In this paper we provide evidence for this belief. We prove that, under certain hypotheses, growth models display temporal slow decorrelation. That is to say that, in the scalings above, the limiting spatial processes for times $t\, T$ and $t\, T+t^{\nu}$ are identical, for any $\nu<1$. The hypotheses are known to be satisfied for certain last passage percolation models, the polynuclear growth model, and the totally / partially asymmetric simple exclusion process. Using slow decorrelation we may extend known fluctuation limit results to space-time regions where correlation functions are unknown. The approach we develop requires the minimal expected hypotheses for slow decorrelation to hold and provides a simple and intuitive proof which applies to a wide variety of models. address: - | I. Corwin\ Courant Institute of Mathematical Sciences\ New York University\ 251 Mercer Street\ New York, NY 10012, USA - | P.L. Ferrari\ Institute of Applied Mathematics\ University of Bonn\ Endenicher Allee 60\ 53115 Bonn, Germany - | S. Péché\ Institut Fourier\ 100 Rue des maths\ 38402 Saint Martin d’Heres, France author: - Ivan Corwin - 'Patrik L. Ferrari' - Sandrine Péché date: '23.
February 2011' title: Universality of slow decorrelation in KPZ growth ---

Introduction
============

Kardar, Parisi and Zhang (KPZ) [@KPZ:1986d] proposed on physical grounds that a wide variety of irreversible stochastically growing interfaces should be governed by a single stochastic PDE (with two model-dependent parameters $D,\lambda\neq 0$). Namely, let $x\mapsto h(x,t)\in{\ensuremath{\mathbb{R}}}$ be the height function at time $t$ and position $x\in{\ensuremath{\mathbb{R}}}^d$; then the KPZ equation is $$\frac{\partial h(x,t)}{\partial t} = D \Delta h(x,t) + \lambda|\nabla h(x,t)|^2 +\eta(x,t),$$ where $\eta(x,t)$ is a local noise term modeled by space-time white noise. Since then, it has been of significant interest to make mathematical sense of this SPDE (which is ill-posed due to the non-linearity) and to find its solutions for large growth time $t$. Significant progress has been made towards understanding this equation in the one-dimensional ($d=1$) case. Specifically, it is believed that the dynamical scaling exponent is $z=3/2$. This should mean that for any growth model (also polymer models) in the same universality class as the KPZ equation (i.e., the KPZ universality class), after centering $h$ by its asymptotic value $\bar{h}(v):=\lim_{t\to\infty} \frac1t h(vt,t)$ and rescaling[^1] $$\label{fixedpt} h_{t}(X,T)= \frac{h(v T t+X (T t)^{2/3},T t)-T t\, \bar{h}(v+ X (T t)^{-1/3})}{(T t)^{1/3}},$$ the limit of $h_{t}(X,T)$ should exist (as $t\to\infty$) and be independent of $v$. Moreover, the limit should, regardless of microscopic differences in the original models, converge to the same space-time process [^2]. Most of the rigorous work[^3] done in studying the statistics associated with this fixed point has dealt with the spatial process (obtained as the asymptotic statistics of $h_{t}(X,T=1)$ as a process in $X$, as $t\to\infty$) and not with how the spatial process evolves with $T$.
The exact form of these statistics depends only on the type of initial geometry of the growth process (e.g., the $\operatorname*{Airy}_1$ process for non-random flat geometries and the $\operatorname*{Airy}_2$ process for wedge geometries; see the review [@Fer10b]). Computations of exact statistics require a level of solvability and thus have only been proved in the context of certain solvable discrete growth models or polymer models in the KPZ universality class. The partially/totally asymmetric simple exclusion process (P/TASEP), last passage percolation (LPP) with exponential or geometric weights, the corner growth model, and the polynuclear growth (PNG) model constitute those models for which rigorous spatial fluctuation results have been proved. Recently, progress was made on analyzing the solution of the KPZ equation itself [@ACQ:2010p; @SS:2010u; @CQ:2010u; @BG:1997s], though this still relied on the approximation of the KPZ equation by a solvable discrete model. The slow decorrelation phenomenon provides one of the strongest pieces of evidence that the above scaling is correct. Indeed, slow decorrelation means that $h_{t}(X,T) - h_{t}(X,T+t^{\nu-1})$ converges to zero in probability for any $\nu<1$. Fix $m$ times of the form $Tt + \alpha_i t^{\nu}$ (for $\alpha_i\in {\ensuremath{\mathbb{R}}}$ and $0<i\leq m$). Then, as long as $\nu<1$, the height function fluctuations, scaled by $t^{1/3}$ and considered in a spatial scale of $t^{2/3}$, will be asymptotically (as $t\rightarrow \infty$) the same as those at time $Tt$. Specifically, we introduce a generalized LPP model which encompasses several KPZ class models. Then we give sufficient conditions under which such LPP models display slow decorrelation. These conditions (the existence of a limit shape and a one-point fluctuation result) are very elementary, hold for all the solvable models already mentioned, and are believed to hold for all KPZ class models.
The proof that slow decorrelation follows from these two conditions is very simple – it relies on the superadditivity property of LPP and on the simple observation that if $X_t\geq Y_t$ and both $X_t$ and $Y_t$ converge in law to the same random variable, then $X_t-Y_t$ converges in probability to zero (see Lemma \[BAC\_lemma\]). Previously, the slow decorrelation phenomenon was proved for the PNG model [@PLF:2008s]. Therein the proof is based on very sharp estimates known in the literature only for the PNG. Apart from the PNG, the only other model for which slow decorrelation has been proved is TASEP under the assumption of stationary initial distribution [@BFP:2009l]. Besides being of conceptual interest, the slow decorrelation phenomenon is an important technical tool that allows one to, for instance: (a) easily translate limit process results between different related observables (e.g., total current, height function representation, particle positions in TASEP; see [@BFP:2009l]), and more importantly, (b) prove limit theorems beyond the situations where the correlation functions are known [@SI07; @BF:2008l; @BFS:2008l] (see Section \[corner\_growth\_model\_sec\]). A further application is in extending known process limit results to prove similar results for more general initial conditions / boundary conditions [@CFP:2009l]. Outline {#outline .unnumbered} ------- In Section \[gen\_theory\_sec\] we introduce the general framework for LPP models in which we prove a set of criteria for slow decorrelation (Theorem  \[growth\_thm\]). In the rest of the paper, we apply Theorem \[growth\_thm\] to various models in the KPZ class, which can be related in some way with a LPP model: the corner growth model, point to point and point to line LPP models, TASEP, PASEP (which requires a slightly different argument since it cannot be directly mapped to a LPP problem) and PNG models. 
Finally we note extensions of the theorem to first passage percolation and directed polymers, provided that (as conjectured) the same criteria are satisfied. Acknowledgments {#acknowledgments .unnumbered} --------------- The authors wish to thank Jinho Baik for early discussions about this and related problems. I. Corwin wishes to thank the organizers of the “Random Maps and Graphs on Surfaces” conference at the Institut Henri Poincaré, as much of this work was done during that stay. Travel to that conference was provided through the PIRE grant OISE-07-30136 for which thanks goes to Charles Newman and Gérard Ben Arous for arranging for this funding. I. Corwin is funded by the NSF Graduate Research Fellowship. S. Péché would like to thank Hervé Guiol for useful discussions on TASEP and her work is partially supported by the Agence Nationale de la Recherche grant . The authors are very grateful to the anonymous referee for careful reading and a number of constructive remarks. A sufficient condition for slow decorrelation {#gen_theory_sec} ============================================= In this section we consider a general class of last passage percolation models (or equivalently growth models). Given the existence of a law of large numbers (LLN) and a central limit theorem (CLT) for the last passage time (or for the associated height function), we prove that such models display slow decorrelation along their specific, model dependent, “characteristic” directions. We consider growth models in ${\ensuremath{\mathbb{R}}}^{d+1}$ for $d\geq 1$ which may be lattice-based or driven by Poisson point processes. We define a directed LPP model to be an almost surely sigma-finite random non-negative measure $\mu$ on ${\ensuremath{\mathbb{R}}}^{d+1}$. For example we could take $\mu$ to be a collection of delta masses at every point of ${\ensuremath{\mathbb{Z}}}^{d+1}$ with weights given by random variables (which need not be independent or identically distributed).
Alternatively we could have a Poisson point process such as in the LPP realization of the PNG model. We will focus on a statistic we call the [*directed half-line to point last passage time*]{}. We choose to study this since, by specifying different distributions on the random measure $\mu$, one can recover statistics for a variety of KPZ class models. In order to define this passage time we introduce the half-line $${\ensuremath{\mathcal{HL}}}=\{p:p_1=p_2=\cdots =p_{d+1} \leq 0\},$$ where $p_i$ is the $i$-th coordinate of the point $p$. ![The black line is the half-line ${\ensuremath{\mathcal{HL}}}$ and the space and time axes are labeled. A directed path $\pi$ from ${\ensuremath{\mathcal{HL}}}$ to the point $p$ is shown.[]{data-label="slow_dec_space_time"}](LPPGeom.eps "fig:"){height="4.5cm"} It is convenient for us to define a second coordinate system which we call the space-time coordinate system as follows: Let $R$ be the rotation matrix which takes ${\ensuremath{\mathcal{HL}}}$ to $\{p:p_1\leq 0, p_2=\cdots=p_{d+1}=0\}$. Then the space-time coordinate system is $R^{-1}$ applied to the standard basis. The line $\{p:p_1=p_2=\cdots=p_{d+1}\}$ (which contains ${\ensuremath{\mathcal{HL}}}$) is the inverse image of $\{p:p_2=\cdots=p_{d+1}=0\}$ and we call it the $t$-axis (for “time”), see Figure \[slow\_dec\_space\_time\] for an illustration. The other space-time axes are labeled $x_1$ through $x_d$ (these are considered to be “space” axes). Call a curve $\pi$ in ${\ensuremath{\mathbb{R}}}^{d+1}$ a directed path if $\gamma=R\pi$ is a function of $t$ and is 1-Lipschitz. Two points are called “time-like” if they can be connected by such a path. Otherwise they are called “space-like”. To a directed path we assign a passage time $$T(\pi) = \mu(\pi)$$ which is the measure, under the random measure $\mu$, of the curve $\pi$.
Now we define the last passage time from the half-line ${\ensuremath{\mathcal{HL}}}$ to a point $p$ as $$L_{{\ensuremath{\mathcal{HL}}}}(p)=\sup_{\pi:{\ensuremath{\mathcal{HL}}}\to p} T(\pi),$$ where we understand the supremum as being over all directed paths starting from the half-line and going to $p$. One may also consider the point-to-point last passage time between $p$ and $q$, which we write as $L_{{\ensuremath{\mathcal{PP}}}}(p,q)$. This is the special case where $\mu\equiv 0$ on $\{x: x-p \in{\ensuremath{\mathbb{R}}}^{d+1}\setminus {\ensuremath{\mathbb{R}}}_+^{d+1}\}$. In Section \[apps\] we show how, by specifying the random measure $\mu$ differently, this model encompasses a wide variety of LPP models and related processes (such as TASEP and PNG). Just to illustrate though, take $d=1$ and let $\mu$ be composed of only delta masses at points $p$ in ${\ensuremath{\mathbb{Z}}}^2_+$ with mass $w_{p}$ exponentially distributed with rate 1. Then $L_{{\ensuremath{\mathcal{HL}}}}(p)$ is the last passage time for the usual LPP in a corner (or equivalently the corner growth model considered in Section \[corner\_growth\_model\_sec\]). We present our result in this more general framework to allow for non-lattice models such as the PNG model. We can now state a result showing that *slow decorrelation occurs* in any model which can be phrased in terms of this type of last passage percolation model *provided both a LLN and a CLT hold*. \[growth\_thm\] Fix a last passage model in dimension $d+1$ with $d\geq 1$ by specifying the distributions of the random variables which make up the environment. Consider a point $p\in {\ensuremath{\mathbb{R}}}^{d+1}$ and a time-like direction $u\in {\ensuremath{\mathbb{R}}}_+^{d+1}$.
If there exist constants (depending on $p$, $u$, and the model): $\ell_{{\ensuremath{\mathcal{HL}}}}$ and $\ell_{{\ensuremath{\mathcal{PP}}}}$ non-negative; $\gamma_{{\ensuremath{\mathcal{HL}}}},\gamma_{{\ensuremath{\mathcal{PP}}}}\in (0,1)$; $\nu\in(0,\gamma_{{\ensuremath{\mathcal{HL}}}}/\gamma_{{\ensuremath{\mathcal{PP}}}})$; distributions $D$, $D'$; and scaling constants $c_{{\ensuremath{\mathcal{HL}}}},c_{{\ensuremath{\mathcal{PP}}}}$ such that $$\begin{aligned} \chi_1(t)&:=\frac{L_{{\ensuremath{\mathcal{HL}}}}(tp)-t\ell_{{\ensuremath{\mathcal{HL}}}}}{c_{{\ensuremath{\mathcal{HL}}}}t^{\gamma_{{\ensuremath{\mathcal{HL}}}}}}\Longrightarrow D, \quad \textrm{as }t\textrm{ goes to infinity},\\ \chi_2(t)&:=\frac{L_{{\ensuremath{\mathcal{HL}}}}(tp+t^{\nu}u)-t\ell_{{\ensuremath{\mathcal{HL}}}}-t^\nu \ell_{{\ensuremath{\mathcal{PP}}}}}{c_{{\ensuremath{\mathcal{HL}}}}t^{\gamma_{{\ensuremath{\mathcal{HL}}}}}}\Longrightarrow D, \quad \textrm{as }t\textrm{ goes to infinity},\\ \chi_3(t)&:=\frac{L_{{\ensuremath{\mathcal{PP}}}}(tp,tp+t^{\nu}u)-t^\nu \ell_{{\ensuremath{\mathcal{PP}}}}}{c_{{\ensuremath{\mathcal{PP}}}}(t^{\nu})^{\gamma_{{\ensuremath{\mathcal{PP}}}}}}\Longrightarrow D', \quad \textrm{as }t\textrm{ goes to infinity}, \end{aligned}$$ then we have slow decorrelation of the half-line to point last passage time at $tp$, in the direction $u$ and with scaling exponent $\nu$, which is to say that for all $M>0$, $$\label{slow_dec} \lim_{t\to \infty} {\ensuremath{\mathbb{P}}}(|L_{{\ensuremath{\mathcal{HL}}}}(tp+t^{\nu}u)-L_{{\ensuremath{\mathcal{HL}}}}(tp)-t^\nu \ell_{{\ensuremath{\mathcal{PP}}}}|\geq M t^{\gamma_{{\ensuremath{\mathcal{HL}}}}})=0.$$ \[non\_const\_remark\] There are many generalizations of this result whose proofs are very similar. For instance the fixed (macroscopic) point $p$ and the fixed direction $u$ can, in fact, vary with $t$ as long as they converge as $t\to \infty$. 
One may also think of the random LPP measure $\mu$ (and the associated probability space $\Omega$) as depending on $t$. Thus for each $t$ the LPP environment is given by $\mu_t$ defined on the space $\Omega_t$. The probability ${\ensuremath{\mathbb{P}}}$ will therefore also depend on $t$, however an inspection of the proof below shows that the whole theorem still holds with ${\ensuremath{\mathbb{P}}}$ replaced by ${\ensuremath{\mathbb{P}}}_t$. Recall the super-additivity property: $$L_{{\ensuremath{\mathcal{HL}}}}(tp+t^{\nu}u)\geq L_{{\ensuremath{\mathcal{HL}}}}(tp)+L_{{\ensuremath{\mathcal{PP}}}}(tp,tp+t^{\nu}u),$$ which holds provided the last passage times are defined on the same probability space. This follows from the fact that, by restricting the set of paths which contribute to $L_{{\ensuremath{\mathcal{HL}}}}(tp+t^{\nu}u)$ to only those which go through the point $tp$, one can only decrease the last passage time. The following lemma plays a central role in our proof. \[BAC\_lemma\] Consider two sequences of random variables $\{X_n\}$ and $\{\tilde{X}_n\}$ such that for each $n$, $X_n$ and $\tilde{X}_n$ are defined on the same probability space $\Omega_n$. If $X_n\geq\tilde{X}_n$ and $X_n\Rightarrow D$ as well as $\tilde{X}_n\Rightarrow D$ then $X_n-\tilde{X}_n$ converges to zero in probability. Conversely if $\tilde{X}_n\Rightarrow D$ and $X_n-\tilde{X}_n$ converges to zero in probability then $X_n\Rightarrow D$ as well. From now on, we assume that the different last passage times $L_{{\ensuremath{\mathcal{HL}}}}(\cdot) $ and $L_{{\ensuremath{\mathcal{PP}}}}(\cdot)$ are realized on the same probability space. Also, by absorbing the constants $c_{{\ensuremath{\mathcal{HL}}}}$ and $c_{{\ensuremath{\mathcal{PP}}}}$ into the distributions, we may fix them to be equal to one. 
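The superadditivity inequality can also be checked directly on a simulated environment in the lattice example of the previous section (iid rate-one exponential weights on ${\ensuremath{\mathbb{Z}}}^2_+$). The sketch below is purely illustrative and not part of the argument; the helper `last_passage` and the convention of subtracting the doubly counted weight at the concatenation point are our own:

```python
import random

def last_passage(w):
    """Table of point-to-point last passage times for up/right lattice paths:
    L[i][j] = w[i][j] + max(L[i-1][j], L[i][j-1]), off-grid terms read as 0."""
    n, m = len(w), len(w[0])
    L = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            best = max(L[i - 1][j] if i > 0 else 0.0,
                       L[i][j - 1] if j > 0 else 0.0)
            L[i][j] = w[i][j] + best
    return L

random.seed(0)
n = 100                                   # intermediate point p, endpoint q = 2p
w = [[random.expovariate(1.0) for _ in range(2 * n)] for _ in range(2 * n)]

L = last_passage(w)
L_0q = L[2 * n - 1][2 * n - 1]            # passage time to q
L_0p = L[n - 1][n - 1]                    # passage time to p
# Passage time from p to q; the weight at p is subtracted so that it is not
# counted twice when the two path segments are concatenated at p.
L_pq = last_passage([row[n - 1:] for row in w[n - 1:]])[n][n] - w[n - 1][n - 1]

# Concatenating an optimal path to p with an optimal path from p to q gives
# an admissible (generally suboptimal) path to q:
assert L_0q >= L_0p + L_pq - 1e-9         # tolerance for float rounding
```

The slack in this inequality plays the role of the compensator $X_t$ appearing in the proof below.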
Using super-additivity we may write $$\label{growth_compensator} L_{{\ensuremath{\mathcal{HL}}}}(tp+t^{\nu}u)=L_{{\ensuremath{\mathcal{HL}}}}(tp)+L_{{\ensuremath{\mathcal{PP}}}}(tp,tp+t^{\nu}u)+X_t,$$ where $X_t\geq 0$ is a (compensator) random variable. Rewriting the above equation in terms of the random variables $\chi_1(t)$, $\chi_2(t)$ and $\chi_3(t)$ and dividing by $t^{\gamma_{{\ensuremath{\mathcal{HL}}}}}$ we are left with $$\chi_2(t) = \chi_1(t) + \chi_3(t) t^{\nu\gamma_{{\ensuremath{\mathcal{PP}}}}-\gamma_{{\ensuremath{\mathcal{HL}}}}} +X_t t^{-\gamma_{{\ensuremath{\mathcal{HL}}}}}.$$ By the assumption on $\nu$, $\nu\gamma_{{\ensuremath{\mathcal{PP}}}}-\gamma_{{\ensuremath{\mathcal{HL}}}}<0$ and hence the distributions of $\chi_2(t)$ and of $\chi_1(t) + \chi_3(t) t^{\nu\gamma_{{\ensuremath{\mathcal{PP}}}}-\gamma_{{\ensuremath{\mathcal{HL}}}}}$ both converge to the same distribution $D$. However, since $X_t t^{-\gamma_{{\ensuremath{\mathcal{HL}}}}}$ is always non-negative, we also know that $\chi_2(t) \geq \chi_1(t) + \chi_3(t) t^{\nu\gamma_{{\ensuremath{\mathcal{PP}}}}-\gamma_{{\ensuremath{\mathcal{HL}}}}}$. Therefore, by Lemma \[BAC\_lemma\] their difference, $X_t t^{-\gamma_{{\ensuremath{\mathcal{HL}}}}}$, converges to zero in probability. Thus $\chi_2(t)-\chi_1(t)$ converges to zero in probability. Since $$\chi_2(t)-\chi_1(t) = \frac{L_{{\ensuremath{\mathcal{HL}}}}(tp+t^{\nu}u)-L_{{\ensuremath{\mathcal{HL}}}}(tp)-t^\nu \ell_{{\ensuremath{\mathcal{PP}}}}}{t^{\gamma_{{\ensuremath{\mathcal{HL}}}}}},$$ the theorem immediately follows. Slow decorrelation in KPZ growth models:\ examples and applications {#apps} ========================================= The aim of this section is to make a non-exhaustive review of the possible fields of applications of Theorem \[growth\_thm\].
We introduce a few standard models and explain, briefly, how they fit into the framework of half-line to point LPP and what the consequences of Theorem \[growth\_thm\] are for these models. Corner growth model {#corner_growth_model_sec} ------------------- We choose to first develop in detail the implications of slow decorrelation for a simple KPZ growth process. This process is known as the [*corner growth model*]{} and is related to both LPP and TASEP. Consider a set $A_t$ in ${\ensuremath{\mathbb{R}}}_+^2$ with initial condition $A_0={\ensuremath{\mathbb{R}}}_+^2$ and evolving under the following dynamics: from each [*outer corner*]{} of $A_t$ a $[0,1)\times[0,1)$-box is filled at rate one (i.e., after an exponentially distributed waiting time of mean $1$). See Figure \[corner\_fig\] for an illustration of this growth rule where the model has been rotated by $\pi/4$. ![Corner growth height function. The $w_{i,j}$ are the random, exponentially distributed times it takes to fill an outer corner. The black line is the height function at time $t=w_{1,1}$. There are two outer corners at that time.[]{data-label="corner_fig"}](Corner.eps "fig:"){height="5cm"} One can record the evolution of the growing interface $\partial A_t$ in terms of the random variable $$L(x,y):=\inf\{ t\geq 0 | (x-\tfrac12,y-\tfrac12)\not\in A_t\}\textrm{ for }(x,y)\in {\ensuremath{\mathbb{Z}}}_+^2.$$ This random variable is well known to define a last passage time in a related directed percolation model, as we now recall. Let $w_{i,j}$ be the waiting time for the outer corner $(i,j)$ to be filled, once it appears. A path $\pi$ from $(1,1)$ to $(x,y)$ is called directed if it moves either up or to the right along lattice edges from $(1,1)$ to $(x,y)$. To each such path $\pi$, one associates a passage time $T(\pi)=\sum_{(i,j)\in\pi} w_{i,j}$.
Then $$L(x,y)=\max_{\pi:(1,1)\to (x,y)} T(\pi),$$ i.e., $L$ is the last passage time from $(1,1)$ to $(x,y)$. Alternatively one can keep track of $A_t$ in terms of a height function $h$ defined by the relationship $$\label{height_lpp} \{h(x-y,t)\geq x+y\}= \{L(x,y)\leq t\}\quad (x,y)\in{\ensuremath{\mathbb{Z}}}_+^2,$$ (together with linear interpolation for non-integer values of $x-y$). Note that for given $t$, $h(X,t)=|X|$ for $|X|$ large enough. Thus the corner growth process is equivalent to the stochastic evolution of a height function $h(X,t)$ with $h(X,0)=|X|$ and growing according to the rule that local valleys ($\diagdown\;\!\!\diagup$) are replaced by local hills ($\diagup\;\!\!\diagdown$) at rate one. We will speak mostly about the height function, but when it comes to computing and proving theorems it is often easier to deal with the last passage picture. ### LLN and CLT Analogous to the LLN for sums of i.i.d. random variables, there exists an almost sure limit shape for the height function [@HR:1981n; @TS:1998h] of this growth model: $$\bar{h}(v):=\lim_{t\to \infty} \frac{h(vt,t)}{t}= \begin{cases} \frac{1}{2}(v^2+1), & \textrm{ for }v\in (-1,1),\\ |v|, & \textrm{ for }v\notin (-1,1). \end{cases}$$ If we more generally consider the height function arising from a LPP model with an ergodic measure (e.g., iid random lattice weights), then super-additivity and Kingman’s subadditive ergodic theorem imply the existence of a (possibly infinite) limit growth evolution $$\tilde{h}(X,T) := \lim_{t\rightarrow \infty} \frac{h(Xt,Tt)}{t}.$$ Since LPP is a variational problem, the limiting profile is also given by the solution to a variational problem which, when translated into the limiting height function, means that $\tilde{h}$ satisfies a Hamilton-Jacobi PDE $\partial_T \tilde{h} = f(\partial_X \tilde{h})$ for a model dependent flux function $f$. Such PDEs may have multiple solutions, and $\tilde{h}$ corresponds to the unique weak solution subject to entropy conditions.
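The limit shape can be probed numerically (an illustrative Monte Carlo sketch, assuming the iid rate-one exponential weights of the corner growth model): by (\[height\_lpp\]), the value $\bar{h}(0)=\tfrac12$ corresponds to $L(n,n)\approx 4n$, which is computable from the recursion $L(i,j)=w_{i,j}+\max(L(i-1,j),L(i,j-1))$:

```python
import random

def corner_lpp(n, rng):
    """L(n, n) for iid exponential(1) weights, computed row by row from the
    recursion L(i, j) = w(i, j) + max(L(i-1, j), L(i, j-1))."""
    row = [0.0] * n                       # previous row of passage times
    for i in range(n):
        left = 0.0                        # L(i, j-1), read as 0 at the row start
        for j in range(n):
            up = row[j] if i > 0 else 0.0
            left = rng.expovariate(1.0) + max(left, up)
            row[j] = left
    return row[-1]

n = 300
ratio = corner_lpp(n, random.Random(42)) / n
print(ratio)                              # close to the limiting value 4
```

Already for $n=300$ the ratio is within a few percent of $4$; the leading correction is the negative-mean $t^{1/3}$ fluctuation term discussed next.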
Such PDEs can be solved via the method of characteristics [@E:1998p]. Characteristics are lines of slope $f'$ along which initial data for $\tilde{h}$ is transported. In the present case, if we set $\rho = \tfrac{1}{2}(1-\partial_X \tilde{h})$ then $\rho$ satisfies the Burgers equation $\partial_T \rho =\partial_X(\rho(1-\rho))$ and the characteristic lines are of constant velocity emanating out of the origin. It is the fluctuations around this macroscopic profile which are believed to be universal. Johansson [@KJ:2000s] proved that the asymptotic one-point fluctuations are given by $$\label{eq8} \lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\left(\frac{h(vt,t)-t \bar{h}(v)}{2^{-1/3} (1-v^2)^{2/3} t^{1/3}} \geq -s\right) = F_{\operatorname*{GUE}}(s),$$ where $F_{\operatorname*{GUE}}$ is the $\operatorname*{GUE}$ Tracy-Widom distribution defined in [@TW:1994l]. Unlike in the traditional CLT, the fluctuations here are of order $t^{1/3}$ and the limiting distribution is not Gaussian. Likewise we may consider the fluctuations at multiple spatial locations by fixing $$\begin{aligned} X(\tau) &= vt +\tau(2(1-v^2))^{1/3} t^{2/3},\\ H(\tau,s) &= \frac{1+v^2}{2}t + \tau\, v(2(1-v^2))^{1/3} t^{2/3} +(\tau^2-s)\frac{(1-v^2)^{2/3}}{2^{1/3}} t^{1/3}. \end{aligned}$$ Here $H(\tau,0)=t\, \bar{h}(X(\tau)/t)+o(1)$ and $H(\tau,s)-H(\tau,0)$ measures the fluctuations with respect to the limit shape behavior. Then, in the large time limit, the joint distributions of the fluctuations are governed by the so-called $\operatorname*{Airy}_2$ process, denoted by ${\ensuremath{\mathcal{A}_2}}$. This process was introduced by Prähofer and Spohn [@PS:2002s] in the context of the PNG model (see also [@KJ:2003d]). A complete definition of the Airy$_2$ process is recalled in [@CFP:2009l].
More precisely, it holds that $$\label{theta_zero_thm} \lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m} \{h(X(\tau_k),t)\geq H(\tau_k,s_k)\}\right) = {\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m}\{{\ensuremath{\mathcal{A}_2}}(\tau_k)\leq s_k\}\right),$$ where $m \geq 1$, $\tau_1<\tau_2<\cdots <\tau_m$ and $s_1,\ldots, s_m$ are real numbers. Of course, (\[eq8\]) is the special case of (\[theta\_zero\_thm\]) for $m=1$ and $\tau_1=0$. ### Slow decorrelation in the corner growth model We now consider how fluctuations in the height function are carried through time. For instance, if the fluctuation of the height function above the origin is known at time $t$ (large), for how long can we expect to see this fluctuation persist (in the $t^{1/3}$ scale)? The answer is non-trivial and given by applying Theorem \[growth\_thm\]: there exists a single direction in space-time along which the height function fluctuations are carried over time scales of order $t^1$, while for all other directions only at space-time distances of order $t^{2/3}$. Indeed, given a fixed velocity $v\in(-1,1)$, any exponent $\nu<1$, and any real number $\theta$, $$\label{eq11} \lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\left(\left|[h\big(v(t+\theta t^{\nu}),t+\theta t^{\nu}\big)-(t+\theta t^{\nu})\bar{h}(v)]-[h\big(vt,t\big) - t\bar{h}(v)] \right| \geq M t^{1/3}\right) = 0,$$ for any $M>0$. Thus, the height fluctuations at time $t$ at position $vt$ and at time $t+\theta t^\nu$ at position $v(t+\theta t^\nu)$ differ only by $o(t^{1/3})$. These fixed-velocity space-time lines are the characteristics of the Burgers equation above. Thus, the right space-time scaling limit to consider is that given in equation (\[fixedpt\]), with $h$ and $\bar{h}$ now taken to be the TASEP height function and asymptotic shape, rather than that of the KPZ equation. As noted below equation (\[fixedpt\]), the value of the velocity $v$ should not affect the law of the limiting space-time process.
As evidence for this, equation (\[theta\_zero\_thm\]) shows that we encounter the $\mathrm{Airy}_2$ process as a scaling limit regardless of the value of $v\in (-1,1)$. This amounts to saying that the fixed $T$ marginals of the full space-time limit process of equation (\[fixedpt\]) are independent of $v$. ### Implications of slow decorrelation Up to now only the “spatial-like behavior” of the space-time process (\[fixedpt\]), i.e., the process in the variable $X$ for fixed $T$ (which one can set to $1$ w.l.o.g.), has been obtained, while the process in the variable $T$ remains to be unraveled. A consequence of (\[eq11\]) is that if we look at the fluctuations at two moments of time $t$ and $t'$ with $|t'-t|\sim t^\nu$ with $\nu<1$, it corresponds to taking $T=1+\mathcal{O}(t^{\nu-1})$ in the r.h.s. of (\[fixedpt\]). Then in the $t\to\infty$ limit, the limit process is identical to the process for fixed $T=1$. So, if we determine the limit process for any space-time cut such that in the $t\to\infty$ limit, $T\to 1$, then, thanks to slow decorrelation, one can extend the result to any other space-time cut with the same property. In the following we refer to this property as [*the process limit extends to general $1+0$ dimensional space-time directions*]{}, meaning that we have $1$ dimension with spatial-like behavior and $0$ dimensions in the orthogonal direction. As indicated in the Introduction, slow decorrelation also allows one, for instance, (a) to translate limit results between different related observables and (b) to extend results on fluctuation statistics to space-time regions where correlation functions are unknown. We illustrate these features in the context of the corner growth model. For simplicity, we consider the case where $v=0$. Fix $m\geq 1$, $\nu\in[0,1)$, real numbers $\tau_1<\tau_2<\cdots <\tau_m$ and $s_1,\ldots, s_m$.
Then set $$\label{eq13} \begin{aligned} x(\tau,\theta) &= \lfloor \tfrac{1}{4}(t+\theta t^{\nu}) +\tau 2^{-2/3} t^{2/3}\rfloor,\\ y(\tau,\theta) &= \lfloor \tfrac{1}{4}(t+\theta t^{\nu}) -\tau 2^{-2/3} t^{2/3}\rfloor,\\ \ell(\tau,\theta,s) &= (t+\theta t^{\nu}) + (s-\tau^2) 2^{2/3} t^{1/3}. \end{aligned}$$ \(a) We first show that one can recover (\[theta\_zero\_thm\]) from an analogous statement in the corresponding LPP model using (\[height\_lpp\]) and slow decorrelation. We start from a result in LPP. Consider the fixed $y=t/4$ slice of space-time. This is obtained by setting $\theta_k t^\nu=\tau_k 2^{4/3}t^{2/3}$ in (\[eq13\]), for which $$\label{eq14} x(\tau,\theta)=\tfrac14 t + \tau 2^{1/3} t^{2/3},\quad y(\tau,\theta)=\tfrac14 t,\quad \ell(\tau,\theta,s)=t+\tau 2^{4/3} t^{2/3}+(s-\tau^2)2^{2/3} t^{1/3}.$$ Using the Schur process [@BP:2008a], it is proven [@SI07; @CFP:2009l] that for $\theta_k t^\nu=\tau_k 2^{4/3}t^{2/3}$, $k=1,\ldots,m$, $$\label{lpp_limit_thm_eqn} \lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m} \{L(x(\tau_k,\theta_k),\tfrac14 t)\leq \ell(\tau_k,\theta_k,s_k)\}\right) ={\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m}\{{\ensuremath{\mathcal{A}_2}}(\tau_k)\leq s_k\}\right).$$ To get the result on the height function (\[theta\_zero\_thm\]) using (\[height\_lpp\]), one would need to make the choice $\theta_k t^{\nu}=-(s_k-\tau_k^2) 2^{2/3} t^{1/3}$. Then we have $$\label{eq16} x(\tau,\theta)-y(\tau,\theta) = \tau 2^{1/3} t^{2/3},\quad x(\tau,\theta)+y(\tau,\theta) = \tfrac12 t-(s-\tau^2) 2^{-1/3} t^{1/3},\quad \ell(\tau,\theta,s)=t.$$ Thus, to obtain (\[theta\_zero\_thm\]) (for $v=0$) from (\[lpp\_limit\_thm\_eqn\]) it is actually sufficient to project $(x,y)$ in (\[eq16\]) on the line $y=t/4$ along the characteristic line passing through $(x,y)$, see Figure \[FigureDPPext\] for an illustration.
One finds that this projection gives the scaling (\[eq14\]) but with $\tau$ replaced by some $\tilde \tau=\tau (1+o(1))\to \tau$ as $t\to\infty$. The reason is that the characteristics for $\tau\neq 0$ have slope slightly different from $0$. Finally, slow decorrelation (Theorem \[growth\_thm\], see also (\[eq11\])) and the union bound then imply that $$\textrm{l.h.s.\ of }(\ref{theta_zero_thm})\big|_{v=0} = \textrm{l.h.s.\ of }(\ref{lpp_limit_thm_eqn}) ={\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m}\{{\ensuremath{\mathcal{A}_2}}(\tau_k)\leq s_k\}\right).$$ ![Assume that the black dots are at distance $\mathcal{O}(t^\nu)$, for some $\nu<1$, from the line $y=t/4$. Then, the fluctuations of the passage time at the locations of the black dots are, on the $t^{1/3}$ scale, the same as those of their projections along the characteristic direction to the line $y=t/4$, the white dots.[]{data-label="FigureDPPext"}](DPPext.eps "fig:"){height="5cm"} \(b) The results (\[theta\_zero\_thm\]) and (\[lpp\_limit\_thm\_eqn\]) are derived by using the knowledge of (determinantal) correlation functions. The techniques used for these models are however restricted to *space-like paths* (in the best case, see [@BF:2008l]), i.e., for sequences of points $(x_k,y_k)_k$ such that $x_{k+1}-x_k\geq 0$ and $y_{k+1}-y_k\leq 0$ (which cannot be connected by directed paths). Now, choose in (\[eq13\]) $\theta_k t^\nu=\tilde\theta_k t^\nu-(s_k-\tau_k^2)2^{2/3} t^{1/3}$ for some real $\tilde \theta_k$. This means that we look at the height fluctuations at times $\ell(\tau_k,\theta_k,s_k)=t+\tilde \theta_k t^\nu$, with $$\label{eq18} x(\tau,\theta)-y(\tau,\theta) = \tau 2^{1/3} t^{2/3},\quad x(\tau,\theta)+y(\tau,\theta) = \tfrac12 (t+\tilde\theta t^\nu)-(s-\tau^2) 2^{-1/3} t^{1/3}.$$ Thus, one can cover much more than only the space-like regions.
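The substitutions into (\[eq13\]) used in (a) and (b) are elementary but easy to get wrong; the following small sketch (with the floors in (\[eq13\]) ignored, and $\theta t^{\nu}$ treated as a single quantity) spot-checks the resulting identities (\[eq14\]) and (\[eq16\]) numerically:

```python
import math

def xyl(t, tau, s, theta_t):
    """(x, y, l) from (eq13), floors ignored; theta_t stands for theta * t**nu."""
    x = 0.25 * (t + theta_t) + tau * 2**(-2/3) * t**(2/3)
    y = 0.25 * (t + theta_t) - tau * 2**(-2/3) * t**(2/3)
    l = (t + theta_t) + (s - tau**2) * 2**(2/3) * t**(1/3)
    return x, y, l

for t, tau, s in [(1e4, 1.3, 0.7), (5e5, -0.4, 2.1)]:
    # choice theta*t^nu = tau * 2^(4/3) * t^(2/3), leading to (eq14):
    x, y, l = xyl(t, tau, s, tau * 2**(4/3) * t**(2/3))
    assert math.isclose(x, t/4 + tau * 2**(1/3) * t**(2/3))
    assert math.isclose(y, t/4)
    assert math.isclose(l, t + tau * 2**(4/3) * t**(2/3)
                           + (s - tau**2) * 2**(2/3) * t**(1/3))

    # choice theta*t^nu = -(s - tau^2) * 2^(2/3) * t^(1/3), leading to (eq16):
    x, y, l = xyl(t, tau, s, -(s - tau**2) * 2**(2/3) * t**(1/3))
    assert math.isclose(x - y, tau * 2**(1/3) * t**(2/3))
    assert math.isclose(x + y, t/2 - (s - tau**2) * 2**(-1/3) * t**(1/3))
    assert math.isclose(l, t)
```

The two parameter points are arbitrary; since both sides are smooth algebraic expressions, agreement at generic values is a reliable consistency check of the exponents and constants.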
As before, the projection along the characteristic line of $(x,y)$ on $\tilde \theta_k=0$ leads to (\[eq18\]) with $\tilde\theta=0$ and with a slightly modified $\tau$ (i.e., $\tau\to \tau(1+o(1))$). Then, using slow decorrelation, one can extend (\[theta\_zero\_thm\]) to the following result: fix $m\geq 1$, $\nu\in [0,1)$, real numbers $\tau_1<\tau_2<\cdots <\tau_m$ and $s_1,\ldots, s_m$. Set $$X(\tau) = \tau 2^{1/3}t^{2/3}, \quad H(\tau,\theta,s) = \frac{1}{2}(t+\theta t^{\nu})+(\tau^2-s)2^{-1/3}t^{1/3}.$$ Then we have $$\label{gen_case} \lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m} \{h(X(\tau_k),t+\theta_k t^{\nu})\geq H(\tau_k,\theta_k,s_k)\}\right) = {\ensuremath{\mathbb{P}}}\left(\bigcap_{k=1}^{m}\{{\ensuremath{\mathcal{A}_2}}(\tau_k)\leq s_k\}\right).$$ This type of computation can be readily adapted to the other KPZ models considered in the sequel. Point to point LPP ------------------- Consider the following random measure $\mu$ on ${\ensuremath{\mathbb{R}}}^{d+1}$: $$\mu=\sum_{p\in {\ensuremath{\mathbb{Z}}}_+^{d+1}} w_{p}\delta_{p}$$ where ${\ensuremath{\mathbb{Z}}}_+=\{0,1,\ldots\}$ and the $w_p$ are non-negative random variables. One may consider directed paths to be restricted to follow the lattice edges. This is just standard (point-to-point) last passage percolation (as considered for instance in [@KJ:2000s]). We will restrict ourselves to the case where $d=1$, i.e., LPP in the 2-dimensional corner. Here we write $w_{i,j}$ for the weights. The conditions for our slow decorrelation theorem to hold amount to the existence of a LLN and CLT. Presently, for point to point LPP, this is only rigorously known for the two solvable classes of weight distributions – exponential and geometric. For general weight distributions, the existence of a LLN follows from superadditivity (via the Kingman subadditive ergodic theorem), though the exact value of the LLN is not known beyond the solvable cases.
Nonetheless, universality is expected at the level of the CLT for a very wide class of underlying weight distributions. That is to say that, after centering by the LLN, and under $t^{1/3}$ scaling, the fluctuations of LPP should always be given by the $F_{\operatorname*{GUE}}$ distribution. In the results we now state, we restrict attention to exponential weights, as geometric weights lead to analogous results. Define LPP with [*two-sided boundary conditions*]{} as the model with independent exponential weights such that, for positive parameters $\pi,\eta>0$, $$\label{eq31} w_{i,j} = \begin{cases} \text{exponential of rate } \pi, &\text{if } i>0,j=0;\\ \text{exponential of rate } \eta, &\text{if } i=0,j>0;\\ \text{exponential of rate } 1, &\text{if } i>0,j>0;\\ \text{zero},& \text{if } i=0,j=0.\\ \end{cases}$$ Recall that an exponential of rate $\lambda$ has mean $1/\lambda$. This class of models was introduced in [@PS:2002c] (and for geometric weights in [@BR:2000l]) and includes the one considered in [@KJ:2000s]. A full description of the one-point fluctuation limit theorems for $L_{{\ensuremath{\mathcal{HL}}}}(tp)$ was conjectured in [@PS:2002c] (see Conjecture 7.1) and a complete proof was given in [@BAC:2009c]. These limit theorems show that the hypotheses for slow decorrelation are satisfied and hence Theorem \[growth\_thm\] applies. We present an adaptation of Theorem 1.3 of [@BAC:2009c], stated in such a way that Theorem \[growth\_thm\] is immediately applicable. As such, we also state the outcome of applying Theorem \[growth\_thm\] (see Figure \[slow\_dec\_2\_sided\_lpp\_characteristics\] for an illustration). The characteristic direction $u$ as well as the exponents and limiting distributions for the fluctuations depend on the location of the point $p$ as well as on the values of $\pi$ and $\eta$. Due to the radial scaling of $p$ it suffices to consider $p$ of the form $p=(1,\kappa^2)$ for $\kappa^2\in (0,\infty)$.
In the following theorem, $\gamma_{{\ensuremath{\mathcal{PP}}}}=1/3$, $c_{{\ensuremath{\mathcal{HL}}}}$ is a constant depending on the direction $p$, $c_{{\ensuremath{\mathcal{PP}}}}$ is a constant depending on the direction $u$, and $D'$ is $F_{\operatorname*{GUE}}$. We also refer the reader to  [@BAC:2009c] for the definitions of the distribution functions $F_{\operatorname*{GUE}}$, $F_{\operatorname*{GOE}}^2$ and $F_{0}$ which arise here. \[bac\_thm\] Define $\kappa_{\eta}=\frac{\eta}{1-\eta}$, $\kappa_{\pi}=\frac{1-\pi}{\pi}$ and $\kappa_{sh}=\sqrt{\frac{\eta(1-\pi)}{\pi(1-\eta)}}$. 1. If $\kappa_\eta\geq \kappa \geq\kappa_\pi$ (which implies that $\pi+\eta\geq 1$) then $\ell_{{\ensuremath{\mathcal{HL}}}}= (1+\kappa)^2$, $\gamma_{{\ensuremath{\mathcal{HL}}}} = 1/3$, and $D$ is either $F_{\operatorname*{GUE}}$ (in the case of strict inequality) or $F_{\operatorname*{GOE}}^2$ (in the case of either, but not both, equalities) or $F_{0}$ (in the case where all three terms in the inequality are equal). Then, there is slow decorrelation in the direction $u=p$ for all $\nu\in (0,1)$. 2. If $\pi+\eta\geq 1$ and $\kappa>\kappa_\eta$ then $\ell_{{\ensuremath{\mathcal{HL}}}}=\frac{\kappa^2}{\eta}+\frac{1}{1-\eta}$, $\gamma_{{\ensuremath{\mathcal{HL}}}}=1/2$, and $D=\mathcal{N}_{0,1}$ is the standard Gaussian distribution. Then there is slow decorrelation in the direction $u=(1, \kappa_{\eta}^2)$ for all $\nu\in (0,1)$. 3. If $\pi+\eta\geq 1$ and $\kappa<\kappa_\pi$ then $\ell_{{\ensuremath{\mathcal{HL}}}}=\frac{1}{\pi}+\frac{\kappa^2}{1-\pi}$, $\gamma_{{\ensuremath{\mathcal{HL}}}}=1/2$, and $D=\mathcal{N}_{0,1}$. Then there is slow decorrelation in the direction $u=(1, \kappa_{\pi}^2)$ for all $\nu\in (0,1)$. 4. If $\pi+\eta<1$ and $\kappa>\kappa_{sh}$ then $\ell_{{\ensuremath{\mathcal{HL}}}}=\frac{\kappa^2}{\eta}+\frac{1}{1-\eta}$, $\gamma_{{\ensuremath{\mathcal{HL}}}}=1/2$, and $D=\mathcal{N}_{0,1}$. 
Then there is slow decorrelation in the direction $u= (1,\kappa_{\eta}^2)$ for all $\nu\in (0,1)$. 5. If $\pi+\eta<1$ and $\kappa<\kappa_{sh}$ then $\ell_{{\ensuremath{\mathcal{HL}}}}=\frac{1}{\pi}+\frac{\kappa^2}{1-\pi}$, $\gamma_{{\ensuremath{\mathcal{HL}}}}=1/2$, and $D=\mathcal{N}_{0,1}$. Then there is slow decorrelation in the direction $u= (1,\kappa_{\pi}^2)$ for all $\nu\in (0,1)$. 6. If $\pi+\eta<1$ and $\kappa=\kappa_{sh}$ then $\ell_{{\ensuremath{\mathcal{HL}}}} = \frac{1}{\pi(1-\pi)}= \frac{1}{\eta(1-\eta)}$, $\gamma_{{\ensuremath{\mathcal{HL}}}}=1/2$, and $D$ is distributed as the maximum of two independent Gaussian random variables. *Then there is no slow decorrelation.* This last passage percolation model is related to a TASEP model with two-sided initial conditions (which we discuss in Subsection \[ASEP\]). As explained before, the characteristics are those for the Burgers equation. The first three cases above correspond to a situation known as a [*rarefaction fan*]{}, while the last three correspond to the [*shockwave*]{}. The above result is illustrated in Figure \[slow\_dec\_2\_sided\_lpp\_characteristics\]. The left case displays the rarefaction fan (the fanning of the characteristic lines from the origin) and the right case displays a shockwave (the joining together of characteristic lines coming from different directions). \[c\][(a)]{} \[c\][(b)]{} \[c\][Case 1]{} \[c\][Case 2]{} \[c\][Case 3]{} \[c\][Case 4]{} \[c\][Case 5]{} \[c\][$\kappa^2=\kappa^2_\eta$]{} \[l\][$\kappa^2=\kappa^2_\pi$]{} \[c\][$\kappa^2=\kappa^2_s$]{} ![The different cases of fluctuation limit theorems and accompanying directions of slow decorrelation: (a) $\eta+\pi>1$ (actually shown $\eta=\pi=2/3$); (b) $\eta+\pi<1$ (actually shown $\eta=\pi=1/3$).
As $\kappa^2$ (the height along the dashed line) varies, the case of fluctuation theorem changes, as does the direction of slow decorrelation (given by the direction of the thin lines).[]{data-label="slow_dec_2_sided_lpp_characteristics"}](TwoSided.eps "fig:"){height="6cm"} In addition to one-point fluctuation limits, the above two-sided boundary condition LPP model has a fully classified limit process description. The description was given in [@BFP:2009l] for $\pi+\eta=1$ (known as the stationary case) and in [@CFP:2009l] for all other (non-equilibrium) boundary conditions. These process limits are obtained using determinantal expressions for the joint distribution of the last passage times for points along fixed directions. Thus, initially, one only gets process limits along fixed lines. As explained in Section \[corner\_growth\_model\_sec\] and in [@CFP:2009l] slow decorrelation, however, implies that the appropriately rescaled fluctuations at the points which are off of this line (to order $t^{\nu}$ for $\nu<1$) have the same joint distribution as their projection along characteristics to the line (see Figure 1 of [@BFP:2009l] for an illustration of this). A completely analogous situation arises in the case of geometric, rather than exponential weights (this model is often called discrete PNG). Such a model is described in [@BR:2000l] and the one-point limiting fluctuation statistics are identified. The spatial process limit is characterized in [@IS:2004f]. These results are only proved in a fixed space-time direction, though applying Theorem \[growth\_thm\] we can extend this process limit away from this fixed direction just as with the exponential weights. A slightly different model with boundary conditions was introduced in [@BBP:2005p] and involves [*thick one-sided boundary conditions*]{}. Fix a $k\in{\ensuremath{\mathbb{N}}}$, parameters $\pi_1,\ldots, \pi_k$, and set $\pi_i=1$ for $i>k$. 
Just as above, we define independent random weights on ${\ensuremath{\mathbb{Z}}}_+^2$, this time with $w_{i,j}$ exponential random variables of rate $\pi_i$ (mean $1/\pi_i$). Section 6 of [@BBP:2005p] explains how results they obtain for perturbed Wishart ensembles translate into a complete fluctuation limit theorem description for this model. Just as in the two-sided boundary case, those limit theorems show that the hypotheses of Theorem \[growth\_thm\] are satisfied and therefore there is slow decorrelation. The exponent $\gamma_{{\ensuremath{\mathcal{HL}}}}$ depends on the point $p$ and the strength of the boundary parameters $\pi_i$ and can either be $1/3$, with random matrix type fluctuations, or $1/2$ with Gaussian type (more generally $\ell \times \ell$ $\operatorname*{GUE}$ for some $1\leq \ell \leq k$) fluctuations (see [@BBP:2005p] Theorem 1.1). The exponent $\gamma_{{\ensuremath{\mathcal{PP}}}}=1/3$ and the limiting distribution $D'$ is $F_{\operatorname*{GUE}}$. The direction of the slow decorrelation depends on the parameters and the point (we do not write out a general parametrization of this direction as there are many cases to consider depending on the $\pi_i$). The fluctuation process limit theorem has not been proved for this model, though the method of [@CFP:2009l] would certainly yield such a theorem. Also, analogous results for the geometric case have not been proved either but should be deducible from the Schur process [@BP:2008a]. TASEP and PASEP {#ASEP} --------------- ### Totally asymmetric simple exclusion process (TASEP) TASEP is a Markov process in continuous time with state space $\{0,1\}^{{\ensuremath{\mathbb{Z}}}}$ (think of 1s as particles and 0s as holes). Particles jump to their right neighboring site at rate $1$, provided the site is empty. The waiting time for a jump is exponentially distributed with mean $1$ (discrete-time versions have geometrically distributed waiting times). 
See [@TL:1999s; @TL:2005i] for a rigorous construction of this process. TASEP with different initial conditions can be readily translated into LPP with specific measures $\mu$ and hence Theorem \[growth\_thm\] may be applied. Slow decorrelation can thus be used to show that fluctuation limit processes can be extended from fixed space-time directions to general $1+0$ dimensional space-time directions. An observable of interest for TASEP is the integrated current of particles $I(x,t)$ defined as the number of particles which jumped from $x$ to $x+1$ during the time interval $[0,t]$. Also of interest is the height function $h(x,t)$ $$\label{height_function_def} h(x,t)=\begin{cases} 2I(0,t)+\sum_{i=1}^{x}(1-2\eta_{t}(i)), &x\geq 1,\\ 2I(0,t), & x=0,\\ 2I(0,t)-\sum_{i=x+1}^{0}(1-2\eta_{t}(i)), &x\leq -1, \end{cases}$$ where $\eta_t(i)=1$ (resp. $\eta_t(i)=0$) if site $i$ is occupied (resp. empty) at time $t$. There is a simple relationship between the current and the height function given by $$\label{height_current} I(x,t)=\tfrac12 (h(x,t)-x).$$ A well-studied initial condition is the [*step initial condition*]{}: At time $t=0$, $\{\ldots, -2,-1,0\}$ is filled by particles and $\{1,2,\ldots\}$ is empty, i.e., $h(x,t=0)=|x|$. There is a simple relation with the corner growth model studied in Subsection \[corner\_growth\_model\_sec\]. The weight $w_{i,j}$ is the waiting time (counted from the instant when the particle can jump) for the particle which started in position $-j+1$ to move from position $i-j$ to $i-j+1$. Thus, the TASEP height function records the boundary of the region of points $p$ for which $L_{{\ensuremath{\mathcal{HL}}}}(p)\leq t$. Therefore, as in Subsection \[corner\_growth\_model\_sec\], one can apply Theorem \[growth\_thm\] leading to slow decorrelation (in the sense of equation (\[eq11\])) for the fluctuations of the TASEP height function along space-time lines corresponding to the characteristics of the Burgers equation.
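As an illustrative (not optimized) sketch, step-initial-condition TASEP can be simulated directly with a Gillespie-type loop, and the height function (\[height\_function\_def\]) reconstructed from the particle positions; all names below are ours. Since jumps only depend on the particle ahead, truncating to finitely many particles is exact for sites to the right of the leftmost kept particle.

```python
import random

def simulate_step_tasep(num_particles, t_max, seed=0):
    """Continuous-time TASEP with step initial condition: particles start
    at 0, -1, ..., -(num_particles-1).  pos[k] is the position of the
    particle that started at -k; pos stays strictly decreasing in k."""
    rng = random.Random(seed)
    pos = list(range(0, -num_particles, -1))
    t = 0.0
    while True:
        # particles free to jump: the site pos[k] + 1 must be empty
        free = [k for k in range(num_particles)
                if k == 0 or pos[k] + 1 < pos[k - 1]]
        t += rng.expovariate(len(free))  # next event after Exp(total rate)
        if t > t_max:
            break
        pos[rng.choice(free)] += 1       # a uniformly chosen free particle jumps
    return pos

def height(pos, x):
    """TASEP height function h(x,t) from the particle positions, using
    I(0,t) = #particles strictly to the right of the origin (step IC)."""
    occupied = set(pos)
    h = 2 * sum(1 for p in pos if p >= 1)   # 2 I(0, t)
    if x >= 1:
        h += sum(1 if i not in occupied else -1 for i in range(1, x + 1))
    elif x <= -1:
        h -= sum(1 if i not in occupied else -1 for i in range(x + 1, 1))
    return h
```

At $t=0$ this reproduces $h(x,0)=|x|$; at later times $h$ remains a $\pm 1$-step interface lying on or above the wedge, growing along the jumps.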
\[c\][(a)]{} \[c\][(b)]{} \[c\][$x$]{} \[c\][$t$]{} ![Rarefaction wave on the left and shockwave on the right.[]{data-label="slow_dec_2_sided_bc"}](TwoSidedTASEPbis.eps "fig:"){height="4.5cm"} An important generalization of the step initial condition is given by the [*two-sided Bernoulli initial conditions*]{} which are defined for all pairs $\rho_-,\rho_+\in [0,1]$ as the random initial conditions in which particles initially occupy sites to the left of the origin with probability $\rho_-$ (independently of other sites) and likewise to the right of the origin with probability $\rho_+$. It was proven in [@PS:2002s] that two-sided TASEP can be mapped[^4] to the LPP with two-sided boundary conditions model (\[eq31\]) with $\pi=1-\rho_+$ and $\eta=\rho_-$. Using this connection and slow decorrelation, one can show that all the results stated for the LPP model (\[eq31\]) can be translated into their counterpart for two-sided TASEP. This is done in detail in [@CFP:2009l] (which uses some arguments of this paper), where we prove a complete fluctuation process limit for $\rho_-\neq \rho_+$ which complements the recent result of [@BFP:2009l] for $\rho_-=\rho_+$. The characteristic line leaving position $x$ has slope $1-2\rho(x)$. On top of this, the entropy condition ensures that if $\rho_->\rho_+$, there will be a rarefaction fan from the origin which will fill the void between lines of slope $1-2\rho_-$ and $1-2\rho_+$. The Rankine-Hugoniot condition applies to the case where $\rho_-<\rho_+$ and introduces shockwaves with specified velocities when characteristic lines would cross. These two types of characteristics are illustrated side-by-side in Figure \[slow\_dec\_2\_sided\_bc\]. Another variation is TASEP with [*slow particles or slow start-up times*]{}, which is considered in [@JB:2006p]. It may likewise be connected to the LPP with thick one-sided boundary conditions model which we previously introduced. As a result we may similarly conclude slow decorrelation.
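The slopes quoted above come from the Burgers equation with the TASEP flux $f(\rho)=\rho(1-\rho)$: characteristics travel at $f'(\rho)=1-2\rho$, and the Rankine-Hugoniot speed $(f(\rho_+)-f(\rho_-))/(\rho_+-\rho_-)$ simplifies to $1-\rho_--\rho_+$. A small sketch (function names ours):

```python
def characteristic_speed(rho):
    """Characteristic slope for Burgers with the TASEP flux
    f(rho) = rho*(1-rho): f'(rho) = 1 - 2*rho."""
    return 1 - 2 * rho

def shock_speed(rho_minus, rho_plus):
    """Rankine-Hugoniot speed (f(rho+) - f(rho-)) / (rho+ - rho-),
    which simplifies to 1 - rho_minus - rho_plus for the TASEP flux."""
    f = lambda r: r * (1 - r)
    return (f(rho_plus) - f(rho_minus)) / (rho_plus - rho_minus)

# rho- < rho+ : characteristics cross and a shock with this intermediate
# speed forms; rho- > rho+ : a rarefaction fan opens between the two slopes.
print(shock_speed(0.25, 0.75))                                # 0.0
print(characteristic_speed(0.75), characteristic_speed(0.25)) # -0.5 0.5
```

In the printed example the shock is stationary, sitting between characteristics of slopes $-0.5$ (from the right density) and $0.5$ (from the left density), exactly the crossing picture of Figure \[slow\_dec\_2\_sided\_bc\].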
Not all initial conditions correspond to LPP with weights restricted to ${\ensuremath{\mathbb{Z}}}_+^2$. For example, TASEP with [*flat (or periodic) initial conditions*]{} corresponds to the case where only the sites of $k{\ensuremath{\mathbb{Z}}}$, for $k\geq 2$, are initially occupied. For simplicity, we focus on the case $k=2$. Then, the height function at time $t=0$ is a saw-tooth, see Figure \[slow\_dec\_flat\_tasep\](a) (though asymptotically flat, whence the name). Rotating by $\pi/2$, it is the growth interface for half-line to point LPP where the measure $\mu$ is supported on points in $(i,j)\in{\ensuremath{\mathbb{Z}}}^2$ such that $i+j\geq 0$ and given by delta masses with independent exponential weights of rate $1$. Fluctuation theorems and limit processes have been proved for several periodic initial conditions [@BFPS:2007f; @BFP:2007f] (the setting of [@BFP:2007f] is discrete time, i.e., geometric weights). Similarly TASEP with [*half-flat initial conditions*]{} is defined by letting particles start at $2{\ensuremath{\mathbb{Z}}}_-=\{\cdots, -4,-2,0\}$. The corresponding last passage percolation model has non-zero weights for points $(i,j)$ such that $i+j\geq 0$ and $j\geq 0$. The limit process for this model was identified in [@BFS:2007t]. Theorem \[growth\_thm\] applies to both of these models and proves slow decorrelation. This implies that the fluctuation process limits extend to general $1+0$ dimensional space-time directions. The characteristic lines are shown in Figure \[slow\_dec\_flat\_tasep\](b). \[c\][(a)]{} \[c\][(b)]{} ![Flat (a) and half-flat (b) TASEP correspond, via their height functions, with the LPP models in the two regions shown above.
Characteristic lines are perpendicular to the initial direction of the height function and in case (b) the entropy condition implies that they fan out to the right of the origin.[]{data-label="slow_dec_flat_tasep"}](HalfFlat.eps "fig:"){height="4.5cm"} A variant of half flat initial conditions has particles starting at $2{\ensuremath{\mathbb{Z}}}_-$ plus a few particles at positive even integers, with a different speed $\alpha$. This is known as [*two speed*]{} TASEP and [@BPS:2009t] gives a complete description and proof of the process limit for these initial conditions. As with all of the other examples, this can be coupled with a LPP model and hence Theorem \[growth\_thm\] applies, proving slow decorrelation and enabling us to extend these process limit results as well. ### Partially asymmetric simple exclusion process (PASEP) The PASEP is a generalization of TASEP where particles jump to the right-neighboring site with rate $p\neq 1/2$ and to the left-neighboring site with rate $q=1-p$ (always provided that the destination sites are empty). An important tool to study PASEP is the [*basic coupling*]{} [@TL:1999s; @TL:2005i]. Through a graphical construction, one can realize and hence couple together every PASEP (with different initial conditions) on the same probability space. Even though PASEP cannot be mapped to a LPP model, it still has the same super-additivity properties necessary to prove a version of Theorem \[growth\_thm\]. The property comes in the form of [*attractiveness*]{}. That PASEP is attractive means that if you start with two initial conditions corresponding to height functions $h_1(x,0)\leq h_2(x,0)$ for all $x\in {\ensuremath{\mathbb{R}}}$, then for any future time $t$, $h_1(x,t)\leq h_2(x,t)$ for all $x\in {\ensuremath{\mathbb{R}}}$. We now briefly review this graphical construction. Above every integer draw a half-infinite time ladder.
Fix $p$ (and hence $q$) and for each ladder place right and left horizontal arrows independently at Poisson points with rates $p$ and $q$ respectively. This is the common environment in which all initial conditions may be coupled. Particles move upwards in time until they encounter an arrow leaving their ladder. They try to follow this arrow, and hence hop one step, yet this move is excluded if there is already another particle on the neighboring ladder. That this graphical construction leads to attractiveness for the PASEP is shown, for instance, in  [@TL:1999s; @TL:2005i]. In a series of three papers [@TW:2008i; @TW:2008f; @TW:2008a] Tracy and Widom show that for step initial conditions with positive drift $\gamma=p-q>0$, PASEP behaves asymptotically the same as TASEP (when sped up by $1/\gamma$). Just as in TASEP the current or height function is of central interest. $I(x,t)$ is defined as the number of particles which jumped from $x$ to $x+1$ minus the ones from $x+1$ to $x$ during $[0,t]$ and $h(x,t)$ is defined by (\[height\_function\_def\]). This time, the height function does not monotonically grow, but does still have a drift. The slow decorrelation theorem for PASEP with general initial conditions is stated below. By a PASEP model we mean a measure on initial configurations, as well as a rate $p=1-q\in (1/2,1]$. We write $h(x,t)$ for the height function for this specified model, and $h'(x,t)$ for the height function for the PASEP with step initial conditions. Note that the generalizations of Remark \[non\_const\_remark\] apply in this case too. \[ASEP\_thm\] Consider a velocity $v\in {\ensuremath{\mathbb{R}}}$ and a second variable $u\in {\ensuremath{\mathbb{R}}}$.
If there exist constants (depending on $v$ and $u$ and the model): $\ell$ and $\ell'$ non-negative; $\gamma, \gamma'\in (0,1)$; $\nu\in (0,\gamma/\gamma')$; and distributions $D$ and $D'$ such that $$\begin{aligned} \nonumber &&\frac{h(vt,t)-t\ell}{t^{\gamma}}\Longrightarrow D, \quad \textrm{as }t\textrm{ goes to infinity},\\ &&\frac{h(vt+ut^{\nu},t+t^{\nu})-t\ell-t^\nu \ell'}{t^{\gamma}}\Longrightarrow D, \quad \textrm{as }t\textrm{ goes to infinity},\\ \nonumber &&\frac{h'(ut,t)-t\ell'}{t^{\gamma'}}\Longrightarrow D', \quad \textrm{as }t\textrm{ goes to infinity},\end{aligned}$$ then we have slow decorrelation of the PASEP height function at speed $v$, in the direction given by $u$ and with scaling exponent $\nu$, i.e., for all $M>0$, $$\lim_{t\to \infty} {\ensuremath{\mathbb{P}}}(|h(vt+ut^{\nu},t+t^{\nu})-h(vt,t)-t^{\nu}\ell'|\geq M t^{\gamma})=0.$$ Rather than the height function we focus on the current, which is related via equation (\[height\_current\]). $I(vt+ut^{\nu},t+t^{\nu})$ is equal to the current $I(vt,t)$ up to time $t$, plus the current of particles which cross the space-time line from $vt$ at time $t$ to $vt+ut^{\nu}$ at time $t+t^{\nu}$. We consider a coupled system starting at time $t$ reset so as to appear to be in step initial conditions centered at position $vt$. By attractiveness of the basic coupling, the current across the space-time line from $vt$ at time $t$ to $vt+ut^{\nu}$ at time $t+t^{\nu}$ for this “step” system will exceed that for the original system. Denote by $I'(ut^{\nu},t^{\nu})$ the current associated to the coupled “step” system and observe that it is distributed as the current of an independent step initial condition PASEP. Thus, $$I(vt+ut^{\nu},t+t^{\nu})= I(vt,t)+I'(ut^{\nu},t^{\nu})+X_t$$ where $X_t\leq 0$. From this point on the proof follows exactly as in the proof of Theorem \[growth\_thm\].
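Attractiveness under the basic coupling can already be seen at the level of configurations: if $\eta_1(i)\leq \eta_2(i)$ for all $i$ and both systems follow the same arrows, the sitewise order is preserved (and this translates into the ordering of the height functions). The sketch below uses a discrete-event stand-in for the shared Poisson arrows, on a closed finite segment; all names are ours:

```python
import random

def coupled_asep(eta1, eta2, p, steps, seed=1):
    """Basic coupling of two ASEPs on a closed segment: both systems see
    the same sequence of (bond, direction) events -- a discrete stand-in
    for the shared Poisson arrows of the graphical construction."""
    rng = random.Random(seed)
    eta1, eta2 = list(eta1), list(eta2)
    L = len(eta1)
    for _ in range(steps):
        i = rng.randrange(L - 1)            # bond (i, i+1)
        right = rng.random() < p            # arrow direction: right w.p. p
        for eta in (eta1, eta2):            # both systems use the same arrow
            if right and eta[i] == 1 and eta[i + 1] == 0:
                eta[i], eta[i + 1] = 0, 1
            elif not right and eta[i + 1] == 1 and eta[i] == 0:
                eta[i], eta[i + 1] = 1, 0
    return eta1, eta2

# eta1 <= eta2 sitewise at time 0 ...
a = [0, 1, 0, 0, 1, 0, 0, 0]
b = [1, 1, 0, 1, 1, 0, 1, 0]
a_t, b_t = coupled_asep(a, b, p=0.7, steps=10000)
# ... and the coupling preserves the sitewise order at all later times.
assert all(x <= y for x, y in zip(a_t, b_t))
```

The key case is a right arrow at a bond where the lower system has a particle-hole pair: the upper system then has one too and jumps simultaneously, so no order violation can ever be created.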
Using the fluctuation results proved in [@TW:2008i; @TW:2008f; @TW:2008a], reviewed in [@TW:2009t], we find that the above hypotheses are satisfied for PASEP with step initial conditions and also for step Bernoulli initial conditions [@TW:2009o] with $\rho_-=0$ and $\rho_+>0$. The slow decorrelation directions are given by the characteristics just as in the case of TASEP. These two sets of initial conditions are the only ones for which fluctuation theorems are presently known for PASEP, but limit process theorems are not yet proven. The polynuclear growth (PNG) model {#PNG} ---------------------------------- As mentioned before, slow decorrelation for the (continuous time, Poisson point) PNG model was proved previously in [@PLF:2008s] in the case of the [*PNG droplet*]{} and [*stationary PNG*]{}. Theorem \[growth\_thm\] (along with the necessary preexisting fluctuation theorems) gives an alternative proof of these results as well as the analogous result for [*flat PNG*]{}. Because of the minimality of the hypotheses of our theorem we may further prove slow decorrelation for the model of [*PNG with two (constant) external sources*]{} considered in [@BR:2000l]. The way that PNG fits into the framework of our half-line to point LPP model is that one takes $\mu$ to be a Poisson point process of specified intensity on some domain. For the PNG droplet, stationary PNG and PNG with two external sources, we restrict the point process to ${\ensuremath{\mathbb{R}}}_+^2$ and (in the second and third cases) augment the measure $\mu$ with additional one dimensional point processes along the boundaries. For flat PNG the support of the point process is $\{(x,y):x+y\geq 0\}$. The limit process for the PNG droplet for fixed time was proved in [@PS:2002s] and for flat PNG was proved in [@BFS:2008l] for space-like paths.
It was explained in [@PLF:2008s] that slow decorrelation implies that these limit processes extend to general $1+0$ dimensional space-time directions (with time scaling $t^{\nu}$ for $\nu<1$). First passage percolation {#FPP} ------------------------- As opposed to LPP one can look at the minimum value of $T(\pi)$. This then goes by the name of directed [*first*]{} passage percolation and for simplicity we consider this only when we restrict our measure to being supported on a lattice. One may also consider undirected first passage percolation. Theorem \[growth\_thm\] can be adapted in a straightforward way for both of these models. The statement of the theorem remains identical up to replacing the last passage time variable with the first passage time. For the proof the only change is that the compensator $X_t$ now satisfies $X_t\leq 0$ rather than $X_t\geq 0$. Unfortunately no fluctuation theorems have been proved for first passage percolation, so all that we get is a criterion for slow decorrelation. Directed polymers {#DP} ----------------- We now briefly consider a lattice-based directed polymer model in $1+1$ dimension and note that just as in LPP, slow decorrelation can arise in these models. Unfortunately, just as in first passage percolation, there are no fluctuation theorems proved for such polymers. Recently, however, the order of fluctuations for a particular specialization of this model was proved in [@TS:2009s]. It should be noted that while we focus on just one model, the methods used can be applied to other polymer models and in more than $1+1$ dimension (for example line to point polymers). The model we consider is the point to point directed polymer.
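For a point-to-point polymer on the corner, the partition function $Z_\beta(0,p)=\sum_{\pi:0\to p}e^{\beta T(\pi)}$ obeys the recursion $Z(i,j)=e^{\beta w_{i,j}}\left(Z(i-1,j)+Z(i,j-1)\right)$, and as $\beta\to\infty$ the free energy recovers the last passage time. A sketch (names ours; we adopt the convention that both endpoint weights are included in $T(\pi)$):

```python
import math

def free_energy(w, beta):
    """Point-to-point polymer free energy F = (1/beta) log Z, where
    Z(i,j) = exp(beta*w[i][j]) * (Z(i-1,j) + Z(i,j-1)) sums the Gibbs
    weights exp(beta*T(path)) over all directed up/right lattice paths."""
    n, m = len(w), len(w[0])
    Z = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            inflow = 1.0 if i == j == 0 else 0.0
            if i > 0:
                inflow += Z[i - 1][j]
            if j > 0:
                inflow += Z[i][j - 1]
            Z[i][j] = math.exp(beta * w[i][j]) * inflow
    return math.log(Z[n - 1][m - 1]) / beta
```

With all weights zero, $Z$ simply counts the $\binom{n+m-2}{n-1}$ directed paths; at large $\beta$ the free energy is dominated by the maximizing path, i.e., the last passage time.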
In this model we consider any directed, lattice path $\pi$ from $(0,0)$ to a point $p$ and assign it a Gibbs weight $e^{\beta T(\pi)}$ where $\beta\geq 0$ is known as the inverse temperature and where $T(\pi)$ is the sum of weights (which are independent) along the path $\pi$ ($-T(\pi)$ is the energy of the path $\pi$). We define the partition function and free energy for a polymer from a point $p$ to $q$ as: $$Z_{\beta}(p,q)=\sum_{\pi:p\to q} e^{\beta T(\pi)}, \qquad {\ensuremath{\mathfrak{F}}}_\beta(0,p)=\frac{1}{\beta}\log Z_\beta(0,p).$$ It is expected that the free energy satisfies similar fluctuation theorems to those of LPP (which is the $\beta=\infty$ limit of ${\ensuremath{\mathfrak{F}}}_\beta(0,p)$). \[polymer\_thm\] Consider a directed polymer model and consider a point $p\in {\ensuremath{\mathbb{R}}}_+^{2}$ and a direction $u\in{\ensuremath{\mathbb{R}}}_+^{2} $. If there exist constants (depending on $p$ and $u$ and the model weight distributions): $\ell$ and $\ell'$ non-negative; $\gamma,\gamma'\in (0,1)$; $\nu\in(0,\gamma/\gamma')$; and distributions $D$, $D'$ such that $$\begin{aligned} \chi_1(t)&:=\frac{{\ensuremath{\mathfrak{F}}}_\beta(0,tp)-t\ell}{t^{\gamma}}\Longrightarrow D, \quad \textrm{as }t\textrm{ goes to infinity},\\ \chi_2(t)&:=\frac{{\ensuremath{\mathfrak{F}}}_\beta(0,tp+t^{\nu}u)-t\ell-t^\nu \ell'}{t^{\gamma}}\Longrightarrow D, \quad \textrm{as }t\textrm{ goes to infinity},\\ \chi_3(t)&:=\frac{{\ensuremath{\mathfrak{F}}}_\beta(tp,tp+t^{\nu}u)-t^\nu \ell'}{(t^{\nu})^{\gamma'}}\Longrightarrow D', \quad \textrm{as }t\textrm{ goes to infinity}, \end{aligned}$$ then we have slow decorrelation of the point to point polymer at $tp$, in the direction $u$ and with scaling exponent $\nu$, which is to say that for all $M>0$, $$\lim_{t\to \infty} {\ensuremath{\mathbb{P}}}(|{\ensuremath{\mathfrak{F}}}_\beta(0,tp+t^{\nu}u)-{\ensuremath{\mathfrak{F}}}_\beta(0,tp)-t^\nu \ell'|\geq M t^{\gamma})=0.$$ The direction $u$ for a given $p$ should correspond 
to the characteristic through that point. The proof of this criterion for slow decorrelation is identical to the proof for Theorem \[growth\_thm\] and follows from the computation below (a result of super-additivity yet again): $$\begin{aligned} {\ensuremath{\mathfrak{F}}}_\beta(0,tp+t^{\nu}u) &= \frac{1}{\beta}\log \bigg(\sum_{\pi:0\to tp} e^{\beta T(\pi)} \times \sum_{\pi:tp\to tp+t^{\nu}u} e^{\beta T(\pi)} + \sum_{\substack{\pi:0\to tp+t^{\nu}u,\\ tp\notin \pi}} e^{\beta T(\pi)}\bigg)\\ &= \frac{1}{\beta}\log \bigg(\sum_{\pi:0\to tp} e^{\beta T(\pi)} \times \sum_{\pi:tp\to tp+t^{\nu}u} e^{\beta T(\pi)}\bigg) + X_t\\ &= {\ensuremath{\mathfrak{F}}}_\beta(0,tp) + {\ensuremath{\mathfrak{F}}}_\beta(tp,tp+t^{\nu}u) + X_t. \end{aligned}$$ Here $X_t\geq 0$ and the argument is analogous to (\[growth\_compensator\]). [99]{} G. Amir, I. Corwin, and J. Quastel. Probability distribution of the free energy of the continuum directed random polymer in 1+1 dimensions. , 64:466–537, 2011. J. Baik. Painlevé formulas of the limiting distributions for nonnull complex sample covariance matrices. , 33:205–235, 2006. J. Baik, G. Ben Arous, and S. Péché. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. , 33:1643–1697, 2005. J. Baik, P.L. Ferrari, and S. Péché. Limit process of stationary TASEP near the characteristic line. arxiv:0907.0226; [*Comm. Pure Appl. Math.*]{}, To appear. J. Baik and E. Rains. Limiting distributions for a polynuclear growth model with external sources. , 100:523–542, 2000. G. Ben Arous and I. Corwin. Current Fluctuations for TASEP: A Proof of the Prähofer-Spohn Conjecture. , 39:104–138, 2011. L. Bertini, G. Giacomin. Stochastic Burgers and KPZ equations from particle systems. 183:571–607, 1997. A. Borodin and P.L. Ferrari. Large time asymptotics of growth models on space-like paths I: PushASEP. , 13:1380–1418, 2008. A. Borodin, P.L. Ferrari and M. Prähofer. 
Fluctuations in the discrete TASEP with periodic initial configurations and the $\textrm{Airy}_1$ process. , rpm002, 2007. A. Borodin, P.L. Ferrari, M. Prähofer and T. Sasamoto. Fluctuation properties of the TASEP with periodic initial configuration. , 129:1055–1080, 2007. A. Borodin, P.L. Ferrari and T. Sasamoto. Transition between $\textrm{Airy}_1$ and $\textrm{Airy}_2$ processes and TASEP fluctuations. , 61:1603–1629, 2007. A. Borodin, P.L. Ferrari and T. Sasamoto. Large Time Asymptotics of Growth Models on Space-like Paths II: PNG and Parallel TASEP. , 283:417–449, 2008. A. Borodin, P.L. Ferrari and T. Sasamoto. Two speed TASEP. , 1572–9613, 2009. A. Borodin and S. Péché. Airy kernel with two sets of parameters in directed percolation and random matrix theory. , 132:275–290, 2008. I. Corwin, P.L. Ferrari and S. Péché. Limit processes for TASEP with shocks and rarefaction fans. , 140:232–267, 2010. I. Corwin, and J. Quastel. Universal distribution of fluctuations at the edge of the rarefaction fan. arXiv:1006.1338. I. Corwin, and J. Quastel. Renormalization fixed point of the KPZ universality class. In preparation. L.C. Evans. . AMS, Providence, 1998. P. L. Ferrari. Slow decorrelations in KPZ growth. , P07022, 2008. P. L. Ferrari. From interacting particle systems to random matrices. , P10016, 2010. T. Imamura and T. Sasamoto. Fluctuations of the one-dimensional polynuclear growth model with external sources. , 699:503–544, 2004. T. Imamura and T. Sasamoto. Dynamical properties of a tagged particle in the totally asymmetric simple exclusion process with the step-type initial condition. , 128:799–846, 2007. K. Johansson. Shape fluctuations and random matrices. , 209:437–476, 2000. K. Johansson. Discrete polynuclear growth and determinantal processes. , 242:277–329, 2003. H. Kallabis and J. Krug. Persistence of Kardar-Parisi-Zhang interfaces. , 45:20–25, 1999. M. Kardar, G. Parisi, Y.C Zhang. Dynamic scaling of growth interfaces. , 56:889–892, 1986. J. Krug, H. 
Kallabis, S.N. Majumdar, S.J. Cornell, A.J. Bray and C. Sire. Persistence exponents for fluctuating interfaces. , 56:2702, 1997. J. Krug and H. Spohn. Kinetic roughening of growing surfaces. , Godrèche ed., Cambridge Univ. Press, 1991. T.M. Liggett. . Springer, Berlin, 1999. T.M. Liggett. . Springer, Berlin, reprint of 1985 original edition, 2005. M. Prähofer and H. Spohn. Current fluctuations for the totally asymmetric simple exclusion process. , 51:185–204, 2002. M. Prähofer and H. Spohn. Scale invariance of the PNG droplet and the Airy process. , 108:1071–1106, 2002. H. Rost. Non-equilibrium behavior of a many particle process: Density profile and the local equilibrium. , 58:41–53, 1981. T. Sasamoto and H. Spohn. Universality of the one-dimensional KPZ equation. , 104:230602, 2010. T. Seppäläinen. Scaling for a one-dimensional directed polymer with constrained endpoints. arXiv:0911.2446 T. Seppäläinen. Hydrodynamic scaling, convex duality and asymptotic shapes of growth models. 4:1–26, 1998. C. Tracy and H. Widom. Level-spacing distributions and the Airy kernel. , 159:151–174, 1994. C. Tracy and H. Widom. Integral formulas for the asymmetric simple exclusion process. , 279:815–844, 2008. C. Tracy and H. Widom. A Fredholm determinant representation in ASEP. , 132:291–300, 2008. C. Tracy and H. Widom. Asymptotics in ASEP with step initial condition. 290:129–154, 2009. C. Tracy and H. Widom. Total current fluctuations in ASEP. 50:095204, 2009. C. Tracy and H. Widom. On ASEP with Step Bernoulli Initial Condition. 137:291–300, 2008. [^1]: Here we use $t$ as large parameter. In the literature also the choice $\epsilon^{-z}=t$ and $\epsilon\to 0$ is used. [^2]: As explained in the forthcoming paper [@CQ:2011r], this space-time process is expected to be a non-trivial renormalization fixed point for the whole KPZ universality class. See also [@Krug99; @Spo95; @Maj97] for previous discussion of space-time scalings in the physics literature.
[^3]: Results for joint distributions at different times before taking $t$ to infinity have been derived, first in the problem of tagged particle in the TASEP [@SI07], and then in more general models [@BF:2008l; @BFS:2008l]. However, the different times are restricted to lie in an interval of width $O(t^{2/3})$. [^4]: The mapping requires a geometric number of zero weights along the boundary which do not affect asymptotics.
--- abstract: | We describe how an Ore category with a Garside family can be used to construct a classifying space for its fundamental group(s). The construction simultaneously generalizes Brady’s classifying space for braid groups and the Stein–Farley complexes used for various relatives of Thompson’s groups. It recovers the fact that Garside groups have finite classifying spaces. We describe the categories and Garside structures underlying certain Thompson groups. The Zappa–Szép product of categories is introduced and used to construct new categories and groups from known ones. As an illustration of our methods we introduce the group Braided $T$ and show that it is of type $F_\infty$. address: 'Faculty of Mathematics, Bielefeld University, Postfach 100131, 33501 Bielefeld, Germany' author: - Stefan Witzel bibliography: - 'ore\_cats.bib' title: | Classifying spaces from\ Ore categories with Garside families --- Our main objects of study are groups that arise as the fundamental group of an Ore category with a Garside family. The two basic motivating examples are the braid groups ${\textsc{Braid}}_n$ and Thompson’s group $F$. We provide tools to construct classifying spaces with good finiteness properties for these groups. Our first main result can be formulated as follows (see Section \[sec:ore\_cats\] for definitions and Section \[sec:finiteness\_properties\] for the general version). \[thm:main\_complex\] Let ${\mathcal{C}}$ be a small right-Ore category that is factor-finite, let $\Delta$ be a right-Garside map, and let $* \in {\mathrm{Ob}}({\mathcal{C}})$. There is a contractible simplicial complex $X$ on which $G = \pi_1({\mathcal{C}},*)$ acts. The space is covered by the $G$-translates of compact subcomplexes $K_x, x \in {\mathrm{Ob}}({\mathcal{C}})$. Every stabilizer is isomorphic to a finite index subgroup of ${\mathcal{C}}^\times(x,x)$ for some $x \in {\mathcal{C}}$.
Taking ${\mathcal{C}}$ to be a Garside monoid and $\Delta$ to be the Garside element, one immediately recovers the known fact that Garside groups, and braid groups in particular, have finite classifying spaces [@charney04]. In fact, if ${\mathcal{C}}$ is taken to be the dual braid monoid, the quotient $G \backslash X$ is precisely Brady’s classifying space for ${\textsc{Braid}}_n$ [@brady01]. In the case of Thompson’s group $F$ the complex in Theorem \[thm:main\_complex\] is the Stein–Farley complex. The action is not cocompact in this case because ${\mathcal{C}}$ has infinitely many objects. In order to obtain cocompact actions on highly connected spaces, we employ Morse theory. \[thm:main\_fn\] Let ${\mathcal{C}}$, $\Delta$, $*$ be as in Theorem \[thm:main\_complex\] and let $\rho \colon {\mathrm{Ob}}({\mathcal{C}}) \to {\mathbb{N}}$ be a height function such that $\{x \in {\mathrm{Ob}}({\mathcal{C}}) \mid \rho(x) \le n\}$ is finite for every $n \in {\mathbb{N}}$. Assume that 1. ${\mathcal{C}}^\times(x,x)$ is of type $F_n$ for all $x$, 2. ${\lvert E(x) \rvert}$ is $(n-1)$-connected for all $x$ with $\rho(x)$ beyond a fixed bound. Then $\pi_1({\mathcal{C}},*)$ is of type $F_n$. The methods for proving Theorem \[thm:main\_fn\] have been repeatedly used to show that various Thompson groups are of type $F_\infty$ [@brown87; @stein92; @farley03; @fluch13; @bux16; @witzel16a; @martinezperez16; @belk]. Theorem \[thm:main\_fn\] could be used as a drop-in replacement in most of the proofs. The complexes ${\lvert E(x) \rvert}$ depend on ${\mathcal{C}}$ and $\Delta$ and are described in Section \[sec:proof\_scheme\]. In general, verifying condition 2 is the key problem in all the $F_\infty$ proofs mentioned. Theorem \[thm:main\_fn\] provides a general scheme for proving that an (eligible) group is of type $F_\infty$: first describe the category, second analyze the complexes ${\lvert E(x) \rvert}$, and then apply the theorem.
This scheme will be illustrated in Section \[sec:indirect\_product\_examples\] (describe the category) and Section \[sec:finiteness\_properties\_examples\] (analyze the complexes, apply the theorem) on the examples of Thompson’s groups $F$, $T$ and $V$, their braided versions and some other groups. To our knowledge this is the first time that Garside structures are studied in connection with Thompson groups. Along the way we define the Thompson group $\mathit{BT}$, *braided $T$*, and prove (see Theorem \[thm:braided\_finiteness\_properties\]): \[thm:bt\] The braided Thompson group $\mathit{BT}$ is of type $F_\infty$. Although braided versions of $V$ [@dehornoy06; @brin07] and $F$ [@brady08] exist in the literature, our main merit is to be able to define braided $T$. The fact that it is $F_\infty$ then follows from Theorem \[thm:main\_fn\] and results from [@bux16]. The reason that $\mathit{BT}$ was not defined before is that the natural categorical approach was artificially and painfully avoided in the past, see Remark \[rem:bt\]. A helpful tool in describing the needed categories is the Zappa–Szép product ${\mathcal{F}}\bowtie {\mathcal{G}}$ of two categories ${\mathcal{F}},{\mathcal{G}}$. We call it the *indirect product* and introduce it in Section \[sec:indirect\_product\]. The article is organized as follows. The basic notions are introduced in Section \[sec:ore\_cats\]. The underlying structures for braid groups and Thompson’s group $F$ are described in Section \[sec:f\]. Section \[sec:finiteness\_properties\] contains the main construction and the proofs of Theorems \[thm:main\_complex\] and \[thm:main\_fn\]. The indirect product of categories is introduced in Section \[sec:indirect\_product\] and is used in Section \[sec:indirect\_product\_examples\] to construct the categories underlying Thompson’s groups and their braided versions.
In Section \[sec:finiteness\_properties\_examples\] Theorem \[thm:main\_fn\] is applied to the examples from Section \[sec:indirect\_product\_examples\] to deduce finiteness properties, among them Theorem \[thm:bt\]. Since the results about finiteness properties and the indirect product may be of independent interest, we include the following leitfaden: Section 1 is a prerequisite for Sections 2, 3 and 4; Sections 2 and 4 lead into Section 5; Sections 3 and 5 lead into Section 6. This article arose out of the introduction to the author’s habilitation thesis [@witzel16b] which contains further examples not covered here. Categories generalizing monoids {#sec:ore_cats} =============================== We start by collecting basic notions of categories regarding them as generalizations of monoids. Our exposition is based on [@dehornoy15 Chapter II] where the perspective is similar. The main difference is notational, see Remark \[rem:dehornoy\_compatibility\] below. A monoid may be regarded as (the set of morphisms of) a category with a single object. For us categories will play the role of generalized monoids where the multiplication is only partially defined. In particular, all categories in this article will be small. The requirement that they be locally small is important and taking them to be small is convenient; for example, it allows us to talk about morphisms of categories as maps of sets. Let ${\mathcal{C}}$ be a category.
Notationally, we follow [@dehornoy15] in denoting the set of morphisms of ${\mathcal{C}}$ by ${\mathcal{C}}$ as well (thinking of them as elements), while the objects are denoted ${\mathrm{Ob}}({\mathcal{C}})$. The identity at $x$ will be denoted $1_x$. If $f$ is a morphism from $y$ to $x$ we call $y$ the *source* and $x$ the *target* of $f$. Our notation for composition is the familiar one for functions, that is, if $f$ is a morphism from $y$ to $x$ and $g$ is a morphism from $z$ to $y$ then $fg$ exists and is a morphism from $z$ to $x$. If $x, y \in {\mathrm{Ob}}({\mathcal{C}})$ then the set of morphisms from $y$ to $x$ is denoted ${\mathcal{C}}(x,y)$, the set of morphisms from $y$ to any object is denoted ${\mathcal{C}}(-,y)$ and the set of morphisms from any object to $x$ is denoted ${\mathcal{C}}(x,-)$. This may be slightly unusual but renders the following intuitive expression valid: $$f \in {\mathcal{C}}(x,y), g \in {\mathcal{C}}(y,z) \Rightarrow fg \in {\mathcal{C}}(x,z)\text{.}$$ The corresponding diagram is $$x \xrightarrow{\;f\;} y \xrightarrow{\;g\;} z$$ with the composite $fg$ drawn as an arrow from $x$ to $z$. When we write an expression involving a product of morphisms, the requirement that this product exists is usually an implicit condition of the expression. Thus $fg = h$ means that the source of $f$ is the target of $g$ and that the equality holds. \[rem:dehornoy\_compatibility\] The net effect of the various differences in notation is that our formalism is consistent with [@dehornoy15], only the meaning of source/target, from/to, and the direction of arrows are switched. The reason for this decision is that some of our morphisms will be group elements which we want to act from the left.
Groupoids --------- A morphism $f \in {\mathcal{C}}(x,y)$ is *invertible* if there is an *inverse*, namely a morphism $g \in {\mathcal{C}}(y,x)$ such that $fg = 1_x$ and $gf = 1_y$. A *groupoid* is a category in which every morphism is invertible. Just as every monoid naturally maps to a group, every category naturally maps to a groupoid, see [@dehornoy15 Section 3.1]: For every category ${\mathcal{C}}$ there is a groupoid ${\mathcal{G}pd}({\mathcal{C}})$ and a morphism $\iota \colon {\mathcal{C}}\to {\mathcal{G}pd}({\mathcal{C}})$ with the following universal property: if $\varphi \colon {\mathcal{C}}\to {\mathcal{G}}$ is a morphism to a groupoid then there is a unique morphism $\hat{\varphi} \colon {\mathcal{G}pd}({\mathcal{C}}) \to {\mathcal{G}}$ such that $\varphi = \hat{\varphi} \circ \iota$. The groupoid ${\mathcal{G}pd}({\mathcal{C}})$ and the morphism $\iota$ are determined by ${\mathcal{C}}$ uniquely up to unique isomorphism. We call ${\mathcal{G}pd}({\mathcal{C}})$ the *enveloping groupoid* of ${\mathcal{C}}$. The morphism $\iota$ is a bijection on objects but it is not typically injective (on morphisms). One way to think about the enveloping groupoid is as the fundamental groupoid of ${\mathcal{C}}$: The *nerve* of ${\mathcal{C}}$ is the simplicial set whose $k$-simplices are diagrams $$x_0 \xrightarrow{\;f_1\;} x_1 \xrightarrow{\;f_2\;} x_2 \to \cdots \to x_{k-1} \xrightarrow{\;f_k\;} x_k$$ in ${\mathcal{C}}$. The $i$th face is obtained by deleting $x_i$ and replacing $f_i, f_{i+1}$ by $f_i f_{i+1}$ and the $j$th degeneracy is obtained by inserting $1_{x_j}$ between $f_j$ and $f_{j+1}$. The groupoid ${\mathcal{G}pd}({\mathcal{C}})$ is canonically isomorphic to the fundamental groupoid of the realization of the nerve of ${\mathcal{C}}$.
In particular, the fundamental group of ${\mathcal{C}}$ in an object $x$ is just the set of endomorphisms of ${\mathcal{G}pd}({\mathcal{C}})$ in $x$: $\pi_1({\mathcal{C}},x) = {\mathcal{G}pd}({\mathcal{C}})(x,x)$. Noetherianity conditions ------------------------ If $fg = h$ then we say that $f$ is a *left-factor* of $h$ and that $h$ is a *right-multiple* of $f$. It is a *proper left-factor* respectively *proper right-multiple* if $g$ is not invertible. We say that $f$ is a *(proper) factor* of $h$ if $efg = h$ (and one of $e$ and $g$ is not invertible). The category ${\mathcal{C}}$ is *Noetherian* if there is no infinite sequence $f_0, f_1, \ldots$ such that $f_{i+1}$ is a proper factor of $f_i$. It is said to be *strongly Noetherian* if there exists a map $\delta \colon {\mathcal{C}}\to {\mathbb{N}}$ that satisfies $\delta(fg) \ge \delta(f) + \delta(g)$ and for $f \in {\mathcal{C}}$ non-invertible $\delta(f) \ge 1$. Clearly, a strongly Noetherian category is Noetherian. See [@dehornoy15 Sections II.2.3, II.2.4] for a detailed discussion. We call a *height function* a map $\rho \colon {\mathrm{Ob}}({\mathcal{C}}) \to {\mathbb{N}}$ such that $\rho(x) = \rho(y)$ if ${\mathcal{C}}(x,y)$ contains an invertible morphism and $\rho(x) < \rho(y)$ if ${\mathcal{C}}(x,y)$ contains a non-invertible morphism. Note that the existence of a height function implies strong Noetherianity by taking $\delta(f) = \rho(y) - \rho(x)$ if $f \in {\mathcal{C}}(x,y)$. We say that ${\mathcal{C}}$ is *factor-finite* if every morphism in ${\mathcal{C}}$ has only finitely many factors up to pre- and post-composition by invertibles. This condition implies strong Noetherianity (cf. [@dehornoy15 Proposition 2.48]). Ore categories {#sec:ore_property} -------------- Two elements $g,h \in {\mathcal{C}}(x,-)$ have a *common right-multiple* $d$ if there exist elements $e,f \in {\mathcal{C}}$ with $ge = hf = d$.
It is a *least common right-multiple* if every other common right-multiple is a right-multiple of $d$. We say that ${\mathcal{C}}$ *has common right-multiples* if any two elements with the same target have a common right-multiple. We say that it *has conditional least common right-multiples* if any two elements that have a common right-multiple have a least common right-multiple. We say that it *has least common right-multiples* if any two elements with the same target have a least common right-multiple. We say that ${\mathcal{C}}$ is *left-cancellative* if $ef = eg$ implies $f=g$ for all $e,f,g \in {\mathcal{C}}$. All of these notions have obvious analogues with left and right interchanged. A category is *cancellative* if it is left-cancellative and right-cancellative. \[lem:cancellative\_inverse\] If ${\mathcal{C}}$ is cancellative and $f \in {\mathcal{C}}$ has a left-inverse or right-inverse then it is invertible. Let $f \in {\mathcal{C}}(x,y)$ and assume that there is an $e \in {\mathcal{C}}(y,x)$ that is a left-inverse for $f$, that is, $ef = 1_y$. Then $fef = f$ and canceling $f$ on the right shows that $e$ is also a right-inverse. The other case is symmetric. \[lem:gcd\_vs\_lcm\] Let ${\mathcal{C}}$ be strongly Noetherian. Then ${\mathcal{C}}$ has least common right-multiples if and only if it has greatest common left-factors. Suppose that ${\mathcal{C}}$ has least common right-multiples and let $f,g \in {\mathcal{C}}(x,-)$. Let $s$ and $t$ be common left-factors of $f,g$ and let $r$ be a least common right-multiple of $s$ and $t$. Then, since $f$ and $g$ are common right-multiples of $s$ and $t$, they are right-multiples of $r$, meaning that $r$ is a common left-factor. If $s$ and $t$ are not right-multiples of each other then $\delta(r) > \delta(s), \delta(t)$ and an induction on $\delta(r) \le \delta(f),\delta(g)$ over the common left-factors of $f$ and $g$ produces a greatest common left-factor. The other direction is analogous.
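As a toy illustration (not taken from the article), consider the multiplicative monoid of positive integers: it is cancellative and commutative, so left and right notions coincide, right-multiples of $a$ are exactly the integer multiples of $a$, least common right-multiples are ordinary least common multiples, and greatest common left-factors are ordinary greatest common divisors, in line with Lemma \[lem:gcd\_vs\_lcm\]. A minimal sketch checking the defining properties numerically:

```python
from math import gcd

def lcm(a, b):
    # least common right-multiple in the monoid (Z_{>0}, *):
    # right-multiples of a are exactly the integer multiples of a
    return a * b // gcd(a, b)

a, b = 12, 18
l, g = lcm(a, b), gcd(a, b)
assert (l, g) == (36, 6)

# l is a common right-multiple of a and b ...
assert l % a == 0 and l % b == 0
# ... and every common right-multiple (up to a bound) is a right-multiple of l
for d in range(1, 10 * l):
    if d % a == 0 and d % b == 0:
        assert d % l == 0

# dually, g is a common left-factor ...
assert a % g == 0 and b % g == 0
# ... and every common left-factor is a left-factor of g
for s in range(1, max(a, b) + 1):
    if a % s == 0 and b % s == 0:
        assert g % s == 0
```

The exhaustive loops are of course redundant for integers; they merely spell out the universal properties being claimed.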
We say that ${\mathcal{C}}$ is *right/left-Ore* if it is cancellative and has common right/left-multiples. A category ${\mathcal{C}}$ that is right-Ore embeds in a groupoid ${\mathcal{G}}$ such that every element $h \in {\mathcal{G}}$ can be written as $h = fg^{-1}$ with $f, g \in {\mathcal{C}}$. The groupoid ${\mathcal{G}}$ in the theorem is called the *Ore localization* ${\mathcal{O}re}({\mathcal{C}})$ of ${\mathcal{C}}$. Using the universal property, it is not hard to see that it coincides with the enveloping groupoid of ${\mathcal{C}}$. The fundamental group of an Ore category has a particularly easy description. In general, an element of $\pi_1({\mathcal{C}},x)$ is represented by a sequence $f_0g_1^{-1}f_1 \cdots f_{n-1}g_n^{-1}$ with $f_i, g_i \in {\mathcal{C}}(x_i,-)$ and $f_j, g_{j+1} \in {\mathcal{C}}(-,y_j)$. But if ${\mathcal{C}}$ has common right-multiples, then $g_1^{-1}f_1$ can be rewritten as $f_1'{g_1'}^{-1}$ and so the sequence can be shortened to $(f_0f_1')(g_2g_1')^{-1}f_2 \cdots f_{n-1}g_n^{-1}$. Iterating this argument, we find that every element of $\pi_1({\mathcal{C}},x)$ is of the form $fg^{-1}$ with $f,g \in {\mathcal{C}}(x,-)$. Presentations ------------- We introduce presentations for categories. This is analogous to the situation for monoids and we will be brief. See [@dehornoy15 Section II.1.4] for details. A (small) *precategory* ${\mathcal{S}}$ consists of a set of objects ${\mathrm{Ob}}({\mathcal{S}})$ and a set of morphisms ${\mathcal{S}}$. As for categories, each morphism has a *source* and a *target* that are objects and it is a morphism from the source to its target. The set of morphisms from $y$ to $x$ is denoted ${\mathcal{S}}(x,y)$. The monoidal aspects of a category are missing in a precategory: it does not have identities or a composition. Given a precategory ${\mathcal{S}}$ there exists a free category ${\mathcal{S}}^*$ generated by ${\mathcal{S}}$.
It has the universal property that if $\phi \colon {\mathcal{S}}\to {\mathcal{C}}$ is a morphism of precategories and ${\mathcal{C}}$ is a category, then $\phi$ uniquely factors through ${\mathcal{S}}\to {\mathcal{S}}^*$. One can construct ${\mathcal{S}}^*$ to have the same objects as ${\mathcal{S}}$ and to have as morphisms the composable finite words in ${\mathcal{S}}$. A *relation* is a pair $r = s$ of morphisms in ${\mathcal{S}}^*$ with the same source and target. If $\phi \colon {\mathcal{S}}^* \to {\mathcal{C}}$ is a morphism, the relation *holds* in ${\mathcal{C}}$ if $\phi(r) = \phi(s)$. A *presentation* consists of a precategory ${\mathcal{S}}$ and a family of relations ${\mathcal{R}}$ in ${\mathcal{S}}^*$. The category it presents is denoted ${\left\langle{\mathcal{S}}\mid {\mathcal{R}}\right\rangle}$. It has the universal property that if $\phi \colon {\mathcal{S}}\to {\mathcal{C}}$ is a morphism of precategories and ${\mathcal{C}}$ is a category in which all relations in ${\mathcal{R}}$ hold then $\phi$ uniquely factors through ${\mathcal{S}}\to {\left\langle{\mathcal{S}}\mid {\mathcal{R}}\right\rangle}$. One can construct ${\left\langle{\mathcal{S}}\mid {\mathcal{R}}\right\rangle}$ by quotienting ${\mathcal{S}}^*$ by the symmetric, transitive closure of the relations. Garside families ---------------- The following notions are at the core of [@dehornoy15]. We will sometimes need these notions with left and right reversed. What in [@dehornoy15] is referred to as a Garside family in a left-cancellative category will be called a left-Garside family here to avoid confusion in categories that are left- and right-cancellative. Let ${\mathcal{C}}$ be a left-cancellative category and let ${\mathcal{S}}\subseteq {\mathcal{C}}$ be a set of morphisms. We denote by ${\mathcal{S}}^\sharp$ the set ${\mathcal{C}}^\times \cup {\mathcal{S}}{\mathcal{C}}^\times$ of morphisms that are invertible or products $su$ of an element $s \in {\mathcal{S}}$ with an invertible $u$.
We say that ${\mathcal{S}}^\sharp$ is *closed under (left/right-) factors* if every (left/right-) factor of an element in ${\mathcal{S}}^\sharp$ is again in ${\mathcal{S}}^\sharp$. An element $s \in {\mathcal{S}}$ is an *${\mathcal{S}}$-head* of $f \in {\mathcal{C}}$ if $s$ is a left-factor of $f$ and every left-factor of $f$ is a left-factor of $s$ [@dehornoy15 Definition IV.1.10]. The set ${\mathcal{S}}$ is a *left-Garside family* if ${\mathcal{S}}^\sharp$ generates ${\mathcal{C}}$, is closed under right-factors, and every non-invertible element of ${\mathcal{C}}$ admits an ${\mathcal{S}}$-head [@dehornoy15 Proposition IV.1.24]. If ${\mathcal{S}}$ is a left-Garside family then ${\mathcal{C}}^\times {\mathcal{S}}\subseteq {\mathcal{S}}^\sharp$, so in fact ${\mathcal{S}}^\sharp = {\mathcal{C}}^\times \cup {\mathcal{C}}^\times {\mathcal{S}}{\mathcal{C}}^\times$ [@dehornoy15 Proposition III.1.39]. All notions readily translate to right-Garside families, except that the head is called an *${\mathcal{S}}$-tail* if ${\mathcal{S}}$ is a right-Garside family. Note that ${\mathcal{S}}^\sharp$ is defined as ${\mathcal{C}}^\times \cup {\mathcal{C}}^\times{\mathcal{S}}$ when ${\mathcal{S}}$ is (regarded as) a right-Garside family. We will be interested in Garside families that are closed under factors. We describe two situations where this is the case. Let ${\mathcal{C}}$ be left-cancellative and consider a map $\Delta \colon {\mathrm{Ob}}({\mathcal{C}}) \to {\mathcal{C}}$ with $\Delta(x) \in {\mathcal{C}}(x,-)$. We write $$\begin{aligned} {\operatorname{Div}}(\Delta) &= \{g \in {\mathcal{C}}\mid \exists x\ g \in {\mathcal{C}}(x,-)\ \exists h \in {\mathcal{C}}\ gh = \Delta(x)\}\\ {\widetilde{\operatorname{Div}}}(\Delta) &= \{h \in {\mathcal{C}}\mid \exists x\ \exists g \in {\mathcal{C}}(x,-)\ gh = \Delta(x)\}\end{aligned}$$ for the families of left- respectively right-factors of morphisms in the image of $\Delta$.
Such a map is a *right-Garside map* if ${\operatorname{Div}}(\Delta)$ generates ${\mathcal{C}}$, if ${\widetilde{\operatorname{Div}}}(\Delta) \subseteq {\operatorname{Div}}(\Delta)$, and if for every $g \in {\mathcal{C}}(x,-)$ the elements $g$ and $\Delta(x)$ admit a greatest common left-factor. If $\Delta$ is a right-Garside map then ${\operatorname{Div}}(\Delta)$ is a left-Garside family closed under left-factors and thus under factors [@dehornoy15 Proposition V.1.20]. We note the following for future reference. \[obs:gars\_map\_to\_family\] Let ${\mathcal{C}}$ be a left-cancellative, factor-finite category and let $\Delta$ be a right-Garside map. Then ${\mathcal{S}}{\mathrel{\vcentcolon =}}{\operatorname{Div}}(\Delta)$ is a left-Garside family closed under factors and ${\mathcal{S}}(x,-)$ is finite for every $x \in {\mathrm{Ob}}({\mathcal{C}})$. Let ${\mathcal{C}}$ be right-Ore. A right-Garside family is *strong* if for $s,t \in {\mathcal{S}}^\sharp$ there exist $s',t' \in {\mathcal{S}}^\sharp$ such that $st' = ts'$ is a least common right-multiple of $s$ and $t$ [@dehornoy15 Definition 2.29]. If ${\mathcal{S}}$ is a strong right-Garside family then ${\mathcal{S}}^\sharp$ is also closed under left-factors and thus is closed under factors [@dehornoy15 Proposition 1.35]. Fundamental examples {#sec:f} ==================== Thompson’s group $\mathbf{F}$ and the category $\mathbf{{\mathcal{F}}}$ ----------------------------------------------------------------------- Our description of Thompson’s groups is not the standard one, which can be found in [@cannon96]. An element of Thompson’s group $F$ is given by a pair $(T_+,T_-)$ of finite rooted binary trees with the same number of leaves, say $n$. If we add a caret to the $i$th leaf ($1 \le i \le n$) of $T_+$, that is we make it into an inner vertex with two leaves below it, we obtain a tree $T_+'$ with $n+1$ leaves. If we also add a caret to the $i$th leaf of $T_-$ we obtain another tree $T_-'$.
We want to regard $(T_+',T_-')$ as equivalent to $(T_+,T_-)$ so we take the reflexive, symmetric, transitive closure of the operation just described and denote the equivalence class by $[T_+,T_-]$. Thompson’s group $F$ is the set of equivalence classes $[T_+,T_-]$. In order to define the product of two elements $[T_+,T_-]$ and $[S_+,S_-]$ we note that we can add carets to both tree pairs to get representatives $[T_+',T'] = [T_+,T_-]$ and $[T',T_-'] = [S_+,S_-]$ where the second tree of the first element and the first tree of the second element are the same. Therefore multiplication is completely defined by declaring that $[T_+',T'] \cdot [T',T_-'] = [T_+',T_-']$. It is easy to see that $[T,T]$ is the neutral element for any tree $T$ and that $[T_+,T_-]^{-1} = [T_-,T_+]$. We have defined the group $F$ in such a way that a categorical description imposes itself, cf. [@belk04]. We define ${\mathcal{F}}$ to be the category whose objects are positive natural numbers and whose morphisms $m {\leftarrow}n$ are binary forests on $m$ roots with $n$ leaves. Multiplication of a forest $E \in {\mathcal{F}}(\ell,m)$ and a forest $F \in {\mathcal{F}}(m,n)$ is defined by identifying the leaves of $E$ with the roots of $F$ and taking $EF$ to be the resulting forest. Pictorially this corresponds to stacking the two forests on top of each other (see Figure \[fig:forest\_mult\]).
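The stacking multiplication can be made concrete in a small sketch (a toy encoding chosen here, not taken from the article): a binary tree is either a leaf, encoded as `None`, or a pair of subtrees; a forest is a tuple of trees; composition grafts the trees of the second forest onto the leaves of the first.

```python
def leaves(tree):
    """Number of leaves of a single binary tree."""
    if tree is None:
        return 1
    left, right = tree
    return leaves(left) + leaves(right)

def n_leaves(forest):
    return sum(leaves(t) for t in forest)

def graft(tree, subtrees):
    """Replace the leaves of `tree` (left to right) by trees drawn
    from the iterator `subtrees`."""
    if tree is None:
        return next(subtrees)
    left, right = tree
    return (graft(left, subtrees), graft(right, subtrees))

def compose(E, F):
    """Stack F (m roots, n leaves) under E (l roots, m leaves): the
    leaves of E are identified with the roots of F, giving EF."""
    assert n_leaves(E) == len(F), "leaves of E must match roots of F"
    it = iter(F)
    return tuple(graft(t, it) for t in E)

caret = (None, None)
E = (caret,)        # one root, a single caret: a morphism 1 <- 2
F = (caret, None)   # two roots, caret on the first: a morphism 2 <- 3
EF = compose(E, F)  # a single tree with three leaves: a morphism 1 <- 3
assert n_leaves(EF) == 3
assert EF == (((None, None), None),)
```

The final assertion matches the picture: grafting a caret onto the left leaf of a caret yields the left-comb tree with three leaves.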
(Figure \[fig:forest\_mult\]: two forests are multiplied by identifying the leaves of the first with the roots of the second and stacking them on top of each other.) \[prop:thomcat\_ore\] The category ${\mathcal{F}}$ is strongly Noetherian and right-Ore. In fact, it has least common right-multiples and greatest common left-factors. The identity map $\rho \colon {\mathbb{N}}= {\mathrm{Ob}}({\mathcal{F}}) \to {\mathbb{N}}$ is a height function on ${\mathcal{F}}$. Thus ${\mathcal{F}}$ is strongly Noetherian. The least common right-multiple of two forests in ${\mathcal{F}}(m,-)$ is their union (regarding both forests as subforests of the leafless binary forest on $m$ roots). The greatest common left-factor is their intersection. Left cancellativity means that given a forest $f \in {\mathcal{F}}(m,\ell)$ and a left-factor $a \in {\mathcal{F}}(m,n)$ the forest $b \in{\mathcal{F}}(n,\ell)$ with $f = ab$ is unique. Indeed it is the forest obtained from $f$ by removing $a$ and turning the leaves of $a$ into roots.
Right cancellativity means that $a$ is uniquely determined if $f = ab$. To see this, we identify the leaves of $f$ with the leaves of $b$. Now the common ancestor in $f$ of a set of leaves of a tree of $b$ is a leaf of $a$ and every leaf of $a$ arises in that way. The proposition together with the remark at the end of Section \[sec:ore\_property\] shows that every element of $\pi_1({\mathcal{F}},1)$ is represented by $fg^{-1}$ where $f,g \in {\mathcal{F}}(1,-)$ are binary trees. Cancellativity ensures that $fg^{-1} = {f'}{g'}^{-1}$ if and only if there exist $h$ and $h'$ such that $fh=f'h'$ and $gh=g'h'$. Comparing this description with our definition of $F$ we see: Thompson’s group $F$ is isomorphic to $\pi_1({\mathcal{F}},1)$. Later on it will be convenient to have a presentation for ${\mathcal{F}}$. The shape of the relations will not come as a surprise to the reader familiar with Thompson’s groups. A proof can be found in [@witzel16b]. \[prop:thomcat\_presentation\] The category ${\mathcal{F}}$ has a presentation with generating morphisms $\lambda_i^n \colon n {\leftarrow}n+1$ for $1 \le i \le n$ subject to the relations $$\label{eq:caret_relation} \lambda^n_i\lambda^{n+1}_j = \lambda^n_j\lambda^{n+1}_{i+1} \quad \text{for}\quad 1 \le j < i \le n\text{.}$$ Every morphism in ${\mathcal{F}}(m,n)$ can be written in a unique way as $\lambda_{i_m}^m \cdots \lambda_{i_{n-1}}^{n-1}$ with $(i_j)_j$ non-decreasing. \[rem:commuting\] The relations reflect a commutation phenomenon: for any forest adding a caret to the $i$th leaf and then to the $j$th leaf has the same effect as doing it the other way around. That it does not algebraically look like a commutation relation is due to the fact that the index of the right one of the two leaves has changed when adding the left caret.
This is inevitable in the present setup because the $i$th leaf has no identity as a particular vertex in the infinite rooted binary tree but simultaneously represents all $i$th leaves of trees with $n$ leaves. A larger category in which the relations are algebraically commutation relations will appear in Section \[sec:graph\_rewriting\]. Note that since ${\mathcal{F}}$ is connected, the fundamental groups at different objects are isomorphic. This corresponds to the elementary fact that the tree pair $(T_+,T_-)$ representing an element of $F$ can always be chosen so that $T_+,T_-$ contain an arbitrary fixed subtree. The most convenient way to exhibit a Garside family in ${\mathcal{F}}$ is by describing a right-Garside map: for every $n \in {\mathbb{N}}= {\mathrm{Ob}}({\mathcal{F}})$ let $\Delta(n)$ be the forest where every tree is a single caret. \[prop:f\_garside\_map\] The map $\Delta \colon {\mathrm{Ob}}({\mathcal{F}}) \to {\mathcal{F}}$ is a right-Garside map. The family ${\operatorname{Div}}(\Delta)$ consists of morphisms where every tree is either a single caret or trivial. Every forest can be built up from these, for example by adding one caret at a time. This shows that ${\operatorname{Div}}(\Delta)$ generates ${\mathcal{F}}$. The family ${\widetilde{\operatorname{Div}}}(\Delta)$ also consists of morphisms where every tree is either a single caret or trivial with the additional condition that the total number of leaves is even and the left leaf of every caret has an odd index. In particular ${\widetilde{\operatorname{Div}}}(\Delta) \subseteq {\operatorname{Div}}(\Delta)$. If $g \in {\mathcal{F}}(x,-)$ then $g$ and $\Delta(x)$ have a greatest common left-factor by Proposition \[prop:thomcat\_ore\]. With Observation \[obs:gars\_map\_to\_family\] we get: \[cor:f\_garside\_family\] The category ${\mathcal{F}}$ admits a left-Garside family ${\mathcal{S}}$ that is closed under factors such that ${\mathcal{S}}(x,-)$ is finite for every $x \in {\mathcal{F}}$.
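Under the same kind of toy encoding as before (an illustration, not code from the article), the defining relation of Proposition \[prop:thomcat\_presentation\] can be checked mechanically: $\lambda_i^n$ is modelled as the forest with $n$ roots carrying a single caret at root $i$, and composition stacks forests.

```python
# Toy check of the caret relation
#   lambda^n_i lambda^{n+1}_j = lambda^n_j lambda^{n+1}_{i+1}  for j < i,
# in a forest encoding: a leaf is None, an inner vertex is a pair of
# subtrees, a forest is a tuple of trees.

def graft(tree, it):
    """Replace the leaves of `tree` (left to right) by trees from `it`."""
    if tree is None:
        return next(it)
    left, right = tree
    return (graft(left, it), graft(right, it))

def compose(E, F):
    """Stack forest F under forest E, identifying leaves of E with roots of F."""
    it = iter(F)
    result = tuple(graft(t, it) for t in E)
    for _ in it:  # all roots of F must have been consumed
        raise ValueError("leaf/root mismatch")
    return result

def lam(n, i):
    """lambda_i^n: n roots with a single caret at root i (1-indexed)."""
    return tuple((None, None) if k == i else None for k in range(1, n + 1))

for n in range(2, 7):
    for i in range(2, n + 1):
        for j in range(1, i):
            assert compose(lam(n, i), lam(n + 1, j)) \
                == compose(lam(n, j), lam(n + 1, i + 1))
```

The index shift $i \mapsto i+1$ on the right-hand side is exactly the renumbering discussed in Remark \[rem:commuting\]: adding the caret at leaf $j < i$ first pushes leaf $i$ one position to the right.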
The family ${\operatorname{Div}}(\Delta)$ is in fact a right- as well as a left-Garside family. It is strong as a right-Garside family but not as a left-Garside family. If instead of rooted binary trees one takes rooted $n$-ary trees ($n \ge 2$) in the description above, one obtains the category ${\mathcal{F}}_n$. Everything is analogous to ${\mathcal{F}}$ but the new aspect that occurs for $n > 2$ is that the category is no longer connected: the number of leaves of an $n$-ary forest with $r$ roots will necessarily be congruent to $r$ modulo $n-1$, hence there is no morphism in ${\mathcal{F}}_n$ connecting objects that are not congruent modulo $n-1$. As a consequence, the object at which the fundamental group is taken matters a priori, and we obtain $n-1$ different groups for each category. It turns out, however, that the fundamental groups are in fact isomorphic independently of the basepoint and so one only defines $$F_{n} = \pi_1({\mathcal{F}}_n,1)\text{.}$$ The groups $F_n$ are the smallest examples of the *Higman–Thompson groups* introduced by Higman [@higman74]. As we will see later, the fundamental groups of the different components are non-isomorphic in the categories for the larger Higman–Thompson groups. Braid groups {#sec:garside} ------------ The *braid group* on $n$ strands, introduced by Artin [@artin25], is the group given by the presentation $$\begin{aligned} {\textsc{Braid}}_n = \left\langle \sigma_1,\ldots,\sigma_{n-1} \begin{array}{c|cl} &\sigma_i\sigma_j = \sigma_j\sigma_i, &{\lvert i-j \rvert} \ge 2,\\ &\sigma_i\sigma_{i+1}\sigma_i = \sigma_{i+1}\sigma_i\sigma_{i+1}, &1 \le i \le n-2 \end{array}\right\rangle\text{.}\label{eq:braid}\end{aligned}$$ Its elements, called *braids*, can be conveniently depicted as braid diagrams as in Figure \[fig:braid\_diagram\], illustrating a physical interpretation as braids on $n$ strands. The first relations are *commutation relations*, the second are *braid relations*.
The group ${\textsc{Braid}}_n$ arises as the fundamental group of the configuration space of $n$ unordered points in the disc and as the mapping class group of the $n$-punctured disc, see [@birman74; @kassel08] for more details. (Figure \[fig:braid\_diagram\]: two equal braid diagrams on three strands.) What is known as Garside Theory today arose out of Garside’s study of braid groups [@garside69]. In this classical case, the category ${\mathcal{C}}$ has a single object and thus is a monoid. Specifically, a *Garside monoid* is a monoid $M$ with an element $\Delta \in M$, called a *Garside element*, such that 1. $M$ is cancellative and has least common right- and left-multiples and greatest common right- and left-factors, 2. the left- and right-factors of $\Delta$ coincide, they are finite in number, and generate $M$, 3.
there is a map $\delta \colon M \to {\mathbb{N}}$ such that $\delta(fg) \ge \delta(f) + \delta(g)$ and $\delta(g) > 0$ if $g \ne 1$.\[item:garside\_length\] A *Garside group* is the group of fractions of a Garside monoid. Among the main features of Garside groups is that they have solvable word and conjugacy problems. Note that a Garside monoid, regarded as a category with one object is, by definition, left- and right-Ore and strongly Noetherian. Moreover the family of factors of $\Delta$ is a left- and right-Garside family. To see that braid groups are in fact Garside groups consider the *braid monoid* ${\textsc{Braid}}_n^+$. It is obtained by interpreting the presentation as a monoid presentation. It is a non-trivial consequence of Garside’s work that the obvious map ${\textsc{Braid}}_n^+ \to {\textsc{Braid}}_n$ is injective, so that the braid monoid can be regarded as a subset of the braid group. Its elements are called *positive braids* and are characterized by the property that at every crossing the left strand crosses over the right strand. The element $\Delta$ in ${\textsc{Braid}}_n^+$ is the braid that performs a half twist and is characterized by the fact that every strand crosses every other strand precisely once, see Figure \[fig:delta\]. Its (left- or right-) factors are the braids where every strand crosses every other strand at most once. The function $\delta$ is simply the number of crossings, which is the same as the length as a word in the generators. Now ${\textsc{Braid}}_n^+$ is a Garside monoid with Garside element $\Delta$, see [@dehornoy15 Section I.1.2, Proposition IX.1.29]. Its group of fractions is ${\textsc{Braid}}_n$, which is therefore a Garside group. It was noted by Birman–Ko–Lee [@birman98] that there is in fact another monoid ${\textsc{Braid}}_n^{*+}$, called the *dual braid monoid*, that also admits a Garside element $\Delta^*$ and has ${\textsc{Braid}}_n$ as its group of fractions, see also [@dehornoy15 Section I.1.3].
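Since both defining relations preserve word length, the length of a positive braid as a word in the generators is well defined and agrees with $\delta$. The following sketch (assuming the standard word $\Delta = \sigma_1(\sigma_2\sigma_1)\cdots(\sigma_{n-1}\cdots\sigma_1)$ for the half twist) checks that $\Delta$ has $n(n-1)/2$ crossings, one per pair of strands, and that its image in ${\textsc{Sym}}_n$ reverses the strands:

```python
# Sketch (assumption: the standard word for the half twist is
# Delta_n = (s_1)(s_2 s_1)...(s_{n-1} ... s_1), generators s_1, ..., s_{n-1}).
# delta = word length = number of crossings is well defined on Braid_n^+.

def delta_word(n):
    """Letters (generator indices) of the half-twist word Delta_n."""
    return [j for i in range(1, n) for j in range(i, 0, -1)]

def strand_permutation(word, n):
    """Image of a positive braid word under pi: Braid_n -> Sym_n,
    computed by letting each letter j swap the strands in positions j, j+1."""
    p = list(range(1, n + 1))
    for j in word:
        p[j - 1], p[j] = p[j], p[j - 1]
    return p

n = 6
w = delta_word(n)
assert len(w) == n * (n - 1) // 2        # one crossing per pair of strands
assert strand_permutation(w, n) == list(range(n, 0, -1))  # pi(Delta) = w_0
```

That $\pi(\Delta)$ is the order-reversing permutation reflects the fact that every strand crosses every other strand exactly once.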
The dual braid monoid is in many ways better behaved than ${\textsc{Braid}}_n^+$. Brady [@brady01] used the dual braid monoid to construct a finite classifying space for the braid group. Note that adding the relations $\sigma_i^2 = 1$ to the presentation results in a presentation for the symmetric group ${\textsc{Sym}}_n$. In particular, there is a surjective homomorphism $\pi \colon {\textsc{Braid}}_n \to {\textsc{Sym}}_n$ that takes $\sigma_i$ to the transposition $s_i {\mathrel{\vcentcolon =}}(i\ i+1)$. The symmetric group is a finite Coxeter group and the braid group is its corresponding *Artin group*. For every Coxeter system $(W,S)$ there exists an Artin group $A_W$ obtained analogously and a morphism $\pi \colon A_W \to W$. Whenever $W$ is finite, the Artin group $A_W$ again contains a Garside monoid and a dual Garside monoid, see [@brieskorn72; @bessis03]. Finiteness properties of fundamental groups of Ore categories {#sec:finiteness_properties} ============================================================= A classifying space for a group $G$ is a CW complex $B$ whose fundamental group is $G$ and whose universal cover $X = \tilde{B}$ is contractible. Since $G$ acts freely on $X$ with quotient $B = G \backslash X$, one can equivalently say that a classifying space is the quotient of a contractible CW complex by a free $G$-action. Our goal in this section is to construct “good” classifying spaces for fundamental groups of Ore categories. The best classifying spaces are compact ones; they have finitely many cells so we also refer to them as *finite*. If $G$ admits a finite classifying space, we say that it is of type $F$. If a finite classifying space does not exist we aim at classifying spaces with weaker finiteness properties. We start by constructing an action on a contractible space.
Contractible spaces from Ore categories with Garside families {#sec:space_ore_garside} ------------------------------------------------------------- Let ${\mathcal{C}}$ be a category that is right-Ore and strongly Noetherian. Let ${\mathcal{S}}$ be a left- or right-Garside family such that ${\mathcal{S}}^\sharp$ is closed under factors. Let $* \in {\mathrm{Ob}}({\mathcal{C}})$ be a base object. Our goal is to construct a contractible space $X$ on which $\pi_1({\mathcal{C}},*)$ acts with good finiteness properties of the stabilizers as well as the quotient. In the whole discussion ${\mathcal{C}}$ can be replaced by the component of $*$ in ${\mathcal{C}}$ so all assumptions only need to be made for objects and morphisms in that component. We put ${\mathcal{E}}= {\mathcal{S}}^\sharp$ and recall that ${\mathcal{E}}= {\mathcal{C}}^\times \cup {\mathcal{C}}^\times {\mathcal{S}}{\mathcal{C}}^\times$. We call the elements of ${\mathcal{E}}$ *elementary*. Let $\delta \colon {\mathcal{C}}\to {\mathbb{N}}$ be a map that witnesses strong Noetherianity. Note that if $f \in {\mathcal{C}}(x,y)$ and $g \in {\mathcal{C}}^\times(-,x)$ and $h \in {\mathcal{C}}^\times(y,-)$ are invertible then $$-\delta(g^{-1}) + \delta(f) - \delta(h^{-1}) \ge \delta(gfh) \ge \delta(g) + \delta(f) + \delta (h)$$ so $\delta(f) =\delta(gfh)$ and $\delta$ is invariant under pre- and postcomposition by invertibles. We define the set $P = {\mathcal{O}re}({\mathcal{C}})(*,-)/{\mathcal{C}}^\times$, that is, elements of $P$ are equivalence classes $\bar{a}$ of elements $a \in {\mathcal{O}re}({\mathcal{C}})(*,-)$ modulo the equivalence relation that $\bar{a} = \bar{a\smash{'}}$ if there exists a $g \in {\mathcal{C}}^\times$ with $ag = a'$. We define a relation $\le$ on $P$ by declaring $\bar{a} \le \bar{b}$ if there exists an $f \in {\mathcal{C}}$ with $af = b$. \[lem:poset\_contractible\] The relation $\le$ is a partial order on $P$ in which any two elements have a common upper bound. 
In particular, the realization ${\lvert P \rvert}$ is contractible. Note first that whether $f = a^{-1}b$ lies in ${\mathcal{C}}$ is independent of the representatives. Reflexivity and transitivity are clear. If $\bar{a} \le \bar{b} \le \bar{a}$ then there exist $f, h \in {\mathcal{C}}$ and $g \in {\mathcal{C}}^\times$ such that $af=b$ and $bh=ag$ showing that $fh$ is a unit. In particular, $f$ has a right-inverse and $h$ has a left-inverse so $f$ and $h$ are units by Lemma \[lem:cancellative\_inverse\]. This shows $\bar{a} = \bar{b}$. For any $a \in {\mathcal{O}re}({\mathcal{C}})$ there is an $f \in {\mathcal{C}}$ such that $af \in {\mathcal{C}}$. Since ${\mathcal{C}}$ has common right-multiples, it follows that for any two elements $a_1,a_2 \in {\mathcal{O}re}({\mathcal{C}})$ there exist $f_1, f_2 \in {\mathcal{C}}$ with $a_1f_1 = a_2f_2$. We define a second, more restrictive relation $\preceq$ on $P$ by declaring that $\bar{a} \preceq \bar{b}$ if there exists an $e \in {\mathcal{E}}$ with $ae = b$. Note that this relation will typically not be transitive. However, if $\bar{a} \preceq \bar{b}$ and $\bar{a} \le \bar{c} \le \bar{b}$ then $\bar{a} \preceq \bar{c} \preceq \bar{b}$ because ${\mathcal{E}}$ is closed under factors. The complex $X \subseteq {\lvert P \rvert}$ consists of those chains in ${\lvert P \rvert}$ that are chains with respect to $\preceq$. In particular, $P$ is the vertex set of $X$. \[prop:stein\_farley\_contractible\] The complex $X$ is contractible. Note that $X$ is a subspace of ${\lvert P \rvert}$ containing all the vertices. One can obtain ${\lvert P \rvert}$ from $X$ by gluing in (realizations of) intervals $[\bar{a},\bar{b}]$ not yet contained in $X$. To organize the gluing, note the following: if $[\bar{c}, \bar{d}]$ is a proper subinterval of $[\bar{a},\bar{b}]$ with $f = a^{-1}b \in {\mathcal{C}}$ and $h = c^{-1}d \in {\mathcal{C}}$ then $h$ is a proper factor of $f$. 
To an interval $[\bar{a},\bar{b}]$ with $f = a^{-1}b$ we assign the height $\hat{\delta}([\bar{a},\bar{b}]) = \delta(f)$. Note that this is well-defined, because any other representative $f'$ will differ from $f$ only by invertibles and $\delta$ is invariant under pre- and postcomposition by invertibles. Note also that proper subintervals have strictly smaller $\hat{\delta}$-value. We can therefore glue in the intervals with increasing value of $\hat{\delta}$ and are sure that when we glue in an interval, any proper subinterval is already glued in. For any $n \in {\mathbb{N}}$ let ${\lvert P \rvert}_{\hat{\delta} < n}$ be the subcomplex of ${\lvert P \rvert}$ consisting of $X$ and intervals of $\hat{\delta}$-value $< n$. If $X$ were not contractible, there would be a sphere in $X$ that can be contracted in ${\lvert P \rvert}$ but not in $X$. The contracting homotopy would be compactly supported, hence supported on finitely many simplices. It therefore suffices to show that the inclusion $X \to {\lvert P \rvert}_{\hat{\delta} < n}$ is a homotopy equivalence for all $n \in {\mathbb{N}}$. For $n = 0$ this is clear, so assume $n > 0$. Then $${\lvert P \rvert}_{\hat{\delta} < n} = {\lvert P \rvert}_{\hat{\delta} < n-1} \cup \bigcup_{\hat{\delta}([\bar{a},\bar{b}]) = n-1} {\lvert [\bar{a},\bar{b}] \rvert}\text{.}$$ The intervals that are glued in meet only in ${\lvert P \rvert}_{\hat{\delta} < n-1}$ and they are glued in along ${\lvert [\bar{a},\bar{b}) \rvert} \cup {\lvert (\bar{a},\bar{b}] \rvert}$. This is a suspension of ${\lvert (\bar{a},\bar{b}) \rvert}$ and so it suffices to show that the open interval is contractible. If ${\mathcal{S}}$ is a left-Garside family, every element $h$ of ${\mathcal{C}}$, and every left-factor of $f$ in particular, has an ${\mathcal{S}}$-head ${\operatorname{head}}(h)$. We define the map $\theta \colon [\bar{a},\bar{b}] \to [\bar{a},\bar{b}]$ by $\overline{ah} \mapsto \overline{a{\operatorname{head}}(h)}$.
Note that $\theta(\bar{b}) < \bar{b}$ because otherwise $[\bar{a},\bar{b}]$ is already contained in $X$. Note also that $\theta(\bar{c}) > \bar{a}$ for $\bar{c} > \bar{a}$ because the head of a non-invertible is not invertible. This shows that $\theta$ restricts to a map $(\bar{a},\bar{b}) \to (\bar{a},\bar{b})$ with $\bar{c} \ge \theta(\bar{c}) \le \theta(\bar{b})$ and we can apply [@quillen78 Section 1.5] to see that ${\lvert (\bar{a},\bar{b}) \rvert}$ is contractible. If ${\mathcal{S}}$ is a right-Garside family, $\theta$ is defined by $\overline{bh^{-1}} \mapsto \overline{b{\operatorname{tail}}(h)^{-1}}$. For the same reasons as above $\theta$ restricts to a map $(\bar{a},\bar{b}) \to (\bar{a},\bar{b})$ with $\bar{c} \le \theta(\bar{c}) \ge \theta(\bar{a})$ and we can again apply [@quillen78 Section 1.5]. There is an obvious action of $\pi_1({\mathcal{C}},*)$ on $X$ which is given by precomposition: if $g \in \pi_1({\mathcal{C}},*) = {\mathcal{O}re}({\mathcal{C}})(*,*)$ and $a \in {\mathcal{O}re}({\mathcal{C}})(*,-)$ then $g \bar{a} = \overline{ga}$ and the relations $\le$ and $\preceq$ are clearly preserved under this action. Next we want to look at stabilizers and weak fundamental domains. These will be particularly well-behaved with an additional assumption. We say that ${\mathcal{S}}$ is *(right-)locally finite* if for every object $x \in {\mathrm{Ob}}({\mathcal{C}})$ the set ${\mathcal{S}}(x,-)$ is finite up to pre- and post-composition by invertibles. Local finiteness of ${\mathcal{S}}$ does *not* imply that $X$ is locally finite but does imply:
\[lem:stabilizers\] Every simplex-stabilizer of the action of $\pi_1({\mathcal{C}},*)$ on $X$ is isomorphic to a subgroup of ${\mathcal{C}}^\times(x,x)$ for some $x \in {\mathrm{Ob}}({\mathcal{C}})$. If ${\mathcal{S}}$ is locally finite, the subgroup has finite index. Let $\bar{a}$ be a vertex in $X$ with $a \in {\mathcal{O}re}({\mathcal{C}})(*,x)$ and suppose that $g \in \pi_1({\mathcal{C}},*)$ fixes $\overline{a}$, that is, $\overline{a} = g\overline{a} = \overline{ga}$. Then $a^{-1}ga \in {\mathcal{C}}^\times(x,x)$. This shows that the stabilizer of $\bar{a}$ is conjugate to ${\mathcal{C}}^\times(x,x)$. If ${\mathcal{S}}$ is locally finite then Observation \[obs:loc\_fin\] implies that the stabilizer of an arbitrary simplex has finite index in a vertex stabilizer. \[cor:free\_proper\] If ${\mathcal{C}}^\times(x,x) = \{1_x\}$ for every object $x \in {\mathrm{Ob}}({\mathcal{C}})$ then the action of $\pi_1({\mathcal{C}},*)$ on $X$ is free. If ${\mathcal{C}}^\times(x,x)$ is finite then the action is proper. Now let us pick, for every $x \in {\mathrm{Ob}}({\mathcal{C}})$, a morphism $f_x \in {\mathcal{O}re}({\mathcal{C}})(*,x)$ arbitrarily and let $K_x \subseteq X$ be the union of the realizations of the intervals $[\overline{f_x},\overline{f_xe}]$ with $e \in {\mathcal{E}}(x,-)$. \[lem:weak\_fundamental\_domain\] The complex $X$ is covered by the $\pi_1({\mathcal{C}},*)$-translates of the complexes $K_x, x \in {\mathrm{Ob}}({\mathcal{C}})$. If ${\mathcal{S}}$ is locally finite then each $K_x$ is compact. If $\sigma = \{f \prec fe_1 \prec \ldots \prec fe_k\}$ is a simplex in $X$ with $f \in {\mathcal{O}re}({\mathcal{C}})(*,x)$ and $e_1, \ldots,e_k \in{\mathcal{E}}(x,-)$ then $ff_x^{-1} \in \pi_1({\mathcal{C}},*)$ and $ff_x^{-1} K_x$ contains $\sigma$. The second statement is clear.
The ideal special case is: If ${\mathcal{C}}$ has no non-identity invertible morphisms and has only finitely many objects and if ${\mathcal{S}}$ is locally finite then $\pi_1({\mathcal{C}},*)$ has a finite classifying space. Under these assumptions the action of $\pi_1({\mathcal{C}},*)$ is free by Corollary \[cor:free\_proper\] and cocompact by Lemma \[lem:weak\_fundamental\_domain\]. The quotient is then a finite classifying space. In particular we recover the main result of [@charney04]. \[cor:garside\_complex\] Every Garside group $G$ has a finite classifying space. In the case of the dual braid monoid, the complex we constructed is precisely the *dual Garside complex* constructed by Brady [@brady01]. Finiteness properties --------------------- Topological finiteness properties of a group $G$ were introduced by Wall [@wall65; @wall66] and are conditions on how finite a classifying space for $G$ can be chosen. A group is said to be *of type $F_n$* if it admits a classifying space $B$ whose $n$-skeleton $B^{(n)}$ has finitely many cells. Equivalently, a group is of type $F_n$ if it acts freely on a contractible space $X$ such that the action on $X^{(n)}$ is cocompact. It is clear that type $F_n$ implies type $F_m$ for $m < n$ and one defines the finiteness length $\phi(G)$ to be the supremum of those $n$ for which $G$ is of type $F_n$. If $\phi(G) = \infty$ then $G$ is said to be of type $F_\infty$. In low dimensions, these properties have familiar descriptions: a group is of type $F_1$ if and only if it is finitely generated, and it is of type $F_2$ if and only if it is finitely presented. Given a group $G$, in order to study its finiteness properties, one needs to let $G$ act on a highly connected space $X$. If the action is free, then the low-dimensional skeleta of $G \backslash X$ are those of a classifying space.
A useful result is Brown’s criterion, which says that one does not have to look at free actions, see [@brown87 Propositions 1.1, 3.1]: \[thm:browns\_criterion\] Let $G$ act cocompactly on an $(n-1)$-connected CW complex $X$. If the stabilizer of every $p$-cell of $X$ is of type $F_{n-p}$ then $G$ is of type $F_n$. The full version of Brown’s criterion also gives a way to decide that a group is not of type $F_n$. We formulate it here only to explain why we will not be able to apply it: \[thm:browns\_criterion\_negative\] Let $G$ act on an $(n-1)$-connected CW complex $X$ and assume that the stabilizer of every $p$-cell of $X$ is of type $F_{n-p}$. If $G$ is of type $F_n$ then for every cocompact subspace $Y$ and any basepoint $* \in Y$ there exists a cocompact subspace $Z \supseteq Y$ such that the maps $\pi_k(Y,*) \to \pi_k(Z,*)$ induced by inclusion have trivial image for $k \le n-1$. Theorem \[thm:browns\_criterion\_negative\] can be used to show that a group is not of type $F_n$ if this is visible in the topology of $X$. On the other hand, if the stabilizers have bad finiteness properties we cannot decide whether $G$ has good finiteness properties or not: in that case we are looking at the wrong action. Combinatorial Morse theory -------------------------- In order to study connectivity properties of spaces and apply Brown’s criterion we will be using combinatorial Morse theory as introduced by Bestvina and Brady [@bestvina97]. Here we give the most basic version used in Section \[sec:proof\_scheme\]. Let $X$ be the realization of an abstract simplicial complex, regarded as a CW complex. A *Morse function* is a function $\rho \colon X^{(0)} \to {\mathbb{N}}$ with the property that $\rho(v) \ne \rho(w)$ if $v$ is adjacent to $w$. For $n \in {\mathbb{N}}$ the sublevel set $X_{\rho < n}$ is defined to be the full subcomplex of $X$ supported on vertices $v$ with $\rho(v) < n$.
The *descending link* ${\operatorname{lk}}^{\downarrow} v$ of a vertex $v$ is the full subcomplex of ${\operatorname{lk}}v$ of those vertices $w$ with $\rho(w) \le \rho(v)$ and the *descending star* ${\operatorname{st}}^{\downarrow} v$ is defined analogously. That $\rho$ is a Morse function implies that the inequality $\rho(w) \le \rho(v)$ is strict for the descending link, and that for the descending star equality occurs only when $w = v$. In particular, the descending star is the cone over the descending link. The goal of combinatorial Morse theory is to compare the connectivity properties of sublevel sets to each other and to those of $X$. The tool to do so is a basic lemma, called the Morse Lemma: \[lem:morse\] Let $\rho$ be a Morse function on $X$. Let $m \le n \le \infty$ and assume that for every vertex $v$ with $m \le \rho(v) < n$ the descending link of $v$ is $(d-1)$-connected. Then the pair $(X_{\rho <n}, X_{\rho < m})$ is $d$-connected, that is, $\pi_k(X_{\rho < m} \to X_{\rho < n})$ is an isomorphism for $k < d$ and an epimorphism for $k = d$. The basic observations are that $$X_{\rho < m+1} = X_{\rho < m} \cup \bigcup_{\rho(v) = m} {\operatorname{st}}^{\downarrow} v\text{,}$$ that ${\operatorname{st}}^\downarrow v \cap {\operatorname{st}}^\downarrow v' \subseteq X_{\rho< m}$ for $\rho(v) = m = \rho(v')$, and that ${\operatorname{st}}^\downarrow v \cap X_{\rho < m} = {\operatorname{lk}}^\downarrow v$. As a consequence (using compactness of spheres) it suffices to study the extension $Y {\mathrel{\vcentcolon =}}X_{\rho < m} \cup_{{\operatorname{lk}}^\downarrow v} {\operatorname{st}}^\downarrow v$ for an individual vertex $v$ with $\rho(v) = m$. In this situation $\pi_k(Y, X_{\rho<m}) \cong \pi_k({\operatorname{st}}^\downarrow v, {\operatorname{lk}}^\downarrow v)$ for $k \le d$. This can be seen by separately looking at $\pi_1$ and $H_*$ (where excision holds) and applying Hurewicz’s theorem [@hatcher02 Theorem 4.37].
The statement now follows from the long exact homotopy/homology sequence for the pair $({\operatorname{st}}^\downarrow v, {\operatorname{lk}}^\downarrow v)$. Finiteness properties of fundamental groups of Ore categories {#sec:proof_scheme} ------------------------------------------------------------- We take up the construction from Section \[sec:space\_ore\_garside\]. So ${\mathcal{C}}$ is again a right-Ore category, ${\mathcal{S}}$ is a left- or right-Garside family closed under factors, and $* \in {\mathrm{Ob}}({\mathcal{C}})$ is a base object. More than requiring strong Noetherianity, we now need a height function $\rho \colon {\mathrm{Ob}}({\mathcal{C}}) \to {\mathbb{N}}$. We use these data and assumptions to provide a criterion to prove finiteness properties for the fundamental group. We need to introduce one further space construction. It is another variant of the nerve construction. For $x \in {\mathrm{Ob}}({\mathcal{C}})$ let $E(x)$ be the set of equivalence classes of elements $a \in {\mathcal{E}}(-,x) \smallsetminus {\mathcal{E}}^\times(x,x)$ modulo the equivalence relation that $\bar{a} = \bar{a\smash'}$ if there exists a $g \in {\mathcal{C}}^\times$ with $ga = a'$. We define a relation $\le$ on $E(x)$ by declaring $\bar{a} \le \bar{b}$ if there is an $f \in {\mathcal{C}}$ with $fa = b$. Note that if $g$ and $f$ as above exist, they lie in ${\mathcal{E}}$ so the description can be formulated purely in terms of ${\mathcal{E}}$. As in Lemma \[lem:poset\_contractible\] one sees that $\le$ is a partial order on $E(x)$; its realization, however, is usually not contractible. Let ${\mathcal{C}}$ be a right-Ore category and let $* \in {\mathrm{Ob}}({\mathcal{C}})$. Let ${\mathcal{S}}$ be a locally finite left- or right-Garside family that is closed under factors. Let $\rho \colon {\mathrm{Ob}}({\mathcal{C}}) \to {\mathbb{N}}$ be a height function such that $\{x \in {\mathrm{Ob}}({\mathcal{C}}) \mid \rho(x) \le n\}$ is finite for every $n \in {\mathbb{N}}$.
\[thm:generic\_proof\] Assume 1. ${\mathcal{C}}^\times(x,x)$ is of type $F_n$ for all $x$, 2. ${\lvert E(x) \rvert}$ is $(n-1)$-connected for all $x$. (If $\rho$ is unbounded on the component of $*$ then it suffices that 2 holds for every $x$ with $\rho(x)$ beyond a fixed bound.) Then $\pi_1({\mathcal{C}},*)$ is of type $F_n$. Recall that ${\mathcal{C}}$ can be replaced by the component of $*$ in ${\mathcal{C}}$ so all assumptions need to be made only for that component. We take $X$ to be the complex constructed in Section \[sec:finiteness\_properties\]. Assume first that 2 holds for all $x \in {\mathrm{Ob}}({\mathcal{C}})$. For a vertex $\bar{a} \in X$ with $a \in {\mathcal{O}re}({\mathcal{C}})(*,x)$ we define $\rho(\bar{a}) = \rho(x)$. This is a $\pi_1({\mathcal{C}},*)$-invariant Morse function which we think of as height. For $n \in {\mathbb{N}}$ we consider the subcomplex $X_{\rho < n}$ supported on vertices of height $< n$. We want to see that every $X_{\rho < n}$ is $\pi_1({\mathcal{C}},*)$-cocompact. To do so we note that $\pi_1({\mathcal{C}},*)$ acts transitively on vertices $\bar{a}$ with $a \in {\mathcal{O}re}({\mathcal{C}})(*,x)$: indeed, if $\bar{b}$ is another such then $ba^{-1} \in \pi_1({\mathcal{C}},*)$ takes $\bar{a}$ to $\bar{b}$. It follows from the assumption on $\rho$ that there are only finitely many vertices $\bar{a}$ with $\rho(\bar{a}) < n$ up to the $\pi_1({\mathcal{C}},*)$-action. Cocompactness now follows from Observation \[obs:loc\_fin\]. Stabilizers are of type $F_n$ by Lemma \[lem:stabilizers\] because finiteness properties are inherited by finite-index subgroups. Let $N$ be large enough so that all the $x \in {\mathrm{Ob}}({\mathcal{C}})$ for which ${\lvert E(x) \rvert}$ is not $(n-1)$-connected have $\rho(x) < N$.
We have just seen that $\pi_1({\mathcal{C}},*)$ acts on $X_{\rho < N}$ cocompactly with stabilizers of type $F_n$, so once we show that $X_{\rho < N}$ is $(n-1)$-connected, we are done by Theorem \[thm:browns\_criterion\]. We want to apply the Morse Lemma (Lemma \[lem:morse\]) so let us look at the descending link of a vertex $\bar{b}$ of $X$, where $b \in {\mathcal{O}re}({\mathcal{C}})(*,x)$. The vertices in the descending link are the $\bar{a}$ that are comparable with $\bar{b}$ and have $\rho(\bar{a}) < \rho(\bar{b})$. The condition on the height shows that $a$ cannot be a right-multiple of $b$ but has to be a left-factor. Thus $a^{-1}b \in {\mathcal{E}}(-,x)$ and the descending link of $\bar{b}$ is the realization of $\{\bar{a} \mid a \prec b\}$. We see that the map ${\mathcal{E}}(-,x) \smallsetminus {\mathcal{E}}(x,x) \to \{\bar{a} \mid a \prec b\}$ that takes $f$ to $\overline{b f^{-1}}$ is an order-reversing surjection. The definition of $E(x)$ is made so that the induced map $E(x) \to \{\bar{a} \mid a \prec b\}$ is well-defined and an order-reversing bijection. Since ${\lvert E(x) \rvert}$ is $(n-1)$-connected by assumption, this completes the proof in the case that 2 holds for all $x$. If 2 only holds for $x$ with $\rho(x) \ge M$, let $*'$ be in the component of $*$ satisfying $\rho(*') > M$. Since ${\mathcal{C}}$ is Ore, one sees that $$\pi_1({\mathcal{C}},*) = \pi_1({\mathcal{C}},*') = \pi_1({\mathcal{C}}_{\rho \ge M}, *')$$ where ${\mathcal{C}}_{\rho \ge M}$ is obtained from ${\mathcal{C}}$ by removing objects $y$ with $\rho(y) < M$. Moreover local finiteness of ${\mathcal{S}}$ implies that the complexes $E(y)$ for ${\mathcal{C}}$ and for ${\mathcal{C}}_{\rho \ge M}$ are the same for $y$ in the component of $*'$ once $\rho(y)$ is large enough. One can therefore consider ${\mathcal{C}}_{\rho \ge M}$ instead of ${\mathcal{C}}$ with the effect that the groups ${\mathcal{C}}^\times(x,x)$ only need to be of type $F_n$ when $\rho(x) \ge M$.
\[cor:generic\_proof\] Let ${\mathcal{C}}$, ${\mathcal{S}}$, $\rho$, $*$ be as in the theorem. If ${\mathcal{C}}^\times(x,x)$ is of type $F_\infty$ for every $x$ and the connectivity of ${\lvert E(x) \rvert}$ tends to infinity for $\rho(x) \to \infty$ then $\pi_1({\mathcal{C}},*)$ is of type $F_\infty$. The construction of $X$ uses two important ideas. One is the passage from ${\lvert P \rvert}$ to $X$, which is due to Stein, see [@stein92 Theorem 1.5]. The other is to take $P$ to consist of ${\mathcal{C}}^\times$-equivalence classes and goes back to [@bux16]. Apart from these ideas the main difficulty in proving that $\pi_1({\mathcal{C}},*)$ is of type $F_n$ lies in establishing the connectivity properties of the complexes ${\lvert E(x) \rvert}$. This problem depends individually on the concrete setup and we will see various examples later. Example: $F$ is of type $F_\infty$ ---------------------------------- As a first illustration of the results in this section we reprove a result due to Brown and Geoghegan [@brown84]: \[prop:f\_finfty\] Thompson’s group $F$ is of type $F_\infty$. We have seen in Proposition \[prop:thomcat\_ore\] that ${\mathcal{F}}$ is right-Ore and admits a height function, and by Corollary \[cor:f\_garside\_family\] it has a locally finite left-Garside family that is closed under factors. Moreover, ${\mathcal{F}}^\times(x,x) = \{1_x\}$ for every $x$, so condition 1 is satisfied as well. It only remains to verify condition 2. Although things are not always as easy, we remark that this is the typical situation: condition 2 is where one actually needs to show something. To understand the complexes ${\lvert E(n) \rvert}$ we first need to unravel the definition. Recall that a *matching* of a graph $\Gamma$ is a set of edges $M \subseteq E(\Gamma)$ that are pairwise disjoint. Matchings are ordered by containment and we denote the poset of matchings by ${\mathcal{M}}(\Gamma)$.
In fact, since every subset of a matching is again a matching, ${\mathcal{M}}(\Gamma)$ is (the face poset of) a simplicial complex, the *matching complex*. We denote by $L_n$ the *linear graph* on $n$ vertices $\{1,\ldots,n\}$ so its edges are $\{i,i+1\}$ for $1 \le i <n$. \[lem:link\_match\_linear\] The poset $E_{\mathcal{F}}(n)$ is isomorphic to ${\mathcal{M}}(L_n)$. Let $f \in {\mathcal{E}}_{\mathcal{F}}(-,n)$, so $f$ represents an element of $E_{\mathcal{F}}(n)$. We identify the roots of $f$ with the vertices of the linear graph $L_n$ on the vertices $\{1,\ldots,n\}$. Every caret of $f$ connects two of these roots and thus corresponds to an edge of $L_n$. All these edges are disjoint so the resulting subgraph $M_f$ of $L_n$ is a matching. It is clear that conversely every matching of $L_n$ arises in a unique way from an elementary forest. If $h \le f$ then $f$ is a left-multiple of $h$, that is, $f$ can be obtained from $h$ by adding carets to some roots of $h$ that do not have carets yet. On the level of graphs this means that $M_f$ is obtained from $M_h$ by adding edges so that $M_h \le M_f$ in the poset of matchings. In particular, $E_{\mathcal{F}}(n)$ is (the face poset of) a simplicial complex. Note that the realization as a poset is the barycentric subdivision of the realization as a simplicial complex, and in particular both are homeomorphic. So there is no harm in working with the coarser cell structure where elements of $E_{\mathcal{F}}(n)$ are simplices rather than vertices. This fact applies in most of our cases. Matching complexes of various graphs have been studied intensively and their connectivity properties can be verified in various ways [@bjoerner94]. In fact, for linear and cyclic graphs the precise homotopy type is known [@kozlov08 Proposition 11.16].
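Because $L_n$ is small, the complexes ${\mathcal{M}}(L_n) \cong E_{\mathcal{F}}(n)$ can be enumerated directly. The sketch below (illustrative only, not part of the text) lists all matchings of $L_n$ by brute force and confirms the classical count: the number of matchings of the linear graph on $n$ vertices is the Fibonacci number $F(n+1)$.

```python
from itertools import combinations

# Sketch: enumerate the matchings of the linear graph L_n on vertices
# {1, ..., n}, i.e., sets of pairwise disjoint edges {i, i+1}. These
# correspond exactly to the elementary forests in E_F(n) described above.

def matchings(n):
    edges = [(i, i + 1) for i in range(1, n)]
    result = []
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            verts = [v for e in M for v in e]
            if len(verts) == len(set(verts)):   # edges pairwise disjoint
                result.append(frozenset(M))
    return result

# The number of matchings of L_n is the Fibonacci number F(n+1).
counts = [len(matchings(n)) for n in range(1, 8)]
assert counts == [1, 2, 3, 5, 8, 13, 21]
```

For instance, $L_4$ has the five matchings $\emptyset$, $\{12\}$, $\{23\}$, $\{34\}$, $\{12,34\}$, so ${\mathcal{M}}(L_4)$ consists of four vertices and one edge.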
Rather than using the known optimal connectivity bounds we use the opportunity to introduce a criterion due to Belk and Forrest [@belk Theorem 4.9] that is particularly well suited to verifying that the connectivity of the spaces $E(x)$ tends to infinity in easier cases. We need to introduce some notation. An abstract simplicial complex $X$ is *flag* if every set of pairwise adjacent vertices forms a simplex. A simplex $\sigma$ in a simplicial flag complex is called a *$k$-ground* for $k \in {\mathbb{N}}$ if every vertex of $X$ is connected to all but at most $k$ vertices of $\sigma$. The complex is said to be *$(n,k)$-grounded* if there is an $n$-simplex that is a $k$-ground. \[thm:belk\_forrest\] For $m,k \in {\mathbb{N}}$ every $(mk,k)$-grounded flag complex is $(m-1)$-connected. The reference requires $m,k \ge 1$ but it is clear that every $(0,k)$-grounded flag complex is non-empty, and every $(0,0)$-grounded flag complex is a cone and therefore contractible. Using Theorem \[thm:belk\_forrest\] we verify: \[lem:match\_connectivity\] For every $n \in {\mathbb{N}}$ let $\Gamma_n$ be a subgraph of $K_n$ containing $L_n$. The connectivity of ${\mathcal{M}}(\Gamma_n)$ goes to infinity as $n$ goes to infinity. Consider the matchings of $L_n$ that use only the edges $\{2i-1, 2i\}$, $1 \le i \le {\lfloor n/2 \rfloor}$. They form an $({\lfloor n/2 \rfloor}-1)$-simplex $\sigma$ in ${\mathcal{M}}(\Gamma_n)$. If $v = \{j,k\}$ is any edge of $\Gamma_n$, so a vertex of ${\mathcal{M}}(\Gamma_n)$, then there are at most $2$ vertices of $\sigma$ that $v$ is not connected to: one is $\{j-1,j\}$ or $\{j,j+1\}$, the other is $\{k-1,k\}$ or $\{k,k+1\}$. This shows that ${\mathcal{M}}(\Gamma_n)$ is $({\lfloor n/2 \rfloor}-1,2)$-grounded, so by Theorem \[thm:belk\_forrest\] it is $({\lfloor n/4 \rfloor}-1)$-connected. We want to apply Corollary \[cor:generic\_proof\]. The only thing left to check is condition 2.
This follows from Lemmas \[lem:link\_match\_linear\] and \[lem:match\_connectivity\]. The indirect product of two categories {#sec:indirect_product} ====================================== The construction introduced in this section will help us to produce more interesting examples. It is usually called the Zappa–Szép product in the literature on groups and monoids, cf. [@brin05]. The Zappa–Szép product naturally generalizes the semidirect product in the same way as the semidirect product generalizes the direct product. We think that such a basic construction should have a simpler name and therefore call it the *indirect product*. For motivation, let $M$ be a monoid (or group) whose multiplication we denote by $\circ$ and suppose that $M$ decomposes uniquely as $M = A \circ B$. By this we mean that $A$ and $B$ are submonoids of $M$ such that every element $m \in M$ can be written in a unique way as $m = a \circ b$ with $a \in A$ and $b \in B$. In particular, if $b' \in B$ and $a' \in A$, the product $m = b' \circ a'$ can be rewritten as $b'\circ a'= a\circ b$. This allows us to formally define maps $B \times A \to A, (b',a') \mapsto b' \cdot a' {\mathrel{\vcentcolon =}}a$ and $B \times A \to B, (b',a') \mapsto {b'}^{a'} {\mathrel{\vcentcolon =}}b$ so that $$b \circ a = (b \cdot a) \circ b^a\text{.}$$ These maps turn out to be actions of monoids on sets. If both actions are trivial then $M$ is a direct product, if one of the actions is trivial then $M$ is a semidirect product, and in general it is an indirect product. We therefore start by introducing the appropriate notion of actions of categories. Actions ------- Let ${\mathcal{C}}$ be a category and let $(X_m)_{m \in {\mathrm{Ob}}({\mathcal{C}})}$ be a family of sets, one for each object of ${\mathcal{C}}$.
A *left action* of ${\mathcal{C}}$ on $(X_m)_m$ is a family of maps $$\begin{aligned} {\mathcal{C}}(n,m) \times X_m &\to X_n\\ (f,s) &\mapsto f \cdot s\end{aligned}$$ satisfying $1_m \cdot s = s$ for all $m \in {\mathrm{Ob}}({\mathcal{C}})$ and $s \in X_m$ and $fg \cdot s = f \cdot (g \cdot s)$ whenever $fg$ is defined. A right action is defined analogously. An action is said to be *injective* if $f \cdot x = f \cdot y$ implies $x = y$. Note that actions of groupoids are always injective. In our examples the family $(X_m)_m$ itself will consist of morphisms of a category with the same objects as ${\mathcal{C}}$. We have to bear in mind, however, that the action is on these as sets and does not preserve products. The indirect product -------------------- Let ${\mathcal{C}}$ be a category and let ${\mathcal{F}}$ and ${\mathcal{G}}$ be subcategories. We say that ${\mathcal{C}}$ is an *internal indirect product* ${\mathcal{F}}\bowtie {\mathcal{G}}$ if every $h \in {\mathcal{C}}$ can be written in a unique way as $h = fg$ with $f \in {\mathcal{F}}$ and $g \in {\mathcal{G}}$. Note that this means in particular that ${\mathrm{Ob}}({\mathcal{C}}) = {\mathrm{Ob}}({\mathcal{F}}) = {\mathrm{Ob}}({\mathcal{G}})$. Given elements $f \in {\mathcal{F}}(x,-)$ and $g \in {\mathcal{G}}(-,x)$ there then exist unique elements $f' \in {\mathcal{F}}$ and $g' \in {\mathcal{G}}$ such that $gf = f'g'$, see Figure \[fig:zappa-szep\_gen\]. In this situation we define $g \cdot f$ to be $f'$ and $g^f$ to be $g'$. The following properties are readily verified, see Figures \[fig:zappa-szep\_ones\] and \[fig:zappa-szep\_prod\]; the last four hold whenever one of the sides is defined: 1. $1_x \cdot f = f$ for $f \in {\mathcal{F}}(x,-)$,\[item:zappa-szep\_1\_acts\_left\] 2. $g^{1_y} = g$ for $g \in {\mathcal{G}}(-,y)$,\[item:zappa-szep\_1\_acts\_right\] 3. $(g_1g_2) \cdot f = g_1 \cdot (g_2 \cdot f)$,\[item:zappa-szep\_product\_acts\_left\] 4.
$g^{f_1f_2} = (g^{f_1})^{f_2}$,\[item:zappa-szep\_product\_acts\_right\] 5. $1_x^f = 1_y$ for $f \in {\mathcal{F}}(x,y)$,\[item:zappa-szep\_1\_is\_acted\_left\] 6. $g \cdot 1_y = 1_z$ for $g \in {\mathcal{G}}(z,x)$,\[item:zappa-szep\_1\_is\_acted\_right\] 7. $(g_1g_2)^f = g_1^{(g_2 \cdot f)}g_2^f$,\[item:zappa-szep\_product\_is\_acted\_left\] 8. $g \cdot (f_1f_2) = (g \cdot f_1)(g^{f_1} \cdot f_2)$.\[item:zappa-szep\_product\_is\_acted\_right\] The first four relations say that the map $(g,f) \mapsto g \cdot f$ is a left action of ${\mathcal{G}}$ on the sets $({\mathcal{F}}(x,-))_x$ and that $(g,f) \mapsto g^f$ is a right action of ${\mathcal{F}}$ on the sets $({\mathcal{G}}(-,y))_y$. The next two relations say that identity elements are taken to identity elements, while the last two are cocycle conditions. We call actions satisfying these conditions *indirect product actions*. Now assume that conversely categories ${\mathcal{F}}$ and ${\mathcal{G}}$ with ${\mathrm{Ob}}({\mathcal{F}}) = {\mathrm{Ob}}({\mathcal{G}})$ are given together with indirect product actions of ${\mathcal{F}}$ and ${\mathcal{G}}$ on each other.
Then the *external indirect product* ${\mathcal{C}}= {\mathcal{F}}\bowtie {\mathcal{G}}$ is defined to have objects ${\mathrm{Ob}}({\mathcal{C}}) = {\mathrm{Ob}}({\mathcal{F}}) = {\mathrm{Ob}}({\mathcal{G}})$ and morphisms $${\mathcal{C}}= \bigcup_{x \in {\mathrm{Ob}}({\mathcal{C}})} \{ (f,g) \mid f \in {\mathcal{F}}(-,x), g \in {\mathcal{G}}(x,-)\}\text{.}$$ Composition is defined by $$(f_1,g_1)(f_2,g_2) = (f_1(g_1 \cdot f_2),g_1^{f_2}g_2)\text{.}\label{eq:zappa-szep_definition}$$ (Figure \[fig:zappa-szep\_associative\]: diagram tracing the four associativity identities through repeated applications of $gf = (g \cdot f)g^f$.) The external indirect product ${\mathcal{F}}\bowtie {\mathcal{G}}$ is well-defined.
It is naturally isomorphic to the internal indirect product of the copies of ${\mathcal{F}}$ and ${\mathcal{G}}$ inside ${\mathcal{F}}\bowtie {\mathcal{G}}$. That the identity morphisms $(1_x,1_x)$ behave as they should is easily seen using relations , , , and . To check associativity we verify the four equations $$\begin{gathered} g_1^{f_2(g_2 \cdot f_3)} \stackrel{\eqref{item:zappa-szep_product_acts_right}}{=} (g_1^{f_2})^{g_2 \cdot f_3} \label{eq:zappa-szep_associative_1}\mathrlap{,}\\ (g_1^{f_2}g_2) \cdot f_3 \stackrel{\eqref{item:zappa-szep_product_acts_left}}{=} g_1^{f_2} \cdot (g_2 \cdot f_3) \label{eq:zappa-szep_associative_2}\mathrlap{,}\\ g_1^{f_2(g_2 \cdot f_3)}g_2^{f_3} \stackrel{\eqref{eq:zappa-szep_associative_1}}{=} (g_1^{f_2})^{g_2 \cdot f_3} g_2^{f_3} \stackrel{\eqref{item:zappa-szep_product_is_acted_left}}{=}(g_1^{f_2}g_2)^{f_3}\label{eq:zappa-szep_associative_3}\mathrlap{,\quad\text{and}}\\ (g_1 \cdot f_2)((g_1^{f_2}g_2) \cdot f_3) \stackrel{\eqref{eq:zappa-szep_associative_2}}{=} (g_1 \cdot f_2)(g_1^{f_2} \cdot (g_2 \cdot f_3)) \stackrel{\eqref{item:zappa-szep_product_is_acted_right}}{=} g_1 \cdot (f_2(g_2 \cdot f_3))\label{eq:zappa-szep_associative_4}\end{gathered}$$ see Figure \[fig:zappa-szep\_associative\]. The categories ${\mathcal{F}}$ and ${\mathcal{G}}$ naturally embed into the external indirect product ${\mathcal{F}}\bowtie {\mathcal{G}}$ as $f \mapsto (f, 1_y)$ for $f \in {\mathcal{F}}(-,y)$ and $g \mapsto (1_x,g)$ for $g \in {\mathcal{G}}(x,-)$. Any morphism of ${\mathcal{F}}\bowtie {\mathcal{G}}$ decomposes as $(f,g) = (f,1_y)(1_y,g)$ and it is clear from the composition formula that the respective actions on each other are the ones used to define ${\mathcal{F}}\bowtie {\mathcal{G}}$. If the action of ${\mathcal{G}}$ on ${\mathcal{F}}$ is trivial then the indirect product is a *semidirect product* ${\mathcal{F}}\ltimes {\mathcal{G}}$.
Similarly, if the action of ${\mathcal{F}}$ on ${\mathcal{G}}$ is trivial then it is a semidirect product ${\mathcal{F}}\rtimes {\mathcal{G}}$. Finally, if both actions are trivial then the indirect product is in fact a *direct product* ${\mathcal{F}}\times {\mathcal{G}}$. We close the section by collecting facts that ensure that an indirect product is Ore. \[lem:zappa-szep\_cancellative\] If ${\mathcal{F}}$ and ${\mathcal{G}}$ are right-cancellative and the action of ${\mathcal{F}}$ on ${\mathcal{G}}$ is injective then ${\mathcal{F}}\bowtie {\mathcal{G}}$ is right-cancellative. Symmetrically, if ${\mathcal{F}}$ and ${\mathcal{G}}$ are left-cancellative and the action of ${\mathcal{G}}$ on ${\mathcal{F}}$ is injective then ${\mathcal{F}}\bowtie {\mathcal{G}}$ is left-cancellative. If $f_1g_1fg = f_2g_2fg$ then $f_1(g_1 \cdot f) = f_2(g_2 \cdot f)$ and $g_1^fg = g_2^fg$. Since ${\mathcal{G}}$ is right-cancellative the latter equation shows that $g_1^f = g_2^f$ and injectivity of the action then implies $g_1 = g_2$. Putting this in the former equation and using right-cancellativity of ${\mathcal{F}}$ gives $f_1 = f_2$. \[lem:zappa-szep\_common\_multiples\] Let ${\mathcal{F}}$ have common right-multiples and let ${\mathcal{G}}$ be a groupoid. Then ${\mathcal{F}}\bowtie {\mathcal{G}}$ has common right-multiples. Let $fg \in {\mathcal{F}}\bowtie {\mathcal{G}}$ with $f \in {\mathcal{F}}$ and $g \in {\mathcal{G}}$. Since ${\mathcal{G}}$ is a groupoid, $f$ is both a left-factor and a right-multiple of $fg$. It follows that common right-multiples exist in ${\mathcal{F}}\bowtie {\mathcal{G}}$ because they exist in ${\mathcal{F}}$. \[lem:zappa-szep\_invertibles\] Let ${\mathcal{F}}$ have no non-trivial invertible morphisms and let ${\mathcal{G}}$ be a groupoid. Then $({\mathcal{F}}\bowtie {\mathcal{G}})^\times = {\mathcal{G}}$. 
\[prop:zappa-szep\_conditions\] Let ${\mathcal{C}}= {\mathcal{F}}\bowtie {\mathcal{G}}$ where ${\mathcal{F}}$ has no non-trivial invertibles and ${\mathcal{G}}$ is a discrete groupoid. 1. If ${\mathcal{F}}$ is right-Ore and the action of ${\mathcal{F}}$ on ${\mathcal{G}}$ is injective then ${\mathcal{C}}$ is right-Ore.\[item:product\_ore\] 2. If ${\mathcal{F}}$ is strongly Noetherian then so is ${\mathcal{C}}$.\[item:product\_noeth\] 3. If $\rho$ is a height function on ${\mathcal{F}}$ then it is a height function on ${\mathcal{C}}$.\[item:product\_height\] 4. If ${\mathcal{S}}$ is a left-Garside family in ${\mathcal{F}}$ then it is a left-Garside family in ${\mathcal{C}}$.\[item:product\_left\_gars\] 5. If ${\mathcal{S}}$ is a right-Garside family in ${\mathcal{F}}$ then ${\mathcal{S}}{\mathcal{G}}$ is a right-Garside family in ${\mathcal{C}}$.\[item:product\_right\_gars\] Property  follows from Lemma \[lem:zappa-szep\_cancellative\] and Lemma \[lem:zappa-szep\_common\_multiples\]. Properties  and  follow from the fact that for $f \in {\mathcal{F}}$ and $g \in {\mathcal{G}}$ the morphisms $f$ and $fg$ are right-multiples by invertibles of each other. Property \[item:product\_height\] follows from ${\mathcal{G}}$ being discrete (i.e. every morphism being an endomorphism). Toward the last property it is clear that every right-factor of ${\mathcal{S}}{\mathcal{G}}$ is contained in ${\mathcal{S}}{\mathcal{G}}$. Moreover if $t$ is an ${\mathcal{S}}$-tail for $f$ then $tg$ is an ${\mathcal{S}}{\mathcal{G}}$-tail for $fg$. Examples: categories constructed by indirect products {#sec:indirect_product_examples} ===================================================== In this section we show how the indirect product can be used to construct new groups. The basic examples are Thompson’s groups $T$ and $V$ as well as the braided Thompson groups, which all arise as fundamental groups of categories of the form ${\mathcal{F}}\bowtie {\mathcal{G}}$ where ${\mathcal{G}}$ is an appropriate groupoid.
More generally, the groups studied in joint work with Zaremsky [@witzel16a] are essentially by definition groups that can be obtained in this form. Later we also describe other groups obtained via indirect products. We will sometimes draw pictures to motivate our definitions. In these pictures the up-direction always corresponds to left in our notation and down corresponds to right. This is especially relevant for group elements. For example, a permutation $g \colon X \to X$, $x \mapsto g(x)$, will be depicted by connecting the point $x$ at the bottom to the point $g(x)$ at the top. Thompson’s groups $\mathbf{T}$ and $\mathbf{V}$ {#sec:tcat_vcat} ----------------------------------------------- In this section we introduce Thompson’s groups $T$ and $V$ as fundamental groups of categories ${\mathcal{T}}$ and ${\mathcal{V}}$. The categories will be obtained from ${\mathcal{F}}$ as indirect products with groupoids and we start by introducing these. We define ${\mathcal{G}}_T$ and ${\mathcal{G}}_V$ to be groupoids whose objects are the positive natural numbers with ${\mathcal{G}}_T(m,n) = {\mathcal{G}}_V(m,n) = \emptyset$ for $m \ne n$. We put ${\mathcal{G}}_T(n,n) = {\mathbb{Z}}/n{\mathbb{Z}}$ and ${\mathcal{G}}_V(n,n) = {\textsc{Sym}}_n$. We want to define ${\mathcal{T}}= {\mathcal{F}}\bowtie {\mathcal{G}}_T$ and ${\mathcal{V}}= {\mathcal{F}}\bowtie {\mathcal{G}}_V$ and have to specify the actions that define these indirect products. That is, given a forest $f \in {\mathcal{F}}(m,n)$ and a group element $g \in {\mathcal{G}}(m,m)$ we need to specify how the product $gf$ should be written as $(g \cdot f)g^f$ with $g \cdot f \in {\mathcal{F}}(m,n)$ and $g^f \in {\mathcal{G}}(n,n)$ (for ${\mathcal{G}}$ one of ${\mathcal{G}}_T$ and ${\mathcal{G}}_V$).
(Figure \[fig:t\_zappa\_szep\]: a cyclic permutation followed by a tree rewritten as a tree followed by a cyclic permutation.) Since ${\mathcal{G}}_T$ is contained in ${\mathcal{G}}_V$, it would suffice to only define the actions for ${\mathcal{G}}_V$, but we look at the simpler case of ${\mathcal{G}}_T$ first. We need to rewrite a cyclic permutation followed by a tree as a tree followed by a cyclic permutation. This is illustrated in Figure \[fig:t\_zappa\_szep\]. For $f \in {\mathcal{F}}(m,n)$ and $g = \ell + {\mathbb{Z}}/m{\mathbb{Z}}\in {\mathcal{G}}_T(m,m)$ the forest $g \cdot f$ is just $f$ with the trees rotated by $\ell$ to the right. The definition of $g^f$ is more subtle: looking at the figure we see that we have to define it to be $k + {\mathbb{Z}}/n{\mathbb{Z}}$ where $k$ is the number of leaves of the first $\ell$ trees of $g \cdot f$, or equivalently, the number of leaves of the last $\ell$ trees of $f$. Note that this number does not depend on the chosen representative $\ell$: if we replace $\ell$ by $\ell + m$, instead of $k$ we get $k+n$, because we counted every leaf once more. If $k_\ell$ denotes the number of leaves of the last $\ell$ trees of $f$, the sequence $(k_\ell)_{0 \le \ell < m}$ is strictly increasing. This shows: \[obs:tcat\_injective\] The action of ${\mathcal{F}}$ on ${\mathcal{G}}_T$ is injective. The actions of ${\mathcal{F}}$ and ${\mathcal{G}}_T$ on each other are indirect product actions. Conditions , , , , , are clear.
The condition in our setting follows from the fact that the last $k+\ell$ trees of $f$ are the last $\ell$ trees of $f$ plus the last $k$ trees of $(\ell + m{\mathbb{Z}}) \cdot f$. Condition can be verified by drawing a picture. The lemma allows us to define ${\mathcal{T}}= {\mathcal{F}}\bowtie {\mathcal{G}}_T$. Combining Observation \[obs:tcat\_injective\] with Proposition \[prop:thomcat\_ore\] and Corollary \[cor:f\_garside\_family\] and applying Proposition \[prop:zappa-szep\_conditions\] we find: \[cor:tcat\_righ\_ore\] The category ${\mathcal{T}}$ is right-Ore and admits a height function and a left-Garside family ${\mathcal{S}}$ that is closed under factors such that ${\mathcal{S}}(x,-)/{\mathcal{S}}^\times$ is finite for every $x$. The fundamental group $\pi_1({\mathcal{T}},1)$ is *Thompson’s group $T$*. Now we want to define the actions of ${\mathcal{F}}$ and ${\mathcal{G}}_V$ on each other. So let $f \in {\mathcal{F}}(m,n)$ and let $g \in {\mathcal{G}}_V(m,m)$. The action of ${\mathcal{G}}_V$ on ${\mathcal{F}}$ is again as expected: the forest $f' = g \cdot f$ is given by the relationship that the $g(j)$th tree of $f'$ is the $j$th tree of $f$. The permutation $g' = g^f \in {\mathcal{G}}_V(n,n)$ has the following description. Identify $\{1,\ldots,n\}$ with the leaves of $f$ and with the leaves of $g \cdot f$. If $i$ is the $k$th leaf of the $j$th tree of $f$ then $g'(i)$ is the $k$th leaf of the $g(j)$th tree of $g \cdot f$, see Figure \[fig:t\_zappa\_szep\]. At this point it becomes clear that working with the actions as described above is virtually impossible. To obtain a more explicit algebraic description, we make use of the presentation of ${\mathcal{F}}$. Property  tells us that we know how any element of ${\mathcal{F}}$ acts as soon as we know how the generators act and property  tells us that we know how ${\mathcal{G}}_V$ acts on any element once we know how it acts on the generators of ${\mathcal{F}}$.
It therefore suffices to specify both actions for generators of ${\mathcal{F}}$. Checking well-definedness then means checking various conditions coming from the relations in ${\mathcal{F}}$. So now we consider $g \in {\mathcal{G}}_V(m,m)$ and $\lambda_i^m \in {\mathcal{F}}(m,m+1)$ and define the actions on each other. We start again with the easy case: $$\label{eq:vcat_on_thomcat} g \cdot \lambda_i = \lambda_{g(i)}\text{.}$$ Working out $g^{\lambda_i}$ we have to distinguish four cases depending on the position of a point relative to $i$ and relative to $g(i)$: $$\label{eq:thomcat_on_vcat} g^{\lambda_i}(j) = \left\{ \begin{array}{ll} g(j) & j \le i, g(j) \le g(i),\\ g(j - 1) & j > i, g(j-1) < g(i),\\ g(j) + 1 & j \le i, g(j) > g(i),\\ g(j - 1) + 1 & j > i, g(j-1) \ge g(i)\text{.} \end{array} \right.$$ Since $g$ is injective, equality occurs in the first case only for $j = i$ and in the fourth case only for $j = i+1$; in particular $g^{\lambda_i}$ takes $i$ to $g(i)$ and $i+1$ to $g(i)+1$. \[lem:thomcat\_vcat\_actions\] The formulas and define well-defined indirect product actions of ${\mathcal{F}}$ and ${\mathcal{G}}_V$ on each other. The conditions that involve only the action of ${\mathcal{G}}_V$, namely , , and are clear. Condition is defined to hold. Verifying conditions , on the $\lambda_i$ is straightforward, although in the second case tedious. Conditions and should also be defined to hold, but in order for this to be well-defined, we need to check them on relations. That is, we need to verify that $$\begin{aligned} (g^{\lambda_i})^{\lambda_j} = g^{\lambda_i \lambda_j} &= g^{\lambda_j\lambda_{i+1}} = (g^{\lambda_j})^{\lambda_{i+1}}\mathrlap{\quad\text{and}}\\ (g \cdot \lambda_i)(g^{\lambda_i} \cdot \lambda_j) = g \cdot (\lambda_i \lambda_j) &= g \cdot (\lambda_j \lambda_{i+1}) = (g \cdot \lambda_j)(g^{\lambda_j} \cdot \lambda_{i+1})\end{aligned}$$ for $j < i$. These are again not difficult but tedious and we skip them here. See [@witzel16a Example 2.9] for a detailed verification.
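The skipped verifications can also be delegated to a machine. The following Python sketch (our own encoding, not from the text: permutations are $1$-indexed tuples composed by $(gh)(x) = g(h(x))$, and a forest enters only through the list of leaf counts of its trees) implements the rotation action defining ${\mathcal{T}}$ and the cloning map $g \mapsto g^{\lambda_i}$, with strand $i+1$ sent to $g(i)+1$, and checks the recovery of $g$ from $g^{\lambda_i}$ as well as the cocycle condition $(gh)^{\lambda_k} = g^{\lambda_{h(k)}}h^{\lambda_k}$ by brute force:

```python
from itertools import permutations

def comp(g, h):
    """composition (gh)(x) = g(h(x)) of 1-indexed permutation tuples"""
    return tuple(g[h[x] - 1] for x in range(len(g)))

# --- the T-case: only leaf counts matter -------------------------------
def rotate(f, l):
    """g . f for g = l + mZ: rotate the list f of leaf counts right by l"""
    m = len(f); l %= m
    return f[m - l:] + f[:m - l]

def t_exp(l, f):
    """g^f for g = l + mZ: leaves of the last l trees of f, mod n"""
    m, n = len(f), sum(f)
    return sum(f[m - (l % m):]) % n

f = [1, 3, 2]
for l1 in range(3):
    for l2 in range(3):  # cocycle condition, additively in Z/nZ
        assert t_exp(l1 + l2, f) == (t_exp(l1, rotate(f, l2)) + t_exp(l2, f)) % sum(f)

# --- the V-case: cloning a permutation ---------------------------------
def clone(g, i):
    """g^{lambda_i}: split strand i; strands i, i+1 go to g(i), g(i)+1"""
    n, gi = len(g), g[i - 1]
    out = []
    for j in range(1, n + 2):
        gj = g[j - 1] if j <= i else g[j - 2]
        out.append(gj if gj < gi or (j <= i and gj == gi) else gj + 1)
    return tuple(out)

def unclone(gp, i):
    """recover g from gp = g^{lambda_i}; note that g(i) = gp(i)"""
    gi = gp[i - 1]
    return tuple((x if x <= gi else x - 1)
                 for x in (gp[j - 1] if j <= i else gp[j]
                           for j in range(1, len(gp))))

for g in permutations((1, 2, 3)):
    for i in (1, 2, 3):
        assert sorted(clone(g, i)) == [1, 2, 3, 4]
        assert unclone(clone(g, i), i) == g            # injectivity
        for h in permutations((1, 2, 3)):
            # cocycle: (gh)^{lambda_i} = g^{h . lambda_i} h^{lambda_i}
            assert clone(comp(g, h), i) == comp(clone(g, h[i - 1]), clone(h, i))
```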
Thus we can define ${\mathcal{V}}= {\mathcal{F}}\bowtie {\mathcal{G}}_V$. \[lem:vcat\_injective\] The action of ${\mathcal{F}}$ on ${\mathcal{G}}_V$ defined by is injective. Since by definition $g^{\lambda_{i_1} \cdots \lambda_{i_n}} = (\ldots(g^{\lambda_{i_1}})\ldots)^{\lambda_{i_n}}$ we only need to check that the map $g \mapsto g^{\lambda_i}$ defined in is injective. But $g$ can be recovered from $g^{\lambda_i}$ as follows. Let $\tau_i, \pi_i \colon {\mathbb{N}}\to {\mathbb{N}}$ be given by $$\begin{aligned} \tau_i(j) &{\mathrel{\vcentcolon =}}\left\{\begin{array}{ll} j & j \le i\\ j+1 & j > i \end{array} \right. &\pi_i(j) &{\mathrel{\vcentcolon =}}\left\{\begin{array}{ll} j & j \le i\\ j-1 & j > i \end{array} \right.\end{aligned}$$ Then $g(j) = \pi_{g(i)}(g^{\lambda_i}(\tau_i(j)))$, where $g(i)$ can be read off as $g^{\lambda_i}(i)$. Proposition \[prop:thomcat\_ore\], Corollary \[cor:f\_garside\_family\] and Proposition \[prop:zappa-szep\_conditions\] now imply: \[cor:vcat\_righ\_ore\] The category ${\mathcal{V}}$ is right-Ore and admits a height function and a left-Garside family ${\mathcal{S}}$ that is closed under factors such that ${\mathcal{S}}(x,-)/{\mathcal{S}}^\times$ is finite for every $x$. The fundamental group $\pi_1({\mathcal{V}},1)$ is *Thompson’s group $V$*. The braided Thompson groups --------------------------- The group $\mathit{BV}$, called *braided $V$*, was introduced independently by Brin [@brin07] and Dehornoy [@dehornoy06]. We describe it using our framework, which is similar to Brin’s approach. To define the categories underlying the braided Thompson groups, we define the groupoid ${\mathcal{G}}_{\mathit{BV}}$ to have as objects natural numbers, and to have morphisms ${\mathcal{G}}_{\mathit{BV}}(m,n) = \emptyset$ for $m \ne n$, and ${\mathcal{G}}_{\mathit{BV}}(n,n) = {\textsc{Braid}}_n$. Note that the morphisms $\pi \colon {\textsc{Braid}}_n \to {\textsc{Sym}}_n$ define a morphism ${\mathcal{G}}_{\mathit{BV}} \to {\mathcal{G}}_V$ that we denote by $\pi$ as well.
We want to define an indirect product ${\mathcal{F}}\bowtie {\mathcal{G}}_{\mathit{BV}}$ and need to define actions of ${\mathcal{F}}$ and ${\mathcal{G}}_{\mathit{BV}}$ on each other. Our guiding picture is Figure \[fig:braided\_v\]. (Figure \[fig:braided\_v\]: a braid followed by a splitting rewritten as a splitting followed by a braid with a doubled strand.) We define the action of ${\mathcal{G}}_{\mathit{BV}}$ on ${\mathcal{F}}$ simply as the action of ${\mathcal{G}}_V$ composed with $\pi$. In particular, $\sigma_i \cdot \lambda_i = \lambda_{i+1}$, $\sigma_i \cdot \lambda_{i+1} = \lambda_i$ and $\sigma_i \cdot \lambda_j = \lambda_j$ for $j \ne i,i+1$. The action of ${\mathcal{F}}$ on ${\mathcal{G}}_{\mathit{BV}}$ we only define for generators acting on generators by $$\sigma_i^{\lambda_j} {\mathrel{\vcentcolon =}}\left\{ \begin{array}{ll} \sigma_{i+1} & j <i\\ \sigma_i \sigma_{i+1} & j = i\\ \sigma_{i+1} \sigma_i & j = i+1\\ \sigma_i & j > i+1\text{.} \end{array} \right.$$ \[lem:bv\_well\_defined\] The formulas above define well-defined indirect product actions of ${\mathcal{F}}$ and ${\mathcal{G}}_{\mathit{BV}}$ on each other.
In the proof we will use the fact that there is a set-theoretic splitting $\iota \colon {\textsc{Sym}}_n \to {\textsc{Braid}}_n$ that takes a reduced word $w(s_1, \ldots, s_{n-1})$ to the braid $w(\sigma_1,\ldots,\sigma_{n-1})$. This map is not multiplicative but if $\beta$ is a positive word (meaning involving no inverses) of length at most $3$ in the $\sigma_i$ whose image in ${\textsc{Sym}}_n$ is reduced then $\iota\pi(\beta) = \beta$. As in the proof of Lemma \[lem:thomcat\_vcat\_actions\] most conditions hold by definition but we need to check well-definedness on relations. Namely $$\begin{aligned} (\sigma_i \sigma_{i+1} \sigma_i) \cdot \lambda_k = \sigma_i \cdot (\sigma_{i+1} \cdot (\sigma_i \cdot \lambda_k)) &= \sigma_{i+1}\cdot (\sigma_i \cdot (\sigma_{i+1} \cdot \lambda_k)) = (\sigma_{i+1} \sigma_i \sigma_{i+1}) \cdot \lambda_k,\label{eq:bv_braid_acts}\\ (\sigma_i\sigma_{i+1}\sigma_i)^{\lambda_k} = \sigma_i^{(\sigma_{i+1}\sigma_i) \cdot \lambda_k}\sigma_{i+1}^{\sigma_i \cdot \lambda_k} \sigma_i^{\lambda_k}&= \sigma_{i+1}^{(\sigma_i\sigma_{i+1}) \cdot \lambda_k}\sigma_i^{\sigma_{i+1} \cdot \lambda_k} \sigma_{i+1}^{\lambda_k}= (\sigma_{i+1}\sigma_i\sigma_{i+1})^{\lambda_k},\label{eq:bv_braid_is_acted}\\ (\sigma_i \sigma_j) \cdot \lambda_k = \sigma_i \cdot (\sigma_j \cdot \lambda_k) &= \sigma_j \cdot (\sigma_i \cdot \lambda_k) = (\sigma_j \sigma_i) \cdot \lambda_k,\label{eq:bv_commutator_acts}\\ (\sigma_i\sigma_j)^{\lambda_k} = \sigma_i^{\sigma_j \cdot \lambda_k} \sigma_j^{\lambda_k} &= \sigma_j^{\sigma_i \cdot \lambda_k} \sigma_i^{\lambda_k} = (\sigma_j\sigma_i)^{\lambda_k},\label{eq:bv_commutator_is_acted}\\ \sigma_i \cdot (\lambda_\ell \lambda_k) = (\sigma_i \cdot \lambda_\ell)(\sigma_i^{\lambda_\ell} \cdot \lambda_k) &= (\sigma_i \cdot \lambda_k)(\sigma_i^{\lambda_k} \cdot \lambda_{\ell+1}) = \sigma_i \cdot (\lambda_k \lambda_{\ell+1}), \mathrlap{\quad\text{and}}\label{eq:bv_split_is_acted}\\ \sigma_i^{\lambda_\ell \lambda_k} = (\sigma_i^{\lambda_\ell})^{\lambda_k} &=
(\sigma_i^{\lambda_k})^{\lambda_{\ell+1}} = \sigma_i^{\lambda_k \lambda_{\ell+1}}\label{eq:bv_split_acts}\end{aligned}$$ for $|i-j| \ge 2$, $\ell > k$. Relations  and  follow from Lemma \[lem:thomcat\_vcat\_actions\]. For the remaining relations note that $\pi(\beta^{\lambda_k}) = \pi(\beta)^{\lambda_k}$. Now  follows from Lemma \[lem:thomcat\_vcat\_actions\] as well because $$\label{eq:thomcat_action_braids_permutations} \pi(\sigma_i^{\lambda_\ell}) \cdot \lambda_{k} = \sigma_i^{\lambda_\ell} \cdot \lambda_{k}\quad\text{and}\quad\pi(\sigma_i^{\lambda_k}) \cdot \lambda_{\ell+1} = \sigma_i^{\lambda_k} \cdot \lambda_{\ell+1}\text{.}$$ Relation  follows from Lemma \[lem:thomcat\_vcat\_actions\] by noting that both sides are positive words of length at most $3$ and applying $\iota$. We verify by distinguishing cases. The cases $k<i$ and $k > i+2$ are clear. If $k = i+1$ then the left hand side equals $(\sigma_i\sigma_{i+1})\sigma_{i+2}(\sigma_{i+1}\sigma_i)$ and the right hand side equals $(\sigma_{i+2}\sigma_{i+1})\sigma_i(\sigma_{i+1}\sigma_{i+2})$. Both are equivalent through two braid relations with intermediate commutator relations. The cases $k = i$ and $k = i+2$ are symmetric and we only verify $k = i$. The left hand side equals $\sigma_i (\sigma_{i+1}\sigma_{i+2})(\sigma_i\sigma_{i+1})$ while the right hand side equals $(\sigma_{i+1}\sigma_{i+2})(\sigma_i\sigma_{i+1})\sigma_{i+2}$. Again these are equivalent through two braid relations with intermediate commutator relations. Relation is left to the reader. For future reference we record which in the presence of Lemma \[lem:bv\_well\_defined\] can be formulated as: \[obs:bv\_to\_v\_f\_equiv\] The morphism $\pi \colon {\mathcal{G}}_{\mathit{BV}} \to {\mathcal{G}}_V$ is equivariant with respect to the ${\mathcal{F}}$-action in the sense that $$\pi(\beta^f) = \pi(\beta)^f$$ for $\beta \in {\mathcal{G}}_{\mathit{BV}}$ and $f \in {\mathcal{F}}$.
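The equivariance $\pi(\beta^f) = \pi(\beta)^f$ lends itself to a mechanical check on positive braid words. In the Python sketch below (our own encoding and helper names; a positive word is a list of generator indices with the leftmost letter acting last, and permutations are $1$-indexed tuples) the action of $\lambda_k$ on a word is computed letter by letter using the cocycle condition, and its image in the symmetric group is compared with the cloned underlying permutation:

```python
from itertools import product

def s(i, n):
    """the permutation underlying sigma_i in Braid_n, as a 1-indexed tuple"""
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def comp(g, h):  # (gh)(x) = g(h(x))
    return tuple(g[h[x] - 1] for x in range(len(g)))

def proj(word, n):
    """pi: project a positive braid word to its permutation"""
    g = tuple(range(1, n + 1))
    for i in word:
        g = comp(g, s(i, n))
    return g

def clone_perm(g, i):
    """g^{lambda_i} for a permutation g"""
    n, gi = len(g), g[i - 1]
    out = []
    for j in range(1, n + 2):
        gj = g[j - 1] if j <= i else g[j - 2]
        out.append(gj if gj < gi or (j <= i and gj == gi) else gj + 1)
    return tuple(out)

def clone_word(word, k):
    """beta^{lambda_k} for a positive braid word beta, letter by letter:
    the rightmost letter acts on lambda_k first and moves the index k"""
    out = []
    for i in reversed(word):
        if k < i:
            piece = [i + 1]
        elif k == i:
            piece = [i, i + 1]
        elif k == i + 1:
            piece = [i + 1, i]
        else:
            piece = [i]
        out = piece + out
        k = i + 1 if k == i else i if k == i + 1 else k  # k <- s_i(k)
    return out

# pi(beta^{lambda_k}) = pi(beta)^{lambda_k} for all short words in Braid_3
for r in range(4):
    for word in product((1, 2), repeat=r):
        for k in (1, 2, 3):
            assert proj(clone_word(list(word), k), 4) == clone_perm(proj(word, 3), k)
```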
We define the category ${\mathcal{BV}}$ to be ${\mathcal{F}}\bowtie {\mathcal{G}}_{\mathit{BV}}$ with the above indirect product actions. The action of ${\mathcal{G}}_{\mathit{BV}}$ on ${\mathcal{F}}$ is injective. We only need to check that $\beta \mapsto \beta^{\lambda_i}$ is injective. But $\beta$ can be recovered from $\beta^{\lambda_i}$ by removing the $(i+1)$st strand. \[cor:bvcat\_righ\_ore\] The category ${\mathcal{BV}}$ is right-Ore. The fundamental group $\pi_1({\mathcal{BV}}, 1)$ is the *braided Thompson group $\mathit{BV}$*. It is now easy to define braided versions of $T$ and $F$. We let ${\mathcal{G}}_{\mathit{BT}}$ and ${\mathcal{G}}_{\mathit{BF}}$ be the inverse images under $\pi \colon {\mathcal{G}}_{\mathit{BV}} \to {\mathcal{G}}_V$ of ${\mathcal{G}}_T$ and ${\mathcal{G}}_F$ respectively. Both of these act on ${\mathcal{F}}$ by restricting the action of ${\mathcal{G}}_{\mathit{BV}}$, which is the same as saying that they act through $\pi$. The action of ${\mathcal{F}}$ on ${\mathcal{G}}_{\mathit{BV}}$ leaves ${\mathcal{G}}_{\mathit{BT}}$ and ${\mathcal{G}}_{\mathit{BF}}$ invariant and restricts to actions on these, thanks to Observation \[obs:bv\_to\_v\_f\_equiv\]: we know from Section \[sec:tcat\_vcat\] that ${\mathcal{F}}$ leaves ${\mathcal{G}}_{T}$ invariant and it is axiomatically required that it leaves the trivial groupoid invariant. Hence if $\beta \in {\mathcal{G}}_{\mathit{BT}}$ and $f \in {\mathcal{F}}$ then $\pi(\beta^f) = \pi(\beta)^f \in {\mathcal{G}}_T$ so that $\beta^f \in {\mathcal{G}}_{\mathit{BT}}$, and analogous reasoning applies for $\beta \in {\mathcal{G}}_{\mathit{BF}}$. As a consequence we can define the categories ${\mathcal{BT}}= {\mathcal{F}}\bowtie {\mathcal{G}}_{\mathit{BT}}$ and ${\mathcal{BF}}= {\mathcal{F}}\bowtie {\mathcal{G}}_{\mathit{BF}}$, which are right-Ore. The group $\mathit{BF} = \pi_1({\mathcal{BF}},1)$ is called *braided $F$* and was first introduced in [@brady08].
We call the group $\mathit{BT} = \pi_1({\mathcal{BT}},1)$ *braided $T$*. \[rem:bt\] The group $\mathit{BT}$ was not introduced before for the following technical reason. Instead of our category ${\mathcal{BV}}$ Brin [@brin07] used a monoid that can be thought of as a category with a single object $\omega$ which represents countably infinitely many strands. This is possible because splitting one of countably infinitely many strands leads to countably infinitely many strands and because braid groups ${\textsc{Braid}}_n$ are contained in a braid group $\varinjlim {\textsc{Braid}}_n$ on infinitely many strands. A practical downside of that approach is that the group of fractions of that monoid is too big so one needs to describe which elements should be elements of $\mathit{BV}$. A formal downside is that groups like $\mathit{BT}$ or even $T$ cannot be described because ${\mathbb{Z}}/n{\mathbb{Z}}$ is not contained in ${\mathbb{Z}}/(n+1){\mathbb{Z}}$ so that the needed limit does not exist. Despite this formal problem, the main topological ingredient to establishing the finiteness properties of $\mathit{BT}$ has been verified in [@bux16 Section 3.4]. Groups arising from cloning systems ----------------------------------- In [@witzel16a] Zaremsky and the author have defined (filtered) cloning systems to be the data needed to define indirect product actions of ${\mathcal{F}}$ and a groupoid on each other. Thus the groups considered there are by definition fundamental groups of categories ${\mathcal{F}}\bowtie {\mathcal{G}}$ where ${\mathcal{G}}$ is a groupoid. However, the approach follows Brin [@brin07] to construct the groups as subgroups of an indirect product of monoids ${\mathcal{F}}_\infty \bowtie {\mathcal{G}}_\infty$. As a consequence it has to deal with technical complications such as the notion of being *properly graded*, as well as practical shortcomings such as being unable to construct (braided) $T$. 
Our categorical approach removes the necessity that the groups $(G_n)_n$ fit into a directed system of groups and therefore the whole discussion goes through without that assumption. Thus a *cloning system* is given by a sequence $(G_n)_{n \in {\mathbb{N}}}$ of groups, a sequence of morphisms $\rho_n \colon G_n \to {\textsc{Sym}}_n$ and a family of maps $(\kappa^n_k)_{k \le n} \colon G_n \to G_{n+1}$ such that the following hold for all $k \le n$, $k<\ell$, and $g,h\in G_n$: 1. $(gh) \kappa_k^n = (g)\kappa_{\rho(h)k}^n(h)\kappa_k^n$. \[item:fcs\_cloning\_a\_product\](Cloning a product) 2. $\kappa_\ell^n \circ \kappa_k^{n+1} = \kappa_k^n \circ \kappa_{\ell+1}^{n+1}$.\[item:fcs\_product\_of\_clonings\](Product of clonings) 3. $\rho_{n+1}((g)\kappa^n_k)(i) = (\rho_n(g)) \varsigma^n_k (i)$ for all $i\ne k,k+1$.\[item:fcs\_compatibility\](Compatibility) Here $\varsigma^n_k$ describes the action of ${\mathcal{F}}$ on ${\mathcal{G}}_V$ so that $(g)\varsigma^n_k(j) = g^{\lambda_k}(j)$ as in . Given a cloning system, a groupoid ${\mathcal{G}}$ is defined by setting ${\mathcal{G}}(m,n) = \emptyset$ if $m \ne n$ and setting ${\mathcal{G}}(n,n) = G_n$. Indirect product actions of ${\mathcal{F}}$ and ${\mathcal{G}}$ on each other are defined by $g \cdot \lambda^n_k = \lambda^{n+1}_{\rho_n(g)k}$ and $g^{\lambda^n_k} = (g)\kappa^n_k$ for $g \in G_n$. The axioms , ,  ensure that these indeed define indirect product actions. The Higman–Thompson groups -------------------------- In total analogy to Section \[sec:tcat\_vcat\] one can define ${\mathcal{T}}_n = {\mathcal{F}}_n \bowtie {\mathcal{G}}_T$ and ${\mathcal{V}}_n = {\mathcal{F}}_n \bowtie {\mathcal{G}}_V$. As mentioned in Section \[sec:fn\] the category ${\mathcal{F}}_n$ is not connected for $n > 2$ and neither are the categories ${\mathcal{T}}_n$ and ${\mathcal{V}}_n$.
Thus it makes sense to define the groups $$\begin{aligned} T_{n,r} &= \pi_1({\mathcal{T}}_n,r)\\ V_{n,r} &= \pi_1({\mathcal{V}}_n,r)\end{aligned}$$ and unlike the situation of ${\mathcal{F}}_n$, these groups are indeed distinct for different $r$. They are the remaining *Higman–Thompson groups*. Groups from graph rewriting systems {#sec:graph_rewriting} ----------------------------------- We now look at indirect products that do not involve ${\mathcal{F}}$. The corresponding groups have been introduced and described in some detail in [@belk]. In this section, when we talk about graphs we will take their edges to be directed and allow multiple edges and loops. In particular, every edge has an initial and a terminal vertex. The edge set of a graph $G$ is denoted $E(G)$ and the vertex set $V(G)$. An *edge replacement rule* $e \to R$ consists of a single directed edge $e$ and a finite graph $R$ that contains the two vertices of $e$ (but not $e$ itself). If $G$ is any graph and $\varepsilon$ is an edge of $G$, the edge replacement rule can be *applied to $G$ at $\varepsilon$* by removing $\varepsilon$ and adding in a copy of $R$ while identifying the initial/terminal vertex of $\varepsilon$ with the initial/terminal vertex of $e$ in $R$. The resulting graph is denoted $G \lhd \varepsilon$. If $\delta$ is another edge of $G$, then it is also an edge of $G \lhd \varepsilon$ and so the replacement rule can be applied to $G \lhd \varepsilon$ at $\delta$. We regard $G \lhd \varepsilon \lhd \delta$ and $G \lhd \delta \lhd \varepsilon$ as the same graph. The vagueness inherent in the last sentence can be remedied by declaring that a graph obtained from $G$ by applying the edge replacement rule (possibly many times) has as edges words in $E(G) \times E(R)^*$ and as vertices words in $V(G) \cup (E(G) \times E(R)^* \times V(R))$.
For example, the graph $G \lhd \varepsilon \lhd \delta$ would have edges $\zeta \in E(G) \smallsetminus \{\varepsilon, \delta\}$ and $\varepsilon\xi$, $\delta\xi$ for $\xi \in E(R)$ and vertices $v \in V(G)$ as well as $\varepsilon w$ and $\delta w$ for $w \in V(R)$. For every edge replacement rule $e \to R$ we define a category ${\mathcal{R}}_{e \to R}$ whose objects are finite graphs. In order for the category to be small we will take the graphs to have vertices and edges coming from a fixed countable set, which in addition is closed under attaching words in $E(R)$ and $V(R)$. The category is presented by having generators $$\lambda^G_\varepsilon \in {\mathcal{R}}_{e \to R}(G, G\lhd \varepsilon)\quad \text{for}\quad G\text{ a graph and }\varepsilon \text{ an edge of }G$$ subject to the relations $$\label{eq:rewriting_relation} \lambda^G_\delta \lambda^{G \lhd \delta}_\varepsilon = \lambda^{G}_\varepsilon \lambda^{G \lhd \varepsilon}_\delta \quad \text{for}\quad G\text{ a graph and }\delta, \varepsilon \text{ distinct edges of }G\text{.}$$ For any edge replacement rule $e \to R$ the category ${\mathcal{R}}_{e \to R}$ is right-Ore. Thanks to the relations a morphism $\lambda_{\varepsilon_1} \ldots \lambda_{\varepsilon_k}$ in ${\mathcal{R}}_{e \to R}$ is uniquely determined by its source, its target, and the set $\{\varepsilon_1, \ldots, \varepsilon_k\}$. The claim now follows by taking differences and unions of these sets of edges. As in previous sections, the second ingredient will be a groupoid. Its definition does not depend on the edge replacement rule, except possibly for the foundational issues of choosing universal sets of vertices and edges. We define ${\mathcal{G}}_\text{graph}$ to have as objects finite graphs and as morphisms isomorphisms of graphs. We define actions of ${\mathcal{R}}_{e \to R}$ and ${\mathcal{G}}_\text{graph}$ on each other as follows. 
If $g \colon G \to G'$ is an isomorphism of graphs and $\varepsilon \in E(G)$ is an edge then $$g \cdot \lambda^G_{\varepsilon} = \lambda^{G'}_{g(\varepsilon)}$$ and $g^{\lambda_{\varepsilon}}$ is the isomorphism $G \lhd \varepsilon \to G' \lhd g(\varepsilon)$ that takes $\delta$ to $g(\delta)$ for $\delta \in E(G) \smallsetminus \{\varepsilon\}$ and that takes $\varepsilon\zeta$ to $g(\varepsilon)\zeta$ for $\zeta \in V(R) \cup E(R)$. The following is easy to verify: The actions of ${\mathcal{R}}_{e \to R}$ and ${\mathcal{G}}_\text{graph}$ on each other defined above are well-defined indirect product actions. The action of ${\mathcal{R}}_{e \to R}$ on ${\mathcal{G}}_\text{graph}$ is injective. As a consequence we obtain a right-Ore category ${\mathcal{RG}}_{e \to R} {\mathrel{\vcentcolon =}}{\mathcal{R}}_{e \to R} \bowtie {\mathcal{G}}_\text{graph}$ and for every finite graph $G$ we obtain a potential group $\pi_1({\mathcal{RG}}_{e \to R}, G)$. If we consider the edge replacement rule $e \to L_2$, which subdivides an edge into a path of two edges, and take $L_1$ to be the graph consisting of a single edge, then $\pi_1({\mathcal{RG}}_{e \to L_2}, L_1)$ is isomorphic to $F$. Similarly, if $C_1$ is the graph consisting of a single loop then $\pi_1({\mathcal{RG}}_{e \to L_2}, C_1)$ is isomorphic to $T$. Finally, $V$ arises as $\pi_1({\mathcal{RG}}_{e \to D_2}, L_1)$ where the rule $e \to D_2$ replaces an edge by two disconnected edges. Various fundamental groups of categories arising from graph rewriting systems are described in [@belk]. Here we will only mention the Basilica Thompson group introduced by them in [@belk15].
We consider the replacement rule $e \to R$, in which an edge is replaced by a path of two edges with a loop attached at the middle vertex, and the Basilica graph $G$, which consists of two vertices, each carrying a loop, joined by a pair of oppositely directed edges. The *Basilica Thompson group* is $T_B {\mathrel{\vcentcolon =}}\pi_1({\mathcal{RG}}_{e \to R}, G)$. Examples: Finiteness properties {#sec:finiteness_properties_examples} =============================== In this final section we give various examples of applications of Theorem \[thm:generic\_proof\] and Corollary \[cor:generic\_proof\]. In most cases these finiteness properties are known and the proofs involve proving that certain complexes are highly connected. We will see that these complexes always coincide with the complexes ${\lvert E(x) \rvert}$. As a consequence the connectivity statement from the literature together with Theorem \[thm:generic\_proof\] gives the result. Finiteness properties of Thompson’s groups ------------------------------------------ We start with the categories ${\mathcal{T}}$ and ${\mathcal{V}}$. The conditions needed to apply the results from Section \[sec:finiteness\_properties\] have been verified in Corollary \[cor:tcat\_righ\_ore\] and Corollary \[cor:vcat\_righ\_ore\]. In order to apply Corollary \[cor:generic\_proof\] two more things are left to verify: that automorphism groups are of type $F_\infty$ and that the connectivity of the simplicial complexes ${\lvert E(n) \rvert}$ goes to infinity with $n$.
The groups ${\mathcal{F}}(n,n) = \{1\}$, ${\mathcal{T}}(n,n) = {\mathbb{Z}}/n{\mathbb{Z}}$ and ${\mathcal{V}}(n,n) = {\textsc{Sym}}_n$ are all finite and therefore of type $F_\infty$. In order to describe the complexes $E(n)$, we need to talk about further graphs. The *cyclic graph* $C_n$ has the same edges as $L_n$ and additionally the edge $\{1,n\}$. The *complete graph* $K_n$ has all edges $\{i,j\}$, $1 \le i < j \le n$. We describe the complexes $E(n)$ in the case of ${\mathcal{V}}$ and leave ${\mathcal{T}}$ to the reader. \[lem:link\_match\_cyclic\] The poset $E_{\mathcal{T}}(n)$ is isomorphic to ${\mathcal{M}}(C_n)$. \[lem:link\_match\_complete\] There is a poset-morphism $E_{\mathcal{V}}(n) \to {\mathcal{M}}(K_n)$ whose fibers over $k$-simplices are $k$-spheres. Every element of $({\mathcal{E}}\bowtie {\mathcal{G}}_V)(-,n)$ can be written as a product $fg$ of an elementary forest $f \in {\mathcal{E}}(-,n)$ and a permutation $g \in {\mathcal{G}}_V(n,n)$. By definition the vertices of $E(n)$ are these products modulo multiplication by permutations from the left. As in Lemma \[lem:link\_match\_linear\] an elementary forest can be interpreted as a matching on $L_n$. Under this correspondence, the group ${\mathcal{G}}_V(n,n) = {\textsc{Sym}}_n$ acts on the vertices of $L_n$ and the permutations from the left act on components of the matching. Thus elements of $({\mathcal{E}}\bowtie {\mathcal{G}}_V)(-,n)$ can be described by matchings on the linear graph on $g^{-1}(1), \ldots, g^{-1}(n)$ modulo reordering the components of the matching. The possibility of reordering the vertices of the matching means that any two elements of $\{1,\ldots,n\}$ can be connected and so we obtain a map ${\lvert E(n) \rvert} \to {\mathcal{M}}(K_n)$ to the matching complex of the *complete* graph on $\{1,\ldots,n\}$. This map is clearly surjective. It is not injective because in $E(n)$ the order of two matched vertices matters while in ${\mathcal{M}}(K_n)$ it does not.
For example, $\lambda_i$ and $\lambda_i (i\ i+1)$ map to the same vertex in ${\mathcal{M}}(K_n)$. As a result the fiber over a $k$-simplex is a join of $k+1$ many $0$-spheres, i.e. a $k$-sphere. The fact that the morphism in Lemma \[lem:link\_match\_complete\] is not an isomorphism means that we have to do one extra step, namely to apply the following result by Quillen [@quillen73 Theorem 9.1]. Rather than giving the general formulation for posets we restrict to face posets of ($n$-skeleta of) simplicial complexes, to save us some notation. \[thm:quillen\] Let $n \in {\mathbb{N}}$ and let $f \colon X \to Y$ be a simplicial map. Assume that $Y$ is $(n-1)$-connected and that for every $k$-simplex $\sigma$ of $Y$ the link ${\operatorname{lk}}\sigma$ is $(n -\dim \sigma-2)$-connected and the fiber ${\lvert f^{-1}(\sigma) \rvert}$ is $(k-1)$-connected. Then $X$ is $(n-1)$-connected. Thompson’s groups $T$ and $V$ are of type $F_\infty$. Using Corollary \[cor:generic\_proof\] we need to show that the connectivity of the complexes ${\lvert E(n) \rvert}$ goes to infinity as $n$ goes to infinity. We work with the simplicial complexes $E(n)$ instead. In the case of $T$ the complexes are matching complexes by Lemma \[lem:link\_match\_cyclic\] whose connectivity goes to infinity by Lemma \[lem:match\_connectivity\]. In the case of $V$ the complexes map to matching complexes with good fibers by Lemma \[lem:link\_match\_complete\]. Noting that the link of a $k$-simplex in ${\mathcal{M}}(K_n)$ is isomorphic to ${\mathcal{M}}(K_{n-2(k+1)})$, we can apply Theorem \[thm:quillen\] to see that the connectivity of $E_{\mathcal{V}}$ goes to infinity as well. Finiteness properties of braided Thompson groups ------------------------------------------------ We have already seen that ${\mathcal{BF}}$, ${\mathcal{BT}}$, and ${\mathcal{BV}}$ are right-Ore.
That they admit a height function and a left-Garside family follows via Proposition \[prop:zappa-szep\_conditions\], just as it did for ${\mathcal{T}}$ and ${\mathcal{V}}$. The braid groups ${\mathcal{BV}}^\times(n,n) = {\mathcal{G}}_{\mathit{BV}}(n,n)$ are of type $F$ by Corollary \[cor:garside\_complex\] (and hence of type $F_\infty$). Consequently the finite index subgroups of pure braids ${\mathcal{BF}}^\times(n,n)$ and of cyclically-permuting braids ${\mathcal{BT}}^\times(n,n)$ are of type $F$ as well. It remains to understand the complexes ${\lvert E(n) \rvert}$. For that purpose, we will want to think of braid groups as mapping class groups. Let $D$ be a closed disc with $n$ punctures $p_1, \ldots, p_n$ which we can think of as distinguished points in the interior of $D$. The mapping class group of the $n$-punctured disc is $${\operatorname{Homeo}}^+(D \smallsetminus \{p_1,\ldots, p_n\}, \partial D)/{\operatorname{Homeo}}^+_0(D \smallsetminus \{p_1,\ldots, p_n\}, \partial D)$$ where ${\operatorname{Homeo}}^+(D \smallsetminus \{p_1,\ldots, p_n\}, \partial D)$ is the group of orientation-preserving homeomorphisms of $D \smallsetminus \{p_1,\ldots, p_n\}$ that fix $\partial D$ and ${\operatorname{Homeo}}^+_0(D \smallsetminus \{p_1,\ldots, p_n\}, \partial D)$ is the subgroup of homeomorphisms that are isotopic to the identity. It is well-known that the mapping class group of the $n$-punctured disc is isomorphic to the braid group, see for example [@kassel08]. With this description in place, we can start to look at the complexes ${\lvert E(n) \rvert}$. Let $fg \in E(n)$ with $f \in {\mathcal{E}}(-,n)$ and $g \in {\mathcal{G}}_{\mathit{BV}}(n,n)$. Regard the $n$ punctures $p_1, \ldots, p_n$ as the vertices of an $L_n$ embedded into $D$. As we have seen before, $f$ corresponds to a matching $M_f$ on $L_n$ which we now regard as a disjoint selection of the fixed arcs connecting pairs of adjacent punctures. 
The element $g$, regarded as a mapping class, acts on $M_f$ and we obtain a set $M_fg$ of disjoint arcs connecting some pairs of punctures. Such a collection of arcs is called an *arc matching* in [@bux16]. Note that if $f \in {\mathcal{E}}(k,n)$ so that the arc matching consists of $n-k$ arcs, then removing the arcs from the punctured disc results in a $k$-punctured disc. The action of ${\mathcal{G}}_{\mathit{BV}}(k,k)$ from the left is just the action of the mapping class group of that $k$-punctured disc and in particular does nothing to $M_f$. For a subgraph $\Gamma$ of $K_n$ the *arc matching complex* ${\mathcal{MA}}(\Gamma)$ is the simplicial complex whose $k$-simplices are sets of $k+1$ pairwise disjoint arcs connecting punctures with the condition that an arc can only connect two punctures if they are connected by an edge in $\Gamma$. \[prop:link\_arc\_match\] There are surjective morphisms of simplicial complexes 1. $E_{\mathcal{BF}}(n) \to {\mathcal{MA}}(L_n)$\[item:arc\_map\_BF\] 2. $E_{\mathcal{BT}}(n) \to {\mathcal{MA}}(C_n)$\[item:arc\_map\_BT\] 3. $E_{\mathcal{BV}}(n) \to {\mathcal{MA}}(K_n)$\[item:arc\_map\_BV\] whose fiber over any $k$-simplex is the join of $k+1$ countably infinite discrete sets. The product $fg \in E(n)$ is taken to the arc matching $M_fg$ as described above. Since ${\mathcal{G}}_{\mathit{BF}}(n,n)$ takes every puncture to itself, the map lands in ${\mathcal{MA}}(L_n)$. Similarly, since ${\mathcal{G}}_{\mathit{BT}}(n,n)$ cyclically permutes the punctures, the map lands in ${\mathcal{MA}}(C_n)$. Surjectivity is clear. To describe the fibers consider a disc $D'$ containing $p_i$ and $p_{i+1}$ but none of the other punctures and let $\beta$ be a braid that is arbitrary inside $D'$ but trivial outside. Then $\lambda_i \beta$ maps to the same arc (= vertex of ${\mathcal{MA}}(K_n)$) irrespective of $\beta$.
Thus the fiber over this vertex is the mapping class group of $D' \smallsetminus \{p_i, p_{i+1}\}$ in the case of ${\mathcal{BV}}$ and is the pure braid group of $D' \smallsetminus \{p_i,p_{i+1}\}$ in the cases of ${\mathcal{BF}}$ and ${\mathcal{BT}}$. In either case it is a countably infinite discrete set. The connectivity properties of arc matching complexes have been studied in [@bux16]. We summarize Theorem 3.8, Corollary 3.11, and the remark in Section 3.4 in the following theorem. It applies to arc matching complexes not only on discs but on arbitrary surfaces with (possibly empty) boundary. \[thm:arc\_matching\_connectivity\] 1. ${\mathcal{MA}}(K_n)$ is $(\nu(n)-1)$-connected, 2. ${\mathcal{MA}}(C_n)$ is $(\eta(n-1)-1)$-connected, 3. ${\mathcal{MA}}(L_n)$ is $(\eta(n)-1)$-connected, where $\nu(n) = {\lfloor \frac{n-1}{3} \rfloor}$ and $\eta(n) = {\lfloor \frac{n-1}{4} \rfloor}$. \[thm:braided\_finiteness\_properties\] The braided Thompson groups $\mathit{BF}$, $\mathit{BT}$, and $\mathit{BV}$ are of type $F_\infty$. We want to apply Corollary \[cor:generic\_proof\]. By Proposition \[prop:link\_arc\_match\] the complexes $E(n)$ map onto arc matching complexes and we want to apply Theorem \[thm:quillen\]. To do so, we need to observe that the link of a $(k-1)$-simplex in an arc matching complex on a surface with $n$ punctures is an arc matching complex with $n-2k$ punctures, where the $k$ arcs connecting pairs of punctures have been turned into boundary components. Putting these results together shows that the connectivity properties of $E(n)$ go to infinity with $n$ by Theorem \[thm:arc\_matching\_connectivity\]. Absence of finiteness properties -------------------------------- Theorem \[thm:generic\_proof\] gives a way to prove that certain groups are of type $F_n$. If the group is not of type $F_n$ one of the hypotheses fails.
We will now discuss to what extent the construction is (un-)helpful in proving that the group is not of type $F_n$ depending on which hypothesis fails. In the first case the groups ${\mathcal{C}}^\times(x,x)$ are not of type $F_n$ (even for $\rho(x)$ large). In this case the general part of Brown’s criterion, Theorem \[thm:browns\_criterion\_negative\], cannot be applied. Thus the whole construction from Section \[sec:proof\_scheme\] is useless for showing that $\pi_1({\mathcal{G}}_{\mathcal{C}},*)$ is not of type $F_n$. An example of this case is given by the groups $\mathcal{T}(B_*(\mathcal{O}_S))$ treated in [@witzel16a Theorem 8.12]. The proof redoes part of the proof that the groups ${\mathcal{C}}^\times(x,x)$, which are the groups $B_n(\mathcal{O}_S)$ in this case, are not of type $F_n = F_{{\lvert S \rvert}}$. In the second case the complexes $E(x)$ are not (even asymptotically) $(n-1)$-connected. In this case Brown’s criterion, Theorem \[thm:browns\_criterion\_negative\], can in principle be applied, but not by using just Morse theory. An example of this case is the Basilica Thompson group from Section \[sec:graph\_rewriting\] which is not finitely presented [@witzel16], so $n=2$. A morphism $f$ in ${\mathcal{RG}}_{e \to R} = {\mathcal{R}}_{e \to R} \bowtie {\mathcal{G}}_{\text{graph}}$ is declared to be elementary if there are edges $\{e_1,\ldots,e_k\}$ of $G$ such that $f = \lambda_{e_1}\ldots\lambda_{e_k}$. The function $\rho \colon {\mathcal{R}}_{e \to R} \to {\mathbb{N}}$ assigns to a graph its number of edges. The basepoint $*$ is the Basilica graph $G$. The connectivity assumption of Theorem \[thm:generic\_proof\] is violated because the ${\mathcal{RG}}_{e \to R}$-component of $G$ contains graphs $H$ with arbitrarily many edges for which $E(H)$ is not simply connected. Examples of such graphs are illustrated in Figure \[fig:bad\_graph\].
In these examples $E(H)$ has four vertices, two vertices $v_{ll}$, $v_{ul}$ corresponding to the loops on the left and two vertices $v_{lr}$ and $v_{ur}$ corresponding to the loops on the right. The left vertices are connected to the right vertices but not to each other and neither are the right vertices. Thus $E(H)$ is a circle $v_{ll}, v_{lr}, v_{ul}, v_{ur}$ and is not simply connected. (Figure \[fig:bad\_graph\]: graphs $H$ with two loops attached at the left end and two loops attached at the right end of an arbitrarily long chain of doubled edges.) Looking into the proof of Theorem \[thm:generic\_proof\] we can compare directly what the failure of $E(H)$ to be simply connected tells us and what is needed to apply Brown’s criterion (Theorem \[thm:browns\_criterion\_negative\]) in order to prove that the group is not of type $F_n$.
To apply Theorem \[thm:browns\_criterion\_negative\], one needs to show that for every $m$ there are arbitrarily large $n$ such that in passing from $X_{\rho<m}$ to $X_{\rho<n+1}$ a non-trivial $1$-sphere in $X_{\rho < m}$ is filled in. The assumption that $E(H)$ is not simply connected for $\rho(H) = n$ translates via the Morse argument to the statement that when passing from $X_{\rho <n}$ to $X_{\rho < n+1}$ either a non-trivial $1$-sphere in $X_{\rho < n}$ is filled in, or a non-trivial $2$-sphere is created. The proof in [@witzel16] that the Basilica Thompson group $T_B$ is not finitely presented therefore needs to rule out the second possibility and also show that the $1$-sphere that is filled in was non-trivial already in $X_{\rho<m}$.
--- abstract: 'A fault-tolerant negotiation-based intersection crossing protocol is presented. Rigorous analytic proofs are used for demonstrating the correctness and fault-tolerance properties. Experimental results validate the correctness proof via detailed computer simulations and provide a preliminary evaluation of the system performance. The results are compared to those achieved by a risk estimator, both with and without the proposed protocol. Our fault model considers packet-loss, noisy sensory information and malicious driving. Our preliminary results show a reduction in the number of dangerous situations and vehicle collisions.' author: - 'António Casimiro [^1]' - 'Emelie Ekenstedt [^2]' - 'Elad M. Schiller [^3]' title: | Membership-based Manoeuvre Negotiation\ in Autonomous and Safety-critical Vehicular Systems[^4]\ (preliminary report) --- [^1]: Faculdade de Ciências, Universidade de Lisboa, Lisboa 1749-016, Portugal. E-mail: `[email protected]` [^2]: Department of Engineering and Computer Science, Chalmers University of Technology, Gothenburg, SE-412 96, Sweden, `[email protected]` [^3]: Department of Engineering and Computer Science, Chalmers University of Technology, Gothenburg, SE-412 96, Sweden, `[email protected]`. [^4]: This technical report is directly based on a master thesis [@mastersthesis] written by Emelie Ekenstedt. The supervisor was Elad M. Schiller.
--- abstract: 'When properly tuned, Hamiltonian Monte Carlo scales to some of the most challenging high-dimensional problems at the frontiers of applied statistics, but when that tuning is suboptimal the performance leaves much to be desired. In this paper I show how suboptimal choices of one critical degree of freedom, the cotangent disintegration, manifest in readily observed diagnostics that facilitate the robust application of the algorithm.' address: | Department of Statistics, University of Warwick, Coventry CV4 7AL, UK\ . author: - Michael Betancourt bibliography: - 'energy\_diagnostic.bib' title: | Diagnosing Suboptimal\ Cotangent Disintegrations\ in Hamiltonian Monte Carlo --- Once a statistical model has been specified as a probability distribution, applied statistics reduces to the evaluation of expectations with respect to that target distribution. Consequently, the fundamental computational challenge in applied statistics is the accurate and efficient estimation of these expectations. Given its general applicability, Markov chain Monte Carlo [@RobertEtAl:1999; @BrooksEtAl:2011] has become one of the most popular frameworks for developing practical estimation algorithms, as evident from decades of theoretical analysis and empirical success. In particular, Hamiltonian Monte Carlo [@DuaneEtAl:1987; @Neal:2011; @BetancourtEtAl:2014a] pushes Markov chain Monte Carlo deep into the frontiers of applied statistics by exploiting the geometry inherent to many probability distributions. Implementing Hamiltonian Monte Carlo in practice, however, is frustrated by algorithmic degrees of freedom that present a delicate tuning problem which can not only impede the scalable performance of the algorithm but also introduce biases in the estimation. In this paper I consider the choice of a *cotangent disintegration* that arises in any Hamiltonian Monte Carlo algorithm.
Because the performance of the resulting implementation is highly sensitive to the interaction of the cotangent disintegration with the given target distribution, a careful choice is critical for robust performance. After first reviewing the general construction of Hamiltonian Monte Carlo, I show how the consequences of a given cotangent disintegration manifest in the performance of a single stage of the algorithm. I then analyze this stage to define not only an implicit criterion for the optimal disintegration relative to a given target distribution, but also explicit diagnostics to identify a suboptimal cotangent disintegration in practice. Finally I demonstrate the utility of these diagnostics in various examples. Constructing Hamiltonian Monte Carlo ==================================== In this paper I will let $\pi$ be the target probability distribution over the $D$-dimensional sample space $Q$ and some appropriate $\sigma$-algebra. To simplify the notation I will assume that $\pi$ admits a density $\pi \! \left( q \right)$ with respect to some reference measure, ${\mathrm{d}}q$, although Hamiltonian Monte Carlo does not require this. For the more general construction of Hamiltonian Monte Carlo see [@BetancourtEtAl:2014a]. Here I will very briefly review Markov chain Monte Carlo and then Hamiltonian Monte Carlo, both in general and in its most common implementation. Markov Chain Monte Carlo ------------------------ Markov chain Monte Carlo builds expectation estimates by finding and exploring the neighborhoods on which the target probability distribution concentrates. The exploration itself is generated by repeatedly sampling from a Markov transition, given by the density $\mathcal{T} \! \left( q \mid q' \right)$, to give a sequence of points, $\{ q_{0}, \ldots, q_{N} \}$, known as a Markov chain. If the transition preserves the target distribution, $$\pi \! \left( q \right) = \int_{Q} \mathcal{T} \! \left( q \mid q' \right) \pi \!
\left( q' \right) {\mathrm{d}}q',$$ then the resulting Markov chain will eventually explore the entire target distribution and we can use the history of the Markov chain to construct consistent Markov chain Monte Carlo estimators of the desired expectations, $$\lim_{N \rightarrow \infty} \hat{f}_{N} \equiv \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n = 0}^{N} f \! \left( q_{n} \right) = \mathbb{E}_{\pi} \! \left[ f \right].$$ The performance of these Markov chain Monte Carlo estimators depends on how effectively the Markov transition guides the Markov chain along the neighborhoods of high probability. If the exploration is slow then the estimators will become computationally inefficient, and if the exploration is incomplete then the estimators will become biased. In order to scale Markov chain Monte Carlo to the high-dimensional and complex distributions of practical interest, we need a Markov transition that exploits the properties of the target distribution to make informed jumps through neighborhoods of high probability while avoiding neighborhoods of low probability entirely. Hamiltonian Monte Carlo ----------------------- Hamiltonian Monte Carlo achieves such informed transitions by harnessing the differential geometry of the target distribution with auxiliary *momenta* parameters. The algorithm begins by first attaching to each point, in the sample space, $q$, a copy of $\mathbb{R}^{D}$ called a *momenta fiber*. Collecting these fibers together yields the $2D$-dimensional *cotangent bundle*, $T^{*} Q$, with a natural projection that collapses each fiber to the base point at which it was attached, $$\begin{aligned} \varpi:& \; T^{*} Q \rightarrow Q \\ & (q, p) \mapsto q.\end{aligned}$$ We next lift our target probability distribution into a joint distribution on the cotangent bundle with the choice of a conditional probability distribution over the fibers known as a *cotangent disintegration*. Denoting the target distribution $$\pi \propto \exp \! 
\left( - V ( q ) \right) {\mathrm{d}}q,$$ with $V ( q )$ denoted the *potential energy*, and the cotangent disintegration as $$\xi_{q} \propto \exp \! \left[ - K (q, p) \right] {\mathrm{d}}p,$$ with $K (q, p)$ denoted the *kinetic energy*, then the joint distribution is defined as $$\begin{aligned} \pi_{H} &= \xi_{q} \cdot \pi \\ &\propto \exp \! \left( - \left( K (q, p) + V (q) \right) \right) {\mathrm{d}}q \, {\mathrm{d}}p \\ &\propto \exp \! \left(- H (q, p) \right) {\mathrm{d}}q \, {\mathrm{d}}p,\end{aligned}$$ with $H (q, p)$ denoted the *Hamiltonian*. When combined with the natural fiber structure of the cotangent bundle, this Hamiltonian immediately defines an infinite family of deterministic maps, $$\begin{aligned} \phi^{H}_{t} : (q, p) &\rightarrow (q, p), \forall t \in \mathbb{R} \\ \phi^{H}_{t} \circ \phi^{H}_{s} &= \phi^{H}_{s + t},\end{aligned}$$ called a *Hamiltonian flow*. By construction, the Hamiltonian flow traces through the neighborhoods where the joint distribution concentrates, while its projection, $\varpi \circ \phi_{t}^{H}$, traces through the neighborhoods where the target distribution concentrates, exactly as desired. Hence we can build a powerful Markov transition in three stages. From the initial point, $q$, we first lift from the sample space onto the cotangent bundle by sampling random momenta from the cotangent disintegration, $p \sim \xi_{q}$, apply the Hamiltonian flow for some time to generate exploration, $(q, p) \mapsto \phi^{H}_{t} (q, p)$, and then project back down to the target sample space, $\varpi : (q, p) \mapsto q$. In practice there are various strategies for choosing the integration time, as well as numerically approximating the Hamiltonian flow and correcting for the resulting error [@Betancourt:2016], but in general any Hamiltonian Markov transition will proceed with a lift, a flow, and a projection (Figure \[fig:hmc\_transition\_cartoon\]).
![Every Hamiltonian Markov transition is comprised of a random lift from the target sample space onto the cotangent bundle (light red), a deterministic Hamiltonian flow on the cotangent bundle (dark red), and a projection back down to the target space (light red).[]{data-label="fig:hmc_transition_cartoon"}](hmc_transition_cartoon.pdf){width="4in"} Gaussian-Euclidean Cotangent Disintegrations -------------------------------------------- An explicit choice of cotangent disintegration is facilitated when the sample space has a metric structure. In particular, if the sample space is equipped with a Riemannian metric, $g$, then we can define an entire family of *Riemannian cotangent disintegrations* with the kinetic energies $$K \! \left( q, p \right) = A \cdot f \! \left( g^{-1}_{q} \! \left( p, p \right) \right) + \frac{1}{2} \log \left| g_{q} \right| + \mathrm{const},$$ for some constant $A$ and function $f: \mathbb{R} \rightarrow \mathbb{R}$. Riemannian disintegrations also define two helpful scalar functions: the *effective potential energy*, $$\widecheck{V} \! \left( q \right) = V \! \left( q \right) + \frac{1}{2} \log \left| g_{q} \right| + \mathrm{const}.$$ and the *effective kinetic energy*, $$\widecheck{K} \! \left(q, p \right) = A \cdot f \! \left( g^{-1}_{q} \! \left( p, p \right) \right).$$ In practice most implementations of Hamiltonian Monte Carlo assume that any metric structure is *Euclidean*, where the metric $g$ is constant across the sample space and then sometimes denoted as a *mass matrix*. Additionally, these implementations usually take $A = \frac{1}{2}$ and $f = \mathbb{I}$, in which case the cotangent disintegration defines a Gaussian distribution over each momenta fiber. Hence in common practice we typically consider only *Gaussian-Euclidean cotangent disintegrations*. 
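To make the lift-flow-project construction concrete, the following is a minimal sketch of a single Hamiltonian Markov transition with a Gaussian-Euclidean cotangent disintegration ($A = \frac{1}{2}$, $f = \mathbb{I}$, identity mass matrix), where the Hamiltonian flow is approximated by a leapfrog integrator and the numerical error is corrected with a Metropolis step. The target distribution, step size, number of steps, and all function names are illustrative assumptions rather than prescriptions from the text.

```python
import numpy as np

def hamiltonian_transition(q, V, grad_V, step=0.1, n_steps=20, rng=None):
    """One lift-flow-project transition with a Gaussian-Euclidean
    cotangent disintegration, K(q, p) = p.p / 2 (identity mass matrix).
    The leapfrog integrator approximates the Hamiltonian flow and a
    Metropolis correction accounts for the numerical error."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(q.shape)           # lift: p ~ xi_q = N(0, I)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_V(q_new)        # leapfrog half step
    for _ in range(n_steps - 1):
        q_new += step * p_new
        p_new -= step * grad_V(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_V(q_new)        # final half step
    H_old = V(q) + 0.5 * p @ p
    H_new = V(q_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < H_old - H_new:  # accept/reject
        return q_new                           # project: keep position only
    return q

# illustrative target: standard 2-d Gaussian, V(q) = q.q / 2
V = lambda q: 0.5 * q @ q
grad_V = lambda q: q
rng = np.random.default_rng(0)
q = np.zeros(2)
chain = []
for _ in range(2000):
    q = hamiltonian_transition(q, V, grad_V, rng=rng)
    chain.append(q)
chain = np.asarray(chain)
mean_est, var_est = chain.mean(axis=0), chain.var(axis=0)
print(mean_est, var_est)   # both coordinates: mean near 0, variance near 1
```

Note that the projection stage amounts to nothing more than discarding the momenta at the end of the trajectory.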
The Microcanonical Disintegration ================================= Although any choice of cotangent disintegration will yield a Hamiltonian flow that coherently explores the neighborhoods where the target distribution concentrates, not every choice will yield a flow that is as computationally efficient as others. How the interaction between a particular disintegration and the target distribution manifests in performance may at first seem abstruse, but it becomes straightforward to characterize if we examine these Hamiltonian systems from a more natural perspective. One of the distinctive properties of Hamiltonian flow is that it preserves the Hamiltonian itself, which implies that each Hamiltonian trajectory is confined to a *level set* of the Hamiltonian, $$H^{-1} \! \left( E \right) = \left\{ (q, p) \in T^{*} Q \mid H \! \left( q, p \right) = E \right\}.$$ A Markov transition, then, first jumps to a random level set and then explores that level set with the Hamiltonian flow before projecting back to the sample space (Figure \[fig:hmc\_chain\_cartoon\]a). If we compose the projection and random lift stages together into a single *momentum resampling* operation, then the entire Hamiltonian Markov chain naturally decouples into exploration along each level set driven by the Hamiltonian flow, and exploration across level sets driven by the momentum resampling (Figure \[fig:hmc\_chain\_cartoon\]b). Consequently a much more natural way to analyze Hamiltonian Monte Carlo is not through positions and momenta but rather level sets and the *energies* labeling each level set.
The *microcanonical disintegration* formalizes this intuition by decomposing the joint distribution into a conditional *microcanonical distribution* over level sets, $\pi_{{\ensuremath { H^{-1} \left( E \right) } }}$, and a *marginal energy distribution*, $\pi_{E}$, $$\pi_{H} = \pi_{{\ensuremath { H^{-1} \left( E \right) } }} \cdot \pi_{E}.$$ A Hamiltonian system always admits a microcanonical disintegration, although there are some technical subtleties [@BetancourtEtAl:2014a]. From this perspective, the Hamiltonian flow generates exploration of the microcanonical distributions while the exploration of the marginal energy distribution is determined solely by the momentum resampling. Because the cotangent disintegration affects the geometry of the level sets, it also affects the efficacy of the Hamiltonian flow, but this can largely be overcome with an appropriate choice of integration times [@Betancourt:2016]. The exploration of the marginal energy distribution, however, is determined solely by the momentum resampling which itself depends only on the interaction between the cotangent disintegration and the target distribution. Diagnosing Suboptimal Cotangent Disintegrations =============================================== To quantify the efficacy of the momentum resampling consider $\pi_{E \mid q}$, the distribution of energies, $E$, induced by a momentum resampling at position $q$. The closer this distribution is to the marginal energy distribution for any $q$, the faster the random walk will explore energies and the smaller the autocorrelations will be in the overall Hamiltonian Markov chain (Figure \[fig:energy\_marginals\]a). Conversely, the more this distribution deviates from the marginal energy distribution the less effectively the random walk will explore and the larger the autocorrelations will be in the overall chain (Figure \[fig:energy\_marginals\]b).
Consequently, the compatibility of the momentum resampling-induced distributions and the marginal energy distribution defines an implicit criterion for selecting an optimal cotangent disintegration for a given target distribution. There are many ways, however, of quantifying this compatibility in theory and in practice, and hence many ways of defining optimality criteria and resulting diagnostics. General Criteria ---------------- Optimal performance is achieved only when the momentum resampling-induced energy distributions are uniformly equal to the marginal energy distribution, $$\log \frac{ {\mathrm{d}}\pi_{E \mid q} }{ {\mathrm{d}}\pi_{E} } = 0, \, \forall q \in Q.$$ Consequently we could quantify the compatibility of the two distributions with the expectation $$\mathbb{E}_{\pi} \! \left[ \log \frac{ {\mathrm{d}}\pi_{E \mid q} }{ {\mathrm{d}}\pi_{E} } \right],$$ which would vanish only when the cotangent disintegration was optimal. Because we don’t have closed-forms for the densities, however, this would be infeasible to even estimate in any nontrivial problem. In practice we want a criterion that is readily estimated using the Hamiltonian Markov chain itself. One theoretically appealing choice is the Bayesian fraction of missing information [@Rubin:2004], $$\mathrm{BFMI} = \frac{ \mathbb{E}_{\pi} \! \left[ \mathrm{Var}_{ \pi_{E \mid q} } \! \left[ E \mid q \right] \right] } { \mathrm{Var}_{ \pi_{E} } \! \left[ E \right] },$$ which quantifies how insufficient the energy variation induced by the momentum resampling is: in the worst case $\mathrm{BFMI} \rightarrow 0$ and the momentum resampling induces very slow exploration across the level sets, while in the best case $\mathrm{BFMI} \rightarrow 1$ and the momentum resampling effectively generates exact draws from the marginal energy distribution. By construction, $$\begin{aligned} \mathrm{Var}_{ \pi_{E \mid q} } \! \left[ E \mid q \right] &= \mathrm{Var}_{ \pi_{E \mid q} } \!
\left[ \Delta E \mid q \right],\end{aligned}$$ where $\Delta E$ is the change in energy induced by the momentum resampling. Because the momentum resampling does not change the position, $q$, this can also be interpreted as the change in kinetic energy, $\Delta E = \Delta K$, which depends only on the choice of cotangent disintegration, as expected. Using this latter form we can then readily estimate the Bayesian fraction of missing information using the history of energies in the Hamiltonian Markov chain, $$\mathrm{BFMI} \approx \widehat{\mathrm{BFMI}} \equiv \frac{ \sum_{n = 1}^{N} \left( E_{n} - E_{n - 1} \right)^{2} } { \sum_{n = 0}^{N} \left( E_{n} - \bar{E} \right)^{2} }.$$ In this form the Bayesian fraction of missing information is similar to the lag-1 autocorrelation of the energies, suggesting that the effective sample size per transition of the energies, $\mathrm{ESS/T} \! \left( E \right)$, might also be a useful quantification, with $\mathrm{ESS/T} \! \left( E \right) \rightarrow 0$ indicating a suboptimal cotangent disintegration and $\mathrm{ESS/T} \! \left( E \right) \rightarrow 1$ indicating an optimal one. This measure also has a strong intuitive appeal – up to the usual regularity conditions the effective sample size quantifies the rate of convergence of the marginal random walk over energies, and hence it directly quantifies the efficacy of the exploration induced by the momentum resampling. Finally, we can also use the change of energies induced by a momentum resampling to construct visual criteria. Averaging the momentum resampling-induced energy distribution over positions gives a marginal distribution over energy variations, $$\pi_{\Delta E} \! \left( \Delta E \right) \equiv \int \pi_{E \mid q} \! \left( \Delta E \mid q \right) \pi \! \left( q \right) {\mathrm{d}}q,$$ whose density is readily estimated by histogramming the $\left\{ \Delta E_{n} \right\}$ from the Hamiltonian Markov chain.
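The estimator $\widehat{\mathrm{BFMI}}$ can be computed directly from the saved energy history of a chain. A minimal sketch, where the two synthetic energy traces are illustrative stand-ins for a well-mixing and a poorly-mixing chain rather than output from any actual sampler:

```python
import numpy as np

def e_bfmi(energies):
    """Empirical Bayesian fraction of missing information:
    the sum of squared first differences of the energies over
    their total sum of squares around the mean."""
    energies = np.asarray(energies, dtype=float)
    num = np.sum(np.diff(energies) ** 2)
    den = np.sum((energies - energies.mean()) ** 2)
    return num / den

rng = np.random.default_rng(2)
good = rng.normal(size=5000)             # nearly independent energy draws
slow = np.cumsum(rng.normal(size=5000))  # strongly autocorrelated energies
print(e_bfmi(good), e_bfmi(slow))        # large vs. near zero
```

Note that for exactly independent energy draws this estimator concentrates around $2$ rather than $1$, while strong autocorrelation drives it toward $0$; it is the small values that flag a suboptimal disintegration.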
We can also estimate the marginal energy density by histogramming the $\left\{ E_{n} \right\}$, and then compare the variation of the two histograms by eye. The Bayesian fraction of missing information, effective sample size per transition, and histograms can all be estimated directly from the history of the Hamiltonian Markov chain, but none of them defines a criterion that can be explicitly inverted to identify an optimal disintegration. Hence in practice they best serve as diagnostics for distinguishing suboptimal disintegrations. Additionally we must take care when applying these diagnostics as they make the strong assumption that the Hamiltonian Markov chain sufficiently explores the joint distribution. In order to improve the robustness of these diagnostics, in practice it is best to use them with multiple Markov chains and monitor additional diagnostics such as divergences [@BetancourtEtAl:2014b; @BetancourtEtAl:2015] and the Gelman-Rubin statistic [@GelmanEtAl:1992]. Gaussian-Euclidean Criteria --------------------------- Gaussian-Euclidean cotangent disintegrations are particularly useful as they admit some results that can simplify the general criteria introduced above. Consider, for example, the random lift onto the cotangent bundle. For any Gaussian-Euclidean cotangent disintegration, in fact for any Gaussian-Riemannian cotangent disintegration, the effective kinetic energy introduced by the randomly sampled momentum is distributed according to a scaled-$\chi^{2}$ distribution independent of position, $$\widecheck{K} \sim \chi^{2} \! \left( D, 1 / 2 \right),$$ where $$\chi^{2} \! \left( x \mid k, \sigma \right) = \frac{ \left( 2 \sigma \right)^{-\frac{k}{2}} }{ \Gamma \!
\left( \frac{k}{2} \right) } x^{\frac{k}{2} - 1} e^{-\frac{x}{2 \sigma} }.$$ In general the projection can shed an arbitrarily large amount of effective kinetic energy, but in equilibrium we’d expect to lose as much energy as we gain, hence the change in energies should be distributed as the difference between two $\chi^{2} \! \left( D, \frac{1}{2} \right)$ variates describing the initial and final effective kinetic energies. As the number of dimensions, $D$, increases this distribution rapidly converges to a Gaussian distribution with zero mean and variance $D$, so to a good approximation we have $$\Delta E \sim \mathcal{N} \! \left( 0, D \right),$$ for all positions, $q$. In this case the numerator in the Bayesian fraction of missing information becomes $$\mathbb{E}_{\pi} \! \left[ \mathrm{Var}_{ \pi_{E \mid q} } \! \left[ E \mid q \right] \right] = \mathbb{E}_{\pi} \! \left[ D \right] = D,$$ and we can quantify the efficacy of the cotangent disintegration simply by comparing the variance of the marginal energy distribution to the dimensionality of the target sample space, $D$. Examples ======== In order to demonstrate the utility of these diagnostics, in this section we’ll consider a series of pedagogic examples, starting with identically and independently distributed Gaussian and Cauchy distributions and then a typical hierarchical model. All Markov chains were generated with <span style="font-variant:small-caps;">CmdStan</span> [@CmdStan:2016], using the No-U-Turn sampler [@HoffmanEtAl:2014] to dynamically adapt the integration time and a Gaussian-Euclidean cotangent disintegration with a diagonal Euclidean metric adapted to the covariance of the target distribution. Unless otherwise specified all other settings were default.
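The Gaussian approximation for $\Delta E$ derived above is straightforward to verify by simulation. The sketch below, assuming $D = 100$, draws the initial and final effective kinetic energies from the scaled-$\chi^{2} \! \left( D, 1/2 \right)$ distribution and checks that their difference has mean near $0$ and variance near $D$:

```python
import numpy as np

D = 100
rng = np.random.default_rng(3)

# chi^2(D, 1/2) in the parameterization above is half of a standard
# chi-squared variate with D degrees of freedom (mean D/2, variance D/2).
k_init = 0.5 * rng.chisquare(D, size=200_000)
k_final = 0.5 * rng.chisquare(D, size=200_000)
delta_e = k_final - k_init

print(delta_e.mean(), delta_e.var())  # approximately 0 and D
```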
The exact version of <span style="font-variant:small-caps;">CmdStan</span> can be found at <https://github.com/stan-dev/cmdstan/commit/ad6177357d4d228e129eefa60c9f399b36e9ac19>, and all Stan programs and configurations can be found in the Appendix. ![Both the Bayesian fraction of missing information, BFMI, and the effective sample size per transition for the energy, ESS/T(E), quantify the compatibility of a cotangent disintegration with a given target distribution. Here a Gaussian-Euclidean cotangent disintegration works well for both a Gaussian and non-centered eight schools target, but is less effective for the heavier-tailed Cauchy and centered eight schools targets.[]{data-label="fig:num_diagnostics"}](experiment_results.pdf){width="4in"} Gaussian Target --------------- Let’s first consider a 100-dimensional identically and independently distributed Gaussian distribution, $q_{n} \sim \mathcal{N} \! \left( 0, 1 \right)$. Given a Gaussian-Euclidean cotangent disintegration the marginal energy distribution reduces to a scaled $\chi^{2}$ distribution, $$E \sim \chi^{2} \! \left( 2 D, \frac{1}{2} \right),$$ which converges to a $\mathcal{N} \! \left( D, D \right)$ with increasing dimension. This perfectly matches the expected energy variation, as evident in both numerical diagnostics (Figure \[fig:num\_diagnostics\]) and visual diagnostics (Figure \[fig:gauss\_exp\]). ![A Gaussian-Euclidean cotangent disintegration is well-suited to a Gaussian target distribution – at each iteration the momentum resampling is able to jump the Hamiltonian Markov chain to any relevant level set.[]{data-label="fig:gauss_exp"}](energy_gauss.pdf){width="4in"} Cauchy Target ------------- For a less compatible pairing consider instead a 100-dimensional identically and independently distributed Cauchy distribution, $q_{n} \sim \mathcal{C} \! \left( 0, 1 \right)$.
The heavy tails of the Cauchy distribution induce a marginal energy distribution with heavier tails than the momentum resampling-induced energy variation. Consequently each transition is limited to only those level sets in close proximity to the initial level set, resulting in slower exploration and decreased performance (Figures \[fig:num\_diagnostics\], \[fig:cauchy\_exp\]). Despite the suboptimality of this disintegration, however, the Hamiltonian Markov chain is able to explore all relevant energies within only a few transitions and ends up performing surprisingly well given the reputation of the Cauchy distribution. ![The heavy tails of the Cauchy distribution induce a heavy-tailed marginal energy distribution which limits the efficacy of a Hamiltonian Markov chain utilizing the more lightly-tailed energy variation induced by a Gaussian-Euclidean cotangent disintegration.[]{data-label="fig:cauchy_exp"}](energy_cauchy.pdf){width="4in"} Hierarchical Target ------------------- Finally, let’s consider the eight schools posterior distribution, a relatively simple Bayesian hierarchical model that demonstrates both the utility of hierarchical modeling and many of the computational difficulties inherent to these models [@Rubin:1981; @GelmanEtAl:2014a]. Here the test taking performance of eight schools is modeled with individual, centered Gaussian distributions, $$y_{n} \sim \mathcal{N} \! \left( \theta_{n}, \sigma_{n}^{2} \right),$$ where the $\theta_{n}$ are modeled hierarchically, $$\begin{aligned} \theta_{n} &\sim \mathcal{N} \! \left( \mu, \tau^{2} \right) \\ \mu &\sim \mathcal{N} \! \left( 0, 10^{2} \right) \\ \tau &\sim \text{Half-}\mathcal{C} \! \left( 0, 10 \right),\end{aligned}$$ and the $\left\{ y_{n}, \sigma_{n} \right\}$ are given as data.
In the typical centered-parameterization [@PapaspiliopoulosEtAl:2007] the marginal energy distribution seems to exhibit only mildly-heavy tails (Figure \[fig:energy\_schools\_cp\_exp\]a), but these empirical results are misleading. The problem is that the Hamiltonian Markov chain is not able to fully explore the tails of the target distribution, as exhibited by the large number of divergences at small $\tau$ (Figure \[fig:energy\_schools\_cp\_exp\]b). Forcing the step size of the numerical integrator to a smaller value improves the exploration of the tails (Figure \[fig:energy\_schools\_cp\_99\_exp\]b) and better reveals the true heaviness of the marginal energy distribution (Figure \[fig:energy\_schools\_cp\_99\_exp\]a), although the exploration is still incomplete. In order to completely explore the tails we have to utilize a non-centered parameterization which explores the hierarchical effects only indirectly, $$\begin{aligned} y_{n} &\sim \mathcal{N} \! \left( \mu + \tau \cdot \tilde{\theta}_{n}, \sigma_{n}^{2} \right), \\ \tilde{\theta}_{n} &\sim \mathcal{N} \! \left( 0, 1 \right) \\ \mu &\sim \mathcal{N} \! \left( 0, 10^{2} \right) \\ \tau &\sim \text{Half-}\mathcal{C} \! \left( 0, 10 \right).\end{aligned}$$ Not only does this implementation of the model avoid the heavy tails and pathological curvature of the centered implementation, but the Gaussian-Euclidean cotangent disintegration is also a nearly optimal pairing (Figures \[fig:num\_diagnostics\], \[fig:energy\_schools\_ncp\_exp\]). Discussion ========== As with any Markov chain Monte Carlo algorithm, the performance of Hamiltonian Monte Carlo is limited by its ability to sufficiently explore the target distribution.
[@LivingstoneEtAl:2016], for example, demonstrates that both neighborhoods of strong curvature and heavy tails limit the exploration of a Hamiltonian Markov chain, ultimately obstructing geometric ergodicity and the central limit theorems needed to ensure robust Markov chain Monte Carlo estimation. What is unique to Hamiltonian Monte Carlo, however, is its natural ability to diagnose these pathologies. Neighborhoods of strong curvature, for example, can be identified with the divergent transitions they provoke. Moreover, heavy tails manifest both in expansive level sets and heavy-tailed marginal energy distributions. When using dynamic integration time algorithms, the former manifests as long integration times which are readily reported to users, and we have seen in this paper that heavy-tailed marginal energy distributions are straightforward to report both numerically and visually. These intrinsic diagnostics make Hamiltonian Monte Carlo extremely robust, even in the challenging problems at the frontiers of applied statistics. How to address the pathologies identified by these diagnostics is another question. For example, more heavily-tailed cotangent disintegrations, such as Laplace or even Cauchy disintegrations, may be useful. Generalizing from Euclidean to fully Riemannian disintegrations, for example with the SoftAbs metric [@Betancourt:2013b], offers another potential strategy. Within existing tools like <span style="font-variant:small-caps;">Stan</span>, however, perhaps the best way to deal with any identified pathologies is with alternative implementations, such as the non-centered parameterization utilized in the eight schools example. Acknowledgements ================ I am grateful to Gareth Roberts for enlightening discussions. This work was supported under EPSRC grant EP/J016934/1. Stan Programs ============= In this section I collect the Stan programs and configurations used in the examples. 
Gaussian
--------

Configuration:

    ./gauss sample num_samples=10000 random seed=2983157687

Stan Program:

    parameters {
      real x[100];
    }
    model {
      x ~ normal(0, 1);
    }

Cauchy
------

Configuration:

    ./cauchy sample num_samples=10000 random seed=2983158736

Stan Program:

    parameters {
      real x[100];
    }
    model {
      x ~ cauchy(0, 1);
    }

Centered Eight Schools
----------------------

Nominal Configuration:

    ./eight_schools_cp sample num_samples=10000 data file=eight_schools.data.R random seed=483892929

Small Stepsize Configuration:

    ./eight_schools_cp sample num_samples=10000 adapt delta=0.99 data file=eight_schools.data.R random seed=483892929

Stan Program:

    data {
      int<lower=0> J;
      real y[J];
      real<lower=0> sigma[J];
    }
    parameters {
      real mu;
      real<lower=0> tau;
      real theta[J];
    }
    model {
      mu ~ normal(0, 10);
      tau ~ cauchy(0, 10);
      theta ~ normal(mu, tau);
      y ~ normal(theta, sigma);
    }

Noncentered Eight Schools
-------------------------

Configuration:

    ./eight_schools_ncp sample num_samples=10000 data file=eight_schools.data.R random seed=483892929

Stan Program:

    data {
      int<lower=0> J;
      real y[J];
      real<lower=0> sigma[J];
    }
    parameters {
      real mu;
      real<lower=0> tau;
      real theta_tilde[J];
    }
    transformed parameters {
      real theta[J];
      for (j in 1:J)
        theta[j] = mu + tau * theta_tilde[j];
    }
    model {
      mu ~ normal(0, 10);
      tau ~ cauchy(0, 10);
      theta_tilde ~ normal(0, 1);
      y ~ normal(theta, sigma);
    }
--- abstract: 'Functional data have been the subject of many research works over recent years. Functional regression is one of the most discussed issues. Specifically, significant advances have been made for functional linear regression models with scalar response. Let $({\cal H},\langle\cdot,\cdot\rangle)$ be a separable Hilbert space. We focus on the model $Y=\langle \Theta,X\rangle+b+\varepsilon$, where $Y$ and $\varepsilon$ are real random variables, $X$ is an ${\cal H}$–valued random element, and the model parameters $b$ and $\Theta$ are in ${\mathbb{R}}$ and ${\cal H}$, respectively. Furthermore, the error satisfies $E(\varepsilon|X)=0$ and $E(\varepsilon^2|X)=\sigma^2<\infty$. A consistent bootstrap method to calibrate the distribution of statistics for testing $H_0: \Theta=0$ versus $H_1: \Theta\neq0$ is developed. The asymptotic theory, as well as a simulation study and a real data application illustrating the usefulness of our proposed bootstrap in practice, is presented.' author: - | Wenceslao González–Manteiga$^1$, Gil González–Rodríguez$^2$,\ Adela Martínez–Calvo$^{1,3}$ and Eduardo García–Portugués$^{1}$ title: Bootstrap independence test for functional linear models --- **Keywords:** Bootstrap; bootstrap consistency; functional linear regression; functional principal components analysis; hypothesis test. Introduction ============ Nowadays, [*Functional Data Analysis*]{} (FDA) has turned into one of the most interesting statistical fields. Particularly, functional regression models have been studied from a parametric point of view (see Ramsay and Silverman (2002, 2005)), and from a non–parametric one (see Ferraty and Vieu (2006)), with the most recent advances compiled in Ferraty and Romain (2011).
This work focuses on the parametric approach, specifically, on the [*functional linear regression model with scalar response*]{} that is described below.\ Let $({\cal H},\langle \cdot,\cdot \rangle)$ be a separable Hilbert space, and let $\|\cdot\|$ be the norm associated with its inner product. Moreover, let $(\mathrm{\Omega}, \sigma, \mathrm{P})$ be a probability space and let us consider $(X,Y)$ a measurable mapping from $\mathrm{\Omega}$ to ${\cal H} \times {\mathbb{R}}$, that is, $X$ is an ${\cal H}$–valued random element whereas $Y$ is a real random variable. In this situation, let us assume that $(X,Y)$ satisfies the following linear model with scalar response, $$Y = \langle \Theta, X \rangle + b + \varepsilon \label{eq.hlm}$$ where $\Theta \in {\cal H}$ is a fixed functional model parameter, $b \in {\mathbb{R}}$ is the intercept term, and $\varepsilon$ is a real random variable such that $E(\varepsilon | X)=0$ and $E(\varepsilon^2|X)=\sigma^2<\infty$. Many authors have dealt with this model, the methods based on [*Functional Principal Components Analysis*]{} (FPCA) being amongst the most popular ones for estimating the model parameters (see Cardot, Ferraty, and Sarda (1999, 2003), Cai and Hall (2006), Hall and Hosseini-Nasab (2006), and Hall and Horowitz (2007)).\ The main aim of this work is to develop a consistent general bootstrap resampling approach to calibrate the distribution of statistics for testing the significance of the relationship between $X$ and $Y$, that is, for testing $H_0: \Theta=0$ versus $H_1: \Theta\neq0$, on the basis of a simple random sample $\{(X_i,Y_i)\}_{i=1}^n$ drawn from $(X,Y)$.
The bootstrap techniques become a useful alternative tool when the asymptotics of test statistics are unknown or when they are inaccurate due to small sample size.\ Since its introduction by Efron (1979), it is well–known that the bootstrap method results in a new distribution approximation which can be applied to a large number of situations, such as the calibration of pivotal quantities in the finite dimensional context (see Bickel and Freedman (1981), and Singh (1981)). As far as multivariate regression models are concerned, bootstrap validity for linear and non–parametric models has also been established in the literature (see Freedman (1981), and Cao-Abad (1991)). More recently, the bootstrap has been successfully applied in the functional field. For instance, Cuevas, Febrero, and Fraiman (2006) have proposed bootstrap confidence bands for several functional estimators such as the sample and the trimmed functional means. In the regression context, Ferraty, Van Keilegom, and Vieu (2010), and González-Manteiga and Martínez-Calvo (2011) have shown the validity of the bootstrap in the estimation of non–parametric functional regression and functional linear model, respectively, when the response is scalar. They have also proposed pointwise confidence intervals for the regression operator involved in each case. In addition, the asymptotic validity of a componentwise bootstrap procedure has been proved by Ferraty, Van Keilegom, and Vieu (2012) when a non–parametric regression is considered and both response and regressor are functional.\ Bootstrap techniques can also be very helpful for testing purposes, since they can be used in order to approximate the distribution of the statistic under the null hypothesis $H_0$. For example, Cuevas, Febrero, and Fraiman (2004) have developed a sort of parametric bootstrap to obtain quantiles for an ANOVA test, and González-Rodríguez, Colubi, and Gil (2012) have proved the validity of a residual bootstrap in that context.
Hall and Vial (2006) and, more recently, Bathia, Yao, and Ziegelmann (2010) have studied the finite dimensionality of functional data using a bootstrap approximation for independent and dependent data, respectively.\ As was indicated previously, testing the lack of dependence between $X$ and $Y$ is our goal. This issue has stirred up great interest in recent years due to its practical applications in the functional context. For instance, Kokoszka, Maslova, Sojka, and Zhu (2008) proposed a test for lack of dependence in the functional linear model with functional response which was applied to magnetometer curves consisting of minute–by–minute records of the horizontal intensity of the magnetic field measured at observatories located at different latitudes. The aim was to analyse if the high–latitude records had a linear effect on the mid– or low–latitude records. On the other hand, Cardot, Prchal, and Sarda (2007) presented a statistical procedure to check if a real–valued covariate has an effect on a functional response in a nonparametric regression context, using this methodology for a study of atmospheric radiation. In this case, the dataset consisted of radiation profile curves measured at random times, and the authors tested if the radiation profiles changed over time.\ Regarding the regression model, testing the significance of the relationship between a functional covariate and a scalar response has been the subject of recent contributions, and asymptotic approaches for this problem can be found in Cardot, Ferraty, Mas, and Sarda (2003) or Kokoszka, Maslova, Sojka, and Zhu (2008). The methods presented in these two works are mainly based on the calibration of the statistics distribution by using asymptotic distribution approximations. In contrast, we propose a consistent bootstrap calibration in order to approximate the statistics distribution.
To that end, we first introduce in Section 2 some notation and basic concepts about the regression model, the asymptotic theory for the testing procedure, and the consistency of the bootstrap techniques that we propose. In Section 3, the bootstrap calibration is presented as an alternative to the asymptotic theory described above. Then, Section 4 is devoted to the empirical results. A simulation study and a real data application allow us to show the performance of our bootstrap methodology in comparison with the asymptotic approach. Finally, some conclusions are summarized in Section 5. Asymptotic theory and bootstrap =============================== Let us consider the model given in Section 1. In this framework, the regression function, denoted by $m$, is given by $$m(x) = E(Y|X=x) = \langle \Theta, x \rangle + b \text{ for all } x \in {\cal H}.$$ The aim is to develop correct and consistent bootstrap techniques for testing $$\left\{\begin{array}{ll} H_0: &\Theta=0 \\ H_1: &\Theta\neq 0 \end{array}\right. \label{eq.test}$$ on the basis of a random sample $\{(X_i,Y_i)\}_{i=1}^n$ of independent and identically distributed random elements with the same distribution as $(X,Y)$. That is, our objective is to check whether $X$ and $Y$ are linearly independent ($H_0$) or not ($H_1$).\ Next, we briefly present some technical background required to develop the theoretical results presented throughout the section. Some background --------------- The Riesz Representation Theorem ensures that the functional linear model with scalar response can be handled theoretically within the considered framework. Specifically, let ${\cal H}$ be the separable Hilbert space of square Lebesgue integrable functions on a given compact set $C \subset {\mathbb{R}}$, denoted by ${\cal L}^2(C,\lambda)$, with the usual inner product and the associated norm $\|\cdot\|$.
The functional linear model with scalar response between a random function $X$ and a real random variable $Y$ is defined as $$Y = \Phi(X) + \epsilon, \label{eq.flm}$$ where $\Phi$ is a continuous linear operator (that is, $\Phi \in {\cal H}'$, ${\cal H}'$ being the dual space of ${\cal H}$ with norm $\|\cdot\|'$), and $\epsilon$ is a real random variable with finite variance and independent of $X$. By virtue of the Riesz Representation Theorem, ${\cal H}$ and ${\cal H}'$ are isometrically identified, in such a way that for any $\Phi \in {\cal H}'$ there exists a unique $\Theta \in {\cal H}$ so that $\|\Theta\|=\|\Phi\|'$ and $\Phi(h)=\langle \Theta,h\rangle$ for all $h \in {\cal H}$. Consequently, the model presented in this equation is just a particular case of the one considered in Section 1.\ Previous works regarding functional linear models assume $b=0$ (see Cardot, Ferraty, Mas, and Sarda (2003), and Kokoszka, Maslova, Sojka, and Zhu (2008)). Of course, the intercept term can be embedded in the variable counterpart of the model as in the multivariate case as follows. Let ${\cal H}_e$ be the product space ${\cal H}\times {\mathbb{R}}$ with the corresponding inner product $\langle \cdot,\cdot\rangle_e$, and define $X'=(X,1)$ and $\Theta'=(\Theta,b) \in {\cal H}_e$. Then the model considered in Section 1 can be rewritten as $Y = \langle \Theta', X' \rangle_e + \varepsilon$ (and consequently $X'$ cannot be assumed to be centered). Nevertheless, in the context of the linear independence test, the aim is to check if $\Theta=0$ or not, and this is not equivalent to checking whether $\Theta'=0$ or not. In addition, in practice the intercept term $b$ cannot be assumed to be equal to $0$. Thus, in order to avoid any kind of confusion, in this paper the intercept term $b$ has been written explicitly.\ In the same way, in the above mentioned papers, the random element $X$ is assumed to be centered.
Although, in many cases, the asymptotic distribution of the proposed statistics does not change if $\{X_i\}_{i=1}^n$ is replaced by the dependent sample $\{X_i-\overline{X}\}_{i=1}^n$, the situation with the bootstrap version of the statistics could be quite different. In fact, as it will be shown afterwards, different bootstrap statistics could be considered when this replacement is done. Hence, for the developments in this section, it will not be assumed that the $X$ variable is centered. Linear independence test ------------------------ Given a generic ${\cal H}$–valued random element $H$ such that $E(\|H\|^2)<\infty$, its associated covariance operator $\Gamma_H$ is defined as the operator $\Gamma_H:{\cal H} \rightarrow {\cal H}$ $$\Gamma_H(h)=E\left(\langle H-\mu_H,h\rangle(H-\mu_H)\right)=E\left(\langle H,h\rangle H\right)-\langle\mu_H,h\rangle\mu_H,$$ for all $h \in {\cal H}$, where $\mu_H \in {\cal H}$ denotes the expected value of $H$. From now on, it will be assumed that $E(\|X\|^2)<\infty$, and thus, as a consequence of Hölder’s inequality, $E(Y^2)<\infty$. Whenever there is no possible confusion, $\Gamma_X$ will be abbreviated as $\Gamma$. It is well–known that $\Gamma$ is a nuclear and self–adjoint operator. In particular, it is a compact operator of trace class and thus, by virtue of the Spectral Decomposition Theorem, there is an orthonormal basis of ${\cal H}$, $\{v_j\}_{j \in {\mathbb{N}}}$, consisting of eigenvectors of $\Gamma$ with corresponding eigenvalues $\{\lambda_j\}_{j \in {\mathbb{N}}}$, that is, $\Gamma(v_j)=\lambda_j v_j$ for all $j \in {\mathbb{N}}$. As usual, the eigenvalues are assumed to be arranged in decreasing order ($\lambda_1\geq \lambda_2\geq \ldots$).
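In practice the spectral decomposition of $\Gamma$ is computed from a discretised sample. The sketch below, assuming the curves are observed on a common grid with the ${\cal L}^2$ inner product approximated by a quadrature rule (all names are illustrative, not from any particular FDA package), estimates the eigenvalues and eigenfunctions from the empirical covariance:

```python
import numpy as np

def empirical_fpca(X, w):
    """Eigendecomposition of the empirical covariance operator for
    curves X (n x m, each row one curve on a common grid) with
    quadrature weights w, so that <f, g> ~ sum_j f_j g_j w_j.

    Returns eigenvalues in decreasing order and eigenfunctions that
    are orthonormal in the weighted inner product."""
    Xc = X - X.mean(axis=0)
    C = (Xc.T @ Xc) / len(X)              # pointwise covariance, m x m
    sw = np.sqrt(w)
    B = sw[:, None] * C * sw[None, :]     # symmetrise w.r.t. the weights
    evals, U = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]       # decreasing, as in the text
    return evals[order], U[:, order] / sw[:, None]

rng = np.random.default_rng(5)
m = 200
w = np.full(m, 1.0 / m)                   # Riemann-sum weights on [0, 1)
# Rough Brownian-like sample curves, purely for illustration:
X = rng.normal(size=(500, m)).cumsum(axis=1) / np.sqrt(m)
evals, V = empirical_fpca(X, w)
print(evals[:3])
```

For curves that live in a low-dimensional span, only the leading eigenvalues are non-zero up to sampling error, mirroring the population decomposition above.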
Since the operator $\Gamma$ is symmetric and non–negative definite, the eigenvalues are non–negative.\ In a similar way, let us consider the cross–covariance operator $\Delta: {\cal H} \rightarrow {\mathbb{R}}$ between $X$ and $Y$ given by $$\Delta(h)=E\left(\langle X-\mu_X,h\rangle(Y-\mu_Y)\right)=E\left(\langle X,h\rangle Y\right)-\langle\mu_X,h\rangle\mu_Y,$$ for all $h \in {\cal H}$, where $\mu_Y \in {\mathbb{R}}$ denotes the expected value of $Y$. Of course, $\Delta \in {\cal H}'$ and the following relation between the considered operators and the regression parameter $\Theta$ is satisfied $$\Delta(\cdot)=\langle \Gamma(\cdot), \Theta\rangle. \label{eq.relation}$$ The Hilbert space $\cal H$ can be expressed as the direct sum of the two orthogonal subspaces induced by the self–adjoint operator $\Gamma$: the kernel or null space of $\Gamma$, ${\cal N}(\Gamma)$, and the closure of the image or range of $\Gamma$, $\overline{{\cal R}(\Gamma)}$. Thus, $\Theta$ is determined uniquely by $\Theta=\Theta_1 + \Theta_2$ with $\Theta_1 \in {\cal N}(\Gamma)$ and $\Theta_2 \in \overline{{\cal R}(\Gamma)}$. As $\Theta_1 \in {\cal N}(\Gamma)$, it is easy to check that $Var(\langle X,\Theta_1\rangle)=0$ and, consequently, the model introduced in Section 1 can be expressed as $$Y = \langle \Theta_2, X \rangle + \langle \Theta_1, \mu_X \rangle + b + \varepsilon.$$ Therefore, it is not possible to distinguish between the term $\langle \Theta_1, \mu_X \rangle$ and the intercept term $b$, and consequently it is not possible to check whether $\Theta_1=0$ or not. Taking this into account, the hypothesis test will be restricted to check $$\left\{\begin{array}{ll} H_0: &\Theta_2=0 \\ H_1: &\Theta_2\neq 0 \end{array}\right. \label{eq.restrictedtest}$$ on the basis of the available sample information.\ Note that in this case, according to the relation between the operators and the regression parameter shown above, $\Theta_2=0$ if, and only if, $\Delta(h)=0$ for all $h \in {\cal H}$.
Consequently, the hypothesis test in is equivalent to $$\left\{\begin{array}{ll} H_0: &\|\Delta\|'=0 \\ H_1: &\|\Delta\|'\neq 0 \end{array}\right. \label{eq.deltatest}$$ It should be recalled that, in previous works, $\mu_X$ is assumed to be equal to $0$. Thus, the preceding reasoning leads to the fact that $\Theta_1$ cannot be estimated from the information provided by $X$ (see, for instance, Cardot, Ferraty, Mas, and Sarda (2003)). Consequently, the hypothesis test is also restricted to the one in the preceding equations. In addition, in Cardot, Ferraty, Mas, and Sarda (2003) it is also assumed, for technical reasons, that $\overline{{\cal R}(\Gamma)}$ is an infinite–dimensional space. In contrast, this restriction is not imposed in the study developed here. Note that another usual assumption is that the intercept term vanishes. Although this assumption does not hold in most situations, it should be noted that if $b=0$ and $X$ is not assumed to be centered (as in this work), then an interesting possibility appears: to check whether $\Theta_1=0$ or not by checking the nullity of the intercept term of the model, and thus to check the original hypothesis test in . This open problem cannot be solved with the methodology employed in the current paper (or in the previous ones) because the idea is based on checking , which is equivalent to the restricted test but not to the unrestricted one in .

Testing procedure and asymptotic theory
---------------------------------------

According to the relation between $\|\cdot\|'$ and $\|\cdot\|$, the dual norm of $\Delta \in {\cal H}'$ can be expressed equivalently in terms of the ${\cal H}$–valued random element $(X-\mu_X)(Y-\mu_Y)$ as follows $$\|\Delta\|'=\|\langle E\left((X-\mu_X)(Y-\mu_Y)\right),\cdot\rangle\|'=\|E\left((X-\mu_X)(Y-\mu_Y)\right)\|.$$ Thus, based on an i.i.d.
sample $\{(X_i,Y_i)\}_{i=1}^n$ drawn from $(X,Y)$, $$D=\|E\left((X-\mu_X)(Y-\mu_Y)\right)\|=\|T\|$$ can be estimated in a natural way by means of its empirical counterpart $D_n=\|T_n\|$, where $T_n$ is the ${\cal H}$–valued random element given by $$T_n=\frac{1}{n}\sum_{i=1}^n (X_i-\overline{X})(Y_i-\overline{Y}),$$ where $\overline{X}$ and $\overline{Y}$ denote as usual the corresponding sample means. The next theorem establishes some basic properties of $T_n$. \[th.conv\] Assuming that holds with $E(\varepsilon)=0$, $E(\varepsilon^2)=\sigma^2<\infty$ and $E(\|X\|^4)<\infty$, then 1. $E(T_n)= E\left((X-\mu_X)(Y-\mu_Y)\right)(n-1)/n$ 2. $T_n$ converges a.s.–$P$ to $E\left((X-\mu_X)(Y-\mu_Y)\right)$ as $n \rightarrow \infty$ 3. $\sqrt{n} \left(T_n - E\left((X-\mu_X)(Y-\mu_Y)\right) \right)$ converges in law, as $n \rightarrow \infty$, to a centered Gaussian element $Z$ in ${\cal H}$ with covariance operator $$\Gamma_Z(\cdot) = \sigma^2 \Gamma(\cdot) + E\left( (X-\mu_X)\langle X-\mu_X, \cdot \rangle \langle X-\mu_X, \Theta \rangle^2\right).$$ Since $T_n$ can be equivalently expressed as $$T_n = \frac{1}{n}\sum_{i=1}^n (X_i-\mu_X)(Y_i-\mu_Y) - (\overline{X} - \mu_X)(\overline{Y} - \mu_Y),$$ it is straightforward to check item $1$. The a.s.–$P$ convergence is a direct application of the SLLN for separable Hilbert–valued random elements.\ On the other hand, given that $E(\|(X-\mu_X)(Y-\mu_Y)\|^2)<\infty$, the convergence in law can be deduced by applying the CLT for separable Hilbert–valued random elements (see, for instance, Laha and Rohatgi (1979)) together with Slutsky’s Theorem. The concrete expression of the operator $\Gamma_Z$, that is, $\Gamma_Z = \Gamma_{(X-\mu_X)(Y-\mu_Y)} = \Gamma_{(X-\mu_X)\varepsilon} + \Gamma_{(X-\mu_X)\langle X-\mu_X, \Theta \rangle }$, can be obtained by simple computations. 
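For curves observed on a common grid, the statistic $T_n$ and its norm can be computed directly. The following is a minimal Python/NumPy sketch, assuming each $X_i$ is discretized on an equispaced grid in $[0,1]$ with the $L^2$ norm approximated by a Riemann sum; all function names and the simulated data are illustrative:

```python
import numpy as np

def T_n(X, Y):
    """T_n = (1/n) * sum_i (X_i - Xbar)(Y_i - Ybar).

    X : (n, p) array, each row a curve on a common grid.
    Y : (n,) array of scalar responses.
    Returns the discretized curve T_n, an array of shape (p,).
    """
    Xc = X - X.mean(axis=0)                 # X_i - Xbar
    Yc = Y - Y.mean()                       # Y_i - Ybar
    return (Yc[:, None] * Xc).mean(axis=0)

def l2_norm(f, dx):
    """L2([0,1]) norm of a gridded function, via a Riemann sum."""
    return float(np.sqrt(dx * np.sum(f ** 2)))

# Illustrative data: rough Brownian-like paths; Y independent of X (H0 true)
rng = np.random.default_rng(0)
n, p = 200, 101
dx = 1.0 / (p - 1)
X = rng.standard_normal((n, p)).cumsum(axis=1) * np.sqrt(dx)
Y = rng.standard_normal(n)

D_n = l2_norm(T_n(X, Y), dx)                # empirical counterpart of D = ||T||
```

Under $H_0$, $\sqrt{n}\,D_n$ would then be compared with the (approximated) distribution of $\|Z_{(X-\mu_X)\varepsilon}\|$.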
In order to simplify the notation, from now on, given any ${\cal H}$–valued random element $H$ with $E(\|H\|^2)<\infty$, $Z_H$ will denote a centered Gaussian element in ${\cal H}$ with covariance operator $\Gamma_H$. \[cor.null\] Under the conditions of Theorem \[th.conv\], if the null hypothesis $H_0: \|\Delta\|'=0$ is satisfied, then $\sqrt{n} T_n$ converges in law to $Z_{(X-\mu_X)\varepsilon}$ (with covariance operator $\sigma^2 \Gamma$), and consequently, $\|\sqrt{n} T_n\|$ converges in law to $\|Z_{(X-\mu_X)\varepsilon}\|$. In contrast to Theorem 1 in Cardot, Ferraty, Mas, and Sarda (2003), the result in Corollary \[cor.null\] is established directly on the Hilbert space $\cal H$ instead of on its dual space. In addition, no assumption of centered $X$ random elements or of a null intercept term is necessary. Nevertheless, these two assumptions could easily be removed in that paper in order to establish a dual result of Corollary \[cor.null\].\ Furthermore, in view of Corollary \[cor.null\], the asymptotic null distribution of $\|\sqrt{n} T_n\|$ is not explicitly known. This is the reason why Cardot, Ferraty, Mas, and Sarda (2003) carry out no further research on how to use this statistic (or its dual one) in practice for checking whether $\Theta_2$ equals $0$. Instead, they consider an alternative statistic, which is used in the simulation section for comparative purposes. Nevertheless, it is still possible to use $\|\sqrt{n} T_n\|$ as a core statistic in order to solve this test in practice by means of bootstrap techniques.\ One natural way of using the asymptotic result of Corollary \[cor.null\] for solving the test under study is as follows. Consider a consistent (at least under $H_0$) estimator $\sigma^2_n$ of $\sigma^2$ (for instance, the sample variance of $Y$ could be used, or perhaps the one introduced by Cardot, Ferraty, Mas, and Sarda (2003), provided that its theoretical behavior is analyzed).
Then, according to Slutsky’s Theorem, $\|\sqrt{n} T_n\|/\sigma_n$ converges in law under $H_0$ to the norm of $Z_X$. As its covariance operator $\Gamma$ is unknown, it can be approximated by the empirical one, $\Gamma_n$. Thus, $\|Z_X\|$ can be approximated by $\|Z_n\|$, where $Z_n$ is a centered Gaussian element in ${\cal H}$ with covariance operator $\Gamma_n$. Of course, the distribution of $\|Z_n\|$ is still difficult to compute directly; nevertheless, one can make use of the CLT and approximate its distribution by a Monte Carlo method through the distribution of $$\left\|\frac{1}{\sqrt{m}}\sum_{i=1}^m (X_i^*-\overline{X})\right\|$$ for a large value of $m$, where $\{X_i^*\}_{i=1}^m$ are i.i.d. random elements chosen at random from the fixed population $(X_1,\ldots,X_n)$. Obviously, this method is a precursor of the bootstrap procedures.\ In order to complete the asymptotic study of the statistic $\|\sqrt{n} T_n\|$, its behavior under local alternatives is going to be analyzed. For this purpose, let us consider $\Theta \in {\cal H}$ such that $\|\Theta_2\|>0$, and given $\delta_n >0$ consider the modified random sample $$Y_i^n= \langle X_i, \frac{\delta_n}{\sqrt{n}} \Theta \rangle + b + \varepsilon_i,$$ for all $i \in \{1,\ldots,n\}$. Then the null hypothesis does not hold. However, if $\delta_n/\sqrt{n} \rightarrow 0$, then $\|(\delta_n/\sqrt{n}) \Theta \| \rightarrow 0$, that is, $H_0$ is approached with “speed” $\delta_n/\sqrt{n}$. Under these conditions, $$E\left( (X_i-\mu_{X_i})(Y_i^n-\mu_{Y_i^n}) \right) = \frac{\delta_n}{\sqrt{n}} \Gamma(\Theta),$$ and thus the following theorem, which establishes the behavior of the statistic under the considered local alternatives, can be easily deduced.
Under the conditions of Theorem \[th.conv\] and with the above notation, if $\delta_n \rightarrow \infty$ and $\delta_n/\sqrt{n} \rightarrow 0$ as $n \rightarrow \infty$, then $$P\left( \left\|\frac{1}{\sqrt{n}} \sum_{i=1}^n \left(X_i - \overline{X}\right) \left( Y_i^n - \overline{Y^n} \right) \right\| \leq t\right) \rightarrow 0$$ as $n \rightarrow \infty$, for all $t \in {\mathbb{R}}$.

Bootstrap procedures
--------------------

The difficulty of using the previously proposed statistic to solve the hypothesis test by means of asymptotic procedures suggests the development of appropriate bootstrap techniques. The asymptotic consistency of a bootstrap approach is guaranteed if the associated bootstrap statistic converges in law to a non–degenerate distribution irrespective of whether $H_0$ is satisfied. In addition, in order to ensure its asymptotic correctness, this limit distribution must coincide with the asymptotic one of the testing statistic provided that $H_0$ holds.\ Consequently, the asymptotic limit established in Corollary \[cor.null\] plays a fundamental role in defining appropriate bootstrap statistics. In this way, recall that $$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left( \big(X_i - \overline{X}\big)\big(Y_i - \overline{Y}\big) - E\big((X-\mu_X)(Y-\mu_Y)\big)\right)$$ converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$, irrespective of whether $H_0$ is satisfied, and, in addition, if $H_0$ is satisfied then $\Gamma_{(X-\mu_X)(Y-\mu_Y)}=\sigma^2 \Gamma$. Thus, this is a natural statistic to be mimicked by a bootstrap one. Note that $$\left(\frac{1}{n}\sum_{i=1}^n\big(Y_i - \overline{Y}\big)^2\right)^{1/2}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n \big(X_i-\mu_X\big) \right), \label{eq.stat2}$$ converges in law to $(\sigma^2 + E(\langle X-\mu_X, \Theta\rangle^2))^{1/2} Z_{X}$, whose covariance operator is $(\sigma^2 + E(\langle X-\mu_X, \Theta\rangle^2)) \Gamma$. In particular, when $H_0$ is satisfied, this operator reduces again to $\sigma^2 \Gamma$.
Consequently, another possibility consists in mimicking this second statistic by means of a bootstrap one, improving the approximation suggested in the previous subsection. Note that the variance estimator appearing in the first factor of equation could be substituted by any other estimator under $H_0$ of $\sigma^2$ that converges to a finite constant if $H_0$ does not hold. In any case, this second approximation could lead to worse results under the null hypothesis, because the possible dependence between $X$ and $\varepsilon$ is lost (as the resample would focus only on the $X$ information).\ Two possibilities for mimicking the above–mentioned statistics will be explored, namely a “naive” paired bootstrap and a “wild” bootstrap approach. In order to achieve this goal, let $\{(X_i^*,Y_i^*)\}_{i=1}^n$ be a collection of i.i.d. random elements drawn at random from $(X_1,Y_1),\ldots,(X_n,Y_n)$, and let us consider the following “naive” paired bootstrap statistic $$T_n^{N*}=\frac{1}{n}\sum_{i=1}^n \left( \big(X_i^* - \overline{X^*}\big)\big(Y_i^* - \overline{Y^*}\big) - \big(X_i - \overline{X}\big)\big(Y_i - \overline{Y}\big) \right).$$ In addition, let us consider $\sigma^2_n= (1/n)\sum_{i=1}^n(Y_i - \overline{Y})^2$ and $\sigma^{*2}_n= (1/n)\sum_{i=1}^n(Y^*_i - \overline{Y^*})^2$, the empirical estimator of $\sigma^2_Y$ under $H_0$ and its corresponding bootstrap version.\ The asymptotic behavior of the “naive” bootstrap statistic will be analyzed through some results on bootstrapping general empirical measures obtained by Giné and Zinn (1990). It should be noted that the bootstrap results in that paper refer to empirical processes indexed by a class of functions $\cal F$, which in particular cover the bootstrap of the mean in separable Banach (and thus Hilbert) spaces. In order to establish this connection, it is enough to choose $${\cal F}= \{ f \in {\cal H}' | \| f \|'\leq 1\}$$ (see Giné (1997) and Kosorok (2008) for a general overview of indexed empirical processes).
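Before turning to the theoretical analysis, the “naive” paired bootstrap statistic just defined can be sketched numerically. The following Python/NumPy code resamples pairs $(X_i^*,Y_i^*)$ and returns replicates of $\|\sqrt{n}\,T_n^{N*}\|$; the grid discretization, the number of replicates $B$, and all names are illustrative assumptions:

```python
import numpy as np

def centered_products(X, Y):
    """Curves (X_i - Xbar)(Y_i - Ybar), one per row of the output."""
    return (Y - Y.mean())[:, None] * (X - X.mean(axis=0))

def naive_bootstrap_norms(X, Y, dx, B=500, rng=None):
    """B replicates of ||sqrt(n) T_n^{N*}|| from paired resampling."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(Y)
    base = centered_products(X, Y).mean(axis=0)   # (1/n) sum (Xi-Xbar)(Yi-Ybar)
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)          # draw pairs with replacement
        boot = centered_products(X[idx], Y[idx]).mean(axis=0)
        diff = np.sqrt(n) * (boot - base)         # sqrt(n) T_n^{N*}
        out[b] = np.sqrt(dx * np.sum(diff ** 2))  # L2 norm on the grid
    return out

# Illustrative data under H0 (Y independent of X)
rng = np.random.default_rng(1)
n, p = 150, 101
dx = 1.0 / (p - 1)
X = rng.standard_normal((n, p)).cumsum(axis=1) * np.sqrt(dx)
Y = rng.standard_normal(n)

norms = naive_bootstrap_norms(X, Y, dx, B=200, rng=rng)
obs = np.sqrt(dx * np.sum((np.sqrt(n) * centered_products(X, Y).mean(axis=0)) ** 2))
p_value = float(np.mean(norms >= obs))            # bootstrap p-value
```

The p-value computation follows the proportion rule used in the algorithms of this section.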
${\cal F}$ is image admissible Suslin (considering the weak topology). In addition, $F(h)=\sup_{f \in {\cal F}}|f(h)|=\|h\|$ for all $h \in {\cal H}$, and thus $E(F^2(X))=E(\|X\|^2) < \infty$.\ Consider the bounded and linear (hence continuous) operator $\delta$ from ${\cal H}$ to $l^{\infty}{\cal (F)}$ given by $\delta(h)(f)=\delta_h(f)=f(h)$ for all $h \in {\cal H}$ and all $f \in {\cal F}$, and denote by $R(\delta) \subset l^{\infty}{\cal (F)}$ its range. Since $\|\delta(h)\|_\infty=\|h\|$ for all $h \in {\cal H}$, there exists a continuous inverse $\delta^{-1}:R(\delta) \rightarrow {\cal H}$. In addition, as $R(\delta)$ is closed, the Dugundji Theorem allows us to consider a continuous extension $\delta^{-1}:l^{\infty}{\cal (F)} \rightarrow {\cal H}$ (see, for instance, Kosorok (2008), Lemma 6.16 and Theorem 10.9). Thus, following the typical empirical process notation, the empirical process $(1/\sqrt{n}) \sum_{i=1}^n (\delta_{X_i} - \mathbb{P})$ indexed by $\cal F$ is directly connected with $(1/\sqrt{n}) \sum_{i=1}^n (X_i - E(X))$ by means of the continuous mapping $\delta^{-1}$, and vice versa.\ Some consequences of this formulation applied to the work developed by Giné and Zinn (1990) lead to the results collected in the following lemma. \[le.1\] Let $\xi$ be a measurable mapping from a probability space $(\Omega, \sigma, P)$ to a separable Hilbert space $({\cal H},\langle \cdot, \cdot \rangle)$ with corresponding norm $\|\cdot\|$ such that $E(\|\xi\|^2)<\infty$. Let $\{\xi_i\}_{i=1}^n$ be a sequence of i.i.d. random elements with the same distribution as $\xi$, and let $\{\xi^*_i\}_{i=1}^n$ be i.i.d. from $\{\xi_i\}_{i=1}^n$. Then 1. $\sqrt{n}(\overline{\xi^*} - \overline{\xi})$ converges in law to $Z_{\xi}$ a.s.–$P$ 2. $\overline{\xi^*}$ converges in probability to $E(\xi)$ a.s.–$P$ 3.
$\overline{\|\xi^*\|^2}$ converges in probability to $E(\|\xi\|^2)$ a.s.–$P$ To prove item 1 note that the CLT for separable Hilbert–valued random elements (see, for instance, Laha and Rohatgi (1979)) together with the Continuous Mapping Theorem applied to $\delta$ guarantees that ${\cal F} \in \text{CLT}(P)$. Thus, Theorem 2.4 of Giné and Zinn (1990) ensures that $n^{1/2} (\hat{P}_n(w) - P_n(w))$ converges in law to a Gaussian process on $\cal F$, $G=\delta(Z_{\xi})$ a.s.–$P$. Consequently, by applying again the Continuous Mapping Theorem $\sqrt{n}(\overline{\xi^*} - \overline{\xi}) = \delta^{-1}(n^{1/2} (\hat{P}_n(w) - P_n(w)))$ converges in law to $Z_{\xi}=\delta^{-1}(G)$.\ Items 2 and 3 can be checked in a similar way by applying Theorem 2.6 of Giné and Zinn (1990). Note that item 1 is also a direct consequence of Remark 2.5 of Giné and Zinn (1990); nevertheless it was proven based on Theorem 2.4 to illustrate the technique. The following theorem establishes the asymptotic consistency and correctness of the “naive” bootstrap approach. Under the conditions of Theorem \[th.conv\], we have that $\sqrt{n}T_n^{N*}$ converges in law to\ $Z_{(X-\mu_X)(Y-\mu_Y)}$ a.s.–$P$. In addition, $\sigma^{*2}_n$ converges in probability to $\sigma^2_{Y}=\sigma^2 + E(\langle X-\mu_X, \Theta\rangle^2)$ a.s.–$P$. First of all consider the bootstrap statistic $$S_n^*=\frac{1}{\sqrt{n}}\sum_{i=1}^n \left( \big(X_i^* - \mu_X\big)\big(Y_i^* - \mu_Y\big) - \overline{\big({X} - \mu_X\big)\big({Y} - \mu_Y\big)}\right)$$ and note that $\{\big(X_i^* - \mu_X\big)\big(Y_i^* - \mu_Y\big)\}_{i=1}^n$ are i.i.d. $\cal H$–valued random elements chosen at random from the “bootstrap population” $\{\big(X_i - \mu_X\big)\big(Y_i - \mu_Y\big)\}_{i=1}^n$. Then, item 1 in Lemma \[le.1\] guarantees that $S_n^*$ converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$ a.s.–$P$. 
On the other hand, $S_n^*$ equals $\sqrt{n}T_n^{N*}$ plus the following terms $$-\frac{1}{\sqrt{n}}\sqrt{n}(\overline{X^*} - \overline{X})\sqrt{n}(\overline{Y^*} - \overline{Y}) + \sqrt{n}(\overline{X^*} - \overline{X})(\overline{Y^*} - \mu_Y) + (\overline{X^*} - \mu_{X})\sqrt{n}(\overline{Y^*} - \overline{Y}).$$ Items 1 and 2 in Lemma \[le.1\], together with Slutsky’s Theorem, ensure that these three terms converge in probability to $0$ a.s.–$P$, and consequently the convergence in law stated in the theorem is proven.\ Finally, the convergence of $\sigma^{*2}_n$ holds in virtue of items 2 and 3 in Lemma \[le.1\]. The “naive” bootstrap approach is described in the following algorithm. [=0.5cm]{} Compute the value of the statistic $\|T_n\|$ (or the value $\|T_n\|/\sigma_n$). Draw $\{(X_i^*,Y_i^*)\}_{i=1}^n$, a sequence of i.i.d. random elements chosen at random from the initial sample $(X_1,Y_1),\ldots,(X_n,Y_n)$, and compute $a_n=\|T_n^{N*}\|$ (or $b_n=\|T_n^{N*}\|/\sigma^*_n$). Repeat Step 2 a large number of times $B \in {\mathbb{N}}$ in order to obtain a sequence of values $\{a_n^l\}_{l=1}^B$ (or $\{b_n^l\}_{l=1}^B$). Approximate the p–value of the test by the proportion of values in $\{a_n^l\}_{l=1}^B$ greater than or equal to $\|T_n\|$ (or by the proportion of values in $\{b_n^l\}_{l=1}^B$ greater than or equal to $\|T_n\|/\sigma_n$). Analogously, let $\{\varepsilon_i^*\}_{i=1}^n$ be i.i.d. centered real random variables such that $E\big((\varepsilon_i^*)^2\big)=1$ and\ $\int_0^{\infty} P(|\varepsilon_1^*|>t)^{1/2}\,dt<\infty$ (to guarantee this last assumption, it is enough that $E\big((\varepsilon_i^*)^d\big)<\infty$ for some $d>2$), and consider the “wild” bootstrap statistic $$T_n^{W*}=\frac{1}{n}\sum_{i=1}^n \big(X_i - \overline{X}\big)\big(Y_i - \overline{Y}\big) \varepsilon_i^*.$$ In order to analyze the asymptotic behavior of the “wild” bootstrap statistic, the following lemma will be fundamental.
It is a particularization of a result due to Ledoux, Talagrand and Zinn (cf. Giné and Zinn (1990), and Ledoux and Talagrand (1988)). See also the Multiplier Central Limit Theorem in Kosorok (2008) for the counterpart for empirical processes indexed by a class of measurable functions. \[le.2\] Let $\xi$ be a measurable mapping from a probability space $(\Omega, \sigma, P)$ to a separable Hilbert space $({\cal H},\langle \cdot, \cdot \rangle)$ with corresponding norm $\|\cdot\|$ such that $E(\|\xi\|^2)<\infty$. Let $\{\xi_i\}_{i=1}^n$ be a sequence of i.i.d. random elements with the same distribution as $\xi$, and let $\{W_i\}_{i=1}^n$ be a sequence of i.i.d. random variables (in the same probability space and independent of $\{\xi_i\}_{i=1}^n$) with $E(W_i)=0$ and $\int_0^{\infty} P(|W_1|>t)^{1/2}\,dt<\infty$. Then the following are equivalent 1. $E(\|\xi\|^2)<\infty$ (and consequently $\sqrt{n}(\overline{\xi} - E({\xi}))$ converges in law to $Z_{\xi}$). 2. For almost all $\omega \in \Omega$, $(1/\sqrt{n}) \sum_{i=1}^n W_i \xi_i(\omega)$ converges in law to $Z_{\xi}$. As a consequence, the asymptotic consistency and correctness of the “wild” bootstrap approach are guaranteed by the following theorem. Under the conditions of Theorem \[th.conv\], we get that $\sqrt{n}T_n^{W*}$ converges in law to\ $Z_{(X-\mu_X)(Y-\mu_Y)}$ a.s.–$P$. According to Lemma \[le.2\], for almost all $\omega \in \Omega$, $$S_n^*=\frac{1}{\sqrt{n}}\sum_{i=1}^n \big(X_i^w - \mu_X\big)\big(Y_i^w - \mu_Y\big)\varepsilon^*_i$$ converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$.
Moreover, $(\overline{Y^w}-\mu_Y)$ and $(\overline{X^w}-\mu_X)$ converge to $0$ (by the SLLN).\ Finally, note that, for almost all $\omega \in \Omega$, $$\begin{split}S_n^* &= \sqrt{n}T_n^{W*} + (\overline{Y^w}-\mu_Y) \frac{1}{\sqrt{n}}\sum_{i=1}^n(X_i^w - \mu_X)\varepsilon^*_i\\ &+ (\overline{X^w}-\mu_X) \frac{1}{\sqrt{n}}\sum_{i=1}^n(Y_i^w - \mu_Y)\varepsilon^*_i - (\overline{X^w}-\mu_X)(\overline{Y^w}-\mu_Y) \frac{1}{\sqrt{n}}\sum_{i=1}^n\varepsilon^*_i.\end{split}$$ Lemma \[le.2\], together with the above–mentioned SLLN, guarantees the convergence in probability to $0$ of the last three summands, and thus the result is reached in virtue of Slutsky’s Theorem. The proposed “wild” bootstrap approach can be applied by means of the following algorithm. [=0.5cm]{} Compute the value of the statistic $\|T_n\|$ (or the value $\|T_n\|/\sigma_n$). Draw $\{\varepsilon_i^*\}_{i=1}^n$, a sequence of i.i.d. random variables as above, and compute $a_n=\|T_n^{W*}\|$ (or $b_n=\|T_n^{W*}\|/\sigma^*_n$; in this case $\sigma^*_n$ is computed as in Step 2 of the “naive” bootstrap algorithm). Repeat Step 2 a large number of times $B \in {\mathbb{N}}$ in order to obtain a sequence of values $\{a_n^l\}_{l=1}^B$ (or $\{b_n^l\}_{l=1}^B$). Approximate the p–value of the test by the proportion of values in $\{a_n^l\}_{l=1}^B$ greater than or equal to $\|T_n\|$ (or by the proportion of values in $\{b_n^l\}_{l=1}^B$ greater than or equal to $\|T_n\|/\sigma_n$).

Bootstrap calibration vs. asymptotic theory
===========================================

For simplicity, suppose from now on that $b=0$ and that $X$ has zero mean in , that is, suppose that the regression model is given by $$Y=\langle\Theta,X\rangle+\varepsilon.$$ Furthermore, $\Delta(h)=E\left(\langle X, h \rangle Y\right)$ and, analogously, $\Gamma(h)=E\left(\langle X, h \rangle X\right)$.
In such a case, if we assume that $\sum_{j=1}^{\infty}{(\Delta(v_j)/\lambda_j)^2}<+\infty$ and ${\cal N}(\Gamma)=\{0\}$, then $$\Theta=\sum_{j=1}^{\infty}{\frac{\Delta(v_j)}{\lambda_j}v_j},$$ where $\{(\lambda_j,v_j)\}_{j\in{\mathbb{N}}}$ are the eigenvalues and eigenfunctions of $\Gamma$ (see Cardot, Ferraty, and Sarda (2003)).\ A natural estimator for $\Theta$ is the FPCA estimator based on $k_n$ functional principal components, given by $$\hat{\Theta}_{k_n}=\sum_{j=1}^{k_n}{\frac{\Delta_n(\hat{v}_j)}{\hat{\lambda}_j}\hat{v}_j},$$ where $\Delta_n$ is the empirical estimator of $\Delta$, that is, $\Delta_n(h)=(1/n)\sum_{i=1}^n{\langle X_i,h\rangle Y_i}$, and $\{(\hat{\lambda}_j,\hat{v}_j)\}_{j \in {\mathbb{N}}}$ are the eigenvalues and the eigenfunctions of $\Gamma_n$, the empirical estimator of $\Gamma$: $\Gamma_n(h)=(1/n)$ $\sum_{i=1}^n{\langle X_i,h\rangle X_i}$.\ Different statistics can be used for testing the lack of dependence between $X$ and $Y$. Bearing in mind the expression , one can think about using an estimator of $\|\Theta\|^2=\sum_{j=1}^{\infty}{(\Delta(v_j)/\lambda_j)^2}$ in order to test these hypotheses. Alternatively, the expression can be a motivation for a different class of statistics based on the estimation of $\|\Delta\|'$.\ One asymptotic distribution–free test based on the latter approach was given by Cardot, Ferraty, Mas, and Sarda (2003). They proposed as test statistic $$T_{1,n}=k_n^{-1/2}\left(\hat{\sigma}^{-2}||\sqrt{n}\Delta_n\hat{A}_n||'^2-k_n\right), \label{eq.t1}$$ where $\hat{A}_n(\cdot)=\sum_{j=1}^{k_n}{\hat{\lambda}_j^{-1/2}\langle\cdot,\hat{v}_j\rangle\hat{v}_j}$ and $\hat{\sigma}^2$ is a consistent estimator of $\sigma^2$. Cardot, Ferraty, Mas, and Sarda (2003) showed that, under $H_0$, $T_{1,n}$ converges in distribution to a centered Gaussian variable with variance equal to 2. Hence, $H_0$ is rejected if $|T_{1,n}|>\sqrt{2}z_{1-\alpha/2}$ ($z_{\alpha}$ being the $\alpha$–quantile of a $\mathcal{N}(0,1)$), and accepted otherwise.
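For discretized curves, the FPCA ingredients ($\hat{\lambda}_j$, $\hat{v}_j$, $\Delta_n$) and the statistic $T_{1,n}$ can be sketched as follows in Python/NumPy, assuming an equispaced grid with the inner product approximated by a Riemann sum; the empirical centering, the variance plug-in and the choice of $k_n$ are illustrative assumptions:

```python
import numpy as np

def fpca(Xc, dx):
    """Eigenvalues (decreasing) and L2-normalized eigenfunctions of the
    empirical covariance operator Gamma_n of the centered curves Xc."""
    n = Xc.shape[0]
    M = dx * (Xc.T @ Xc) / n                  # discretized Gamma_n
    lam, U = np.linalg.eigh(M)
    order = np.argsort(lam)[::-1]             # decreasing eigenvalues
    return lam[order], U[:, order] / np.sqrt(dx)

def T_1n(X, Y, dx, k, sigma2):
    """T_{1,n} = k^{-1/2} (sigma^{-2} n sum_{j<=k} Delta_n(v_j)^2/lambda_j - k).

    Curves and responses are centered empirically here for illustration.
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()
    lam, V = fpca(Xc, dx)
    scores = dx * (Xc @ V[:, :k])             # <X_i - Xbar, v_j>
    Delta = scores.T @ Yc / n                 # Delta_n(v_j), j = 1, ..., k
    return float((n * np.sum(Delta ** 2 / lam[:k]) / sigma2 - k) / np.sqrt(k))

# Illustrative data under H0; reject at level alpha if |T_1n| > sqrt(2) z_{1-alpha/2}
rng = np.random.default_rng(2)
n, p = 200, 101
dx = 1.0 / (p - 1)
X = rng.standard_normal((n, p)).cumsum(axis=1) * np.sqrt(dx)
Y = rng.standard_normal(n)
stat = T_1n(X, Y, dx, k=5, sigma2=float(np.var(Y)))
```

Here the sample variance of $Y$ plays the role of $\hat{\sigma}^2$, one of the consistent-under-$H_0$ choices mentioned above.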
Besides, Cardot, Ferraty, Mas, and Sarda (2003) also proposed another calibration of the statistic distribution based on a permutation mechanism.\ On the other hand, taking into account that $||\Theta||^2=\sum_{j=1}^{\infty}{(\Delta(v_j)/\lambda_j)^2}$, one can use the statistic $$T_{2,n}=\sum_{j=1}^{k_n}{\left(\frac{\Delta_n(\hat{v}_j)}{\hat{\lambda}_j}\right)^2}, \label{eq.t2}$$ whose limit distribution is not known.\ Finally, a natural competing statistic is the one proposed throughout Section 2.3, $$T_{3,n}=\left\|\frac{1}{n}\sum_{i=1}^{n}{(X_i-\bar{X})(Y_i-\bar{Y})}\right\|, \label{eq.t3}$$ which we will denote by “F–test” from now on, since it is the natural generalization of the well–known F–test in the finite–dimensional context. Another possibility is to consider the studentized version of , $$T_{3s,n}=\frac{1}{\hat{\sigma}}\left\|\frac{1}{n}\sum_{i=1}^{n}{(X_i-\bar{X})(Y_i-\bar{Y})}\right\|, \label{eq.t3s}$$ where $\hat{\sigma}^2$ is the empirical estimator of $\sigma^2$.\ In general, for statistics such as , , and , the calibration of the distribution can be obtained by using the bootstrap. Furthermore, in the previous section, the “naive” and “wild” bootstrap were shown to be consistent for the F–test, that is, the distributions of $T_{3,n}$ and $T_{3s,n}$ can be approximated by their corresponding bootstrap distributions, and $H_0$ can be rejected when the statistic value does not belong to the bootstrap acceptance region of confidence $1-\alpha$. The same bootstrap calibration can be applied to the tests based on $T_{1,n}$ and $T_{2,n}$, although the consistency of the bootstrap procedure in these cases has not been proved in this work.

Simulation and real data applications
=====================================

In this section, a simulation study and an application to a real dataset illustrate the performance of the asymptotic approach and the bootstrap calibration from a practical point of view.
Simulation study
----------------

We have simulated $ns=500$ samples, each composed of $n\in\{50,100\}$ observations from the functional linear model $Y=\langle\Theta,X\rangle+\varepsilon$, where $X$ is a Brownian motion and $\varepsilon\sim \mathcal{N}(0,\sigma^2)$ with signal–to–noise ratio $r=\sigma/\sqrt{\mathbb{E}(\langle X,\Theta\rangle^2)}\in\{0.5,1,2\}$.\ Under $H_0$, we have considered the model parameter $\Theta_0(t)=0$, $t\in[0,1]$, whereas under $H_1$, the selected model parameter was $\Theta_1(t)=\sin(2\pi t^3)^3$, $t\in[0,1]$. Furthermore, under $H_0$ we have chosen $\sigma=1$, while under the alternative $H_1$ we assigned the three different values indicated above. Let us remark that both $X$ and $\Theta$ were discretized to 100 equidistant design points.\ We have selected the statistical tests which were introduced in the previous section: , , and . For , three distribution approximations were considered: the asymptotic approach ($\mathcal{N}(0,2)$) and the following two bootstrap calibrations $$\begin{aligned} T_{1,n}^{*(a)} & = & \frac{1}{\sqrt{k_n}}\left(\frac{n}{(\hat{\sigma}^{*})^2}\sum_{j=1}^{k_n}{\frac{\left(\Delta_n^{*}(\hat{v}_j)\right)^2} {\hat{\lambda}_j}}-k_n\right), \\ T_{1,n}^{*(b)} & = & \frac{1}{\sqrt{k_n}}\left(\frac{n}{\hat{\sigma}^2}\sum_{j=1}^{k_n}{\frac{\left(\Delta_n^{*}(\hat{v}_j)\right)^2} {\hat{\lambda}_j}}-k_n\right).\end{aligned}$$ The difference between the two proposed bootstrap approximations is that, in the former, the estimation of $\sigma^2$ is also bootstrapped in each iteration.
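The data-generating design described above could be reproduced along the following lines (Python/NumPy sketch). The empirical calibration of $\sigma$ from the signal-to-noise ratio $r$, which replaces $\mathbb{E}(\langle X,\Theta\rangle^2)$ by its sample counterpart, and all names are illustrative assumptions:

```python
import numpy as np

def simulate_sample(n, p=100, r=None, rng=None):
    """One sample from Y = <Theta, X> + eps with X a Brownian motion on [0, 1].

    r = None simulates under H0 (Theta = 0, sigma = 1); otherwise Theta(t) =
    sin(2 pi t^3)^3 and sigma is chosen so that the signal-to-noise ratio
    sigma / sqrt(E<X, Theta>^2) is approximately r (the expectation is
    replaced by its empirical counterpart, an illustrative shortcut).
    """
    rng = np.random.default_rng() if rng is None else rng
    grid = np.linspace(0.0, 1.0, p)
    dx = grid[1] - grid[0]
    # Brownian motion: cumulative sums of independent N(0, dx) increments
    X = np.cumsum(rng.normal(0.0, np.sqrt(dx), size=(n, p)), axis=1)
    if r is None:
        return grid, X, rng.standard_normal(n)       # H0: Y = eps, sigma = 1
    theta = np.sin(2.0 * np.pi * grid ** 3) ** 3     # Theta_1(t)
    signal = dx * (X @ theta)                        # <X_i, Theta>
    sigma = r * np.sqrt(np.mean(signal ** 2))
    return grid, X, signal + rng.normal(0.0, sigma, size=n)
```

Each of the $ns=500$ Monte Carlo runs would call a generator like this once and then apply the tests above to the resulting sample.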
On the other hand, for , and , only the bootstrap approaches were computed $$\begin{aligned} T_{2,n}^{*} & = & \sum_{j=1}^{k_n}{\left(\frac{\Delta_n^{*}(\hat{v}_j)}{\hat{\lambda}_j}\right)^2}, \\ T_{3,n}^{*} & = & \left\|\frac{1}{n}\sum_{i=1}^{n}{(X_i-\bar{X})(Y_i-\bar{Y})\varepsilon_i^*}\right\|, \\ T_{3s,n}^{*} & = & \frac{1}{\hat{\sigma}^{*}}\left\|\frac{1}{n}\sum_{i=1}^{n}{(X_i-\bar{X})(Y_i-\bar{Y})\varepsilon_i^*}\right\|.\end{aligned}$$ For this simulation study, we have used the “wild” bootstrap algorithm introduced in Section 2.4 for the F–test and its studentized version, and the following adaptation of this consistent “wild” bootstrap for $T_{1,n}$ and $T_{2,n}$. [=0.5cm]{} Compute the value of the statistic $T_{1,n}$ (or the value $T_{2,n}$). Draw $\{\varepsilon_i^*\}_{i=1}^n$, a sequence of i.i.d. random variables as above, and define $Y_i^{*}=Y_i\varepsilon_i^*$ for all $i=1,\ldots,n$. Build $\Delta_n^{*}(\cdot)=n^{-1}\sum_{i=1}^{n}{\langle X_i,\cdot\rangle Y_i^{*}}$ and compute $a_n=|T_{1,n}^{*}|$ (or $b_n=|T_{2,n}^{*}|$). Repeat Steps 2 and 3 a large number of times $B \in {\mathbb{N}}$ in order to obtain a sequence of values $\{a_n^l\}_{l=1}^B$ (or $\{b_n^l\}_{l=1}^B$). Approximate the p–value of the test by the proportion of values in $\{a_n^l\}_{l=1}^B$ greater than or equal to $|T_{1,n}|$ (or by the proportion of values in $\{b_n^l\}_{l=1}^B$ greater than or equal to $|T_{2,n}|$). Let us indicate that $1,000$ bootstrap iterations were carried out in each simulation.\ Since $k_n$ and $\alpha$ must be fixed to run the procedure, the study was repeated with different numbers of principal components involved ($k_n\in\{1,\ldots,20\}$) and confidence levels ($\alpha\in\{0.2,0.1,0.05,0.01\}$).
Nevertheless, in order to simplify the reading, the information collected in the following tables corresponds to only three of the values of $k_n$ which were analyzed: $k_n=5$, $k_n=10$ and $k_n=20$.\ Table \[tab:level\] on page  displays the sizes of the test statistics obtained in the simulation study. For $T_{1,n}$, it can be highlighted that the bootstrap approaches have sizes closer to the theoretical $\alpha$ than the asymptotic approximation, mainly when $k_n$ is small. If we compare the performance of the two proposed bootstrap procedures, it seems that if $\sigma^2$ is bootstrapped ($T_{1,n}^{*(a)}$) the results are better than if the same estimation of the variance is considered in all the bootstrap replications ($T_{1,n}^{*(b)}$), above all when $k_n$ is large. As far as $T_{2,n}$ is concerned, the estimated levels are quite near to the nominal ones, with $k_n=20$ being the case in which they are farthest from the theoretical $\alpha$. Finally, it must be remarked that the F–test and its studentized version also obtain good results in terms of test levels, which are slightly closer to $\alpha$ when one uses the bootstrap distribution of $T_{3s,n}^{*}$ to approximate the distribution of the statistic.\ On the other hand, Table \[tab:power.0.5\] on page , Table \[tab:power.1\] on page , and Table \[tab:power.2\] on page  show the empirical power obtained with the different procedures for each considered signal–to–noise ratio $r$. In terms of power, when $r=0.5$ the results for all the methods are similar, except for $T_{2,n}$, for which the empirical power decreases drastically, above all when $k_n$ increases (this effect is also observed for $r=1$ and $r=2$). This fact seems to be due to the construction of $T_{2,n}$, since this test statistic is the only one which does not involve the estimation of $\sigma^2$.
In addition, the power of $T_{1,n}$ also falls abruptly when $T_{1,n}^{*(b)}$ is considered, $n$ is small and $k_n$ is very large.\ A similar situation can be observed when $r=1$ and $r=2$. In the latter case, it can be seen that the empirical power is smaller for all the methods in general, with an important loss of power when the sample is small ($n=50$) and $k_n$ increases and/or $\alpha$ decreases (see Table \[tab:power.2\] on page ). Furthermore, in this case, it can be seen that the empirical power relies heavily on the selected $k_n$ value. Hence, the advantage of using $T_{3,n}$ or $T_{3s,n}$ is that they do not require the selection of any parameter and they are competitive in terms of power. Nevertheless, it also seems that an adequate $k_n$ selection can make $T_{1,n}$ obtain larger empirical power than $T_{3,n}$ or $T_{3s,n}$ in some cases.\

Data application
----------------

For the real data application, we have obtained concentrations of hourly averaged NO$_x$ in the neighborhood of a power station belonging to ENDESA, located in As Pontes in the Northwest of Spain. During unfavorable meteorological conditions, NO$_x$ levels can quickly rise and cause an air–quality episode. The aim is to forecast NO$_x$ with a half–hour horizon to allow the power plant staff to avoid NO$_x$ concentrations reaching the limit values fixed by the current environmental legislation. This implies that it is necessary to properly estimate the regression model which defines the relationship between the observed NO$_x$ concentration in the last minutes ($X$) and the NO$_x$ concentration half an hour ahead ($Y$). To that end, a first step is to determine whether there exists a linear dependence between $X$ and $Y$.\ Therefore, we have built a sample where each curve $X$ corresponds to 240 consecutive minute–by–minute values of the hourly averaged NO$_x$ concentration, and the response $Y$ corresponds to the NO$_x$ value half an hour ahead (from Jan 2007 to Dec 2009).
Applying the tests for dependence to the dataset, the null hypothesis is rejected in all cases (thus, there is a linear relationship between the variables), except for $T_{2,n}$ when $k_n$ is large (see Table \[tab:application\] on page ). Nevertheless, as commented in the simulation study, this test statistic does not take into account the variance term, and its power is clearly lower than the power of the other tests.\

Final comments
==============

The proposed bootstrap methods seem to give test sizes closer to the nominal ones than the tests based on the asymptotic distributions. In terms of power, the test statistics which include a consistent estimator of the error variance $\sigma^2$ are better than the tests which do not take it into account. Furthermore, in all the cases, a suitable choice of $k_n$ seems to be quite important, and currently it is still an open question.\ Besides the optimal $k_n$ selection, other issues related to these dependence tests require further research, such as their extension to functional linear models with functional response. On the other hand, and in addition to the natural usefulness of this test, it would be interesting to combine it with the functional ANOVA test (see Cuevas, Febrero, and Fraiman (2004), and González-Rodríguez, Colubi, and Gil (2012)) in order to develop an ANCOVA test in this context.

Acknowledgements {#acknowledgements .unnumbered}
================

The work of the first and third authors was supported by Ministerio de Ciencia e Innovación (project MTM2008–03010), by Consellería de Innovación e Industria (project PGIDIT07PXIB207031PR), and by Consellería de Economía e Industria (project 10MDS207015PR), Xunta de Galicia. The work of the second author was supported by Ministerio de Ciencia e Innovación (project MTM2009–09440–C0202) and by the COST Action IC0702. The work of the fourth author was supported by Ministerio de Educación (FPU grant AP2010–0957). [99]{} Bathia, N., Yao, Q.
and Ziegelmann, F. (2010). Identifying the finite dimensionality of curve time series. [*Ann. Statist.*]{} [**38**]{}, 3352–3386. Bickel, P. J. and Freedman, D. A. (1981). Some asymptotic theory for the bootstrap. [*Ann. Statist.*]{} [**9**]{}, 1196–1217. Cai, T. T. and Hall, P. (2006). Prediction in functional linear regression. [*Ann. Statist.*]{} [**34**]{}, 2159–2179. Cao-Abad, R. (1991). Rate of convergence for the wild bootstrap in nonparametric regression. [*Ann. Statist.*]{} [**19**]{}, 2226–2231. Cardot, H., Ferraty, F., Mas, A. and Sarda, P. (2003). Testing hypotheses in the functional linear model. [*Scand. J. Stat.*]{} [**30**]{}, 241–255. Cardot, H., Ferraty, F. and Sarda, P. (1999). Functional Linear Model. [*Statist. Probab. Lett.*]{} [**45**]{}, 11–22. Cardot, H., Ferraty, F. and Sarda, P. (2003). Spline estimators for the functional linear model. [*Statist. Sinica*]{} [**13**]{}, 571–591. Cardot, H., Prchal, L. and Sarda, P. (2007). No effect and lack–of–fit permutation tests for functional regression. [*Comput. Statist.*]{} [**22**]{}, 371–390. Cuevas, A., Febrero, M. and Fraiman, R. (2004). An anova test for functional data. [*Comput. Statist. Data Anal.*]{} [**47**]{}, 111–122. Cuevas, A., Febrero, M. and Fraiman, R. (2006). On the use of the bootstrap for estimating functions with functional data. [*Comput. Statist. Data Anal.*]{} [**51**]{}, 1063–1074. Efron, B. (1979). Bootstrap methods: another look at the jackknife. [*Ann. Statist.*]{} [**7**]{}, 1–26. Ferraty, F. and Romain, Y. (eds.) (2011). [*The Oxford Handbook of Functional Data Analysis*]{}. Oxford University Press, Oxford. Ferraty, F., Van Keilegom, I. and Vieu, P. (2010). On the validity of the bootstrap in non–parametric functional regression. [*Scand. J. Stat.*]{} [**37**]{}, 286–306. Ferraty, F., Van Keilegom, I. and Vieu, P. (2012). Regression when both response and predictor are functions. [*J. Multivariate Anal.*]{} [**109**]{}, 10–28. Ferraty, F. and Vieu, P. (2006). 
[*Nonparametric Functional Data Analysis: Theory and Practice*]{}. Springer, New York. Freedman, D. A. (1981). Bootstrapping regression models. [*Ann. Statist.*]{} [**9**]{}, 1218–1228. Giné, E. (1997). Lectures on some aspects of the bootstrap. In [*Lectures on Probability Theory and Statistics (Saint–Flour, 1996)*]{} (Edited by B. Pierre), 37–151. Springer, Berlin. Giné, E. and Zinn, J. (1990). Bootstrapping General Empirical Measures. [*Ann. Probab.*]{} [**18**]{}, 851–869. González-Manteiga, W. and Martínez-Calvo, A. (2011). Bootstrap in functional linear regression. [*J. Statist. Plann. Inference*]{} [**141**]{}, 453–461. González-Rodríguez, G., Colubi, A. and Gil, M. Á. (2012). Fuzzy data treated as functional data: A one–way ANOVA test approach. [*Comput. Statist. Data Anal.*]{} [**56**]{}, 943–955. Hall, P. and Horowitz, J. L. (2007). Methodology and convergence rates for functional linear regression. [*Ann. Statist.*]{} [**35**]{}, 70–91. Hall, P. and Hosseini-Nasab, M. (2006). On properties of functional principal components analysis. [*J. R. Stat. Soc. Ser. B Stat. Methodol.*]{} [**68**]{}, 109–126. Hall, P. and Vial, C. (2006). Assessing the finite dimensionality of functional data. [*J. R. Stat. Soc. Ser. B Stat. Methodol.*]{} [**68**]{}, 689–705. Kokoszka, P., Maslova, I., Sojka, J. and Zhu, L. (2008). Testing for lack of dependence in the functional linear model. [*Canad. J. Statist.*]{} [**36**]{}, 207–222. Kosorok, M. R. (2008). [*Introduction to Empirical Processes and Semiparametric Inference*]{}. Springer, New York. Laha, R. G. and Rohatgi, V. K. (1979). [*Probability Theory*]{}. Wiley, New York. Ledoux, M. and Talagrand, M. (1988). Un critère sur les petites boules dans le théorème limite central. [*Probab. Theory Related Fields*]{} [**77**]{}, 29–47. Ramsay, J. O. and Silverman, B. W. (2002). [*Applied Functional Data Analysis. Methods and Case Studies*]{}. Springer, New York. Ramsay, J. O. and Silverman, B. W. (2005). 
[*Functional Data Analysis*]{}. 2nd edition. Springer, New York. Singh, K. (1981). On the asymptotic accuracy of Efron’s bootstrap. [*Ann. Statist.*]{} [**9**]{}, 1187–1195.
--- abstract: 'The present study further strengthens the use of the Keedwell CIPQ against attack on a system by the use of the Smarandache Keedwell CIPQ for cryptography, in a similar spirit to that in which the cross inverse property has been used by Keedwell. This is done as follows. By constructing two S-isotopic S-quasigroups(loops) $U$ and $V$ such that their Smarandache automorphism groups are not trivial, it is shown that $U$ is a SCIPQ(SCIPL) if and only if $V$ is a SCIPQ(SCIPL). Explanations and procedures are given on how these SCIPQs can be used to double encrypt information.' author: - | Tèmítọ́pẹ́ Gbọ́láhàn Jaíyéọlá[^1]\ Department of Mathematics,\ Obafemi Awolowo University, Ile Ife, Nigeria.\ [email protected], [email protected] title: 'A Double Cryptography Using The Smarandache Keedwell Cross Inverse Quasigroup[^2] [^3]' --- Introduction ============ Quasigroups And Loops --------------------- Let $L$ be a non-empty set, and define a binary operation ($\cdot$) on $L$. If $x\cdot y\in L$ for all $x, y\in L$, then $(L, \cdot )$ is called a groupoid. If the equations $$a\cdot x=b\qquad\textrm{and}\qquad y\cdot a=b$$ have unique solutions for $x$ and $y$ respectively, then $(L, \cdot )$ is called a quasigroup. For each $x\in L$, the elements $x^\rho =xJ_\rho ,x^\lambda =xJ_\lambda\in L$ such that $xx^\rho=e^\rho$ and $x^\lambda x=e^\lambda$ are called the right and left inverses of $x$ respectively. Now, if there exists a unique element $e\in L$, called the identity element, such that $x\cdot e=e\cdot x=x$ for all $x\in L$, then $(L, \cdot )$ is called a loop. To every loop $(L,\cdot )$ with automorphism group $AUM(L,\cdot )$, there corresponds another loop. Let the set $H=(L,\cdot )\times AUM(L,\cdot )$. If we define ’$\circ$’ on $H$ such that $(\alpha, x)\circ (\beta, y)=(\alpha\beta, x\beta\cdot y)$ for all $(\alpha, x),(\beta, y)\in H$, then $H(L,\cdot )=(H,\circ)$ is a loop, as shown in Bruck [@phd82], and is called the Holomorph of $(L,\cdot )$.
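Both definitions can be checked mechanically on small examples. The following sketch (an illustration, not from the paper) verifies the unique-solvability condition, which for a finite set is exactly the Latin-square property of the Cayley table, and then builds the holomorph of the group $\mathbb{Z}_4$, confirming it is again a loop; the particular operations chosen are assumptions for the example:

```python
# A finite quasigroup is exactly a Latin square: a.x = b and y.a = b are
# uniquely solvable iff every row and column of the Cayley table is a permutation.
def is_quasigroup(elems, op):
    s = sorted(elems)
    return (all(sorted(op(a, x) for x in elems) == s for a in elems) and
            all(sorted(op(x, a) for x in elems) == s for a in elems))

# x ∘ y = (x - y) mod 5 is a quasigroup but not a loop (no identity element)
assert is_quasigroup(range(5), lambda x, y: (x - y) % 5)

# Holomorph of the group Z_4: H = AUM(Z_4) x Z_4 with (α,x)∘(β,y) = (αβ, xβ·y).
# AUM(Z_4) = {x -> x, x -> 3x}, encoded by the multipliers 1 and 3.
n, auts = 4, [1, 3]
def hol_op(p, q):
    (a, x), (b, y) = p, q
    return ((a * b) % n, (b * x + y) % n)    # xβ·y, with β acting as mult. by b

H = [(a, x) for a in auts for x in range(n)]
assert is_quasigroup(H, hol_op)              # the holomorph is a quasigroup...
e = (1, 0)
assert all(hol_op(e, p) == p and hol_op(p, e) == p for p in H)  # ...and a loop
```

Here the holomorph of a group is again a group (the semidirect product of the group with its automorphism group), which is the simplest instance of the general loop construction above.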
#### A loop is a weak inverse property loop(WIPL) if and only if it obeys the identity $$\label{eq:8} x(yx)^\rho=y^\rho\qquad\textrm{or}\qquad(xy)^\lambda x=y^\lambda.$$ A loop(quasigroup) is a cross inverse property loop(quasigroup)\[CIPL(CIPQ)\] if and only if it obeys the identity $$\label{eq:8.1} xy\cdot x^\rho =y\qquad\textrm{or}\qquad x\cdot yx^\rho =y\qquad\textrm{or}\qquad x^\lambda\cdot (yx)=y\qquad\textrm{or}\qquad x^\lambda y\cdot x=y.$$ A loop(quasigroup) is an automorphic inverse property loop(quasigroup)\[AIPL(AIPQ)\] if and only if it obeys the identity $$(xy)^\rho=x^\rho y^\rho~or~(xy)^\lambda =x^\lambda y^\lambda.$$ The set $SYM(G, \cdot )=SYM(G)$ of all bijections of a groupoid $(G,\cdot )$ forms a group called the permutation(symmetric) group of the groupoid $(G,\cdot )$. Let $(G,\cdot )$ and $(H,\circ )$ be two distinct groupoids(quasigroups, loops), and let $A,B$ and $C$ be three bijective mappings of $G$ onto $H$. The triple $\alpha =(A,B,C)$ is called an isotopism of $(G,\cdot )$ onto $(H,\circ )$ if and only if $$xA\circ yB=(x\cdot y)C~\forall~x,y\in G.$$ If $(G,\cdot )=(H,\circ )$, then the triple $\alpha =(A,B,C)$ of bijections on $(G,\cdot )$ is called an autotopism of the groupoid(quasigroup, loop) $(G,\cdot )$. Such triples form a group $AUT(G,\cdot )$ called the autotopism group of $(G,\cdot )$. Furthermore, if $A=B=C$, then $A$ is called an automorphism of the groupoid(quasigroup, loop) $(G,\cdot )$. Such bijections form a group $AUM(G,\cdot )$ called the automorphism group of $(G,\cdot )$. As observed by Osborn [@phd89], a loop is a WIPL and an AIPL if and only if it is a CIPL.
The past efforts of Artzy [@phd140; @phd193; @phd158; @phd30], Belousov and Tzurkan [@phd192] and recent studies of Keedwell [@phd176], Keedwell and Shcherbacov [@phd175; @phd177; @phd178] are of great significance in the study of WIPLs, AIPLs, CIPQs and CIPLs, their generalizations (i.e. m-inverse loops and quasigroups, (r,s,t)-inverse quasigroups) and applications to cryptography. Interestingly, Huthnance [@phd44] showed that if $(L,\cdot )$ is a loop with holomorph $(H,\circ)$, then $(L,\cdot )$ is a WIPL if and only if $(H,\circ)$ is a WIPL. The holomorphic structure of AIPLs and CIPLs, however, has only recently been revealed by Jaíyéolá [@sma15]. In the quest for the application of CIPQs with long inverse cycles to cryptography, Keedwell [@phd176] constructed the following CIPQ, which we shall specifically call the Keedwell CIPQ. (Keedwell CIPQ) Let $(G,\cdot )$ be an abelian group of order $n$ such that $n+1$ is composite. Define a binary operation ’$\circ$’ on the elements of $G$ by the relation $a\circ b=a^rb^s$, where $rs=n+1$. Then $(G,\circ )$ is a CIPQ and the right crossed inverse of the element $a$ is $a^u$, where $u=(-r)^3$. The author also gave examples and a detailed explanation and procedures for the use of this CIPQ for cryptography. Cross inverse property quasigroups have been found appropriate for cryptography because the left and right inverses $x^\lambda$ and $x^\rho$ of an element $x$ do not coincide, unlike in left and right inverse property loops. This gives rise to what is called a ’cycle of inverses’ or ’inverse cycle’ or simply a ’cycle’, i.e. a finite sequence of elements $x_1,x_2,\cdots ,x_n$ such that $x_k^\rho =x_{k+1}~\bmod{n}$. The number $n$ is called the length of the cycle. The origin of the idea of cycles can be traced back to Artzy [@phd140; @phd193], where he also established their existence in WIPLs apart from CIPLs.
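Both the Keedwell construction and its inverse cycles can be checked on a small example. Writing $a\circ b=a^rb^s$ additively on the cyclic group $\mathbb{Z}_{20}$ gives $a\circ b=ra+sb \pmod{n}$ with $rs=21=n+1$, i.e. $(r,s)=(3,7)$; this particular choice is an illustrative assumption, not taken from the paper:

```python
# Keedwell CIPQ realised additively on Z_20: a ∘ b = 3a + 7b (mod 20), 3*7 = 21.
n, r, s = 20, 3, 7
op = lambda a, b: (r * a + s * b) % n
rho = lambda a: ((-r) ** 3 * a) % n          # right crossed inverse a^u, u = (-r)^3

elems = list(range(n))
# (G, ∘) is a quasigroup: rows and columns of the Cayley table are permutations
assert all(sorted(op(a, x) for x in elems) == elems for a in elems)
assert all(sorted(op(x, a) for x in elems) == elems for a in elems)
# cross inverse property: (x ∘ y) ∘ x^ρ = y for all x, y
assert all(op(op(x, y), rho(x)) == y for x in elems for y in elems)

# inverse cycle through x = 1: repeatedly apply x -> x^ρ
cycle, x = [1], rho(1)
while x != 1:
    cycle.append(x)
    x = rho(x)
print(cycle)    # [1, 13, 9, 17] -- an inverse cycle of length 4
```

Note that $x^{\rho\rho}\ne x$ here, which is precisely the feature (absent in groups and in left/right inverse property loops) that the cryptographic application exploits.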
In his two papers, Artzy proved some results on possibilities for the values of $n$ and for the number $m$ of cycles of length $n$ for WIPLs and especially CIPLs. We call these “Cycle Theorems” for now. In application, it is assumed that the message to be transmitted can be represented as a single element $x$ of a quasigroup $(L,\cdot )$ and that this is enciphered by multiplying by another element $y$ of $L$, so that the encoded message is $yx$. At the receiving end, the message is deciphered by multiplying by the right inverse $y^\rho$ of $y$. If a left(right) inverse quasigroup is used and the left(right) inverse of $x$ is $x^\lambda$ ($x^\rho$), then the left(right) inverse of $x^\lambda$ ($x^\rho$) is necessarily $x$. But if a CIPQ is used, this is not necessarily the case. This fact makes an attack on the system more difficult in the case of CIPQs. Smarandache Quasigroups And Loops --------------------------------- The study of Smarandache loops was initiated by W. B. Vasantha Kandasamy in 2002. In her book [@phd75], she defined a Smarandache loop(S-loop) as a loop with at least a subloop which forms a subgroup under the binary operation of the loop. In [@sma2], the present author defined a Smarandache quasigroup(S-quasigroup) to be a quasigroup with at least a non-trivial associative subquasigroup called a Smarandache subsemigroup (S-subsemigroup). Examples of Smarandache quasigroups are given in Muktibodh [@muk2]. In her book, Vasantha Kandasamy introduced over 75 Smarandache concepts on loops. In her first paper [@phd83], on the study of Smarandache notions in algebraic structures, she introduced Smarandache: left(right) alternative loops, Bol loops, Moufang loops, and Bruck loops. But in [@sma1], the present author introduced Smarandache: inverse property loops(IPL) and weak inverse property loops(WIPL).
A quasigroup(loop) is called a Smarandache “certain” quasigroup(loop) if it has at least a non-trivial subquasigroup(subloop) with the “certain” property and the latter is referred to as the Smarandache “certain” subquasigroup(subloop). For example, a loop is called a Smarandache CIPL(SCIPL) if it has at least a non-trivial subloop that is a CIPL and the latter is referred to as the Smarandache CIP-subloop. By an “initial S-quasigroup” $L$ with an “initial S-subquasigroup” $L'$, we mean $L$ and $L'$ are pure quasigroups, i.e. they do not obey a “certain” property(not of any variety). If $L$ is a S-groupoid with a S-subsemigroup $H$, then the set $SSYM(L, \cdot )=SSYM(L)$ of all bijections $A$ in $L$ such that $A~:~H\to H$ forms a group called the Smarandache permutation(symmetric) group of the S-groupoid. In fact, $SSYM(L)\le SYM(L)$. \[1:1\] Let $(L, \cdot )$ and $(G, \circ )$ be two distinct groupoids that are isotopic under a triple $(U, V, W)$. Now, if $(L, \cdot )$ and $(G, \circ )$ are S-groupoids with S-subsemigroups $L'$ and $G'$ respectively such that $A~:~L'\to G'$, where $A\in\{U,V,W\}$, then the isotopism $(U, V, W) : (L, \cdot )\rightarrow (G, \circ )$ is called a Smarandache isotopism(S-isotopism). Thus, if $U=V=W$, then $U$ is called a Smarandache isomorphism, hence we write $(L, \cdot )\succsim (G, \circ )$. But if $(L, \cdot )=(G, \circ )$, then the autotopism $(U, V, W)$ is called a Smarandache autotopism (S-autotopism) and they form a group $SAUT(L,\cdot )$ which will be called the Smarandache autotopism group of $(L, \cdot )$. Observe that $SAUT(L,\cdot )\le AUT(L,\cdot )$. Furthermore, if $U=V=W$, then $U$ is called a Smarandache automorphism of $(L,\cdot )$. Such Smarandache permutations form a group $SAUM(L,\cdot )$ called the Smarandache automorphism group(SAG) of $(L,\cdot )$. Now, set $H_S=(L,\cdot )\times SAUM(L,\cdot )$. 
If we define ’$\circ$’ on $H_S$ such that $(\alpha, x)\circ (\beta, y)=(\alpha\beta, x\beta\cdot y)$ for all $(\alpha, x),(\beta, y)\in H_S$, then $H_S(L,\cdot )=(H_S,\circ)$ is a S-quasigroup(S-loop) with S-subgroup $(H',\circ )$ where $H'=L'\times SAUM(L)$ and thus will be called the Smarandache Holomorph(SH) of $(L,\cdot )$. #### The aim of the present study is to further strengthen the use of the Keedwell CIPQ against attack on a system by the use of the Smarandache Keedwell CIPQ for cryptography in a similar spirit in which the cross inverse property has been used by Keedwell. This is done as follows. By constructing two S-isotopic S-quasigroups(loops) $U$ and $V$ such that their Smarandache automorphism groups are not trivial, it is shown that $U$ is a SCIPQ(SCIPL) if and only if $V$ is a SCIPQ(SCIPL). Explanations and procedures are given on how these SCIPQs can be used to double encrypt information. Preliminary Results =================== (Smarandache Keedwell CIPQ) Let $Q$ be an initial S-quasigroup with an initial S-subquasigroup $P$. $Q$ is called a Smarandache Keedwell CIPQ(SKCIPQ) if $P$ is isomorphic to the Keedwell CIPQ, say under a mapping $\phi$. The following results that have recently been established are of paramount importance to prove the main result of this work. \[1:4\](Jaíyéolá [@sma14]) Let $U=(L,\oplus)$ and $V=(L,\otimes )$ be initial S-quasigroups such that $SAUM(U)$ and $SAUM(V)$ are conjugates in $SSYM(L)$ i.e there exists a $\psi\in SSYM(L)$ such that for any $\gamma\in SAUM(V)$, $\gamma =\psi^{-1}\alpha\psi$ where $\alpha\in SAUM(U)$. Then, $H_S(U)\succsim H_S(V)$ if and only if $x\delta\otimes y\gamma =(x\beta\oplus y)\delta~\forall~x,y\in L,~\beta\in SAUM(U)$ and some $\delta,\gamma\in SAUM(V)$. \[3:3.2\](Jaíyéolá [@sma15]) The holomorph $H(L)$ of a quasigroup(loop) $L$ is a Smarandache CIPQ(CIPL) if and only if $SAUM(L)=\{I\}$ and $L$ is a Smarandache CIPQ(CIPL). 
Main Results ============ \[1:6\] Let $U=(L,\oplus)$ and $V=(L,\otimes )$ be initial S-quasigroups(S-loops) that are S-isotopic under the triple of the form $(\delta^{-1}\beta ,\gamma^{-1},\delta^{-1})$ for all $\beta\in SAUM(U)$ and some $\delta,\gamma\in SAUM(V)$ such that their Smarandache automorphism groups are non-trivial and are conjugates in $SSYM(L)$ i.e there exists a $\psi\in SSYM(L)$ such that for any $\gamma\in SAUM(V)$, $\gamma =\psi^{-1}\alpha\psi$ where $\alpha\in SAUM(U)$. Then, $U$ is a SCIPQ(SCIPL) if and only if $V$ is a SCIPQ(SCIPL). [**Proof**]{}\ Following Theorem \[1:4\], $H_S(U)\succsim H_S(V)$. Also, by Theorem \[3:3.2\], $H_S(U)$($H_S(V)$) is a SCIPQ(SCIPL) if and only if $SAUM(U)=\{I\}$($SAUM(V)=\{I\}$) and $U$($V$) is a SCIPQ(SCIPL). Let $U$ be an SCIPQ(SCIPL), then since $H_S(U)$ has a subquasigroup(subloop) that is isomorphic to a S-CIP-subquasigroup(subloop) of $U$ and that subquasigroup(subloop) is isomorphic to a S-subquasigroup(subloop) of $H_S(V)$ which is isomorphic to a S-subquasigroup(subloop) of $V$, $V$ is a SCIPQ(SCIPL). The proof for the converse is similar. #### Application To Cryptography Let the Smarandache Keedwell CIPQ be the SCIPQ $U$ in Theorem \[1:6\]. Definitely, its Smarandache automorphism group is non-trivial because as shown in Theorem 2.1 of Keedwell [@phd176], for any CIPQ, the mapping $J_\rho~:~x\to x^\rho$ is an automorphism. This mapping will be trivial only if the S-CIP-subquasigroup of $U$ is unipotent. For instance, in Example 2.1 of Keedwell [@phd176], the CIPQ $(G,\circ )$ obtained is unipotent because it was constructed using the cyclic group $C_5=<c:~c^5=e>$ and defined as $a\circ b=a^3b^2$. But in Example 2.2, the CIPQ gotten is not unipotent as a result of using the cyclic group $C_{11}=<c:~c^{11}=e>$. Thus, the choice of a Smarandache Keedwell CIPQ which suits our purpose in this work for a cyclic group of order $n$ is one in which $rs=n+1$ and $r+s\ne n$. 
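These criteria are easy to test numerically. The sketch below (all parameters are illustrative assumptions, written additively on cyclic groups) checks that the $C_5$ quasigroup $a\circ b=a^3b^2$ is unipotent, that a choice with $rs=n+1$ and $r+s\ne n$ is not, and then demonstrates the two-layer encipherment idea with two independent CIPQ layers; this toy composition stands in for, but does not implement, the full S-isotopism machinery of Theorem [1:6]:

```python
# CIPQs a ∘ b = r*a + s*b (mod n) on cyclic groups, written additively.
def make_cipq(n, r, s):
    op = lambda a, b: (r * a + s * b) % n
    rho = lambda a: ((-r) ** 3 * a) % n      # right crossed inverse
    return op, rho

# Keedwell's Example 2.1: C_5 with a ∘ b = a^3 b^2 is unipotent (x ∘ x = e)...
op5, _ = make_cipq(5, 3, 2)
assert all(op5(x, x) == 0 for x in range(5))
# ...whereas r + s ≠ n avoids this: on Z_20 with (r, s) = (3, 7), x ∘ x ≠ e
op1, rho1 = make_cipq(20, 3, 7)
assert any(op1(x, x) != 0 for x in range(20))

# double encipherment with two secret keys k1, k2 and two CIPQ layers;
# the receiver peels the layers off in reverse order with the crossed inverses
op2, rho2 = make_cipq(20, 7, 3)
k1, k2 = 11, 4
msg = [3, 14, 0, 9]
sealed = [op2(k2, op1(k1, m)) for m in msg]
opened = [op1(c, rho1(k1)) for c in (op2(c2, rho2(k2)) for c2 in sealed)]
assert opened == msg
```

Unipotence of $a\circ b=ra+sb$ on $\mathbb{Z}_n$ amounts to $r+s\equiv 0 \pmod n$, which is why the condition $r+s\ne n$ singles out the quasigroups whose map $J_\rho$ is a non-trivial automorphism.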
Now that we have seen a sample choice for $U$, the initial S-quasigroup $V$ can then be obtained as shown in Theorem \[1:6\]. By Theorem \[1:6\], $V$ is a SCIPQ. Now, according to Theorem \[1:4\], by the choice of the mappings $\alpha ,\beta\in SAUM(U)$ and $\psi\in SSYM(L)$ to get the mappings $\delta,\gamma$, a SCIPQ $V$ can be produced following Theorem \[1:6\]. So, the secret keys for the system are $\{\alpha ,\beta,\psi,\phi\}\equiv\{\delta,\gamma ,\phi\}$. Thus, whenever a set of information or messages is to be transmitted, the sender will encipher in the Smarandache Keedwell CIPQ by using specifically the S-CIP-subquasigroup in it (as described earlier in the introduction) and then encipher again with $\{\alpha ,\beta,\psi,\phi\}\equiv\{\delta,\gamma ,\phi\}$ to get a SCIPQ $V$, which is the set of encoded messages. At the receiving end, the message $V$ is deciphered by using an inverse isotopism (i.e. the inverse key of $\{\alpha ,\beta,\psi\}\equiv\{\delta,\gamma\}$) to get $U$, and then deciphered again (as described earlier in the introduction) to get the messages. The secret key can be changed over time. The method described above is a double encryption and a double protection. It protects each piece of information (an element of the quasigroup) and protects the combined information (the quasigroup as a whole). It is like putting on a pair of socks and shoes, or underwear and clothes: the body gets better protection. An added advantage of the use of the Smarandache Keedwell CIPQ over the Keedwell CIPQ in double encryption is that, since the Smarandache Keedwell CIPQ in use may have more than one S-CIP-subquasigroup, the S-CIP-subquasigroups can be replaced over time. [99]{} R. Artzy (1955), [*On loops with special property*]{}, Proc. Amer. Math. Soc. 6, 448–453. R. Artzy (1959), [*Crossed inverse and related loops*]{}, Trans. Amer. Math. Soc. 91, 3, 480–492. R. Artzy (1959), [*On Automorphic-Inverse Properties in Loops*]{}, Proc.
Amer. Math. Soc. 10, 4, 588–591. R. Artzy (1978), [*Inverse-Cycles in Weak-Inverse Loops*]{}, Proc. Amer. Math. Soc. 68, 2, 132–134. V. D. Belousov (1969), [*Crossed inverse quasigroups(CI-quasigroups)*]{}, Izv. Vyss. Ucebn. Zaved. Matematika 82, 21–27. R. H. Bruck (1944), [*Contributions to the theory of loops*]{}, Trans. Amer. Math. Soc. 55, 245–354. E. D. Huthnance Jr. (1968), [*A theory of generalised Moufang loops*]{}, Ph.D. thesis, Georgia Institute of Technology. T. G. Jaíyéolá (2006), [*An holomorphic study of the Smarandache concept in loops*]{}, Scientia Magna Journal, 2, 1, 1–8. T. G. Jaíyéolá (2006), [*Parastrophic invariance of Smarandache quasigroups*]{}, Scientia Magna Journal, 2, 3, 48–53. T. G. Jaíyéolá (2008), [*A Pair Of Smarandachely Isotopic Quasigroups And Loops Of The Same Variety*]{}, International Journal of Mathematical Combinatorics, Vol. 2, No. 1, 36–44. T. G. Jaíyéolá (2008), [*An Holomorphic Study Of Smarandache Automorphic and Cross Inverse Property Loops*]{}, Proceedings of the $4^\textrm{th}$ International Conference on Number Theory and Smarandache Problems, Scientia Magna Journal, Vol. 4, No. 1, 102–108. A. D. Keedwell (1999), [*Crossed-inverse quasigroups with long inverse cycles and applications to cryptography*]{}, Australas. J. Combin. 20, 241–250. A. D. Keedwell and V. A. Shcherbacov (2002), [*On m-inverse loops and quasigroups with a long inverse cycle*]{}, Australas. J. Combin. 26, 99–119. A. D. Keedwell and V. A. Shcherbacov (2003), [*Construction and properties of $(r, s, t)$-inverse quasigroups I*]{}, Discrete Math. 266, 275–291. A. D. Keedwell and V. A. Shcherbacov (2004), [*Construction and properties of $(r, s, t)$-inverse quasigroups II*]{}, Discrete Math. 288, 61–71. A. S. Muktibodh (2006), [*Smarandache Quasigroups*]{}, Scientia Magna Journal, 2, 1, 13–19. J. M. Osborn (1961), [*Loops with the weak inverse property*]{}, Pac. J. Math. 10, 295–304. Y. T. Oyebo and O. J.
Adeniran, [*On the holomorph of central loops*]{}, Pre-print. W. B. Vasantha Kandasamy (2002), [*Smarandache Loops*]{}, Department of Mathematics, Indian Institute of Technology, Madras, India, 128pp. W. B. Vasantha Kandasamy (2002), [*Smarandache Loops*]{}, Smarandache notions journal, 13, 252–258. [^1]: All correspondence to be addressed to this author. [^2]: 2000 Mathematics Subject Classification. Primary 20N05; Secondary 08A05 [^3]: [**Keywords and Phrases :**]{} Smarandache holomorph of loops, Smarandache cross inverse property quasigroups(CIPQs), Smarandache automorphism group, cryptography
--- abstract: '[ We propose a method for determining the exact correspondence between the Wilsonian cut-off scale on the boundary and its holographically dual bulk theory. We systematically construct the multi-trace Wilsonian effective action from holographic renormalisation and evolve it by integrating out the asymptotically Anti-de Sitter bulk geometry with scalar probes. The Wilsonian nature of the effective action is shown by proving that it must be either double-trace, closing in on itself under successive integrations, or have an infinite series of multi-trace terms. Focusing on composite scalar operator renormalisation, we relate the Callan-Symanzik equation, the flow of the scalar anomalous dimension and the multi-trace beta functions to their dual RG flows in the bulk. Establishing physical renormalisation conditions on the behaviour of the large-$N$ anomalous dimension then enables us to extract the energy scales. Examples of pure AdS, the GPPZ flow, a black brane in AdS, and M2 and M5 branes are studied before we generalise our results to arbitrary numbers of mass and thermal deformations of an ultra-violet CFT. Relations between the undeformed Wilsonian cut-off, deformation scales and the deformed Wilsonian cut-off are discussed, as is the phenomenology of each considered background. We see how a mass gap, the emergent infra-red CFT scaling, etc. arise in different effective theories. We also argue that these results can have alternative interpretations through the flow of the conformal anomaly or the Ricci scalar curvature of boundary branes.
The results are consistent with the c-theorem.]{}' author: - Sašo Grozdanov title: 'Wilsonian renormalisation and the exact cut-off scale from holographic duality' --- OUTP-11-58P Introduction ============ Beyond its fundamental conceptual importance to physics, the holographic duality or AdS/CFT correspondence [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj] has provided us with a very useful tool for the study of strongly coupled field theories. The correspondence enables us to calculate field theory predictions directly from their weakly coupled classical gravity duals. The string theory, or gravity, side of the duality is formulated in a bulk spacetime with an extra radial dimension compared to its holographic field theory dual, which lives on the bulk’s boundary. The radial coordinate can be understood as an energy scale of the dual theory [@Maldacena:1997re; @Susskind:1998dq; @Peet:1998wn]. In an asymptotically Anti-de Sitter spacetime, the metric tensor diverges at the boundary, which corresponds to a UV divergence of the field theory. This further establishes the IR/UV correspondence between the two dual theories [@Susskind:1998dq; @Peet:1998wn]. Details of the strongly coupled field theories dual to theories in the bulk are usually unknown. An interesting question in its own right therefore becomes even more crucial for further understanding of the duality: how can we precisely understand renormalisation group flows of the boundary theory by exploiting the duality between its energy scale and the radial coordinate? Since the early days of the correspondence, numerous papers have explored this question, with some among the earlier ones: [@Akhmedov:1998vf; @Girardello:1998pd; @Distler:1998gb; @Balasubramanian:1999jd; @deBoer:1999xf; @Balasubramanian:1998de; @Freedman:1999gp; @deBoer:2000cz].
A conformal field theory at the AdS infinity can naturally be interpreted as a UV fixed point with its cut-off taken to infinity, without the theory running into problems such as a Landau pole. We would then like to know how to formulate the Wilsonian procedure of integrating out the UV degrees of freedom and flowing towards the IR [@Wilson:1973jj; @Wegner:1972ih; @Wilson:1993dy; @Polchinski:1983gv; @Polonyi:2001se]. This can, in fact, be done on the gravity side, as shown by Faulkner, Liu and Rangamani in [@Faulkner:2010jy] and Heemskerk and Polchinski in [@Heemskerk:2010hk], in a way which is completely analogous to the Wilsonian renormalisation group. The procedure is based on integrating out shells of geometry along the radial direction, while keeping the overall path integral fixed. Other recent developments on the subject include [@Sin:2011yh; @Harlow:2011ke; @Fan:2011wm; @Akhmedov:2010mz; @Radicevic:2011py; @Elander:2011vh; @Laia:2011wf; @Bredberg:2010ky; @Lee:2009ij; @Lee:2010ub; @Paulos:2011zu; @Kuperstein:2011fn]. It is the goal of this paper to further develop the correspondence between the Wilsonian picture and the bulk integration of geometry. In particular, we wish to propose a systematic procedure for finding an *exact dictionary between the hard Wilsonian cut-off scale on the field theory side and its dual bulk description*. We also wish to show that the *effective Wilsonian action must either include an infinite set of multi-trace operators or close in on its double-trace sector under the renormalisation group flow.* This would provide additional convincing evidence that it is possible to construct, directly from holography, a renormalisation group procedure of the boundary field theory which is fully Wilsonian. Understanding how different energy scales in a field theory, and their mixing, carry over from the dual geometric picture is important for making holography more precise. 
Furthermore, it is of considerable interest to the model-building of condensed matter and particle theory systems with holographic duals. Most realistic systems do not exhibit conformal, or scale, invariance. It is therefore useful to understand how various UV conformal theories can be deformed to reproduce some desired phenomenological behaviour by their effective IR theories. Once we have found such a bulk theory, all calculations could then be performed at the conformal point where holography is well understood. Physical predictions, such as the renormalised correlation functions remain invariant under renormalisation group transformations, thus providing us with a source of descriptions of IR phenomena by IR effective theories with UV completions. We focus on asymptotically Anti-de Sitter bulks with propagating scalar fields. Specifically, we wish to understand the proportionality between the energy scale and the radial coordinate. Once this is understood for conformal theories, such as the $\CN=4$ super Yang-Mills, we study the same theory with mass, thermal and density deformations to see how they alter the RG running of the Wilsonian scale. Further cases are also studied of the near-extremal M2 and M5 branes at finite temperature. Finally, we generalise the procedure to include an arbitrary number of mass deformations and Lorentz-invariance breaking thermal deformations, with the latter being induced by horizons of black branes. We will also analyse the phenomenology of each studied example and extract various physical interpretations of theories from the flows of the RG scales. Among them will be the indications for the existence of a mass gap in the confining GPPZ case, a mass gap and the emergent infra-red CFT scaling in duals of black branes in AdS, the mixing of different energy scales, etc. 
In order to develop the procedure of extracting the dependence of the cut-off on the bulk physics, we will establish a systematic way of finding the bare boundary action, which will run under the RG transformations of integrating out the bulk. We first use the fact that in order to have a well-defined and finite holographic description at the AdS infinity, we need to regulate and renormalise the boundary action. This is done using holographic renormalisation [@deHaro:2000xn; @Skenderis:2002wp; @Bianchi:2001kw; @Skenderis:2008dg], whereby counter-terms are introduced to exactly cancel off divergences in the limit of the AdS infinity. We use a combination of the regularised action and holographic counter-terms to write the bare action, and then permit various terms in it to run. The procedure yields the bare effective boundary action with extra terms, which in the Wilsonian RG come from the structure of counter-terms. This agrees with the expectation that the additional effective terms must be invariant under the full symmetries of the field theory. But these are precisely the isometries of the bulk under which the counter-terms must transform. The procedure established in [@Faulkner:2010jy] can then be used to evolve the effective action under the RG flow. Through a careful definition of the wavefunction renormalisation, the double-trace coupling can be related to the anomalous dimension of scalar operators in large-$N$ theories. The role of multi-trace deformations in AdS/CFT was studied, among other papers, in [@Witten:2001ua; @Akhmedov:2002gq; @Mueck:2002gm; @Minces:2002wp; @Vecchi:2010dd; @Hartman:2006dy]. From the large-$N$ field theory point of view, these works relate to [@Pomoni:2008de; @Vecchi:2010jz], for example. Once we have established how the Callan-Symanzik equation and the flow of the anomalous dimension of the field theory operators translate to the RG equations in the bulk, we can then analyse them in various scenarios.
It is the flow of the anomalous dimension, proportional to the double-trace beta function, that allows us to find the exact dependence of the Wilsonian cut-off scale on quantities describing the physics in the bulk. This is done through a set of physical renormalisation conditions that the wavefunction renormalisation, and hence the anomalous dimension, must satisfy in any field theory. They tell us, given some physical operator momentum we wish to probe, where the flow will terminate as deep in the bulk as possible. The position can then be directly translated into the Wilsonian cut-off, which cannot be lowered below some physical momentum scale of interest. We will also see how these conditions can be shown to be consistent with the c-theorem [@Zam:1986; @Cardy:1988; @Komargodski:2011vj; @Freedman:1999gp; @Myers:2010xs; @Gubser:2002vv; @Hartman:2006dy]. The paper is structured as follows:\ In section \[Sec:RG\], we set up the procedure of obtaining the Wilsonian renormalisation group from holographic renormalisation and integrating out bulk geometry for theories with bulk scalars. In \[Sec:RG1\] we show how to construct the running bare effective boundary action from the bulk physics, by using the structure of the holographic counter-terms. In \[Sec:RG2\] we use the work of [@Faulkner:2010jy] to derive the full set of renormalisation group equations describing the flow of the effective action. In \[Sec:RG3\] we use a definition of the wavefunction renormalisation to establish the connection between the bulk construction and the Callan-Symanzik equation on the field theory side of the duality. This results in an interpretation of the Wilsonian scalar composite operator renormalisation. We also comment on thermal scalings of operator dimensions.\ In section \[Sec:AnomDim\] we examine physical conditions that the anomalous dimension should obey, in order to establish boundary conditions for the differential RG equation describing its flow. 
Evolving the UV Wilsonian cut-off down to some operator momentum we wish to probe, or equivalently moving the brane into the bulk in the gravity picture, allows us to find an inequality relating the physical energy-momentum and the hard cut-off. The functional dependence of the cut-off on the bulk is then examined in several cases with different bulk geometries. First we look at pure AdS with its dual $\CN=4$ theory. Then we add a mass deformation (GPPZ flow), and later temperature, as well as density, by introducing a black brane into the bulk. Afterwards, we study the case of near-extremal thermal M2 and M5 branes. We also generalise the discussion to include arbitrary, simultaneous mass and thermal deformations. Relations between the hard Wilsonian cut-off and individual scales appearing in the theories are studied in detail and the phenomenology of each example is discussed.\ Section \[Sec:Alter\] includes a brief discussion on how our results can be alternatively interpreted as the running of the conformal anomaly or the Ricci scalar curvature of branes with dual field theories. A connection to the c-theorem is also discussed.\ In section \[Sec:MultiTrace\], we study the full infinite set of multi-trace terms in the Wilsonian effective action to provide further evidence for the Wilsonian nature of the constructed renormalisation group procedure. We show that the effective action is either quadratic, in which case it closes in on itself under the RG flow, or infinite, which is in full accordance with field theoretic expectations. We also explore the systematics for finding a formal solution of the entire set of multi-trace couplings.\ Finally, in section \[Sec:Sum\], we comment on the results of our analysis and point out some open questions. 
From holographic to Wilsonian renormalisation group {#Sec:RG}
===================================================

The effective boundary action {#Sec:RG1}
-----------------------------

We begin our analysis by systematically constructing the Wilsonian renormalisation group from holography. We will focus on scalar operators and only consider scalar field theories in the bulk, as dictated by the AdS/CFT dictionary [@Gubser:1998bc; @Witten:1998qj]. Throughout this work, Lorentzian bulk metrics $G_{MN}$ will have an asymptotically Anti-de Sitter boundary at radial position $r=0$ in Poincaré-like coordinates to ensure a dual field theory with a conformal UV fixed point. We use capital Latin indices for $d+1$ bulk dimensions and reserve Greek indices for induced metrics $g_{\mu\nu}$ on $d$-dimensional branes with dual field theories. The metric of the pure $AdS_{d+1}$ is $$\label{AdSMet1} ds^2 = \frac{L^2}{r^2} \left( -dt^2 + d\vec{x}^2_{d-1} +dr^2 \right).$$ We set the AdS radius to $L=1$ throughout this work. Wilsonian renormalisation group analysis starts with a quantum field theory with a UV cut-off $\Lambda_0$ [@Wilson:1973jj; @Wegner:1972ih; @Wilson:1993dy; @Polchinski:1983gv; @Polonyi:2001se]. It is important for us to be able to take $\Lambda_0 \to \infty$ without running into problems, such as a Landau pole, and to begin the flow at a fixed point. In field theory, a momentum shell is then integrated out, leaving us with an effective Lagrangian, which includes all possible terms permitted by the symmetries. The theory also has a new hard cut-off $\Lambda_1$. On the gravity side, a dual procedure was recently proposed in [@Faulkner:2010jy; @Heemskerk:2010hk]. It was shown that by integrating out shells of geometry along the radial coordinate, an effective theory is produced on the boundary, which has been moved further into the bulk, from $r=\rho_0$ to $\rho_1$. 
This adheres to the expectation that the energy scale of a field theory is inversely proportional to the radial coordinate, $\Lambda \propto 1 / r$ [@Susskind:1998dq; @Peet:1998wn]. Having integrated out a slice between $\rho_0$, which relates to $\Lambda_0$, and $\rho_1$, we expect the lowered Wilsonian cut-off to be a function $\Lambda_1 (\rho_1)$. Determining the exact functional dependence of $\Lambda (\rho)$ is the goal of this analysis. Note that we will be using the variable $\rho$ to specify the radial position of the brane with a dual, distinguishing it from a coordinate variable $r$. The first question we need to address is how to systematically obtain additional terms in the effective boundary action so that we can use the RG evolution procedure of [@Faulkner:2010jy]. To show that this can be done by using the structure of the holographic counter-terms, we first note that all our flows start from the AdS boundary where the metric, as well as the boundary action from which dual correlation functions are extracted, diverges. It is standard to use holographic renormalisation [@deHaro:2000xn; @Skenderis:2002wp; @Bianchi:2001kw; @Skenderis:2008dg] to regulate these divergences by defining the brane theory at $r=\rho_0$, dual to $\Lambda_0$, very close to the AdS infinity. The expression for this regulated bare boundary action, $S_\text{B} [\rho_0] \equiv S_\text{B}^{\text{reg}} [\rho_0 ]$, is obtained from the boundary term of the gravitational bulk action $S_\text{bulk}$ that remains after using classical equations of motion. In fact, $S_\text{bulk} = - S_\text{B} [\rho_0]$, with the minus sign coming from the lower end of integration between $\rho_0$ and the horizon, or infinity. A saddle point approximation of the full path integral is allowed as we are only working with large-$N$ theories. We thus construct an action including both brane and bulk dynamics in such a way that it vanishes on-shell: $$\label{FullBareBulkAction} S = S_B [\rho_0] + \!\!\! 
\int \limits_{r \geq \rho_0} \!\! d^{d+1} x \sqrt{-G} \CL \left(\Phi, \partial_M \Phi \right),$$ where the second term is $S_\text{bulk}$ and $\CL \left( \Phi,\partial_M \Phi \right) = -\frac{1}{2} \partial_M \Phi \partial^M \Phi - V(\Phi)$. We assume a polynomial potential $V(\Phi) = \frac{1}{2} m^2 \Phi^2 + \sum_{n=3}^\infty \frac{1}{n} b_n \Phi^n$, with the mass term kept explicit. The boundary action $S_\text{B}$ can also be viewed in the sense of [@Heemskerk:2010hk], as the UV part of the bulk integral coming from the infinitesimally thin $0 \leq r \leq \rho_0$ region. Using the semi-classical approximation, imagine that we are performing a path integral over $\Phi = \hat\Phi + \tilde\Phi$, where $\hat\Phi$ is the classical value and $\tilde\Phi$ a small quantum perturbation. To integrate out the region $0 \leq r \leq \rho_0$ where $\rho_0$ is infinitesimally close to $r=0$, we first integrate $S_\text{bulk}$ by parts. The setup now enables us to neglect the $(d+1)$-dimensional contribution between the two boundaries. This is because the $\hat\Phi$ contribution, $\Box \hat\Phi$, vanishes by the equations of motion. As for additional contributions, we assumed that $\tilde\Phi$ was very small and that the volume of space between the two boundaries was infinitesimally small. At the boundaries of the bulk, the configuration space of $\Phi$ is fixed so $\tilde\Phi (0) = 0$. To linear order in $\tilde\Phi$, and in the limit $\rho_0 \to 0$, we can cancel the classical contributions between the two boundaries, $\sqrt{-G} G^{rr}\hat\Phi \partial_r \hat\Phi |_{r=\rho_0} - \sqrt{-G} G^{rr} \hat\Phi \partial_r \hat\Phi |_{r=0}$, leaving us with only $ - \frac{1}{2} \int_{\rho_0} \! d^d x \sqrt{-G} G^{rr} \tilde \Phi \partial_r \tilde\Phi$. This is precisely the boundary $S_\text{B}$ we would get from the on-shell contribution of $S_\text{bulk}$. We therefore have $$\label{BareBdyAct} S_{\text{B}} [\rho_0] = - \frac{1}{2} \int \limits_{r=\rho_0} \!\! 
d^{d} x \sqrt{-G} G^{rr} \Phi \partial_r \Phi.$$ Treating the radial coordinate as time, we can define the bare canonical conjugate momentum as $$\label{BarePi} \Pi_\text{B} \equiv \frac{\delta S_\text{B}}{\delta \Phi} = - \sqrt{-G} G^{rr} \partial_r \Phi.$$ In the language of holographic renormalisation, we define the subtracted boundary action as $$\label{SubBdyAct} S_{\text{B}}^{\text{sub}}[\rho_0] \equiv S_\text{B} [\rho_0] - S_\text{B}^{\text{c.t.}} [\rho_0],$$ where terms in the counter-term action $S_\text{B}^{\text{c.t.}}$ are taken to exactly equal divergent pieces of $S_\text{B}$ as $\rho_0 \to 0$, resulting in a “minimal-subtraction” scheme at all momentum scales, which we will be using throughout this work. Subtracting the counter-terms from the initial $S_\text{B} [\rho_0]$ therefore makes the overall on-shell action finite in the $\rho_0 \to 0$ limit and removes all contact terms. A definition of the renormalised action naturally follows from the subtracted action via the relation $$S_\text{B}^{\text{ren}} \equiv \lim_{\rho_0 \to 0} S_\text{B}^{\text{sub}} [\rho_0].$$ We further define, in analogy with $\Pi_\text{B}$, a canonical conjugate momentum $$\label{Pi} \Pi \equiv \frac{\delta S_\text{B}^{\text{sub}}}{\delta \Phi},$$ which gives, using these definitions, a redefinition of the bare action useful for construction of the Wilsonian effective action, $$\label{BareBdyAct2} S_\text{B} [\rho_0] = \frac{1}{2} \int \limits_{r=\rho_0} \!\! d^d x \Pi \Phi + S_\text{B}^{\text{c.t.}} [\rho_0].$$ The holographic counter-terms take the general form $$\label{CTBdyAct} S_\text{B}^{\text{c.t.}} [\rho_0] = - \!\!\! \int \limits_{r=\rho_0} \! \! \! d^d x \sqrt{-g} \left( \frac{\Delta_{-}}{2} \Phi^2 + \sum_{n=1}^\infty \frac{a_n}{n} \Phi^n + \frac{1}{2} \sum_{n=1}^\infty c_n \Phi \Box^n_g \Phi + ... 
\right),$$ with additional terms proportional to the Ricci curvature of $g$, possible higher derivative terms and terms arising from the conformal anomaly [@deHaro:2000xn; @Skenderis:2002wp; @Bianchi:2001kw; @Skenderis:2008dg]. In the standard quantisation, $\Delta_+ = d-\Delta_-$ is the CFT operator dimension and $\Box^n_g$ is the d’Alembertian operator on the metric $g_{\mu\nu}$. As usual, we use $\Delta_\pm = \frac{d}{2} \pm \nu = \frac{d}{2} \pm \sqrt{ \left( \frac{d}{2} \right)^2 + m^2 }$. Note that we will be working in standard Dirichlet quantisation, which will be reflected in the identification of $\phi$ as the source of the dual operator $\CO$, of which the vacuum expectation value $\langle \CO \rangle$ is determined by $\Pi$. At $\rho_0$, the source is related to the bulk scalar as $\Phi = \rho_0^{\Delta_-} \phi$. Although the multi-trace structure of the effective action [@Witten:2001ua; @Akhmedov:2002gq; @Mueck:2002gm; @Minces:2002wp] is more apparent in the mixed and alternative (Neumann) quantisations, in those quantisations unitarity restricts the operator dimensions to $\Delta_\CO = \frac{d}{2} - \nu$ with $\nu\in [0,1]$, a limitation we wish to avoid. Mixed quantisation is a hybrid of Dirichlet and Neumann boundary conditions, but behaves as the alternative quantisation at the initial cut-off. The vacuum of the theory runs towards the Dirichlet quantisation [@Vecchi:2010dd; @Hartman:2006dy]. Dirichlet and Neumann quantisations are related by a Legendre transformation and their connection has been explored in many references, among them [@Faulkner:2010jy; @Klebanov:1999tb; @Papadimitriou:2007sj; @Gubser:2002vv]. Under the RG transformation $\rho \to \rho + \delta \rho$, we begin with a flow from $\rho_0$. The bulk action will change as we reduce the bulk by integrating out slices of geometry. 
Since we are working with large-$N$ theories, this in practice means that $S_\text{bulk}$ only changes its integration region from $[\rho, \infty)$ to $[\rho + \delta\rho, \infty)$. In the Wilsonian RG, the partition function of the bare action is kept fixed under the flow. Analogously, we will fix the full action $S$ defined above so that $S_\text{B}$ will flow to compensate for the change of $S_\text{bulk}$. We saw that $S_\text{B} [\rho_0]$ was the same as the semi-classical path integral contribution from $r \in [0,\rho_0]$, implying that $S$ defines the entire bulk theory as well as its dual. Dual scalar operators $\CO$ will therefore run as bare operators in quantum field theory, and with a sensible definition of the wavefunction renormalisation, the renormalised operators will remain invariant. Using the relation between the bulk scalar $\Phi$ and the source $\phi$, we can generalise it to permit the running of the bare source. We define $$\label{PhiZDef} \Phi (\rho) = \rho^{\Delta_-} Z (\rho) \phi(\rho),$$ with $Z(\rho_0) = 1$. On dimensional grounds, from this definition, we expect $\Pi_\text{B}$ to transform under the scaling RG transformation, $\rho \to \rho = \rho_0 + \delta\rho$, as $$\label{PiBZTransform} \Pi_{\text{B}} (\rho) = \left( \frac{\rho}{\rho_0} \right)^{-\Delta_+} Z(\rho) \Pi_{\text{B}} (\rho_0).$$ Finally, we generalise the bare boundary action constructed above to permit all its coefficients and operators to run along the flow. We also, at this point, take the $\rho_0 \to 0$ limit. The action becomes $$\label{BareActRho} S_\text{B} [\rho] = \alpha (\rho) + \!\! \int \limits_{r=\rho} \!\! d^d x \sqrt{-g} \left[ \frac{1}{2} \frac{\Pi}{\sqrt{-g}} \Phi - \sum_{n=2}^{\infty} \frac{1}{n} \lambda_n \Phi^n \right],$$ where $\alpha$, $\Pi$ and $\lambda_n$ are now functions of $\rho$, which will run under the RG flow equations. 
In a fixed background $G_{MN}$, the induced $g_{\mu\nu}$’s on $d$-dimensional boundaries are fixed functions of $r$, and hence $\rho$. The polynomial terms directly correspond to the counter-terms, allowing for a potentially necessary series of such terms. Their structure is completely determined by holographic renormalisation and each term transforms under the bulk isometries. This is crucial, as bulk isometries correspond to symmetries of the dual gauge theory and we expect only such terms to arise in the Wilsonian effective action. $S_{\text{B}} [\rho]$ can then be considered as the effective action of the Wilsonian renormalisation group of composite operators $\CO$. In the alternative quantisation where $\Phi \sim \CO$, the multi-trace structure becomes immediately apparent and each $\Phi^n$ term corresponds to the $\CO^n$ effective term. Furthermore, the holographic counter-terms determine the initial values of the effective cosmological constant $\alpha$, the running conjugate momentum $\Pi$ and coefficients $\lambda_n$ at the start of the flow: $$\begin{aligned} \label{CoeffBC} \alpha (0) = 0,& &\Pi(0) = \Pi_0,& &\lambda_2(0) = \Delta_- = a_2,& & \lambda_n (0) = a_n.\end{aligned}$$ Notice that we did not include any derivative or logarithmic terms in the running action. Compared to the original counter-terms, this vast simplification in the structure of the effective terms is possible because of a well-defined $\rho_0 \to 0$ limit resulting from the asymptotically AdS structure of the considered spacetimes. For well-behaved bulk scalars and their derivatives near the initial arbitrary $\rho_0$, as ensured by the boundary condition and a smooth metric, derivative terms $\Phi^l \Box^n_g \Phi^m$ vanish in momentum space. To see this, consider first the $c_1 \Phi \Box_g \Phi$ counter-term. In the momentum space representation of $\Phi$, $e^{i k\cdot x}$ involves a contraction with the Minkowski metric and not the induced $g_{\mu\nu}$. 
The counter-term $c_1 \Phi \Box_g \Phi |_{r=\rho_0}$ therefore gives the $c_1 \rho_0^2 k^2 \Phi(k) \Phi(-k)$ momentum space contribution to the effective action in asymptotically AdS spaces. On the other hand, the coefficient in the $\Delta_{-} \Phi(k) \Phi(-k)$ term includes no factors of $\rho_0^2$. Hence, the $c_1 \Phi \Box_g \Phi$ counter-term vanishes in the limit of $\rho_0 \to 0$, where we wish to begin the RG flow at the extreme UV fixed point. We see that the boundary condition on $\lambda_2$ would still apply, even in the explicit presence of an added double-trace derivative term. The running of $c_1$ can thus be simply absorbed in $\lambda_2$. For the same reason, other counter-terms with derivatives also vanish in the limit. The logarithmic terms coming from the conformal anomaly vanish in this limit as well because they typically appear inside the coefficients of derivative terms. The coefficients of such terms behave as $\rho^n_0 ( \ln \rho_0)^m \to 0$ when $n\geq 1$ and so we may again absorb them into $\lambda_n$’s without affecting the RG flow. Another reason for this simplification is that in the RG flow equations, only derivatives with respect to $\rho$ appear and not the momentum. Additional momentum terms in the effective action $S_\text{B}$ therefore do not matter in this setup and can in general be thought of as absorbed into $\lambda_n$’s. We conclude that the polynomial structure of $\Phi^n$ terms in is sufficient to account for the entire RG flow. An example where this structure fails is the Coulomb branch of the $\CN=4$ theory, which has a counter-term $\left(1 + 1/\ln \rho_0^2 \right)\Phi^2$ at the initial $\rho_0$ cut-off boundary [@Bianchi:2001kw]. Further care then needs to be taken when specifying the boundary conditions for $\lambda_n$ and we expect that in some cases the divergent counter-terms prevent us from taking $\rho_0 \to 0$ altogether. The flow must then begin at $\rho_0 > 0$. 
This is, of course, not surprising in theories with a Landau pole, which may arise particularly in some scenarios on the Coulomb branch. We will not consider such examples in this paper and in our case the above simplification applies to all studied RG flows. The running coefficients $\lambda_n$ will become functions of $\rho^2 k^2$ as well as other physical scales along the flow. They can therefore be expanded in powers of the momentum. This implies that when we transform them back to position space, the bare effective action will organise itself as the gradient expansion with cut-off dependent terms $\frac{d_1}{\Lambda^2} \CO \Box \CO + ... + \frac{ d_n}{\Lambda^{2n}} \CO \Box^n \CO + ... $. We will therefore still find an infinite series of derivative terms, as is expected in the Wilsonian effective action, despite our keeping only the non-derivative terms as $\rho_0 \to 0$. Momentum dependence will arise from the derivative kinetic term in the bulk action and descend to all of the $\lambda_n(k)$’s through the coupled differential equations for $\lambda_n$’s. Note that we are only working with scalar bulk theories with two-derivative Lagrangians. In theories with higher derivatives, which may arise from supergravity, additional complications would occur.

Renormalisation group equations {#Sec:RG2}
-------------------------------

With an effective boundary action $S_\text{B}[\rho]$ which we built from the structure of holographic renormalisation, we now follow [@Faulkner:2010jy] and derive renormalisation group equations for the flow of the effective action. They are obtained by varying the position of the brane, $\rho \to \rho + \delta\rho$, and insisting that the overall bare action remains constant at any $r=\rho$: $$\label{HFlowEq} \partial_\rho S_\text{B} [\rho] = - \!\! \int \limits_{r=\rho} \!\! d^d x \; \CH = - \! \! \int \limits_{r=\rho} \!\! 
d^d x \left( \frac{\delta S_\text{B}}{\delta \Phi} \frac{\partial \Phi}{\partial r} - \sqrt{-G} \CL \left(\Phi, \partial_M \Phi \right) \right).$$ Since we are neglecting the metric backreaction, components of $G_{MN}$ are simply treated as functions of $r$. $\CH$ stands for the Hamiltonian density with $r$ treated as time, making this a Schrödinger-type evolution equation [@Mansfield:1999kk; @Heemskerk:2010hk], or a Hamilton-Jacobi equation [@deBoer:1999xf; @Papadimitriou:2004ap]. Using the definitions above, the flow equation can be rewritten as $$\label{HFlowEq2} \sqrt{G^{rr}(\rho)} \partial_\rho S_\text{B} = - \!\! \int \limits_{r=\rho} \!\! d^d x \sqrt{ -g} \left( \frac{1}{2 g} \left( \frac{\delta S_\text{B}}{\delta \Phi} \right)^2 + \frac{1}{2} g^{\mu\nu} \partial_\mu \Phi \partial_\nu \Phi + V(\Phi) \right),$$ where in the case of a singularity in $G_{MN}$, we must be more careful before dividing both sides by $G^{rr}$. Now since we are working with Dirichlet boundary conditions, we can insist that the bulk scalar $\Phi$ remain constant on the moving brane surface throughout the flow, i.e. $\partial_\rho \Phi = 0$. This is equivalent to a general Dirichlet problem [@Brattan:2011my] and related to the more general analysis of [@Kuperstein:2011fn]. Despite fixing $\Phi$, $S_\text{B}$ still has enough structure for the flow to run. Furthermore, it is important to keep in mind that the source $\phi$, as defined earlier, will still run. 
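As a sketch of the power-matching that follows (our summary, in the conventions above), note that the variation of the effective boundary action is $$\frac{\delta S_\text{B}}{\delta \Phi} = \Pi - \sqrt{-g} \sum_{n=2}^{\infty} \lambda_n \Phi^{n-1},$$ so the quadratic term in the flow equation contains $\Pi^2$ (which drives $\alpha$), cross terms $\Pi \lambda_{n+1} \Phi^{n}$ (which feed $\lambda_{n+1} \Pi$ into the flow of $\lambda_n$), and products $\lambda_m \lambda_{n+2-m} \Phi^{n}$, while the kinetic and potential terms supply the $k_\mu k^\mu$, $m^2$ and $b_n$ contributions; collecting each power of $\Phi$ gives the coupled equations below. 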
Matching the powers of $\Phi$, we derive the following set of differential equations describing the RG flow: $$\label{PiRGeq} \sqrt{G^{rr} } \partial_\rho \Pi = - 2 \lambda_2 \Pi ,$$ $$\begin{aligned} \frac{1}{\sqrt{-G}} \partial_\rho \left( \sqrt{-g} \lambda_2 \right) =& - \lambda_2^2 + k_\mu k^\mu +m^2 + \frac{2}{\sqrt{-g}} \lambda_3 \Pi, \label{Coeff2RGeq} \\ \frac{1}{\sqrt{-G}} \partial_\rho \left( \sqrt{-g} \lambda_n \right) =& - \frac{n}{2} \sum_{m=2}^{n} \lambda_m \lambda_{n+2-m} + \frac{n}{\sqrt{-g}} \lambda_{n+1} \Pi + b_n \label{CoeffnRGeq}\end{aligned}$$ and $$\label{AlphaRGeq} \frac{1}{\sqrt{-G}} \partial_\rho \alpha = - \frac{1}{2 g} \int \frac{d^d k}{(2\pi)^d} \Pi(k) \Pi(-k).$$ The factor of $2$ in the equation for $\Pi$ comes from the $\delta\Pi / \delta\Phi$ variation between the two canonical conjugates. Given that we fixed $\Phi$, it is not surprising that $\Pi$ has to incorporate the total running of the action, proportional to two units of $\Phi$. $\Pi$ therefore has to run differently than the bare $\Pi_{\text{B}}$. We could have equally well used the flux factor term $\CF \Phi^2$, to which we come in section \[Sec:RG3\], instead of $\Pi$. If we had treated $\Phi$ and $\Pi$ as independent, then the differential equation without the factor of $2$ would hold for $\Pi$, and the full equation would describe the running of $\Pi^2$. The analysis of these equations in various fixed backgrounds will be the subject of the following sections.

Two-point correlation functions and the Callan-Symanzik Equation {#Sec:RG3}
----------------------------------------------------------------

To connect the RG equations coming from the bulk to boundary physics, we use the fact that a renormalised two-point correlation function is completely determined by the canonical conjugate momentum $\Pi$ [@Papadimitriou:2004ap; @Papadimitriou:2004rz; @Iqbal:2008by]. 
It is given by a flux factor $\CF$, $$\label{GAsFlux} G (k) \equiv \left\langle \CO(k) \CO(-k) \right\rangle = \CF (\rho,k), ~ \text{where} ~ \CF = \Pi / \Phi.$$ The type of the two-point function (retarded, advanced, etc.) is determined by boundary conditions imposed on $\Phi$ and consequently $\Pi$. $\CF$ is extracted from the renormalised boundary action by taking functional derivatives with respect to the source $\phi$ [@Gubser:1998bc; @Witten:1998qj; @Son:2002sd; @Herzog:2002pc], $$\label{GFromDerivatives} G(k) =\rho^{2 \Delta_-} Z^2 (\rho) \frac{\delta}{\delta\Phi} \frac{\delta}{\delta\Phi} \int \!\! d^d x \; \Phi \CF(\rho) \Phi,$$ having used the relation $\Phi = \rho^{\Delta_-} Z \phi$. $G(k)$ is the fully renormalised, scale-independent two-point function. Therefore $dG / d\rho= 0$. Since everything in our equations is explicitly written out in $\rho$, we can interchangeably use $d/d\rho$ and $\partial_\rho$. Acting with $\rho\partial_\rho$ on this expression, we have $$\label{DOnG} \rho \partial_\rho \left( \frac{ \rho^{2\Delta_-} Z^2 (\rho) \Pi(\rho)}{\Phi} \right) = 0.$$ It then follows from the initial conditions that $\lambda_2$ can be conveniently written as $$\label{L2Def} \lambda_2 (\rho) = \Delta_- + \gamma(\rho), ~ \text{with}~ \gamma(0) = 0.$$ Using $\partial_\rho \Phi = 0$ along with the RG equation for $\Pi$ thus gives $$\label{GammaAsZ} \gamma = \frac{ \partial \ln Z}{\partial \ln \rho}$$ for metrics, such as pure AdS, with $G^{rr} = r^2$. This equation reveals the expected relation between the anomalous dimension $\gamma$ of the dual field theory operator $\CO$ and the wavefunction renormalisation of $\Phi$, and consequently $\Pi$. 
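As an illustration of the fixed-point structure behind these definitions, the following minimal numerical sketch (ours, not part of the original analysis) integrates the zero-momentum double-trace flow in pure AdS. With $\sqrt{-G} = \rho^{-(d+1)}$, $\sqrt{-g} = \rho^{-d}$, $k=0$ and $\lambda_{n \geq 3} = 0$, the $\lambda_2$ flow equation reduces to $\rho \, \partial_\rho \lambda_2 = d \lambda_2 - \lambda_2^2 + m^2$, whose fixed points are $\lambda_2 = \Delta_\pm$. For the illustrative choice $d=4$, $m^2 = -3$ (so $\Delta_- = 1$, $\Delta_+ = 3$), a flow starting just above the UV fixed point $\Delta_-$ runs to $\Delta_+$ in the IR:

```python
# Illustrative sketch (not from the paper): the zero-momentum double-trace
# flow in pure AdS,  rho d(lambda2)/d(rho) = d*lambda2 - lambda2^2 + m^2,
# written as an autonomous ODE in t = ln(rho) and integrated with RK4.

d = 4
m2 = -3.0                       # chosen so Delta(Delta - d) = m^2 has integer roots
nu = (d**2 / 4 + m2) ** 0.5     # nu = 1
delta_minus, delta_plus = d / 2 - nu, d / 2 + nu

def rhs(lam):
    # d(lambda2)/d(ln rho); vanishes at the fixed points lambda2 = Delta_-, Delta_+
    return d * lam - lam**2 + m2

lam, t, h = delta_minus + 1e-2, -7.0, 0.01   # start just above the UV fixed point
while t < 10.0:
    k1 = rhs(lam)
    k2 = rhs(lam + 0.5 * h * k1)
    k3 = rhs(lam + 0.5 * h * k2)
    k4 = rhs(lam + h * k3)
    lam += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

print(lam)   # the flow terminates near the IR fixed point Delta_+ = 3
```

Linearising around $\Delta_-$ gives $\rho \, \partial_\rho \gamma = 2\nu\gamma$, so a perturbation $\gamma \sim \rho^{2\nu}$ grows towards the IR; this matches the earlier remark that the vacuum of the theory runs towards the Dirichlet (standard) quantisation with dimension $\Delta_+$.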
A cut-off dependent two-point function, disposing of the $\rho^{2\Delta_-}$ factor which is a consequence of the conformal (scale-invariant) scaling, obeys the Callan-Symanzik renormalisation group equation $$\label{CSEq1} \left( \rho \frac{\partial}{\partial \rho} + 2 \gamma \right) \tilde G (\rho,k) = 0.$$ The two-point function $\tilde G$, which is proportional to $\Pi$, has been holographically renormalised in our construction and is therefore finite in the $\rho_0 \to 0$ limit. Its solution has the standard form of a running two-point function, $$\tilde G (\rho, k) = \lim_{\rho_0\to 0} \tilde G (\rho_0, k) \exp \left\{ - 2 \int_{r=\rho_0}^{r=\rho} \gamma(r) d \ln r \right\},$$ so that $\tilde G(\rho) = Z^{-2} (\rho) \tilde G (\rho = 0)$. The full running dimension of the operator $\CO$ is therefore $\Delta_\CO = \Delta_+ - \gamma$. Setting $\lambda_n = 0 $ for $n \geq 3$, we see that the $\lambda_2$ flow equation becomes the renormalisation group equation for the anomalous operator dimension. Alternatively, this equation also describes the flow of the double-trace coupling $f$ in $f \CO\CO$, with $\rho \partial_\rho \gamma$ proportional to its beta function $\beta_f$ [@Vecchi:2010dd; @Faulkner:2010jy; @Heemskerk:2010hk]. Given that the $\lambda_2$ flow equation is quadratic, we anticipate $\beta_f \propto \gamma^2$. These results are indeed consistent with field theory calculations of large-$N$ theories in [@Pomoni:2008de], where it was found that $\gamma \propto f$ and $\beta_f \propto f \gamma \propto \gamma^2$ in large-$N$ theories conformal in their single-trace sector. Furthermore, we can now justify setting $\lambda_{n\geq3} = 0$. Coefficients $\lambda_n$ are similarly related to multi-trace couplings of the $n$-th order. In large-$N$ analysis, we may safely neglect all deformations of order higher than two (double-trace), as all such deformations are sub-dominant in the large-$N$ limit [@Pomoni:2008de]. 
Another reason is that by working in the standard (Dirichlet) quantisation, the smallest possible operator dimension is $\Delta_\CO = d/2$, implying that any triple- or higher-trace operator would necessarily be irrelevant. Even in the alternative (Neumann) quantisation, at the unitarity bound with $\Delta_\CO = d/2-1$, such an operator could at most have a marginal triple-trace deformation in $d=6$ dimensions. In scenarios where $G^{rr}$ is a more complicated function of $r$, we need to redefine the quantities considered above. We write $$\label{L2DefGrr} \lambda_2 (\rho) = \frac{\sqrt{ G^{rr} (\rho) }}{\rho} \left( \Delta_- + \gamma(\rho) \right).$$ To demonstrate that this is a sensible definition, we consider an example of a near-extremal D3 brane generalised to any dimension, which gives temperature to the dual of the pure AdS geometry. This spacetime is also known as the black brane in AdS and has the metric of the form $$\label{BBAdSMetric0} ds^2 = \frac{L^2}{r^2} \left( - f(r) dt^2 + d\vec{x}^2_{d-1} + \frac{dr^2}{f(r)} \right),$$ with the thermal factor behaving as $f(r) \to 1$ when $r \to 0$. As before, we set the AdS radius to $L=1$. The metric is general enough to describe all cases with non-zero temperatures, which we will analyse in section \[Sec:AnomDim\]. With $G^{rr} = r^2 f(r)$, the expression takes the form $$\label{L2DefT} \lambda_2 (\rho) = \sqrt{ f (\rho) } \left( \Delta_- + \gamma(\rho) \right),$$ which still obeys the initial condition $\lambda_2(0) = \Delta_-$. The flow equation for $\Pi$ then loses its explicit dependence on the thermal factor, giving, as before, $$\left( \rho \frac{\partial}{\partial \rho} + 2 \Delta_- + 2 \gamma \right) \Pi = 0.$$ Using this definition in the $\lambda_2$ flow equation will, however, modify the equation for the anomalous dimension $\gamma$ by adding thermal corrections to it. 
This relation and its thermal rescaling can also be understood directly from the equation of motion, $\frac{1}{\sqrt{-G}} \partial_r \left(\sqrt{-G} G^{rr} \partial_r \Phi \right) + G^{\mu\nu} \partial_\mu \partial_\nu \Phi - m^2 \Phi = 0$, for a massive scalar field. We follow the standard procedure for determining the operator dimensions in AdS/CFT by analysing solutions near the asymptotically AdS boundary ($r \ll 1$). In pure AdS with the metric above, $\Phi$ behaves there as $r^\Delta$. Using the equation of motion then gives a relation $\Delta (\Delta - d) = m^2$, where in our Dirichlet (standard) quantisation we take $d - \Delta \equiv \Delta_+$ to be the dimension of the dual operator $\CO$ to $\Phi$. The other, smaller, solution is our $\Delta_- \equiv \Delta$. Using the same behaviour of $\Phi$ in the black brane background near $r \approx 0$, which is justified by the asymptotic structure and the Fefferman-Graham expansion [@Fefferman:1985], immediately yields a modified relation $\Delta (\Delta - d) f(r) = m^2$. Now of course in the limit of $r \to 0$, $f(r) \to 1$, and so the dimension of the dual operator at the AdS infinity stays independent of the thermal factor. But this identity suggests that the thermal factor will contribute to the dimensions of scalar operators as we flow into the bulk, towards the horizon. It gives a thermal modification of $\Delta_\pm \to \sqrt{f(r)} \Delta_\pm$, with the same scaling as in the thermal definition of $\lambda_2$. The radial derivative in the bulk is related to the dilatation operator on the field theory side of the duality [@Papadimitriou:2004ap; @Papadimitriou:2004rz]. Hence, we can think of the rescaled anomalous dimension as arising from the red-shifted dilatation operator. 
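As a short check of the modified relation (our computation, in the conventions above), insert the near-boundary ansatz $\Phi \sim r^\Delta$ into the radial part of the equation of motion. Both the pure AdS metric and the black brane have $\sqrt{-G} = r^{-(d+1)}$, since the factors of $f$ cancel in the determinant, while $G^{rr} = r^2 f(r)$. Then $$\frac{1}{\sqrt{-G}} \partial_r \left( \sqrt{-G} G^{rr} \partial_r r^\Delta \right) = \Delta (\Delta - d) f(r) \, r^\Delta + \Delta f'(r) \, r^{\Delta+1},$$ and the $f'$ term is subleading near the boundary for a smooth thermal factor, so equating the leading piece with $m^2 r^\Delta$ reproduces $\Delta (\Delta - d) f(r) = m^2$, which reduces to the standard $\Delta (\Delta - d) = m^2$ as $f \to 1$. 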
Energies in the field theory and thus the operator scaling dimensions are measured as being conjugate to the proper time on the flowing hypersurface at $\rho$.[^1]

Anomalous dimensions and double-trace deformations {#Sec:AnomDim}
==================================================

In this section, we study the renormalisation group equation describing the radial evolution of the anomalous operator dimension induced by effective double-trace deformations. We aim to formulate a method for determining a precise functional relation, $\Lambda(\rho,...)$, between energy scales in the brane’s QFT and their dual bulk quantities. Neglecting all but double-trace deformations, as argued in section \[Sec:RG3\], the relevant renormalisation group equation is given by $$\label{DTRGEq} \frac{1}{\sqrt{-G}} \partial_\rho \left( \sqrt{-g} \lambda_2 \right) = - \lambda_2^2 - \left( \frac{d}{2} - \nu \right) \left(\frac{d}{2} + \nu \right) + g^{\mu\nu} \eta_{\mu\rho} \eta_{\nu\sigma} k^\rho k^\sigma,$$ which we will analyse in various asymptotically AdS backgrounds. We have explicitly written out contractions of the physical brane momentum $k^\mu$ with the flat Minkowski metric $\eta_{\mu\nu}$. We use the previously established definitions of $\lambda_2$, i.e. $\lambda_2 (\rho) = \Delta_- + \gamma(\rho)$, and $\lambda_2(\rho) = \sqrt{f(\rho)} \left( \Delta_- + \gamma(\rho) \right)$ in thermal cases. For a well-defined and realistic RG flow, it is essential that we introduce physical conditions on the behaviour of $\lambda_2$. Understanding how $\lambda_2$ should behave at the two ends of the flow will then translate into the necessary boundary conditions required to extract the dependence of the cut-off on the bulk from this equation. In quantum field theory, the wavefunction renormalisation $\CZ$ is interpreted as the probability of finding a bare particle in a physical one-particle state [@Higashijima:2003et; @Peskin:1995ev]. 
This can easily be understood from the Källén-Lehmann spectral representation of a two-point function. As a consequence, $\CZ$ must satisfy the unitarity bound $0\leq \CZ \leq 1$, or equivalently, the spectral density function has to remain positive. Perturbatively, $\CZ = 1 - g^2 A \ln \Lambda^2 / \mu^2 $ at one loop, where $\mu$ is the renormalisation scale and the ratio $\Lambda/\mu$ is always required by dimensional analysis. In theories that are conformal in their single-trace sector, all higher-loop corrections are subleading in the large-$N$ expansion [@Witten:2001ua; @Pomoni:2008de]. Therefore, $\CZ = 1$ at $\Lambda = \mu$. The renormalisation scale $\mu$ can in our construction be identified with the physical Lorentzian momentum $\sqrt{-k^2}$ at which we are probing the operator. This is because we have used the same “minimal subtraction” scheme in holographic renormalisation at the initial cut-off $\Lambda_0 (\rho_0)$ for any value of the physical operator momentum of interest. In parallel with the discussion of the wavefunction renormalisation, a bare two-point function of an operator with dimension $\Delta$ can be schematically expanded around its anomalous dimension, to the leading order in large-$N$, as $$\label{PertAnomDimExp} \left\langle \CO(x) \CO(0) \right\rangle_\Lambda = \CZ^2 \left\langle \CO(x) \CO(0) \right\rangle = \frac{c}{x^{2 \Delta}} \left(1 - 2 g^2 A \ln \Lambda^2 x^2 + ... \right).$$ Since $x$ scales as $1/\sqrt{-k^2}$, we again recover that $\CZ = 1$ at $\Lambda = \sqrt{-k^2}$. In general, we expect both expansions for $\CZ$ and $\left\langle \CO(x) \CO(0) \right\rangle_\Lambda$ to have all higher-order terms proportional to powers of $\ln \Lambda^2 / k^2$. Hence, we expect that $\CZ$ always takes the value of $\CZ = 1$ at $\Lambda = \sqrt{-k^2}$, to all orders in large-$N$. More generally, the theories we wish to consider all have conformal fixed points in the extreme UV. 
These large-$N$ theories, initially conformal in their single-trace sector, will only be perturbed by relevant IR mass and thermal scale deformations. Furthermore, since we are working with the standard (Dirichlet) quantisation, double-trace deformations in the effective Wilsonian action will be irrelevant and therefore only affect the UV regime. Now, quantities such as the anomalous dimension must become cut-off independent at the end of the RG procedure. This is ensured by imposing the appropriate renormalisation condition, which fixes the value of $\Delta_\CO$ with respect to the running cut-off. We therefore demand that $d \gamma / d \Lambda = 0$ at the RG scale $\mu$. It is natural, as discussed above, that this scale should equal the momentum scale $\sqrt{-k^2}$ of the operator $\CO$. The $\sqrt{-k^2} = \mu$ scale must thus be the minimal scale down to which we can integrate out higher momentum modes, even in the presence of IR scales below the operator momentum. In extracting the cut-off dependence on the bulk, we will not consider any examples with physical momenta below the IR mass scales, which would require us to integrate out those scales in order for the cut-off to run down to $\sqrt{-k^2}$. In all cases we wish to consider, the running Wilsonian cut-off scale $\Lambda$ can only exist in the interval of $\Lambda \in [\Lambda_{\times} , \Lambda_0 \to \infty ]$, where the lowest possible cut-off $\Lambda_\times$ satisfies $\Lambda_\times = \mu = \sqrt{-k^2}$. Given our holographic construction of the Wilsonian renormalisation group in section \[Sec:RG\], the cut-off should become a function of the running scale $\rho$. It thus follows that in order for the physical operator dimension to be $\rho$-independent, we must impose the renormalisation condition $$\label{RGCon1} \frac{\partial \gamma}{\partial \rho} = 0 ~\text{at}~\Lambda_\times(\rhox) = \mu = \sqrt{-k^2},$$ while keeping all other scales fixed. 
The lowest possible value of the Wilsonian cut-off $\Lambda_\times$, given some physical timelike operator momentum $\sqrt{-k^2}$, must therefore be a function of the largest possible value of $\rho$ that the brane can reach while flowing into the bulk. The variable $\rhox$ is used throughout to indicate the value of $\rho \in [\rho_0\to 0,\rhox]$ where the RG flow must terminate. When $\rho$ and $k$ are the only two scales present in the theory, as in the case of the conformal $\CN=4$ theory, then each of the dimensionless running quantities $Z$ and $\gamma$ can only depend on the dimensionless product $\rho^2 k^2$. Therefore $\rho \partial \gamma/\partial \rho= \sqrt{-k^2} \partial \gamma / \partial \sqrt{-k^2}$, which implies that the renormalisation condition can be rewritten as $\partial\gamma / \partial \sqrt{-k^2} = 0$ when $\rho \neq 0$ and $\sqrt{-k^2} \neq 0$. In such a theory, the physics at the cut-off, where $\sqrt{-k^2} = \Lambda_\times$, remains conformal and only becomes modified by double-trace deformations below the cut-off scale, i.e. $\sqrt{-k^2} < \Lambda_\times$. This follows from the fact that in large-$N$ theories conformal in their single-trace sector the beta function of the double-trace coupling $f \CO^2$ behaves as $\beta_f \propto \gamma^2 \propto \mu \partial_\mu \gamma$ [@Pomoni:2008de]. If such a theory is deformed by IR energy scales such as a mass scale or temperature, the renormalisation group condition must still hold. The theory will still possess a conformal fixed point in the extreme UV, but the physics at the lowered cut-off scale will no longer be conformal. To see this, let us introduce a mass scale $\CM$, which is dual to a bulk scale at some radial position $r_1$, such that $\CM \propto 1/r_1$. 
The function $\gamma$ can now depend on three dimensionless combinations $\rho^2 k^2$, $r_1^2 k^2$ and $\rho^2 / r_1^2$, which implies that $\rho \partial \gamma/\partial \rho= \sqrt{-k^2} \partial \gamma / \partial \sqrt{-k^2} - r_1 \partial \gamma / \partial r_1$. The RG condition then fixes $ \sqrt{-k^2} \partial \gamma / \partial \sqrt{-k^2} = r_1 \partial \gamma / \partial r_1 \neq 0$ at $\Lambda_\times = \sqrt{-k^2}$, which shows that the theory is no longer scale-invariant at the running cut-off. It is easy to see that the same argument can be extended to the presence of several mass scales and temperature. Let us now address the question of what value the anomalous dimension $\gamma(\rhox)$ should take when we impose the renormalisation conditions. The second renormalisation condition must reflect the fact that the bare two-point function, which depends on $\Lambda_0 (\rho_0)$, should be $\Lambda(\rho)$-independent at the renormalisation scale $\mu = \sqrt{-k^2}$. This can be justified by the fact that the only divergences present in the theories under consideration are the UV divergences, which are removed by the minimal subtraction at $\rho_0$. No additional IR divergences affect the bare two-point function, nor the anomalous dimension. Since we acquire no new divergences in the process of integrating out the slices of geometry, we can therefore set the bare two-point function to its initial value at the scale $\mu$ where we impose the two renormalisation conditions. In our holographic construction, the bare running two-point function must thus be $\rhox$-independent at the scale $\Lambda_\times= \mu = \sqrt{-k^2}$ where the flow terminates. With these observations in mind, we can find $\gamma(\rhox)$ using the results from sections \[Sec:RG2\] and \[Sec:RG3\]. The bare operator $\CO$ scales proportionally to $\Pi_\text{B}$ and not $\Pi \propto \CF \propto \tilde G$, which runs as the holographically renormalised, scale-dependent two-point function.
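The two chain-rule identities used in the scaling arguments above can be verified symbolically. A quick sketch (`sympy` is assumed; $q$ stands for $\sqrt{-k^2}$, and the function names are placeholders):

```python
import sympy as sp

rho, r1, q = sp.symbols('rho r_1 q', positive=True)  # q stands for sqrt(-k^2)
F, G = sp.Function('F'), sp.Function('G')

# Conformal case: gamma depends only on the combination rho^2 k^2 = -rho^2 q^2.
g_conf = G(-rho**2 * q**2)

# With a mass scale dual to r_1: gamma can depend on the three combinations
# rho^2 k^2, r_1^2 k^2 and rho^2 / r_1^2.
g_mass = F(-rho**2 * q**2, -r1**2 * q**2, rho**2 / r1**2)

# rho d/drho = q d/dq in the conformal case ...
conf_id = rho * sp.diff(g_conf, rho) - q * sp.diff(g_conf, q)
# ... and rho d/drho = q d/dq - r_1 d/dr_1 once the extra scale is present.
mass_id = rho * sp.diff(g_mass, rho) - (q * sp.diff(g_mass, q) - r1 * sp.diff(g_mass, r1))
```

Both identities reduce to the statement that a dimensionless running quantity can only depend on dimensionless combinations of the available scales.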
Namely, $$\label{OwithZ} \CO (\rho) = \left( \frac{\rho}{\rho_0} \right)^{-\Delta_+} Z(\rho) \CO(\rho_0).$$ $Z(\rho)$ in our construction is analogous to $\CZ$ in a general field theory, but without the LSZ normalisation. It quantifies the renormalisation-group scale transformations. The generating functional of the dual field theory $\left\langle \exp \{ \int \!\! \CO \phi \} \right\rangle$ gives $\langle \CO \CO \rangle$ after taking two functional derivatives with respect to $\phi$. Using with $Z(\rho_0)=1$ and $\partial_\rho \Phi = 0$, as well as , we see that the two-point function on the field theory side scales as $$\label{OOwithZ} \left \langle \CO(k) \CO (-k) \right\rangle_\rho \propto \frac{\delta}{\delta \phi} \frac{\delta}{\delta\phi} \int \!\! d^d x \; \CO \phi = \left(\frac{\rho}{\rho_0} \right)^{-2 \nu} Z^2 (\rho) \frac{\delta}{\delta \phi_0} \frac{\delta}{\delta\phi_0} \int \!\! d^d x \; \CO_0 \phi_0.$$ It therefore follows that $$\label{CutoffCond1} Z (\rhox) = \left(\frac{\rhox}{\rho_0} \right)^{\nu},$$ where the flow terminates at $\rhox$. The bare two-point function $\left \langle \CO(k) \CO (-k) \right\rangle_\rho$ remains divergent when we take $\rho_0 \to 0$. This divergence is, as discussed above, completely removed by the minimal subtraction through the holographic renormalisation of $\CO$ and $\phi$. We can therefore see that the second renormalisation condition is completely analogous to perturbatively finding that $\CZ = 1$ in expression , even in the presence of additional relevant deformations. Their presence will only be reflected in the functional dependence of $Z$. Using and , the boundary conditions for $\gamma$ at the two ends of the flow become $$\begin{aligned} \label{GammaBC} \gamma(0) = 0& &\text{and}& &\gamma_\times \equiv \gamma(\rhox) = \nu.\end{aligned}$$ It is worth noting that the rescaling in our construction works in the opposite way compared to the usual Wilsonian RG.
Usually one rescales the theory upwards after integrating out high momentum modes above $\Lambda$. The rescaled effective theory thus remains defined up to the initial $\Lambda_0$ and the anomalous dimension can be extracted from the rescaling of the field variable. We are, on the other hand, rescaling the original ($\sqrt{-k^2} \leq \Lambda_0$) fields and operators down to the effective theory defined up to $\Lambda$. It follows from that the effective operator dimension at $\rhox$ is $\Delta_\CO = d/2$. This is true regardless of our choice of quantisation. The coupling $f$ in the effective double-trace term $f \CO^2$ will consequently have its mass dimension equal to zero at $\rhox$. In the absence of mass scales or temperature, it is easy to see that this fact is consistent with our expectation that an effective field theory of a single-trace UV conformal theory should remain conformal at the maximal allowed momentum, right at the cut-off scale. This is because the conformal anomaly (trace anomaly) arises when UV divergences make coupling constants acquire non-zero anomalous mass dimensions [@Callan:1970ze]. The breaking of conformal invariance by quantum fluctuations is therefore a purely UV effect. For operator momenta at the cut-off scale, however, all UV modes had been completely removed, which ensures a zero coupling dimension at all loops. Below the cut-off, when $\sqrt{-k^2} < \Lambda_\times$, operators obtain non-trivial anomalous dimensions depending on their momentum. The double-trace coupling also acquires a non-zero mass dimension. In the presence of relevant mass scales a similar argument can be repeated for the behaviour at the cut-off if we consider rescaling the entire theory along a symmetry of $\gamma$ and $Z$, including all the IR scales, up to the initial extreme UV fixed point at $\Lambda_0$. This works as long as we stay above the IR scales and as long as we have not introduced any new IR divergences. 
The value of $\gamma(\rhox)$ should stay the same in those cases even though the theory will no longer be at a fixed point for $\Lambda < \Lambda_0$, where now $\partial \gamma_\times / \partial \sqrt{-k^2} \neq 0$. For a well-defined RG flow, having already specified the renormalisation conditions, we can also impose the condition that $\gamma$ must remain real, as flowing to a complex operator dimension would imply an unstable theory. Such dynamical symmetry breaking would also inevitably break conformal symmetry [@Pomoni:2008de]. Furthermore, $\gamma$ must not break the unitarity bound $-\infty < \gamma \leq \nu + 1$, which follows from $d/2 - 1 \leq \Delta_\CO = d/2 + \nu - \gamma$. We will see from the analysis that $\gamma$ never decreases throughout the flow, so that $\partial_\rho \gamma \geq 0$. This is expected as it runs between $\gamma = 0$ and $\gamma = \nu$, ending with $\partial_\rho \gamma_\times = 0$. Furthermore, in our examples, there is no reason to expect the theory to run into another fixed point along the flow, so that $\partial_\rho \gamma > 0$ for $\rho \in (0,\rhox)$. This immediately implies that $\gamma \geq 0$ for all $\rho$. The only exceptions to this behaviour are the runnings of lightlike (or vacuum) operators, which we will have to treat separately. As is usual in quantum field theory, there is no smooth limit of momentum $k^2 \to 0$ that we could take to simply recover the physics of $\CO(k^2=0)$ from timelike cases. In any case, since lightlike operators should be able to run into the extreme IR of the bulk ($\rho\to\infty$), there is no reason to expect that our analysis, which uses sharp IR cut-offs that abruptly terminate the flow, would apply to such lightlike runnings and set $\gamma = \nu$ and $\partial_\rho \gamma = 0$ at the point where the flow terminates at the horizon or at a general metric singularity.
This applies to all cases, the GPPZ, the black brane, etc., with the exception of the undeformed pure AdS scenario, where we can flow until $r\to\infty$ without running into singularities. In that case, it is natural to expect that $\partial_\rho \gamma (\rho\to\infty) = 0$, so that the flow ends at an IR fixed point. As for the relevance of vacuum states, we lastly note that we will always subtract off the $k^2 = 0$, $\langle \CO(0) \CO(0) \rangle$ vacuum contribution to correlators $\langle \CO(k) \CO(-k) \rangle$, which means that $\gamma (\rho,k) = 0$ at $k^2 \to 0$. Furthermore, since the effective double-trace deformations are irrelevant, the anomalous dimension at low momenta will not be affected by them. With these observations in mind, we impose the following four conditions, which the behaviour of $\gamma$ is expected to follow and from which we can extract the dependence of $\Lambda$ on the bulk by identifying where the flow terminates:

*i)* $\frac{\partial \gamma}{\partial \rho} = 0$ and $\gamma = \nu$ at $\sqrt{-k^2} = \Lambda_\times$, given fixed timelike $\sqrt{-k^2}$, where the flow terminates at a maximal possible $\rho = \rhox$,

*ii)* $\gamma$, as well as $\lambda_2$, must be real,

*iii)* $\gamma$ must be non-singular and must not break the unitarity bound $-\infty < \gamma \leq \nu + 1$,

*iv)* $\gamma$ is expected to be monotonically increasing (never decreasing), $\frac{\partial\gamma}{\partial\rho} \geq 0$, from initial $\gamma(0) = 0$, hence $\gamma \geq 0$ for all $\rho$.

Pure $AdS_{d+1}$ {#Sec:AnomDimAdS}
----------------

As our first example, let us find the anomalous dimension of a scalar operator dual to a massive scalar in pure $AdS_{d+1}$, with the metric in Poincaré coordinates given by equation . This example describes a large-$N$ theory conformal in its single-trace sector.
For completeness, we repeat the metric here: $$\label{AdSMet1Rep} ds^2 = \frac{1}{r^2} \left( -dt^2 + d\vec{x}^2_{d-1} +dr^2 \right).$$ Writing $\lambda_2 (\rho) = \Delta_- + \gamma(\rho)$, the equation for the flow of $\gamma$ becomes $$\label{EqGamma1} \rho \frac{\partial \gamma}{\partial \rho} = - \gamma \left( \gamma - 2 \nu \right) + \rho^2 k^2.$$ To see how condition *iv)* applies, we analyse the case of a timelike physical momentum $k^2 < 0$. This means that $\rho^2 k^2 \leq 0$. Since $\rho \frac{\partial \gamma}{\partial \rho} \geq 0$, the only way for the right-hand side of to be non-negative is if $\gamma - 2 \nu$ remains sufficiently negative while $\rho$ increases. To see this, note that at $\rho=0$, $\gamma (\gamma - 2\nu) = 0$, as well as $\rho^2 k^2 = 0$. Now, as $\rho$ increases, the first term $-\gamma(\gamma-2\nu) \geq 0$ grows larger until $\gamma > 2 \nu$, at which point it flips sign and becomes negative. The second term, however, decreases monotonically into negative values and may quickly begin to dominate over the first term, running the entire right-hand side of into negative values before $\gamma = 2\nu$. The function $\gamma(\rho)$ therefore reaches its maximal value $\gamma_\times$, at some $\rhox$, when $- \gamma_\times \left( \gamma_\times - 2 \nu \right) + \rhox^2 k^2 = 0$. The two possible solutions, $\gamma_\times = \nu \pm \sqrt{ \nu^2 + \rhox^2 k^2 }$, can only be real and consistent with condition *ii)* if $\rhox \sqrt{-k^2} \leq \nu$. But given that we seek maximal $\rhox$, this inequality implies that $\rhox = \nu / \sqrt{-k^2}$. The largest possible value of $\gamma$, given some timelike momentum and $\nu$, is therefore $\gamma_\times = \nu$ irrespective of our choice of solution, confirming the renormalisation conditions specified by *i)*.
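This termination behaviour is easy to confirm numerically. The sketch below (illustrative parameter values, `scipy` assumed) integrates the flow equation backwards from the terminal point $(\rhox, \gamma_\times) = (\nu/\sqrt{-k^2}, \nu)$ and checks that the flow connects monotonically to the UV boundary condition $\gamma(0) = 0$:

```python
from scipy.integrate import solve_ivp

nu = 0.5          # illustrative choice of nu
q = 1.0           # q = sqrt(-k^2), timelike momentum scale
rhox = nu / q     # terminal point of the flow, rho_x = nu / sqrt(-k^2)

def flow(rho, gamma):
    # rho dgamma/drho = -gamma (gamma - 2 nu) + rho^2 k^2, with k^2 = -q^2
    return (-gamma * (gamma - 2.0 * nu) - rho**2 * q**2) / rho

# Integrate backwards from (rhox, nu) towards the AdS boundary rho -> 0;
# the unique solution through the terminal point is the RG trajectory.
sol = solve_ivp(flow, (rhox, 1e-4), [nu], rtol=1e-10, atol=1e-12)

gamma_uv = sol.y[0][-1]   # should approach gamma(0) = 0
```

The trajectory starts at $\gamma_\times = \nu$ with vanishing derivative and decreases monotonically (in the backward direction) to zero, consistent with conditions *i)*–*iv)*.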
Despite this, the correct solution would be the one with the minus sign because at $\rhox = 0$, as required when $-k^2 \to \infty$, $\gamma_\times$ should vanish for it to be consistent with the initial condition . Only in this case can $\gamma$ and its derivative be continuous and non-singular. Furthermore, since the right-hand-side of vanishes at $\rhox>0$, we clearly have a maximum $\frac{\partial \gamma_\times}{\partial \rho} =0$ for any timelike operator momentum at the point where the RG flow terminates. Note that we did not have to impose the vanishing of the derivative, as dictated by the renormalisation condition *i)* at $\rhox$ by hand, even though it followed from a general field theory analysis. It is automatically satisfied, as is *iii)*, and we chose to start with condition *iv)* to show the internal consistency of the four conditions. Alternatively, we could have started with the renormalisation conditions *i)*. Looking for the maximum would imply that $\gamma_\times = \nu$ at a maximal possible $\rhox$. It can then be seen that condition *iv)* is also satisfied in order for $\gamma$ to obey *ii)* and *iii)*. This means that when *ii)* and *iii)* are enforced, *i)* implies *iv)*, but also *iv)* implies *i)*. Therefore, the two conditions are equivalent: $\textit{i)} \Leftrightarrow \textit{iv)}$. The solution of equation can be found explicitly for the anomalous dimension with $\gamma=0$ at $k=0$. Written in terms of Bessel functions, it takes the form $$\begin{gathered} \label{EtaAdSSol} \gamma(\rho) = \rho \sqrt{-k^2} \left\{ \frac{ Y_{\nu -1}(\nu ) J_{\nu -1}( \rho \sqrt{-k^2}) - J_{\nu -1}(\nu ) Y_{\nu -1}( \rho \sqrt{-k^2}) }{ \left[ Y_{\nu -1}(\nu ) - Y_{\nu }(\nu ) \right] J_{\nu }( \rho \sqrt{-k^2}) - \left[ J_{\nu -1}(\nu ) - J_{\nu }(\nu ) \right] Y_{\nu }( \rho \sqrt{-k^2}) } - \right. \\ \left. 
- \frac{ Y_{\nu }(\nu ) J_{\nu -1}( \rho \sqrt{-k^2}) - J_{\nu }(\nu ) Y_{\nu -1}( \rho \sqrt{-k^2}) }{ \left[ Y_{\nu -1}(\nu ) - Y_{\nu }(\nu ) \right] J_{\nu }( \rho \sqrt{-k^2}) - \left[ J_{\nu -1}(\nu ) - J_{\nu }(\nu ) \right] Y_{\nu }( \rho \sqrt{-k^2}) } \right\},\end{gathered}$$ with the range of $0 \leq \rho \leq \frac{\nu}{\sqrt{-k^2}}$. Plots of two different solutions with $\nu = 0.5$ and $\nu = 1$ are shown in Figure \[fig:AdSGamma\]. ![Solutions for anomalous dimensions $\gamma$ with $\nu_1 = 1$ (dashed curve) and $\nu_2 = 0.5$ (solid curve).[]{data-label="fig:AdSGamma"}](AdSGamma) Note that the solution is well-defined for all real $\nu > 0$, above the Breitenlohner-Freedman bound [@Breitenlohner:1982jf]. For non-integer values of $\nu$, can be simplified to give $$\label{EtaAdSSolNonInt} \gamma(\rho) = \rho \sqrt{-k^2} \frac{ \left[ J_{-\nu -1} (\nu ) + J_{-\nu } (\nu ) \right] J_{\nu -1}( \rho \sqrt{-k^2}) + \left[ J_{\nu -1}(\nu )-J_{\nu }(\nu ) \right] J_{1-\nu }( \rho \sqrt{-k^2})}{\left[ J_{-\nu -1}(\nu )+J_{-\nu }(\nu ) \right] J_{\nu }( \rho \sqrt{-k^2})+ \left[ J_{\nu }(\nu ) - J_{\nu -1}(\nu ) \right] J_{-\nu }( \rho \sqrt{-k^2})}.$$ The interpretation of how $\rho$ translates into the Wilsonian cut-off $\Lambda$ is now clear, as is the fact that the flow of $\gamma$ corresponds to the Wilsonian renormalisation group. If we reverse the argument, keep $\sqrt{-k^2}$ arbitrary, and insist on integrating out geometry between $0\leq\rho\leq\rhox$, then there is a limited interval of timelike momenta that operators can take after integration. The relation is $$\label{KRhoxIneqAdS} \sqrt{-k^2} \leq \nu / \rhox,$$ which implies the presence of a hard momentum cut-off on the brane side of holographic duality, induced by the sliding brane. It is precisely the hard Wilsonian UV cut-off with the Lorentzian signature $\sqrt{-k^2} \leq \Lambda$, that defines the energy scale up to which the effective field theory is valid. 
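The closed-form solution above can be tested directly against the flow equation. Collecting terms, it is the ratio $\gamma = x\,C_{\nu-1}(x)/C_\nu(x)$ with $x = \rho\sqrt{-k^2}$ and the cylinder function $C_\alpha = a J_\alpha + b Y_\alpha$, whose constants are fixed at $x = \nu$. A verification sketch (`scipy` assumed, sample parameter values):

```python
from scipy.special import jv, yv

def gamma_ads(rho, q, nu):
    """Closed-form anomalous dimension in pure AdS, written as
    x * C_{nu-1}(x) / C_nu(x) with x = rho * q and
    C_a = [Y_{nu-1}(nu) - Y_nu(nu)] J_a + [J_nu(nu) - J_{nu-1}(nu)] Y_a,
    obtained by collecting terms in the two fractions of the solution."""
    x = rho * q
    a = yv(nu - 1.0, nu) - yv(nu, nu)
    b = jv(nu, nu) - jv(nu - 1.0, nu)
    num = a * jv(nu - 1.0, x) + b * yv(nu - 1.0, x)
    den = a * jv(nu, x) + b * yv(nu, x)
    return x * num / den

nu, q = 1.0, 1.0
rhox = nu / q

# Boundary value at the end of the flow: gamma(rhox) = nu.
g_end = gamma_ads(rhox, q, nu)

# Residual of rho dgamma/drho + gamma (gamma - 2 nu) + rho^2 q^2 = 0,
# checked with a central finite difference at an interior point.
rho0, h = 0.3 * rhox, 1e-6
g0 = gamma_ads(rho0, q, nu)
dg = (gamma_ads(rho0 + h, q, nu) - gamma_ads(rho0 - h, q, nu)) / (2.0 * h)
res = rho0 * dg + g0 * (g0 - 2.0 * nu) + rho0**2 * q**2
```

The Bessel recurrence $J_{\nu-1}(\nu) + J_{\nu+1}(\nu) = 2 J_\nu(\nu)$ (and likewise for $Y$) guarantees $C_{\nu-1}(\nu) = C_\nu(\nu)$, which is why the boundary value comes out as exactly $\nu$.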
The situation is somewhat different from the Euclidean field theory case, and we are effectively integrating out energy-momentum modes in regions above floating hyperbolae in a light-cone diagram, down to asymptotically lightlike momenta. The lightlike $k^2 = 0$ can nevertheless not be reached and we need to treat that case separately. In real-time field theory, Lorentzian cut-offs always have a smooth form to ensure that the gauge invariance is preserved. Furthermore, a hard cut-off may not be sufficient to regularise the UV divergences due to the infinite volume of the energy-momentum space under the hyperbola. Nevertheless, it is natural that a Lorentzian Wilsonian cut-off should have a well-defined physical meaning, consistent with relativity. It suggests a re-ordering of physical phenomena according to their invariant length scales and implies a mixing of the usual Euclidean IR/UV separation of the energy scales. In contrast with the Lorentzian relativistic view on the renormalisation group scales, the Euclidean ordering of physical phenomena according to their length scales is motivated by locality in space.[^2] A Lorentzian cut-off may be especially suitable for the investigation of non-perturbative theories and the operator-product renormalisation of local composite operators $\CO$, which are singlets under the gauge group. The same inequality, , holds for any chosen momentum scale $k^2$ of an operator as well as any chosen scale $\rhox$, where we decide to terminate the integration. A scaling symmetry between $k$ and $\rho$ is apparent from the evolution equation , which is invariant under simultaneous $$\label{ScalingEqGamma1} \rho \to a \rho ~\text{and}~ \sqrt{-k^2} \to \sqrt{-k^2} / a,~ \text{with constant}~ a.$$ The analysis therefore provides an exact functional dependence for the correspondence between parameters describing the bulk physics and their dual Wilsonian UV cut-off $\Lambda(\rho,d,m,...)$ for all $\rho$ and $k$. 
It is also important for this identification that the operator $\rho \partial / \partial\rho$ is invariant under $\rho \to a \rho$ for constant $a$. Hence the boundary energy scale is $$\Lambda = \nu / \rho.$$ $\Lambda_\times = \nu / \rhox$ is then the lowest possible scale down to which we can integrate from $\Lambda \to \infty$, given some momentum $k$ at which we wish to evaluate the operator. Energy scale $\Lambda$ must be real and positive, so $\nu$ must also be real and positive. This is again ensured by the Breitenlohner-Freedman bound [@Breitenlohner:1982jf], which we analogously required for a well-defined range of $\rho$. It is important to note that the constant of proportionality $\nu$ is merely a result of the bulk coordinates we used in to establish the dependence between that particular bulk space and its boundary dual. It also clearly signals the expected probe dependence on the bulk scalar mass. We could easily redefine $r \to \nu r$ to give us a metric $$\label{AdSMet2} ds^2 = \frac{1}{\nu^2 r^2} \left( -dt^2 + d\vec{x}^2_{d-1} \right) + \frac{dr^2}{r^2}.$$ In this background, we obtain $\sqrt{-k^2} \rhox \leq 1$ and hence $\Lambda=1/\rho$. For a general Poincaré-like AdS chart, we can therefore conclude that the Wilsonian energy scale of the cut-off in a boundary theory is related to the radial bulk coordinate by $$\label{LambdaBulkAdSRelat} \Lambda = \frac{C(d,m,...)}{r},$$ where $C$ is a constant, which depends on the bulk quantities describing the background metric and can be found exactly following the above procedure. Our analysis is thus consistent with the long anticipated relationship $\Lambda \propto 1 / r$ [@Susskind:1998dq; @Peet:1998wn]. In addition, it also uniquely determines the proportionality constant for a given pair of holographically dual quantities, i.e. the composite scalar operator $\CO$ and its dual massive scalar field probe in the AdS background. 
At the Breitenlohner-Freedman bound with $\nu=0$, and for arbitrary momentum, the RG flow analysis breaks down unless $\rho_\times = 0$. This is also apparent from the metric which is singular at $\nu=0$. The only way to have an RG flow compatible with such an operator at the UV fixed point is when $\CO$ has lightlike on-shell momentum $k^2=0$. The solution is still $\gamma(\rho)=0$ and the anomalous dimension does not run, but it is well-defined for all $\rho$. We need to consider the lightlike (or vacuum) $k^2 = 0$ case separately. Equation drastically simplifies for all operator dimensions. To satisfy the monotonicity condition *iv)*, the anomalous dimension must behave as $\gamma_\times \to 2 \nu$ when $\rhox \to \infty$. The solution of equation that satisfies the required conditions is then $$\label{EtaAdSSolLight} \gamma(\rho) = \frac{2 \nu \chi \rho^{2\nu}}{1 + \chi \rho^{2\nu}},$$ as previously found by [@Faulkner:2010jy; @Vecchi:2010dd]. In the timelike scenario considered above, it is not possible to flow to $\gamma = 2 \nu$ without violating condition *iii)*, as there is always a singularity between $\gamma(0) = 0$ and $\gamma=2\nu$, unless $k=0$. This is precisely what prevents us from having a continuous, well-defined limit between timelike and lightlike cases. The constant $\chi$ cannot be determined from the boundary conditions we set, but needs to be matched with the normalisation of the corresponding two-point correlator. For spacelike momenta $k^2 > 0$, the energy scale becomes purely imaginary. Having found the radial coordinate to be inversely proportional to the energy scale of the dual theory, it is therefore natural to take $r \to i r$, consistent with rescaling symmetry relations . The analysis of then goes through in exactly the same way as for timelike momenta. We obtain $\sqrt{k^2} \leq \Lambda_\times = \nu / \rho_\times$ and $\Lambda = C / r$.
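The lightlike solution can be verified in a few lines: it satisfies $\rho\,\partial_\rho\gamma = -\gamma(\gamma - 2\nu)$ with $\gamma(0) = 0$ and $\gamma \to 2\nu$ as $\rho \to \infty$. A numerical sketch (the values of $\nu$ and $\chi$ are arbitrary illustrations, since $\chi$ is only fixed by the correlator normalisation):

```python
# Illustrative values; chi must be matched to the two-point function normalisation.
nu, chi = 0.75, 1.3

def gamma_light(rho):
    # lightlike (k^2 = 0) solution: gamma = 2 nu chi rho^{2 nu} / (1 + chi rho^{2 nu})
    u = chi * rho ** (2.0 * nu)
    return 2.0 * nu * u / (1.0 + u)

# Residuals of the k^2 = 0 flow equation rho dgamma/drho = -gamma (gamma - 2 nu),
# evaluated with central finite differences; they vanish up to discretisation error.
h = 1e-6
residuals = []
for rho in (0.3, 1.0, 2.5):
    dg = (gamma_light(rho + h) - gamma_light(rho - h)) / (2.0 * h)
    residuals.append(rho * dg + gamma_light(rho) * (gamma_light(rho) - 2.0 * nu))
```

Writing $u = \chi\rho^{2\nu}$, both sides of the equation reduce to $4\nu^2 u/(1+u)^2$, so the residuals vanish identically for any $\chi$.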
GPPZ flow from $\CN=4$ to $\CN=1$ with a mass deformation and a mass gap {#Sec:AnomDimGPPZ}
------------------------------------------------------------------------

The GPPZ flow [@Girardello:1999bd] describes a flow from the $\CN=4$ theory in the UV to the $\CN=1$ in the IR. This is achieved by deforming the $\CN=4$ theory with a relevant mass deformation. An $\CN=4$ vector multiplet in the adjoint is equivalent to three $\CN=1$ chiral multiplets in the adjoint. Giving mass to one of the $\CN=1$ multiplets provides the desired deformation. The theory can flow in different ways depending on which components of the deformation are kept non-zero. They are described in [@Girardello:1998pd; @Girardello:1999bd; @Porrati:2000nb; @Porrati:2000ha]. We will focus on the case that includes the $\CN=1$ confinement, suppressing the supergravity singlet dual to a bilinear gaugino term responsible for the gaugino condensate. The bulk supergravity for this construction consists of type IIB scalar modes deforming the original $AdS_5 \times S^5$ metric. This $10$-dimensional type IIB theory is then truncated on $S^5$ to give a $5$-dimensional $\CN=8$ supergravity with 42 scalars. The scalars transform as **1**, **20** and **10** under the $SU(4)_R$ R-symmetry of the $\CN=4$ theory. The masses of these fields are $m^2=0$, $-4$ and $-3$, respectively. The GPPZ flow then describes the metric deformation resulting from the backreaction with scalars of $m^2=-3$. This corresponds to a dual deformation of dimension $3$ by the scalar operator in $4$ spacetime dimensions, which can be identified as the fermion bilinear operator with the coupling constant of mass dimension $1$. The **10** of $SU(4)$ decomposes into $\textbf{1}+\textbf{6}+\textbf{3}$ under $SU(3) \times U(1)$, of which we only keep the **6**, which is responsible for the fermion bilinear mass term. The singlet would give rise to a gaugino condensate [@Girardello:1999bd].
We further truncate the theory to only account for the large-$N$-dominant effective double-trace deformations and consider the background as static. The supergravity scalar potential is then $V(\Phi) = -3 - \frac{3}{2} \Phi^2 + \CO(\Phi^4)$, where the constant first term plays no role in classical equations of motion. This results in the same scalar theory we have been considering so far with mass $m^2 = -3$ and a $5$-dimensional metric $$\label{GPPZMetric} ds^2 = \frac{1}{r^2} \left( 1-\frac{r^2}{r_1^2} \right) \left( -dt^2 + d\vec{x}^2 \right) + \frac{dr^2}{r^2}.$$ By $r_1$ we denote the radial position where the flow terminates at a metric singularity. Note that this metric is simply obtained from its original form in [@Girardello:1999bd] by defining $y=-\ln r$. The metric interpolates between an $AdS_5$ space near $r=0$ with a conformal group $SO(4,2)$ and a $4$-dimensional Poincaré group near $r=r_1$. Taking $r_1 \to \infty$ would give an interpolation into the extreme IR. We can now analyse the renormalisation group flow of a scalar operator deformed by both effective double-trace and a relevant mass deformation. The mass deformation is provided by the GPPZ flow of the supergravity metric through its $r$ dependence and the double-trace deformation is ensured by the quadratic term in the bulk scalar potential. Writing as before $\lambda_2 = 2-\nu+\gamma$, the renormalisation group equation for $\gamma$ in background becomes $$\label{EqGamma3} \rho \frac{\partial \gamma}{\partial \rho} \left(1 - \frac{\rho^2}{r_1^2} \right)^2 = - \gamma \left( \gamma - 2 \nu \right) \left(1 - \frac{\rho^2}{r_1^2} \right)^2 + \frac{4 \rho^2}{r_1^2} \left( 2- \nu + \gamma \right) \left(1 - \frac{\rho^2}{r_1^2} \right) + \rho^2 k^2 \left(1 - \frac{\rho^2}{r_1^2} \right).$$ We need to be careful not to divide by zero in case $\rho \to r_1$, so we preserved the original form of the RG equation \[HFlowEq\]. In the case of the GPPZ flow, $\nu = 1$. 
We may, however, keep $\nu$, as well as $m$, general for now, restricting ourselves to relevant and marginal deformations with $\nu \leq 2$ so that $(2-\nu+\gamma) \geq 0$. An irrelevant deformation could in any case not be described by a factor like $\left(1 - \rho^2 / r_1^2 \right)$, which breaks the UV symmetries with increasing intensity as the flow approaches the IR regime. We use the same reasoning as in the pure AdS case in section \[Sec:AnomDimAdS\] to determine whether the flow can terminate at a fixed point given some timelike momentum $k^2 < 0$. Given that the pure AdS scenario corresponds to the undeformed conformal $\CN=4$ theory, it is especially interesting to compare its RG flow with the RG flow in this section. Equation can be solved analytically. Its complicated explicit solution is, however, not too illuminating and will not be presented. Qualitatively, its graph behaves as that of the solution in pure AdS, which we plotted in Figure \[fig:AdSGamma\]. In the present case, we will only look for the functional dependence of $\Lambda$ on the bulk. Starting as in section \[Sec:AnomDimAdS\] with *iv)* for a monotonically increasing, positive and non-singular $\gamma$, the only term which can run the right-hand side of into negative values for $\gamma < 2 \nu$ is $\rho^2 k^2$. As before, to find the maximal $\gamma_\times$ at $\rhox$, the right-hand side of has to vanish, implying the renormalisation condition *i)*. The solutions are $$\gamma_\times = \nu + \frac{2 \rhox^2}{r_1^2 - \rhox^2} \pm \frac{r_1^2}{r_1^2 - \rhox^2} \sqrt{ \nu^2 \left(1 - \frac{\rhox^2}{r_1^2} \right)^2 + \rhox^2 k^2 \left(1 - \frac{\rhox^2}{r_1^2} \right) + 8 \frac{\rhox^2}{r_1^2} \left(1 - \frac{\rhox^2}{r_1^2} \right) + 4 \frac{\rhox^4}{r_1^4} }.$$ For $\gamma_\times$ to be real when $\rhox$ is maximal (condition *ii)*), the expression under the square-root needs to be non-negative. The term $\sim \rhox^2 k^2 < 0$ will however cause the expression to inevitably flow towards $0$.
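Both branches of $\gamma_\times$ above are roots of the quadratic obtained by setting the right-hand side of the flow equation to zero; a quick numerical sketch with sample values (chosen so the square root is real; none are taken from the text):

```python
import math

# Sample values: nu = 1 is the GPPZ value, the rest are arbitrary illustrations.
nu, r1, rho, k2 = 1.0, 10.0, 2.0, -0.2   # k2 = k^2 is timelike
s = rho**2 / r1**2

def rhs(g):
    """Right-hand side of the GPPZ flow equation at fixed rho;
    it must vanish where the flow has a turning point."""
    return (-g * (g - 2.0 * nu) * (1.0 - s)**2
            + 4.0 * s * (2.0 - nu + g) * (1.0 - s)
            + rho**2 * k2 * (1.0 - s))

# Expression under the square root of the gamma_x formula.
disc = (nu**2 * (1.0 - s)**2 + rho**2 * k2 * (1.0 - s)
        + 8.0 * s * (1.0 - s) + 4.0 * s**2)

# The two branches, using 2 rho^2/(r_1^2 - rho^2) = 2s/(1-s) etc.
g_plus = nu + 2.0 * s / (1.0 - s) + math.sqrt(disc) / (1.0 - s)
g_minus = nu + 2.0 * s / (1.0 - s) - math.sqrt(disc) / (1.0 - s)
```

Plugging either branch back into `rhs` returns zero up to floating-point error, confirming the quadratic formula.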
We could, at this point, impose a condition whereby the entire expression under the square-root would be non-negative and thus vanishing at maximal $\rhox$, given some $k$. This is, however, insufficient. For $\rhox > r_1/ \sqrt{3}$, $\gamma_\times$ would break the unitarity condition *iii)*. Another problem is that in the case of $\rhox = r_1$, despite the vanishing square-root, the second term in $\gamma_\times$, $2 \rhox^2 / \left( r_1^2 - \rhox^2 \right)$, would blow up. This would thus also violate the non-singularity condition, so again *iii)*. To remedy these problems, we, as before, select the solution with the minus sign and impose $$\label{KRhoxIneqGPPZ1} \nu^2 \left(1 - \frac{\rhox^2}{r_1^2} \right) + \rhox^2 k^2 + 8 \frac{\rhox^2}{r_1^2} \geq 0,$$ which at maximal $\rhox$ turns into an equality and enables the $4 \rhox^4 / r_1^4$ term under the square-root to cancel the diverging second term in $\gamma_\times$ as $\rhox \to r_1$. We therefore arrive at condition *i)*, $\gamma_\times = \nu$ and $\partial_\rho \gamma_\times = 0$, at maximal $\rhox$. This is consistent with [@Porrati:2000nb], where it was shown that the GPPZ construction indeed flows between two fixed points. Alternatively, as before, we could have simply imposed condition *i)* and looked for maximal possible $\rho$ without violating *ii)* and *iii)*, arriving at the same conclusion. This would then imply *iv)*. There is a scaling symmetry in equation , equivalent to , but with an extra rescaling of $r_1$: $$\label{ScalingEqGamma2} \rho \to a \rho, ~ r_1 \to a r_1 ~ \text{and}~ \sqrt{-k^2} \to \sqrt{-k^2} / a,~ \text{with constant}~ a.$$ We can use the same analysis at any chosen momentum. Rewriting inequality as a momentum bound then determines the exact functional dependence of the Wilsonian cut-off $\Lambda$ on the bulk quantities. Equivalently, we could determine the range of allowed consecutive momentum shell integrations given some operator momentum we wish to probe. 
The relation is $$\label{KRhoxIneqGPPZ2} -k^2 \leq \frac{\nu^2}{\rhox^2} + \frac{4-m^2}{r_1^2} = \Lambda^2_{\CN = 4 \to 1},$$ where $4-m^2=8-\nu^2$, and in the case of the GPPZ flow $- k^2 \leq 1/\rhox^2 + 7/r_1^2$. The interpretation of clearly shows the interplay of two scales: the Wilsonian energy scale $\Lambda^2_{\CN=4} = \nu^2 / \rho^2$ of the undeformed $\CN = 4$ theory and the fixed ($\rho$-independent) mass deformation scale $$\label{MassDefScale} \Lambda_{\CM}^2 = \frac{4-m^2}{r_1^2},$$ which is independent of $\Lambda_{\CN=4}$. $\Lambda_\CM$ is a constant scale and can be tuned to a desired fixed value by setting $m$ and $r_1$. In a realistic scenario, however, $r_1$ should be very large so that the flow terminates in the IR. In the UV regime with $\rho \ll r_1$, $\Lambda_{\CN=4}$ dominates the cut-off of the deformed theory. Then, flowing into the IR, as expected from a relevant deformation, $\Lambda_{\CM}$ becomes increasingly important. The square of the Wilsonian scale of the deformed theory can simply be written as the sum of the squares of both scales present in the theory $$\label{GPPZasTwoScales} \Lambda^2_{\CN=4\to1} = \Lambda^2_{\CN=4} + \Lambda^2_\CM.$$ It is interesting to further study the limit $\rho \to r_1$. implies $\sqrt{-k^2} \leq 2 \sqrt{2} / r_1$ independently of the value of $\nu$. However, if we investigate directly, the terms with $\partial\gamma / \partial \rho$ and $\gamma(\gamma-2\nu)$ vanish faster than the remaining two terms on the right-hand side of the equation. We of course have to assume a well-behaved, non-singular $\gamma$ and $\partial_\rho \gamma$ consistent with *iii)*. Equation is then solved at the linear order in $(r_1 - \rho)$ with $\gamma_\times = \nu$, by turning into an equality: $$\label{SolGPPZatR1} \sqrt{-k^2} = 2 \sqrt{2} / r_1.$$ This indicates that momenta lower than the cut-off scale $\Lambda_{\CN=4\to1}$ are unstable at the endpoint of the flow.
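The rearrangement of the saturated condition into this momentum bound is a one-line solve; a sympy sketch of the check, including the GPPZ value $\nu = 1$:

```python
import sympy as sp

# Cross-check: the saturated condition rearranges into the quoted bound
# -k^2 = nu^2/rho^2 + (8 - nu^2)/r1^2, where 8 - nu^2 = 4 - m^2.
nu, rho, r1 = sp.symbols('nu rho r1', positive=True)
k2 = sp.symbols('k2')

sol = sp.solve(sp.Eq(nu**2*(1 - rho**2/r1**2) + rho**2*k2 + 8*rho**2/r1**2, 0), k2)[0]
bound = sp.simplify(-sol)

assert sp.simplify(bound - (nu**2/rho**2 + (8 - nu**2)/r1**2)) == 0
# GPPZ flow (nu = 1): -k^2 <= 1/rho^2 + 7/r1^2
assert sp.simplify(bound.subs(nu, 1) - (1/rho**2 + 7/r1**2)) == 0
```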
The only other solutions existing at $\rhox = r_1$ that we can find to with $\partial_\rho \gamma_\times = 0$ must have either $\gamma = 0$ or $2\nu$, similar to the $k^2=0$ case in pure AdS. Momentum then takes two possible values $$\begin{aligned} \label{SolGPPZatR12} \sqrt{-k^2} = \frac{2 \sqrt{2-\nu}}{r_1} ,~\text{for} ~ \gamma = 0& & \text{and} & &\sqrt{-k^2} = \frac{2 \sqrt{2+\nu}}{r_1}, ~ \text{for} ~ \gamma = 2 \nu. \end{aligned}$$ To satisfy the unitarity bound *iii)*, $\gamma \leq \nu + 1$, the second solution is permitted only when $\nu \leq 1$, which is saturated in the GPPZ flow, thus allowing for all three momenta: $2\sqrt{2} / r_1$ and $2\sqrt{2 \pm 1} / r_1$ at $\rhox=r_1$. The case of $k^2 = 0$ with $\partial_\rho \gamma_\times = 0$ would give $\gamma_\times = \nu - 2$ solution at $\rho \to r_1$, as we have to permit such operators to flow until the end. In the GPPZ construction, it would equal $\gamma = -1$ at $\rho = r_1$. However, as we discussed above, it does not make sense to impose $\partial_\rho \gamma = 0$ for $k^2 = 0$ before $\rho \to \infty$. In any case, we subtract off the vacuum from $\langle\CO(k)\CO(-k)\rangle$, so it is only important that the GPPZ vacuum is stable. The discreteness of possible momenta above the vacuum is a clear indicator that there is a mass gap present in the theory. There are three discrete states in the spectrum. This is consistent with the confining nature of the GPPZ flow already discussed in [@Girardello:1999bd]. The analysis in case of spacelike momenta goes through as above, after we rotate the two energy scales $r \to i r$ and $r_1 \to i r_1$, as indicated by rescaling symmetries . 
Black brane in $AdS_{d+1}$ with thermal and density deformations, a mass gap and the emergent infra-red CFT scaling {#Sec:AnomDimAdST} ------------------------------------------------------------------------------------------------------------------- We now turn our attention to backgrounds giving non-zero temperature $T$ to their field theory duals. The example we consider first is the metric of a black brane in a $(d+1)$-dimensional AdS space of form , given by $$\label{BBAdSMetric} ds^2 = \frac{1}{r^2} \left( - f(r) dt^2 + d\vec{x}^2_{d-1} + \frac{dr^2}{f(r)} \right),~\text{with}~f(r)=1 - \frac{r^d}{r_0^d}.$$ Using , $\lambda_2(\rho) = \sqrt{f(\rho)} \left( \frac{d}{2} - \nu + \gamma(\rho) \right)$, the renormalisation group flow equation for $\gamma$ becomes $$\label{EqGamma2} f \rho \frac{\partial \gamma}{\partial\rho} = - \gamma \left( \gamma - 2 \nu \right) + \frac{\rho^d}{r_0^d} \left( \frac{d}{2} - \nu + \gamma\right)^2 - \frac{ \rho^2 \omega^2}{f} + \rho^2 \vec{k}^2.$$ In analogy with and , there is a scaling symmetry, $$\label{ScalingEqGamma3} \rho \to a \rho, ~ r_0 \to a r_0 , ~ \omega \to \omega / a ~ \text{and}~ |\vec{k}| \to |\vec{k}| / a ~ \text{with constant}~ a.$$ We can use the same procedure exactly as in sections \[Sec:AnomDimAdS\] and \[Sec:AnomDimGPPZ\] to extract the dependence of the Wilsonian cut-off scale on the bulk, consistent with conditions *i)* - *iv)*. First, $$\gamma_\times = \nu + \frac{d}{2} \frac{\rhox^d}{r_0^d - \rhox^d} - \frac{r_0^d}{r_0^d - \rhox^d} \sqrt{ \frac{d^2}{4} \frac{\rhox^d}{r_0^d} + \nu^2 \left(1 - \frac{\rhox^d}{r_0^d} \right) - \rhox^2 \omega^2 + \rhox^2 \vec{k}^2 \left(1 - \frac{\rhox^d}{r_0^d} \right) }.$$ The expression is qualitatively similar to the case of the GPPZ flow in section \[Sec:AnomDimGPPZ\]. 
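Since the flow equation is quadratic in $\gamma$ at a turning point, the quoted $\gamma_\times$ can be verified by direct substitution. A sympy sketch at $d=4$, setting the right-hand side to zero (as $\partial_\rho \gamma_\times = 0$):

```python
import sympy as sp

# Check, at d = 4, that the quoted gamma_x solves the right-hand side of the
# flow equation at a turning point (f rho dgamma/drho = 0).
rho, r0, nu, w, k = sp.symbols('rho r0 nu omega k', positive=True)
g = sp.symbols('gamma')
d = 4
y = rho**d / r0**d
f = 1 - y

rhs = -g*(g - 2*nu) + y*(sp.Rational(d, 2) - nu + g)**2 - rho**2*w**2/f + rho**2*k**2

# Quoted minus-branch solution
S = sp.Rational(d**2, 4)*y + nu**2*(1 - y) - rho**2*w**2 + rho**2*k**2*(1 - y)
g_quoted = nu + sp.Rational(d, 2)*y/(1 - y) - sp.sqrt(S)/(1 - y)

assert sp.simplify(rhs.subs(g, g_quoted)) == 0
```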
Following the same considerations which satisfy conditions *ii)* and *iii)* leads to $\gamma_\times = \nu$ and an inequality $$\frac{d^2}{4} \frac{\rhox^d}{r_0^d} + \nu^2 f (\rhox) - \rhox^2 \omega^2 + \rhox^2 \vec{k}^2 f(\rhox) \geq \frac{d^2}{4} \frac{ \rhox^{2d}}{ r_0^{2d}},$$ which can be recast as an energy-momentum bound $$\label{KRhoxIneqAdST} \frac{\omega^2}{f(\rhox)} - \vec{k}^2 \leq \frac{\nu^2}{\rhox^2} + \frac{d^2}{4} \frac{\rhox^{d-2}}{r_0^d} .$$ It is clear from the left-hand side of that, as a result of temperature, energy and momentum scale differently. The scaling difference comes from the thermal factor $f(r)$, which completely breaks the Lorentz invariance of the boundary theory away from the UV fixed point with the $\CN=4$ theory. This is expected to happen in any thermal field theory. The Hawking temperature is $$\label{T} T = \frac{d}{4\pi r_0},$$ which enables us to rewrite as $$\label{KRhoxIneqAdST2} \frac{\omega^2}{f(\Lambda, T)} - \vec{k}^2 \leq \Lambda^2 + \frac{d^2}{4 \nu^2} \left( \frac{4\pi\nu}{d} \right)^d \left( \frac{T}{\Lambda} \right)^d \Lambda^2 \equiv \Lambda^2_{\CN=4,T},$$ with $$f (\Lambda,T) = 1 - \left( \frac{4\pi \nu}{d} \right)^d \left( \frac{T}{\Lambda} \right)^d ,$$ having used $\Lambda = \nu / \rho$ with the fact that we can apply the same analysis to any value of energy-momentum, thus removing the need to only consider the minimal cut-off $\Lambda_\times$. The Wilsonian scale has the same bulk dependence throughout the flow. We can then, for the same reason, define the thermal scale depending on $\Lambda$, and not only $\Lambda_\times$, as $$\label{Tscale} \Lambda^2_T \left( \Lambda, \frac{T}{\Lambda } \right) = \frac{d^2}{4 \nu^2} \left( \frac{4\pi\nu}{d} \right)^d \left( \frac{T}{\Lambda} \right)^d \Lambda^2,$$ where we have been using the abbreviation $\Lambda = \Lambda_{\CN=4}$, for the Wilsonian scale at $T=0$. In $d=4$, the thermal scale is $\Lambda_T^2 = 4 \pi^2 \left(\pi \nu \Lambda \right)^2 (T / \Lambda)^4$. 
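The passage from the bulk-side bound to the thermal form is a direct substitution of $\rho = \nu/\Lambda$ and $r_0 = d/(4\pi T)$; a sympy sketch at $d=4$, which also confirms the quoted $d=4$ thermal scale and the horizon value of the bound:

```python
import sympy as sp

# Substituting rho = nu/Lambda and r0 = d/(4 pi T) into the bulk-side bound
# reproduces the thermal scale quoted in the text (checked at d = 4).
nu, rho, r0, T, L = sp.symbols('nu rho r0 T Lambda', positive=True)
d = 4

rhs_bulk = nu**2/rho**2 + sp.Rational(d**2, 4)*rho**(d - 2)/r0**d
rhs_qft = rhs_bulk.subs({rho: nu/L, r0: sp.Rational(d, 4)/(sp.pi*T)})

# Lambda^2 + (d^2/(4 nu^2)) (4 pi nu/d)^d (T/Lambda)^d Lambda^2
target = L**2 + sp.Rational(d**2, 4)/nu**2 * (4*sp.pi*nu/d)**d * (T/L)**d * L**2
assert sp.simplify(rhs_qft - target) == 0

# In d = 4 the thermal piece is 4 pi^2 (pi nu Lambda)^2 (T/Lambda)^4
thermal = target - L**2
assert sp.simplify(thermal - 4*sp.pi**2*(sp.pi*nu*L)**2*(T/L)**4) == 0

# At the horizon (rho -> r0, omega = 0) the bound gives (nu^2 + d^2/4)/r0^2
assert sp.simplify(rhs_bulk.subs(rho, r0) - (nu**2 + sp.Rational(d**2, 4))/r0**2) == 0
```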
We see that, as with the GPPZ case in equation , the two relevant scales, the undeformed ($T=0$) Wilsonian scale and the temperature deformation, add quadratically to give the new Wilsonian cut-off of the $\CN=4$ theory, with the temperature deformation breaking Lorentz invariance: $$\label{AdSTasTwoScales} \Lambda^2_{\CN=4,T} = \Lambda^2_{\CN=4} + \Lambda^2_T.$$ All temperature-dependent terms in the final cut-off scale relation appear with $T$ to the power of the boundary field theory dimension, $T^d$. That is the case in the thermal factor $f( T/\Lambda )$, as well as in the square of the thermal scale $\Lambda^2_T (\Lambda, T/\Lambda)$. $\Lambda^2_T$ depends on both the original undeformed scale $\Lambda$ and the dimensionless ratio $T/\Lambda$. We will see the same dimensional power-law behaviour of temperature, $T^d$, in all thermal cases we consider. From relation we see that the renormalisation group procedure particularly restricts the energy $\omega$. The spatial momentum $|\vec{k}|$ can be arbitrarily large without violating the bound because of the minus sign in front of it. In fact, the larger the momentum, the larger the range of values $\omega$ can take. In the limit of $\rhox \to r_0$, the energy becomes completely suppressed: $\omega^2$ must go to zero at least as fast as $f(\rho)$. The restrictions posed on $\omega$ are not surprising, as temperature is an energy scale. We also see that the effect of $\Lambda_T$ becomes increasingly important as we run into the IR because $\Lambda_T^2 \propto \Lambda^{2-d}$, an overall negative power of $\Lambda$ in any dimension greater than two. This is consistent with temperature being an IR scale. Taking the limit of $\rhox \to r_0$ in , we get $-\vec{k}^2 \leq \frac{1}{r_0^2} \left(\nu^2 + \frac{d^2}{4} \right)$.
However, analogously to the GPPZ case in section \[Sec:AnomDimGPPZ\], the renormalisation group equation at the horizon with $\omega = 0$ strictly enforces the equality $$\label{MassGap4T} -\vec{k}^2 = \frac{1}{r_0^2} \left(\nu^2 + \frac{d^2}{4} \right),$$ for a well-behaved $\partial_\rho \gamma_\times$ and $\gamma_\times = \nu$. The mass of the momentum mode is then $M^2 = - \vec{k}^2$, which clearly shows the emergence of a mass gap with only one permitted value of $M$ above the vacuum. Similarly, we can set momentum to zero, i.e. $\vec k =0$, scale energy as $\omega^2 = \tilde\omega^2 f(\rho)$, use $\gamma_\times = \nu$ and expand to first-order in the near-horizon limit. We then again recover the same gap in the energy spectrum of $\tilde\omega$ as we saw in the spectrum of momentum modes in . The existence of a mass gap in the $\CN=4$ theory at non-zero temperature was shown from an AdS/CFT calculation in [@Witten:1998zw]. In imaginary-time formulation, we take $\omega \to i \omega$, $r\to i r$ and $r_0 \to i r_0$. After this transformation, the analysis follows through exactly as above. The only difference is that there is now a plus sign in front of $\vec{k}^2$ in the relation . After we have lost the Lorentzian nature of the original undeformed theory, momentum becomes just as suppressed as energy. Lastly, we turn our attention to dimensional reduction of the emergent IR CFT scaling when the renormalisation group flow reaches the horizon $\rhox \to r_0$. In this limit, the thermal scale becomes $$\label{ThermalHorizonScale} \lim_{\Lambda \to 4\pi\nu T / d} \Lambda^2_{T} = 4 \pi^2 T^2,$$ where $\Lambda$ is the undeformed Wilsonian scale. Of course, this should be a fixed point of the deformed theory, as $\partial_\rho \gamma_\times = 0$ at the horizon. 
If we now look at the thermal scale for an arbitrary value of the undeformed $\Lambda(\rho)$ in $d=2$ dimensions, we notice that, just as in , $$\Lambda^2_T = 4 \pi^2 T^2, ~\text{when} ~ d=2,$$ thus giving us a relation $$\label{ThermalScalesEquiv} \Lambda^2_{T} (\Lambda = 4\pi\nu T / d , d) = \Lambda^2_T (\Lambda, d=2).$$ It is essential to note that from the point of view of the bulk, the Hawking temperature $T$ is still a function of the number of dimensions $d$. If we considered a fixed position of the horizon, then the equality in would be incorrect. But we know that physics in the bulk does not change if we adjust the position of the horizon, and nor does its holographic dual. The Ricci scalar curvature of the black brane does not depend on the position of the horizon $r_0$, which means that we can tune $r_0$ to give us any temperature we want. Such tuning does not violate the validity of the classical gravity description of the bulk physics, which is essential for the application of holographic duality. The saddle point approximation of string theory remains applicable. From the point of view of the boundary QFT, temperature can be thought of as an adjustable parameter that can be set to any value independently of the number of dimensions. We are therefore really comparing two theories in different numbers of dimensions, for which the two temperatures have been adjusted to be equal by adjusting the horizons. Thermal scales are then equal in the two-dimensional ($d=2$) boundary theory with an arbitrary undeformed cut-off $\Lambda$, and in the horizon limit in an arbitrary number of dimensions. The two-dimensional boundary theory is conformal at all scales, as $\Lambda_T$ can be tuned to any fixed value as a result of its $\Lambda$-independence. The emergent $d$-dimensional CFT therefore behaves near the horizon, in terms of scales, as a two-dimensional conformal field theory.
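The equality of the two thermal scales is elementary to verify; a sympy sketch checking the horizon limit in several dimensions against the $d=2$ value at arbitrary $\Lambda$:

```python
import sympy as sp

# The thermal scale at the horizon equals 4 pi^2 T^2 in any d, matching its
# value in d = 2 at arbitrary Lambda.
nu, T, L = sp.symbols('nu T Lambda', positive=True)

def LambdaT2(L, d):
    # Lambda_T^2 = (d^2/(4 nu^2)) (4 pi nu/d)^d (T/Lambda)^d Lambda^2
    return sp.Rational(d**2, 4)/nu**2 * (4*sp.pi*nu/d)**d * (T/L)**d * L**2

# Horizon limit Lambda -> 4 pi nu T / d, checked in several dimensions
for d in (3, 4, 5, 6):
    at_horizon = LambdaT2(4*sp.pi*nu*T/sp.Integer(d), d)
    assert sp.simplify(at_horizon - 4*sp.pi**2*T**2) == 0

# d = 2 at arbitrary Lambda: Lambda-independent, conformal at all scales
assert sp.simplify(LambdaT2(L, 2) - 4*sp.pi**2*T**2) == 0
```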
This result is highly reminiscent of the work by Carlip [@hep-th/9812013] and Solodukhin [@hep-th/9812056]. They showed that with an appropriate choice of boundary conditions, extremal black hole horizons in any number of dimensions gave rise to an emergent two-dimensional Virasoro algebra of the asymptotic symmetries. Physics of an arbitrary black hole horizon could therefore be described by a two-dimensional CFT.[^3] Similar behaviour was discussed in the context of AdS/CFT in the case of an extremal charged black hole in $AdS_{d+1}$ [@Faulkner:2009wj; @Faulkner:2010jy]. There, the metric takes exactly the same form as , with a different factor, $f(r) = 1 + Q^2 r^{2 d - 2} - M r^d$. Dimensional reduction of the IR CFT in the charged case is apparent from the form of the metric. The function $f(r)$ has a double zero in the horizon limit, making the RG flow an interpolation between $AdS_{d+1}$ at $r \to 0$ and $AdS_2 \times \mathbb{R}^{d-1}$ at $r \to r_0$. The emergent CFT therefore appears to be $(0+1)$-dimensional, as opposed to ($1+1$) in our case. It was, however, argued in [@arXiv:0901.1677] and discussed in [@Faulkner:2009wj] that the near-horizon region of $AdS_2$ could be understood as being described by a single copy of a two-dimensional chiral Virasoro algebra of the asymptotic symmetries. The simple metric interpolation between two AdS spaces in the charged case does not occur in the thermal case, , since $f(r)$ only has a single zero at the horizon. Nevertheless, we find a similar emergent feature in the scaling of the theory. To compare the charged case that gives rise to finite boundary density with the thermal case, we can use the generalised result for the Wilsonian momentum bound, , which we will derive in section \[Sec:AnomDimGen\]. In this example, we see that we do not exactly recover the same behaviour of the dimensional reduction in scales, as in the purely thermal case. 
Using $f(r) = 1 + Q^2 r^{2 d - 2} - M r^d$ and $h(r)=1$ in , we immediately find $$\label{ChargedBnd} \frac{\omega^2}{f} - \vec{k}^2 \leq \frac{\nu^2}{\rho^2} + \frac{d}{4} \rho^{d-2} \left(d M - \left(3d-4\right) Q^2 \rho^{d-2} \right).$$ The Hawking temperature for the charged metric is $$\label{ChargedT} T = \frac{d}{4\pi r_0} \left( 1 - \frac{(d-2)}{d} Q^2 r_0^{2d-2} \right),$$ where $r_0$ is the position of the horizon, which is the smallest positive solution of $f(r_0) = 0$ and therefore also of $$\label{ChargedM} M = \frac{1}{r_0^d} + Q^2 r_0^{d-2}.$$ Using and , we can take the horizon limit, $\rho \to r_0$, of the Wilsonian bound . The right-hand side of it becomes $$\label{ChargedBnd2} \Lambda^2 + 4 \pi^2 T^2 - \frac{(d-2)^2}{4 r_0^2} Q^4 r_0^{4d-4} = \Lambda^2 + 4\pi^2 \left[ T^2 - \left(T - T_0 \right)^2 \right],$$ with the usual $\Lambda = \nu / \rho$. We denoted the temperature at zero charge, $Q^2 = 0$, by $T_0 = \frac{d}{4\pi r_0}$. $T_0$ is therefore the temperature of the boundary theory at zero density. In two boundary dimensions, $d=2$, $T - T_0 = 0$ for all $\rho$. We therefore find the same conformal Wilsonian bound in the charged $d=2$ case, as in the purely massive (thermal) case, $\Lambda^2 + 4\pi^2 T^2$. This does not, however, equal for the horizon-limit bound in an arbitrary number of dimensions, unless $Q=0$ and we recover the purely thermal theory. We see a mixing between two temperature scales in the problem, one at zero and another at non-zero density of the boundary system. Near-extremal M2 and M5 branes {#Sec:AnomDimM} ------------------------------ ### M2 brane {#Sec:M2sec} We begin by writing down the metric of a non-extremal M2 brane by neglecting the spherical $d\Omega_7^2$ part, $$\label{M2Metric} ds^2 = H(r)^{-2/3} \left( -f(r) dt^2 + d\vec{x}^2_2 \right) + H(r)^{1/3} \frac{dr^2}{f(r)},$$ where $H(r) = 1 + R^6 / r^6$ and $f(r) = 1 - r_0^6 / r^6$. 
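The horizon limit of the charged bound follows from $f(r_0)=0$ and the charged temperature alone; a sympy sketch of the algebra, valid for symbolic $d$:

```python
import sympy as sp

# At the horizon, the right-hand side of the charged bound equals
# Lambda^2 + 4 pi^2 [T^2 - (T - T0)^2], using f(r0) = 0 and the charged
# Hawking temperature.
rho, r0, Q, nu, d = sp.symbols('rho r0 Q nu d', positive=True)

M = 1/r0**d + Q**2*r0**(d - 2)                            # from f(r0) = 0
T = d/(4*sp.pi*r0)*(1 - (d - 2)/d*Q**2*r0**(2*d - 2))     # Hawking temperature
T0 = d/(4*sp.pi*r0)                                       # zero-charge temperature

rhs = nu**2/rho**2 + sp.Rational(1, 4)*d*rho**(d - 2)*(d*M - (3*d - 4)*Q**2*rho**(d - 2))
rhs_horizon = rhs.subs(rho, r0)

target = nu**2/r0**2 + 4*sp.pi**2*(T**2 - (T - T0)**2)
assert sp.simplify(sp.expand(rhs_horizon - target)) == 0
```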
We take the near-extremal limit $r \ll R$, set $R=1$ and use $u = r_0^2 /r^2$ to find $$\label{NEM2Metric} ds^2 = \frac{r^4_0}{u^2} \left( - f(u) dt^2 + d\vec{x}^2_2 \right) + \frac{du^2}{4 f(u) u^2},$$ with $f(u) = 1 - u^3$. The horizon is at $u=1$. We further use a variable $q$, which is related to $u$ by $u = 2r_0^2 q$, to rewrite in the form $$\label{NEM2Metric2} ds^2 = \frac{1}{4 q^2} \left( - f(q) dt^2 + d\vec{x}^2_2 + \frac{dq^2}{f(q)} \right),$$ where $ f(q) = 1 - \frac{q^3}{q^3_0}$ and $q_0 = \frac{1}{2 r_0^2}$. This redefinition is necessary in order to avoid numerical factors in $G_{rr}$, which would change the boundary conditions for the anomalous dimension and introduce additional complications. The overall numerical rescaling factor of $1/4$ in is, however, irrelevant and can be absorbed into the AdS radius $R$, which we then reset to one. The renormalisation group equation for the anomalous dimension is itself invariant under an overall multiplication of the metric by a number. To see this, imagine rescaling a metric $ds^2 \to ds^2 / c^2$. Using would mean that $\lambda_2 \propto c (\Delta_- + \gamma)$. The relation between the scalar mass and the dual operator dimension, which can be derived from the scalar equation of motion, would also be modified to $m^2 \propto - c^2 \Delta_- \Delta_+$. Therefore, each term in would become proportional to $c^2$ and we could cancel it out. The final resulting metric is therefore $$\label{NEM2Metric3} ds^2 = \frac{1}{q^2} \left( - f(q) dt^2 + d\vec{x}^2_2 + \frac{dq^2}{f(q)} \right),$$ which is a special case of the black brane metric in $d=3$ dimensions with the horizon at $q_0$. The Hawking temperature of the near-extremal M2 background is therefore given by the black brane expression . This can be verified from the non-extremal temperature, where we find that, indeed, $$\label{M2T} T = \frac{3}{2\pi r_0} H(r_0)^{- 1/2 } \to \frac{3 r_0^2}{2\pi},$$ in the near-extremal limit.
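The chain of coordinate changes is easy to cross-check; a sympy sketch verifying that $u = 2 r_0^2 q$ produces the quoted $f(q)$ and metric coefficients, and that the $d=3$ black brane temperature formula reproduces the near-extremal value:

```python
import sympy as sp

# Substituting u = 2 r0^2 q maps the near-extremal M2 metric onto the stated
# q-coordinate form (with the overall 1/4), where q0 = 1/(2 r0^2).
r0, q = sp.symbols('r0 q', positive=True)

u = 2*r0**2*q
f_u = 1 - u**3                      # blackening factor in the u coordinate
q0 = 1/(2*r0**2)
f_q = 1 - q**3/q0**3                # expected form in the q coordinate

assert sp.simplify(f_u - f_q) == 0
# metric coefficients: r0^4/u^2 -> 1/(4 q^2) and du^2/(4 f u^2) -> dq^2/(4 q^2 f)
assert sp.simplify(r0**4/u**2 - 1/(4*q**2)) == 0
assert sp.simplify((2*r0**2)**2/(4*f_u*u**2) - 1/(4*q**2*f_q)) == 0

# Black brane temperature T = d/(4 pi q0) at d = 3 gives 3 r0^2 / (2 pi)
assert sp.simplify(3/(4*sp.pi*q0) - 3*r0**2/(2*sp.pi)) == 0
```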
Using the results from section \[Sec:AnomDimAdST\], we find the Wilsonian bound $$\label{KRhoxM2} \frac{\omega^2}{1- \left(2 r_0^2\right)^3 \rhox^3} - \vec{k}^2 \leq \frac{\nu^2 }{ \rho_\times^2} + 18 r_0^6 \rhox,$$ where we have used, as before, the variable $\rho$ to indicate the position of the brane along the radial coordinate $q$ in . Using $\Lambda = \nu / \rho$ and temperature , we can rewrite to give the final Wilsonian bound in terms of the undeformed scale and the thermal deformation, $\Lambda^2_{M2} = \Lambda^2 + \Lambda_T^2$: $$\label{KRhoxM22} \frac{\omega^2}{1- \zeta_1 \left( \frac{T}{\Lambda} \right)^3 } - \vec{k}^2 \leq \Lambda^2 + \zeta_2 \left( \frac{T}{\Lambda} \right)^3 \Lambda^2,$$ with numerical factors $\zeta_1 = \left( \frac{4 \pi \nu }{3} \right)^3$ and $\zeta_2 = \frac{16 \pi^3 \nu}{3} $. ### M5 brane {#Sec:M5sec} We can now repeat exactly the same story as in section \[Sec:M2sec\] for the M5 brane background. The non-extremal M5 brane metric, without the spherical $d\Omega_4^2$ part, is $$\label{M5Metric} ds^2 = H(r)^{-1/3} \left( -f(r) dt^2 + d\vec{x}^2_5 \right) + H(r)^{2/3} \frac{dr^2}{f(r)},$$ where $H(r) = 1 + R^3 / r^3$ and $f(r) = 1 - r_0^3 / r^3$. We use the near-extremal limit $r \ll R =1$, with a different set of coordinates $u^2 = r_0 / r$ in $d=6$ dual boundary field theory dimensions. Hence, $$\label{NEM5Metric} ds^2 = \frac{r_0}{u^2} \left( - f(u) dt^2 + d\vec{x}^2_5 \right) + \frac{4 du^2}{f(u) u^2},$$ with $ f(u) = 1 - u^6 $. We further use $q = \frac{2 u}{\sqrt{r_0}}$ and reabsorb the factor of $4$ into the AdS radius to, again, find a special case of the black brane scenario $$\label{NEM5Metric2} ds^2 = \frac{1}{q^2} \left( - f(q) dt^2 + d\vec{x}^2_5 + \frac{dq^2}{f(q)} \right),$$ where now $f(q) = 1 - \frac{q^6}{q_0^6}$ and $q_0 = \frac{2}{\sqrt{r_0}}$.
The Hawking temperature in the near-extremal limit is $$\label{M5T} T = \frac{3}{4 \pi r_0} H(r_0)^{- 1/2 } \to \frac{3 r_0^{1/2}}{4\pi}.$$ The Wilsonian cut-off in the M5 background is $$\label{KRhoxM5} \frac{\omega^2}{1- \left( \frac{\sqrt{r_0}}{2} \right)^6 \rhox^6} - \vec{k}^2 \leq \frac{\nu^2 }{ \rho_\times^2} + \frac{9}{64} r_0^3 \rhox^4,$$ which we rewrite in terms of the undeformed $\Lambda$ and temperature . Finally, we find $$\label{KRhoxM52} \frac{\omega^2}{1- \zeta_3 \left( \frac{T}{\Lambda} \right)^6 } - \vec{k}^2 \leq \Lambda^2 + \zeta_4 \left( \frac{T}{\Lambda} \right)^6 \Lambda^2,$$ with $\zeta_3 = \left( \frac{2\pi\nu}{3} \right)^6 $ and $\zeta_4 = \frac{\pi^2}{4} \left( \frac{4\pi\nu}{3} \right)^4 $. All temperature dependence appears, as before, to the power of the boundary field theory dimension $T^d$. Since the M5 and the M2 backgrounds are special cases of the black brane in section \[Sec:AnomDimAdST\], all qualitative features of the RG scaling are the same. The general case with mass and thermal deformations {#Sec:AnomDimGen} --------------------------------------------------- We now generalise our discussion to a hybrid metric describing the $\CN=4$ theory, or some CFT, with two deformations. The first is a Lorentz symmetry preserving deformation, which can either be a relevant, marginal, or irrelevant term in the action. An example of this was the mass deformation in the GPPZ flow. We denote it by $h(r)$ and refer to it as the mass deformation inducing a mass scale, indicating that it has some general coefficient with a mass dimension. The second, $f(r)$, is a Lorentz symmetry breaking term with a horizon that can describe finite temperature, or finite density effects on the RG flow. We refer to it as the thermal deformation. 
This means studying a metric $G_{MN}$ of type $$\label{GenMetric} ds^2 = \frac{h(r)}{r^2} \left( -f(r) dt^2 + d\vec{x}^2_{d-1} \right) + \frac{dr^2}{r^2 f(r)},$$ in $d+1$ bulk dimensions, with $h$ and $f$ only depending on the radial coordinate. The next step is to consider equation , with $\lambda_2 = \sqrt{f} (\Delta_- + \gamma(r))$, as dictated by the form of $G^{rr}$. In accordance with previous analyses, we set $\gamma_\times = \nu$ and $\partial_\rho \gamma_\times = 0$ to obtain a Wilsonian bound, $$\label{GenBound} \frac{\omega^2}{f} - \vec{k}^2 \leq \frac{h}{\rho^2} \left[ \nu^2 - \frac{d^2}{4} \left( 1-f \right) \right] - \frac{d}{2 \rho} \left( h \frac{\partial f}{\partial \rho} + \frac{d}{2} f \frac{\partial h}{\partial \rho} \right).$$ It is easy to verify that the pure AdS, the GPPZ flow, the massive uncharged black brane, the charged black brane, as well as the M2 and M5 branes are special cases of this expression. To further analyse the Wilsonian bound in , we assume the form of $f$ with a horizon at $r_0$ to be $f(r) = 1 - \left(r/r_0\right)^p$, and define $h(r) \equiv 1 - \mu^2 (r)$. The Hawking temperature is then $$T = \frac{ p \sqrt{1-\mu^2(r_0)} }{4 \pi r_0},$$ where $\mu(\rho=r_0)$ only depends on $r_0$ and numerical constants. 
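The claimed reductions to special cases can be sketched symbolically; for instance, $h=f=1$ recovers the pure AdS bound and $h=1$, $f = 1-(\rho/r_0)^d$ recovers the black brane bound (a sympy sketch):

```python
import sympy as sp

# The general Wilsonian bound reduces to the earlier special cases.
rho, r0, nu, d = sp.symbols('rho r0 nu d', positive=True)

def gen_rhs(h, f):
    # right-hand side of the general bound for metric functions h(rho), f(rho)
    return (h/rho**2*(nu**2 - sp.Rational(1, 4)*d**2*(1 - f))
            - sp.Rational(1, 2)*d/rho*(h*sp.diff(f, rho)
                                       + sp.Rational(1, 2)*d*f*sp.diff(h, rho)))

# h = 1, f = 1: pure AdS, nu^2/rho^2
assert sp.simplify(gen_rhs(sp.Integer(1), sp.Integer(1)) - nu**2/rho**2) == 0

# h = 1, f = 1 - (rho/r0)^d: the massive black brane bound
bb = gen_rhs(sp.Integer(1), 1 - (rho/r0)**d)
target = nu**2/rho**2 + sp.Rational(1, 4)*d**2*rho**(d - 2)/r0**d
assert sp.simplify(sp.expand_power_base(bb - target, force=True)) == 0
```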
We can then rewrite as $$\label{GenBound2} \frac{\omega^2}{1 - \zeta \left( \frac{T}{\Lambda} \right)^p} - \vec{k}^2 \leq \Lambda^2 + \Lambda^2_\CM + \Lambda^2_T - \Lambda^2_{\text{mix}},$$ with a dimensionless constant factor $\zeta = \left( \frac{4\pi\nu}{p\sqrt{ 1-\mu^2(r_0) } } \right)^p$, and four different energy scales: $$\begin{aligned} \Lambda^2 &= \frac{\nu^2}{\rho^2},\label{GenScale} \\ \Lambda^2_\CM &= \left( \frac{d^2}{2\nu^2} \frac{\partial \ln \mu}{\partial \ln \rho} - 1 \right) \Lambda^2 \mu(\Lambda)^2 ,\label{GenScaleM} \\ \Lambda^2_T &= \left( \frac{dp}{2\nu^2} - \frac{d^2}{4\nu^2} \right) \zeta \Lambda^2 \left( \frac{T}{\Lambda} \right)^p, \label{GenScaleT} \\ \Lambda^2_{\text{mix}} &= \Lambda_\CM^2 \zeta \left( \frac{T}{\Lambda} \right)^p + \mu(\Lambda)^2 \Lambda^2_T + \Lambda^2 \mu(\Lambda)^2 \zeta \left( \frac{T}{\Lambda} \right)^p . \label{GenScaleMix}\end{aligned}$$ Writing out explicitly, $$\Lambda^2_{\text{mix}} = \left( \frac{dp}{2\nu^2} - \frac{d^2}{4\nu^2} + \frac{d^2}{2\nu^2} \frac{\partial \ln \mu}{\partial \ln \rho} \right) \zeta \Lambda^2 \mu(\Lambda)^2 \left( \frac{T}{\Lambda} \right)^p .$$ In the case of $p=d$, equations and simplify to give $$\begin{aligned} \Lambda^2_T = \frac{d^2}{4\nu^2} \zeta \Lambda^2 \left( \frac{T}{\Lambda} \right)^p & &\text{and} & &\Lambda^2_{\text{mix}} = \Lambda_\CM^2 \zeta \left( \frac{T}{\Lambda} \right)^d + \mu(\Lambda)^2 \Lambda^2_T + \Lambda^2 \mu(\Lambda)^2 \zeta \left( \frac{T}{\Lambda} \right)^d .\end{aligned}$$ It is clear that we can interpret $\Lambda^2$ in as the undeformed Wilsonian cut-off scale, as in previous sections. $\Lambda^2_\CM$ and $\Lambda^2_T$ (, ) are the mass and thermal deformation scales. Depending on the power of $\Lambda$ inside the function $\mu(\Lambda)$, the mass deformation will modify either the UV or the IR scaling. Lastly, $\Lambda^2_{\text{mix}}$ is a new scale which arises from the mixing of both $\Lambda_T$ and $\Lambda_\CM$.
It only exists when both scales are present and is subtracted from the sum of the squares of the other three scales, giving us the total Wilsonian cut-off of the deformed theory. Using the results in -, we can generalise our discussion to include an arbitrary number of mass and thermal deformations. We simply write $h$ and $f$ as a product, $$\begin{aligned} h(r) = \prod_{i=1}^n h_i(r) & &\text{and}& & f(r) = \prod_{j=1}^m f_j(r),\end{aligned}$$ giving $$\begin{aligned} \label{GenBound3} \frac{\omega^2}{\prod_j f_j} - \vec{k}^2 \leq \frac{\prod_i h_i}{\rho^2} \left[ \nu^2 - \frac{d^2}{4} \left( 1-\prod_j f_j \right) \right] - \frac{d }{2 \rho} \left( \prod_{i,j} h_i f_j \right) \left( \sum_j \frac{\partial \ln f_j}{\partial \rho} + \frac{d}{2} \sum_i \frac{\partial \ln h_i}{\partial \rho} \right).\end{aligned}$$ Imagine now that we have a sequence of horizons at $r_1 < r_2 < ... < r_m$, so that in flowing from $r=0$ we cannot run past $r_1$. Writing $f_j = 1 - (r/r_j)^{p_j}$ and expanding around $u^2 = \alpha (r - r_1)$ gives the Hawking temperature, $$\begin{aligned} T = \frac{p_1 \sqrt{h(r_1)} \prod_{j=2}^{m}f_j (r_1)}{4\pi r_1},\end{aligned}$$ which can then be used to rewrite in terms of temperature $T(r_1)$ and the undeformed scale $\Lambda = \nu / \rho$, which always takes the same form. Lastly, it is essential to note that in order to have a well-defined, physical RG flow, the scale $\Lambda = \nu / \rho$ has to be real and positive. This condition is therefore always equivalent to the Breitenlohner-Freedman bound whereby $\nu$ also has to be real and positive. This shows that our construction of the Wilsonian RG is consistent with the AdS/CFT dictionary, and that the results are clearly probe-dependent.
Alternative interpretation of the double-trace flow and the c-theorem {#Sec:Alter} ===================================================================== In this section, we comment on alternative interpretations of the renormalisation group flow we have been analysing so far. We begin by returning to the structure of the holographic counter-terms in equation . In the limit of $\rho_0 \to 0$, we could neglect all terms proportional to the conformal anomaly, $\ln (\rho) \Phi \Box_g \Phi$, and cancel terms proportional to the Ricci curvature of the induced flat boundary metric, $R[g] \Phi^2$ [@deHaro:2000xn]. Notice that both terms appear with quadratic powers of $\Phi$. We could therefore reinterpret the running of $(\Delta_- + \gamma) \Phi^2$ as either one by absorbing them into $\gamma$. Firstly, imagine that we keep our asymptotically AdS metrics fixed, with flat boundary foliations, so that $R[g]=0$. The conformal anomaly only appears in an even number of boundary dimensions $d$. In that case, we reinterpret $\gamma$ as the coefficient in front of the conformal anomaly, which will be proportional to the expectation value of the boundary stress-energy tensor $\langle T^\mu_\mu \rangle$. Hence $\gamma$ can be interpreted as being proportional to the c-function. We saw in all examples that $\gamma$ was a monotonic function, which is therefore consistent with the c-theorem [@Zam:1986; @Cardy:1988; @Freedman:1999gp; @Komargodski:2011vj; @Myers:2010xs]. Imagine instead that we still have an asymptotically AdS bulk, but with the dual boundary theory living on a $d$-dimensional sphere, $S^d$. Its Ricci curvature, $R[g] = d (d-1) / \ell^2$, is inversely proportional to the square of its radius $\ell$. At AdS infinity, $\ell$ also goes to infinity so $R[g]$ vanishes. When we start flowing into the bulk, the running of $\gamma$ can now be interpreted as the running of the radius $\ell$.
And since $\gamma$ monotonically increases, the radius $\ell$ monotonically decreases. It is a convenient property of CFTs on $S^d$ that their central charge can be defined by the integral $c = \left\langle \int_{S^d} d^d x \sqrt{-g} T^\mu_\mu \right\rangle$ [@Gubser:2002vv; @Hartman:2006dy]. We can match the calculated function $\gamma$ with a monotonically increasing $R[g]$ and then use Einstein’s equations to recover the flow of $T^\mu_\mu$. This implies that the c-function can easily be determined throughout the flow, where it remains monotonic. The analysis is therefore again consistent with the c-theorem. The effective action with multi-trace deformations {#Sec:MultiTrace} ================================================== We now return to the effective Wilsonian action with a full set of multi-trace deformations. We expect the effective action to include all possible terms consistent with the symmetries of the theory. An infinite series of multi-trace deformations should therefore run under the RG flow, unless a reduced form of the effective action closes in on itself under successive integrations. In the case of a Gaussian action, for example, only double-trace deformations run. Even though, as argued in section \[Sec:RG3\], all triple- and higher-trace deformations are subleading in the large-$N$ limit, they generally turn on and contribute to the flow, in analogy with the Wilsonian $\epsilon$-expansion. We could safely neglect them in extracting the functional dependence of the cut-off on the bulk, but it is important to analyse their behaviour to further establish the Wilsonian nature of the RG procedure under consideration. Given the discussion above, we make the following claim: *the series of running terms in the effective action can either be quadratic or infinite*.
The only other cases could emerge when there is some special finely-tuned relation between the metric and the scalar potential, coming from some symmetry of the uncompactified, supergravity theory. We will now show that this is precisely what happens in our holographic construction of the Wilsonian RG, as governed by equation , and hence , , and . Equation was found to describe the Callan-Symanzik equation on the QFT side. Equation describes the flow of the anomalous dimension and, equivalently, the double-trace beta function. Furthermore, describes different multi-trace beta functions. Flows of different multi-trace couplings depend on each other, which is reflected in the system of coupled and . To prove this statement, let us assume that the series of terms $\lambda_n \Phi^n$ terminates at some $N$ in the effective action, such that $\lambda_{m} = 0$, for all $m \geq N+1$. We are left with a finite number of coupled differential equations , and , which we refer to by the $n^{th}$-order coefficient they describe. The flow of $\Pi$ is described by the $1^{st}$-order . We will not use the $0^{th}$-order equation , which gives the cosmological constant. The left-hand side of the Hamiltonian RG flow equation includes terms up to $\partial_\rho \left(\sqrt{-g} \lambda_N \Phi^N\right)$. The right-hand side, however, produces terms up to the order of $\Phi^{2N - 2}$, resulting from $\left( \delta S_B / \delta \Phi \right)^2$. Matching terms in , order-by-order in $\Phi$, thus gives $2N-2$ equations, disregarding the $0^{th}$-order flow of $\alpha$. The first $N$ are differential equations for $\lambda_n$, i.e. , whereas the remaining $N-2$ are algebraic and relate the non-zero $\lambda_{n\leq N}$ among themselves. They are $$\label{AlgEqMT} 0 = - \frac{n}{2} \sum_{m=2}^{n} \lambda_m \lambda_{n+2-m} + b_n,~\text{for}~N+1\leq n \leq 2N-2.$$ We can immediately see that all couplings in the scalar potential, with $n > 2N - 2$, must be equal to zero, $b_{n > 2N-2} = 0$. 
To study the constraints that equations impose on $\lambda_n$, we can recursively solve the system of algebraic equations by starting with $n=2N-2$. Since all $\lambda_{m \geq N+1} = 0$, terms $\lambda_2 \lambda_{2N-2} + \lambda_3 \lambda_{2N-3} + ... +\lambda_{N-1}\lambda_{N+1} $ vanish, leaving only $\lambda^2_N$ in the sum. Therefore, $$\label{LNasB} \lambda_N^2 = \frac{b_{2N-2}}{N-1},$$ which completely fixes the value of $\lambda_N$ by the last coupling in the potential, $b_{2N-2}$. At the order of $n=2N-3$, equation has a series $\lambda_2 \lambda_{2N-3} + ... + \lambda_{N-2} \lambda_{N+1}$ of vanishing terms and a non-zero $\lambda_{N-1} \lambda_N$. We find $\lambda_{N-1} \lambda_N = \frac{b_{2N-3}}{2N-3}$, which implies $$\lambda_{N-1} = \frac{b_{2N-3}}{2N-3} \sqrt{ \frac{N-1}{b_{2N-2}}}.$$ The coefficient $\lambda_{N-1}$ is completely fixed by the two couplings $b_{2N-3}$ and $b_{2N-2}$. It is easy to see that this behaviour recursively continues as we solve for other $\lambda_n$. At $n=2N-4$, we have non-zero $2 \lambda_{N-2} \lambda_N + \lambda_{N-1}^2$. Similarly, at each step in the recursive process, the undetermined, lowest order $\lambda_{N-l}$ couples only to $\lambda_N$. All other $\{\lambda_{N-l+1}, ... , \lambda_N\}$ are at that step already fixed by $\{b_{2N-2-l+1}, b_{2N-2-l+2}, ... ,b_{2N-2}\}$. We can, therefore, solve for $\lambda_{N-l}$ in terms of $b_{n\geq 2N-2-l+1}$ and $b_{2N-2-l}$, appearing in at the $(2N-2-l)^{th}$ order. Hence, all $\lambda_{N-l}$ are completely fixed by the set of couplings, $\{b_{2N-2-l}, b_{2N-2-l+1},...,b_{2N-2}\}$. Now, since we have $N-2$ equations , this process ends at $l=N-3$ with fixing $\lambda_3$ in terms of $\{b_{N+1}, b_{N+2},...,b_{2N-2} \}$. At the lowest order of , at $n=N+1$, there is the term $\lambda_2 \lambda_{N+1} = 0$, leaving $\lambda_2$ undetermined. 
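The recursive determination of the $\lambda_n$ described above is purely algebraic and can be sketched in code. The following Python snippet is an illustrative toy of our own (the function name and the convention of passing the potential couplings $b_n$ as a dictionary are assumptions, not the paper's notation); it solves the relations order by order, starting at $n = 2N-2$ and descending to $n = N+1$:

```python
import math

def solve_couplings(b, N):
    """Fix lambda_n (3 <= n <= N) from the algebraic relations
        0 = -(n/2) * sum_{m=2}^{n} lam[m] * lam[n+2-m] + b[n],
    for N+1 <= n <= 2N-2, with lam[m >= N+1] = 0.  Note lam[2] never
    enters, since it only ever multiplies vanishing couplings."""
    lam = {m: 0.0 for m in range(2, 2 * N)}   # lam[m >= N+1] stays zero
    for n in range(2 * N - 2, N, -1):         # n = 2N-2, 2N-3, ..., N+1
        l = n - N + 2                         # lowest undetermined coupling here
        if l == N:                            # top relation: lam_N^2 = b_{2N-2}/(N-1)
            lam[N] = math.sqrt(2.0 * b[n] / n)
        else:
            # terms lam_m * lam_{n+2-m} with l < m < N were fixed at earlier steps
            known = sum(lam[m] * lam[n + 2 - m] for m in range(l + 1, N))
            # lam_l appears as 2 * lam_l * lam_N (from m = l and m = N)
            lam[l] = (2.0 * b[n] / n - known) / (2.0 * lam[N])
    return lam
```

For example, with $N=5$ the three couplings $b_6, b_7, b_8$ uniquely fix $\lambda_3, \lambda_4, \lambda_5$, reproducing $\lambda_N = \sqrt{b_{2N-2}/(N-1)}$ at the top order, while $\lambda_2$ is left undetermined, exactly as in the argument above.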
The anomalous operator dimension $\gamma$ and, equivalently, the double-trace coupling are therefore not constrained by the algebraic equations . Coupling constants $b_n$ do not depend on $\rho$ as they are the coefficients of the classical scalar potential. If we worked with the full quantum field theory in the bulk, it is possible that they would receive renormalisation group contributions which could depend on $\rho$. We, however, keep the bulk theory classical in correspondence with the large-$N$ boundary theory. Similarly, $b_n$ cannot depend on momentum $k$. We have so far shown that the assumption of a finite series $\lambda_n \Phi^n$ in the effective action implies that all $\lambda_{3\leq n\leq N}$ are uniquely determined by $b_{ N+1 \leq n \leq 2N-2}$. They are therefore constant for all $3\leq n\leq N$, $\partial_\rho \lambda_{n} = 0$, and equal the initial values, $\lambda_{n} = a_{n}$, which are set by the holographic counter-terms. Equation now simplifies to give $$\label{DiffEqMT} \frac{1}{\sqrt{-G(\rho)}} \partial_\rho \left(\sqrt{-g(\rho)}\, \lambda_n \right) = - n \lambda_2 (\rho)\lambda_n - \frac{n}{2} \sum_{m=3}^{n-1} \lambda_m \lambda_{n+2-m} + n \lambda_{n+1} \frac{\Pi(\rho)}{\sqrt{-g(\rho)}} + b_n, ~\text{for}~ 3 \leq n \leq N.$$ When $n=N$, there is no $\lambda_{N+1} \Pi$ term, as $\lambda_{N+1} = 0$. Now, since $b_n$ are $k$-independent, so are $\lambda_n$, $\partial_k b_n = \partial_k \lambda_n = 0$, for $n\geq 3$. We should note that this depends on the fact that our probe scalar action included no derivatives higher than second order in the kinetic term. If it had, the multi-trace RG equations would have to include various momentum terms in momentum space. The bulk metric $G_{MN}$ is also independent of the scalar momentum $k$, hence $\partial_k G = \partial_k g = 0$. The only momentum dependence in therefore comes from $\lambda_2$ and $\Pi$, which both have to depend on $k$ due to and .
Differentiating with respect to $k$ gives $$\label{DiffEqMT2} \lambda_n \frac{\partial \lambda_2}{\partial k} = \lambda_{n+1} \frac{1}{\sqrt{-g}} \frac{\partial \Pi}{\partial k}, ~\text{for} ~3\leq n \leq N.$$ For $n=N$ with $\lambda_{N+1} = 0$, immediately gives $\lambda_N = 0$. Using then implies that $b_{2N-2} = 0$. But now since $\lambda_N = 0$, equation implies that $\lambda_{N-1} = 0$. We can therefore recursively see from that $\lambda_{N} = 0 \Rightarrow \lambda_{N-1} = 0 \Rightarrow \lambda_{N-2} = 0 \Rightarrow ... \Rightarrow \lambda_4 = 0 \Rightarrow \lambda_3 = 0$. As a result, all $b_n = 0$, for $N+1 \leq n \leq 2N-2$. Furthermore, from the full we can see that all $b_n = 0$, for $3 \leq n \leq N$, as well. Hence we have shown that, assuming $S_\text{B}$ had a finite number of terms $\lambda_n \Phi^n$, the RG flow equations enforce that *all* $\lambda_n$ and $b_n$, for $n \geq 3$, *be zero throughout the flow*. The only non-zero terms left are those that we studied in the context of the leading order in the large-$N$ limit, i.e. the double-trace sector: $\Pi$, $\lambda_2$ and $m$. We have, therefore, shown that indeed, as claimed, the Wilsonian effective bare boundary action can either be quadratic or infinite. Only the double-trace sector can close in on itself under the RG procedure, as is expected in the Wilsonian RG. This fact is also in agreement with the semiclassical path integral derivation of the Wilsonian renormalisation from holography in the work of [@Heemskerk:2010hk]. Lastly, the double-trace sector, with $N=2$, is also the only case where the number of equations describing the RG flow matches the number of running terms, i.e. $2N-2=N=2$. From our discussion, we see that only when working with the double-trace sector is it permissible to terminate the series of terms in the effective action.
Otherwise we must include all possible multi-trace terms, even if the holographic counter-terms vanish for certain powers of $\Phi$. We should still allow all terms to run and use the vanishing initial conditions for the flow of the zero holographic counter-terms. It is still possible that some of them will then remain zero throughout the flow. As argued before, additional symmetries could restrict the form of multi-trace couplings and make some of them vanish. Our analysis therefore confirms the Wilsonian nature of the renormalisation procedure we set up from holography. The full, infinite set of coupled differential equations which describe the Wilsonian RG flow is, however, in general extremely hard to solve in the absence of special symmetries in the problem. We could, in principle, recursively look for a solution in the following way, through purely algebraic manipulations. First express $\lambda_2$ in terms of $\Pi$ by using the $1^{st}$-order equation , giving us $\lambda_2 = - \frac{\sqrt{G^{rr}}}{2} \frac{\partial \ln \Pi}{\partial \rho}$. Then insert $\lambda_2 (\Pi)$ into the $2^{nd}$-order equation , to obtain $\lambda_3 (\Pi)$. At the $3^{rd}$-order in , we use both $\lambda_2 (\Pi)$ and $\lambda_3 (\Pi)$ to recover $\lambda_4(\Pi)$, and then continue with the same procedure. At each recursive step we can express the next $\lambda_n$ in terms of $\Pi$, $G_{MN}$ and their derivatives. This process continues infinitely many times, and we can hope to uncover some symmetry or a systematic progression between these extremely complicated expressions, whose complexity grows at each higher order in $\lambda_n$. There is an interesting and powerful simplification of the renormalisation group equations that we now consider as the final part of the multi-trace analysis. We begin with the argument from section \[Sec:RG3\], that all multi-trace interactions higher than double-trace are subleading in the large-$N$ limit.
As a result, we could neglect the $\lambda_3 \Pi$ term in the double-trace equation at the leading order in $N$. Note, of course, that $N$ in this final discussion means the number of the gauge group colours, or matrix elements, and not the $N$ that we have so far been using to indicate the $N^{th}$-order multi-trace coupling at which we terminated the effective action series. Now, we can similarly imagine a generalisation of this argument to a scenario where the $(n+1)$-point vertices in Feynman diagrams, which are proportional to $\lambda_{n+1}$, contribute only at a subleading order to the computation of the $\lambda_n$ beta function. We can therefore recover a partial re-summation of the diagrams that contribute to the beta function of $\lambda_n$ by neglecting $\lambda_{n+1} \Pi$ in . We get $$\label{MTlargeN} \frac{1}{\sqrt{-G}} \partial_\rho \left( \sqrt{-g} \lambda_n \right) = - n \lambda_2 \lambda_n - F_n \left[\lambda_3,...,\lambda_{n-1}\right] + b_n,$$ where $F_n \left[\lambda_3,...,\lambda_{n-1}\right] = \frac{n}{2} \sum_{m=3}^{n-1} \lambda_m \lambda_{n+2-m}$, and is independent of $\lambda_n$. These first-order differential equations can be solved analytically through recursive integration, at least formally, for all multi-trace couplings $\lambda_n$. The solutions of are $$\begin{aligned} \label{MTnSol} \lambda_n = a_n e^{- \!\! \int\limits_{r=\rho_0}^{r=\rho} \!\!\! L_n(r) d\ln r} \!\! + e^{- \!\! \int\limits_{r=1}^{r=\rho} \!\! L_n(r) d\ln r} \!\! \int\limits_{r=\rho_0}^{r=\rho} \!\! \left( b_n - F_n(r)\right) \exp\left\{ \;\; \int\limits_{r'=1}^{r'=r} \!\!\! L_n(r') d\ln r'\right\} d\ln r ,\end{aligned}$$ where we have defined $$L_n [\lambda_2, g] (\rho) \equiv n \lambda_2 (\rho) + \frac{1}{2}\frac{\partial \ln g(\rho)}{\partial \ln \rho},$$ and used the Anti-de Sitter-like $G_{rr} = 1 / r^2$ to simplify the form of the solution. We could have also solved for a general function $G_{rr}$, without any additional complications.
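As a sanity check on the closed-form solution above, one can integrate the linearised flow numerically. The sketch below is a toy of our own making (constant $L_n$ and constant source $b_n - F_n$, with $\ln r$ as the flow variable, as for the AdS-like $G_{rr} = 1/r^2$); it compares a forward-Euler integration of $d\lambda_n/d\ln r = -L_n \lambda_n + (b_n - F_n)$ against the integrating-factor solution:

```python
import math

def flow_exact(a, L, src, rho0, rho):
    """Integrating-factor solution of d(lam)/d(ln r) = -L*lam + src,
    with boundary condition lam(rho0) = a (L and src held constant)."""
    x = (rho0 / rho) ** L                 # exp(-L * ln(rho/rho0))
    return a * x + (src / L) * (1.0 - x)

def flow_euler(a, L, src, rho0, rho, steps=200_000):
    """Forward-Euler integration of the same flow in t = ln r."""
    t0, t1 = math.log(rho0), math.log(rho)
    dt = (t1 - t0) / steps
    lam = a
    for _ in range(steps):
        lam += dt * (-L * lam + src)
    return lam
```

The two agree to the accuracy of the Euler step, and the flow relaxes towards the fixed point $(b_n - F_n)/L_n$, illustrating that the formal expression is indeed the solution of the first-order flow equation.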
At the $n^{th}$-order, all $\{\lambda_2,...,\lambda_{n-1}\}$ are already known, so we can insert these functions in solution , and integrate to obtain $\lambda_n$. We used $\rho_0$ to indicate the brane position where we specify the boundary conditions, $\lambda_n = a_n$. Due to our construction, we can then take $\rho_0 \to 0$ and obtain finite, well-defined results for the recursive solution of the full set of running multi-trace couplings. Summary and outlook {#Sec:Sum} =================== In this paper, we proposed a systematic procedure for finding a precise functional dependence of the Wilsonian cut-off scale, in theories with scalar operators, on the bulk quantities. This procedure can be applied to a wide range of theories. To establish the relation between the bulk and the field theory cut-off scale, we constructed the effective running bare boundary action from the structure of the holographic counter-terms. Specifically, using a combination of the counter-terms and the subtracted action, from which the renormalised correlators are calculated, we rewrote the bare action and allowed for various terms in it to run. Since the counter-terms must always transform under the bulk isometries, the structure is consistent with the expectation that all additional terms in the Wilsonian effective action must be invariant under the symmetries of the original theory. The isometries of the bulk are in fact dual to the symmetries of the boundary QFT. The construction depended crucially on working with asymptotically AdS spacetimes, as the counter-term structure simplifies significantly in the AdS limit. We only focused on scalar bulk actions with dual scalar operators. However, this construction can be applied to any other theory once all the relevant holographic counter-terms are known. Of particular interest are the bulk fields with higher spins, which are dual to anomalous globally conserved boundary operators. 
Operators without anomalous symmetries are not expected to run under RG transformations. Interesting examples of such anomalous operators are the stress-energy tensor dual to the bulk graviton, which runs as a result of the conformal anomaly, and the R-symmetry current dual to the bulk vector field, which runs due to the triangle anomaly. A detailed analysis of the Wilsonian cut-off scale dependence on bulks with various types of bulk fields was not the main purpose of this paper. We leave that work, in particular the case of dynamical gravity, for future investigation. The renormalisation group equations were derived by integrating out thin slabs of geometry, as in [@Faulkner:2010jy], starting from the AdS infinity. The Hamilton-Jacobi equation in the bulk, with the radial direction replacing time, then evolved the effective action. Using the standard (Dirichlet) quantisation, where the conjugate canonical momentum is proportional to the expectation value of the boundary operator $\langle \CO \rangle$, we made connection with the Callan-Symanzik equation of the two-point function. We introduced the wavefunction renormalisation $Z$, which transformed under the sliding of the brane, $\rho \to \rho + \delta \rho$, and induced a rescaling transformation on the QFT side of the duality. By noting that fully renormalised two-point functions must be invariant under the RG transformations, we could connect $Z$ with the anomalous operator dimension $\gamma$, coming from the running of the counter-term $\Delta_- \Phi^2$. The RG flow of the anomalous dimension was shown to coincide with the flow of the double-trace deformation, as expected in large-$N$ theories. We imposed renormalisation conditions at the operator scale $\mu = \sqrt{-k^2}$, as well as physical conditions on the behaviour of anomalous dimensions, such as reality, non-singularity and their having to obey the unitarity bound. 
The first renormalisation condition was the cut-off independence of the anomalous dimension, i.e. $\partial_\rho \gamma = 0$ at the RG scale $\mu = \sqrt{-k^2}$. The second condition set the value of the anomalous dimension to $\gamma = \nu$ when the running cut-off $\Lambda$ reached the RG scale. It followed directly from our imposing the following condition on the value of a bare two-point function at the observational RG scale. Namely, since no IR divergences were present in our theories, we demanded that the bare correlation function should equal its initial value with the cut-off $\Lambda_0$ in the extreme UV, where holographic renormalisation is performed. Equivalently, the overall rescaling wavefunction renormalisation of $\CO$, where the flow terminates, had to be equal to one. We then considered several examples, and a precise correspondence between the cut-off and the bulk was established. It is interesting to note that the hard Wilsonian cut-off was Lorentzian, contrary to the usual field theory examples. We were therefore effectively integrating out momenta below a running hyperbola in a light-cone diagram. And although such a cut-off may not be sufficient for removing infinities from loop integrals, it makes perfect physical sense. It is completely consistent with the expectation that there should exist a relativistic ordering of physical phenomena according to their invariant length scales. The Euclidean IR/UV hierarchy with a separation of energy and momentum then mixes and a new relativistic hierarchy may emerge. In the simplest scenario of the $\CN = 4$ theory with a pure AdS bulk dual, we confirmed that $\Lambda \propto 1 / r$. Our construction also allowed us to find the precise probe-dependent proportionality constant, $\nu$, which could be changed by coordinate transformations in the bulk. This nicely shows how details of the boundary theory depend on the bulk coordinates. 
It also speaks to the fact that our results are probe-dependent and future work will be necessary to attempt to find a complete probe-independent Wilsonian description of the dual QFT. Next, we added an IR mass deformation that broke the theory down to $\CN=1$, i.e. the GPPZ flow. We saw that the square of the overall Wilsonian cut-off equaled $\Lambda^2 + \Lambda_\CM^2$, where $\Lambda$ was the undeformed Wilsonian scale and $\Lambda_\CM$ the mass deformation scale. We could also see the existence of a mass gap at the end of the flow. In the case of $\CN=4$ at finite temperature, similarly, the square of the Wilsonian cut-off equaled $\Lambda^2 + \Lambda_T^2$, where the thermal scale $\Lambda^2_T$ depended on $(T/\Lambda)^d$ and $\Lambda^2$. In this theory, a mass gap emerged at the horizon. An interesting feature we found was that the thermal scale in $d=2$ boundary dimensions, where it was independent of $\Lambda$ (and therefore $\rho$), equaled $\Lambda^2_T$ in any number of dimensions when the boundary was pushed to the horizon. This suggests a dimensional reduction of the effective IR CFT scaling, reminiscent of the studies of the emergent two-dimensional Virasoro algebras of asymptotic symmetries describing black hole horizons [@hep-th/9812013; @hep-th/9812056]. We compared the thermal case with a charged massive black brane, which gives rise to finite temperature and density of the boundary QFT. We found that at the horizon the Wilsonian scale involved a mixing of two temperatures: one at zero and another at non-zero density of the system. In two dimensions, the deformation scale equaled the purely thermal scale from the example at zero density. It did not, however, exactly match the deformation scale at the horizon in arbitrary $d$. Further study will be required to understand the physics of these IR CFTs. We also analysed the near-extremal M2 and M5 brane backgrounds, which are special cases of the massive $d$-dimensional black brane.
All results in the two M-theory backgrounds could therefore be directly obtained from the black brane analysis. Finally, we generalised our discussion to a metric with both a horizon, inducing Lorentz violating temperature or density, and a Lorentz invariant deformation. The latter could be a relevant, marginal or irrelevant term with some mass dimension of the coupling. We saw that the overall Wilsonian cut-off equaled the sum of the squares of the three scales, minus a mixing term which only existed when there were two or more deformations in the theory. We also showed how our generalised discussion could be extended to include an arbitrary number of mass deformations and horizons. It would be particularly interesting to further look at cases where horizons are represented by smooth functions and hence dual to smooth scales. The flow could then pass through them. We leave this problem for future investigation. At the end of the section we commented on the fact that, for our construction to be well-defined and physical, the running undeformed scale had to be real and positive. This consistency condition turned out to be precisely the Breitenlohner-Freedman bound on the scalar probe masses. We then went on to briefly comment on how the flow of the anomalous dimension (or the double-trace deformation) could be reinterpreted as the flow of the conformal anomaly, or alternatively the Ricci curvature of the boundary manifold. In our example, we used the $S^d$ manifold. Both cases were consistent with the c-theorem, which was ensured by the monotonicity of $\gamma$ throughout the flow. Further study, in particular of various asymptotically AdS manifolds with curved boundary foliations, is kept for future work. In the last section, we studied the constructed Wilsonian effective bare boundary action with a full, infinite set of multi-trace terms.
In order to provide further evidence, beyond the existence of a momentum cut-off, that the renormalisation group procedure under consideration is truly Wilsonian, we showed that the effective action must either be quadratic (double-trace) or have an infinite series of multi-trace terms. This was shown by analysing the full set of the renormalisation group equations, which we had derived from holography: the Callan-Symanzik equation, the flow of the anomalous dimension and the double-trace beta function, and the infinite set of multi-trace beta functions. The assumption that the bare boundary action $S_\text{B}$ only included a finite number of terms automatically led us to the conclusion that all multi-trace couplings $\lambda_{n}$, for $n\geq 3$, had to be equal to zero throughout the entire RG flow. As a result, all coupling constants in the scalar potential also had to be zero, with the exception of the mass. We therefore showed that only the double-trace sector could close in on itself under successive integrations, as in a Gaussian field theory. Otherwise, in the absence of special finely-tuned symmetries, an infinite series of all possible multi-trace terms permitted by symmetries had to be present in the effective action. These results are therefore completely consistent with the Wilsonian renormalisation group in quantum field theory. Lastly, we commented on possible recursive procedures for obtaining analytic solutions to the full set of the multi-trace flows.

[99]{} J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity,” Adv. Theor. Math. Phys.  [**2** ]{} (1998) 231-252. \[hep-th/9711200\]. S. S. Gubser, I. R. Klebanov, A. M. Polyakov, “Gauge theory correlators from noncritical string theory,” Phys. Lett.  [**B428** ]{} (1998) 105-114. \[hep-th/9802109\]. E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys.  [**2** ]{} (1998) 253-291. \[hep-th/9802150\]. L. Susskind and E.
Witten, “The Holographic bound in anti-de Sitter space,” arXiv:hep-th/9805114. A. W. Peet and J. Polchinski, “UV / IR relations in AdS dynamics,” Phys. Rev.  D [**59**]{} (1999) 065011 \[arXiv:hep-th/9809022\]. E. T. Akhmedov, “A Remark on the AdS / CFT correspondence and the renormalization group flow,” Phys. Lett. B [**442**]{} (1998) 152 \[hep-th/9806217\]. L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, “Novel local CFT and exact results on perturbations of N=4 superYang Mills from AdS dynamics,” JHEP [**9812**]{} (1998) 022 \[arXiv:hep-th/9810126\]. J. Distler, F. Zamora, “Nonsupersymmetric conformal field theories from stable anti-de Sitter spaces,” Adv. Theor. Math. Phys.  [**2** ]{} (1999) 1405-1439. \[hep-th/9810206\]. V. Balasubramanian, P. Kraus, “Space-time and the holographic renormalization group,” Phys. Rev. Lett.  [**83** ]{} (1999) 3605-3608. \[hep-th/9903190\]. J. de Boer, E. P. Verlinde and H. L. Verlinde, “On the holographic renormalization group,” JHEP [**0008**]{} (2000) 003 \[arXiv:hep-th/9912012\]. V. Balasubramanian, P. Kraus, A. E. Lawrence, S. P. Trivedi, “Holographic probes of anti-de Sitter space-times,” Phys. Rev.  [**D59** ]{} (1999) 104021. \[hep-th/9808017\]. D. Z. Freedman, S. S. Gubser, K. Pilch, N. P. Warner, “Renormalization group flows from holography supersymmetry and a c theorem,” Adv. Theor. Math. Phys.  [**3** ]{} (1999) 363-417. \[hep-th/9904017\]. J. de Boer, “The Holographic renormalization group,” Fortsch. Phys.  [**49** ]{} (2001) 339-358. \[hep-th/0101026\]. K. G. Wilson and J. B. Kogut, “[The Renormalization group and the epsilon expansion]{},” [[*Phys. Rept.*]{} [ **12**]{} (1974) 75–200]{}. F. J. Wegner and A. Houghton, “[Renormalization group equation for critical phenomena]{},” [[*Phys.Rev.*]{} [**A8**]{} (1973) 401–412]{}. K. G. Wilson, “[The renormalization group and critical phenomena]{},” [[*Rev. Mod. Phys.*]{} [ **55**]{} (1983) 583–600]{}. J. 
Polchinski, “[Renormalization and Effective Lagrangians]{},” [[*Nucl. Phys.*]{} [ **B231**]{} (1984) 269–295]{}. J. Polonyi, “Lectures on the functional renormalization group method,” Central Eur. J. Phys.  [**1** ]{} (2003) 1-71. \[hep-th/0110026\]. T. Faulkner, H. Liu, M. Rangamani, “Integrating out geometry: Holographic Wilsonian RG and the membrane paradigm,” JHEP [**1108** ]{} (2011) 051. \[arXiv:1010.4036 \[hep-th\]\]. I. Heemskerk, J. Polchinski, “Holographic and Wilsonian Renormalization Groups,” JHEP [**1106** ]{} (2011) 031. \[arXiv:1010.1264 \[hep-th\]\]. S. -J. Sin, Y. Zhou, “Holographic Wilsonian RG Flow and Sliding Membrane Paradigm,” JHEP [**1105** ]{} (2011) 030. \[arXiv:1102.4477 \[hep-th\]\]. D. Harlow, D. Stanford, “Operator Dictionaries and Wave Functions in AdS/CFT and dS/CFT,” \[arXiv:1104.2621 \[hep-th\]\]. J. Fan, “Effective AdS/renormalized CFT,” JHEP [**1109** ]{} (2011) 136. \[arXiv:1105.0678 \[hep-th\]\]. E. T. Akhmedov, I. B. Gahramanov and E. T. Musaev, “Hints on integrability in the Wilsonian/holographic renormalization group,” JETP Lett.  [**93**]{} (2011) 545 \[arXiv:1006.1970 \[hep-th\]\]. D. Radicevic, “Connecting the Holographic and Wilsonian Renormalization Groups,” JHEP [**1112**]{} (2011) 023 \[arXiv:1105.5825 \[hep-th\]\]. D. Elander, H. Isono and G. Mandal, “Holographic Wilsonian flows and emergent fermions in extremal charged black holes,” JHEP [**1111**]{} (2011) 155 \[arXiv:1109.3366 \[hep-th\]\]. J. N. Laia and D. Tong, “Flowing Between Fermionic Fixed Points,” JHEP [**1111**]{} (2011) 131 \[arXiv:1108.2216 \[hep-th\]\]. I. Bredberg, C. Keeler, V. Lysov, A. Strominger, “Wilsonian Approach to Fluid/Gravity Duality,” JHEP [**1103** ]{} (2011) 141. \[arXiv:1006.1902 \[hep-th\]\]. S. -S. Lee, “Holographic description of quantum field theory,” Nucl. Phys.  [**B832** ]{} (2010) 567-585. \[arXiv:0912.5223 \[hep-th\]\]. S. -S. Lee, “Holographic description of large N gauge theory,” Nucl. Phys.  [**B851** ]{} (2011) 143-160. 
\[arXiv:1011.1474 \[hep-th\]\]. M. F. Paulos, “Holographic phase space: $c$-functions and black holes as renormalization group flows,” JHEP [**1105**]{} (2011) 043 \[arXiv:1101.5993 \[hep-th\]\]. S. Kuperstein and A. Mukhopadhyay, “The unconditional RG flow of the relativistic holographic fluid,” JHEP [**1111**]{} (2011) 130 \[arXiv:1105.4530 \[hep-th\]\]. S. de Haro, S. N. Solodukhin and K. Skenderis, “Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence,” Commun. Math. Phys.  [**217**]{} (2001) 595 \[arXiv:hep-th/0002230\]. K. Skenderis, “Lecture notes on holographic renormalization,” Class. Quant. Grav.  [**19** ]{} (2002) 5849-5876. \[hep-th/0209067\]. M. Bianchi, D. Z. Freedman and K. Skenderis, “Holographic renormalization,” Nucl. Phys.  B [**631**]{} (2002) 159 \[arXiv:hep-th/0112119\]. K. Skenderis, B. C. van Rees, “Real-time gauge/gravity duality: Prescription, Renormalization and Examples,” JHEP [**0905** ]{} (2009) 085. \[arXiv:0812.2909 \[hep-th\]\]. E. Witten, “Multitrace operators, boundary conditions, and AdS / CFT correspondence,” \[hep-th/0112258\]. I. R. Klebanov, E. Witten, “AdS / CFT correspondence and symmetry breaking,” Nucl. Phys.  [**B556** ]{} (1999) 89-114. \[hep-th/9905104\]. I. Papadimitriou, “Multi-Trace Deformations in AdS/CFT: Exploring the Vacuum Structure of the Deformed CFT,” JHEP [**0705** ]{} (2007) 075. \[hep-th/0703152\]. L. Vecchi, “Multitrace deformations, Gamow states, and Stability of AdS/CFT,” JHEP [**1104** ]{} (2011) 056. \[arXiv:1005.4921 \[hep-th\]\]. E. T. Akhmedov, “Notes on multitrace operators and holographic renormalization group,” hep-th/0202055. W. Mueck, “An Improved correspondence formula for AdS / CFT with multitrace operators,” Phys. Lett.  B [**531**]{} (2002) 301 \[arXiv:hep-th/0201100\]. P. Minces, “Multitrace operators and the generalized AdS / CFT prescription,” Phys. Rev.  [**D68** ]{} (2003) 024027. \[hep-th/0201172\]. T. Hartman, L. 
Rastelli, “Double-trace deformations, mixed boundary conditions and functional determinants in AdS/CFT,” JHEP [**0801** ]{} (2008) 019. \[hep-th/0602106\]. S. S. Gubser, I. R. Klebanov, “A Universal result on central charges in the presence of double trace deformations,” Nucl. Phys.  [**B656** ]{} (2003) 23-36. \[hep-th/0212138\]. P. Mansfield, D. Nolland, “One loop conformal anomalies from AdS / CFT in the Schrodinger representation,” JHEP [**9907** ]{} (1999) 028. \[hep-th/9906054\]. D. Brattan, J. Camps, R. Loganayagam and M. Rangamani, “CFT dual of the AdS Dirichlet problem : Fluid/Gravity on cut-off surfaces,” JHEP [**1112**]{} (2011) 090 \[arXiv:1106.2577 \[hep-th\]\]. I. Papadimitriou and K. Skenderis, “AdS / CFT correspondence and geometry,” hep-th/0404176. I. Papadimitriou and K. Skenderis, “Correlation functions in holographic RG flows,” JHEP [**0410**]{} (2004) 075 \[hep-th/0407071\]. N. Iqbal, H. Liu, “Universality of the hydrodynamic limit in AdS/CFT and the membrane paradigm,” Phys. Rev.  [**D79** ]{} (2009) 025023. \[arXiv:0809.3808 \[hep-th\]\]. D. T. Son and A. O. Starinets, “Minkowski space correlators in AdS / CFT correspondence: Recipe and applications,” JHEP [**0209**]{} (2002) 042 \[arXiv:hep-th/0205051\]. C. P. Herzog, D. T. Son, “Schwinger-Keldysh propagators from AdS/CFT correspondence,” JHEP [**0303** ]{} (2003) 046. \[hep-th/0212072\]. C. Fefferman and C. Robin Graham, “Conformal Invariants,” in Élie Cartan et les Mathématiques d’aujourd’hui (Astérisque, 1985) 95. L. Girardello, M. Petrini, M. Porrati, A. Zaffaroni, “The Supergravity dual of N=1 superYang-Mills theory,” Nucl. Phys.  [**B569** ]{} (2000) 451-469. \[hep-th/9909047\]. M. Porrati, A. Starinets, “On the canonical c function in 4-d field theories possessing supergravity duals,” Phys. Lett.  [**B498** ]{} (2001) 285-294. \[hep-th/0009227\]. M. Porrati and A. Starinets, “Holographic duals of 4-D field theories,” arXiv:hep-th/0009198. E. Pomoni and L.
Rastelli, “Large N Field Theory and AdS Tachyons,” JHEP [**0904**]{} (2009) 020 \[arXiv:0805.2261 \[hep-th\]\]. L. Vecchi, “The Conformal Window of deformed CFT’s in the planar limit,” Phys. Rev.  [**D82** ]{} (2010) 045013. \[arXiv:1004.2063 \[hep-th\]\]. P. Breitenlohner, D. Z. Freedman, “Stability in Gauged Extended Supergravity,” Annals Phys.  [**144** ]{} (1982) 249. C. G. Callan, Jr., S. R. Coleman, R. Jackiw, “A New improved energy - momentum tensor,” Annals Phys.  [**59** ]{} (1970) 42-73. K. Higashijima, E. Itou, “Unitarity bound of the wave function renormalization constant,” Prog. Theor. Phys.  [**110** ]{} (2003) 107-114. \[hep-th/0304047\]. M. E. Peskin, D. V. Schroeder, “An Introduction to quantum field theory,” Reading, USA: Addison-Wesley (1995) 842 p. T. Faulkner, H. Liu, J. McGreevy, D. Vegh, “Emergent quantum criticality, Fermi surfaces, and AdS(2),” Phys. Rev.  [**D83** ]{} (2011) 125002. \[arXiv:0907.2694 \[hep-th\]\]. H. Lu, J. -w. Mei, C. N. Pope and J. F. Vazquez-Poritz, “Extremal Static AdS Black Hole/CFT Correspondence in Gauged Supergravities,” Phys. Lett. B [**673**]{} (2009) 77 \[arXiv:0901.1677 \[hep-th\]\]. S. Carlip, “Black hole entropy from conformal field theory in any dimension,” Phys. Rev. Lett.  [**82**]{} (1999) 2828 \[hep-th/9812013\]. S. N. Solodukhin, “Conformal description of horizon’s states,” Phys. Lett. B [**454**]{} (1999) 213 \[hep-th/9812056\]. E. Witten, “Anti-de Sitter space, thermal phase transition, and confinement in gauge theories,” Adv. Theor. Math. Phys.  [**2**]{} (1998) 505 \[hep-th/9803131\]. A.B. Zamolodchikov, “Irreversibility of the Flux of the Renormalization Group in a 2D Field Theory,” JETP Lett.  [**43**]{}, 730 (1986). \[Pisma Zh. Eksp. Teor. Fiz.  [**43**]{}, 565 (1986)\]. J.L. Cardy, “Is There a c Theorem in Four-Dimensions?,” Phys. Lett.  B [**215**]{}, 749 (1988). Z. Komargodski and A. 
Schwimmer, “On Renormalization Group Flows in Four Dimensions,” JHEP [**1112**]{} (2011) 099 \[arXiv:1107.3987 \[hep-th\]\]. R. C. Myers and A. Sinha, “Seeing a c-theorem with holography,” Phys. Rev.  D [**82**]{} (2010) 046006 \[arXiv:1006.1263 \[hep-th\]\]. [^1]: We thank the JHEP referee for pointing this out. [^2]: We thank Janos Polonyi for ideas and discussions on various issues related to the interpretation of a Lorentzian cut-off. [^3]: We thank Mukund Rangamani for bringing the work of Carlip and Solodukhin to our attention.
--- abstract: 'We present a detailed study of the spectral properties of a locally correlated site embedded in a BCS superconducting medium. To this end the Anderson impurity model with a superconducting bath is analysed by numerical renormalisation group (NRG) calculations. We calculate one- and two-particle dynamic response functions to elucidate the spectral excitations and the nature of the ground state for different parameter regimes with and without particle-hole symmetry. The position and weight of the Andreev bound states are given for all relevant parameters. We also present phase diagrams for the different ground state parameter regimes. This work is also relevant for dynamical mean field theory extensions with superconducting symmetry breaking.' address: - '$^1$Department of Mathematics, Imperial College, London SW7 2AZ, UK' - '$^2$Department of Material Science, Osaka City University, Sumiyoshi-ku, Osaka 558-8585 Japan' author: - 'J Bauer$^1$, A Oguri$^2$ and A C Hewson$^1$' bibliography: - 'artikel.bib' - 'biblio1.bib' title: Spectral properties of locally correlated electrons in a BCS superconductor --- Introduction ============ As described by Bardeen, Cooper and Schrieffer (BCS) [@BCS57], electrons in condensed matter with an attractive interaction assume a superconducting state below a critical temperature, referred to as the BCS state. In this state electrons with antiparallel spins form singlet bound states ($S=0$) known as Cooper pairs. This pair formation is a fermionic many-body phenomenon as it relies on the existence of a Fermi surface [@Coo56]. A singlet ground state due to many-body effects also occurs in a quite different situation, when a magnetic impurity is embedded in a metallic host [@Kon64; @hewson]. This state, known as a Kondo singlet, occurs because the electrons in the metal at low temperature experience a large effective coupling to the localised impurity spin.
As a consequence it is energetically favourable to screen the local moment, resulting in a (Kondo) singlet state ($S=0$). BCS superconductivity and the Kondo effect are important topics in their own right, and have been extensively studied by the condensed matter physics community. The interplay and competition of these two effects have also attracted a lot of interest, because metals with magnetic impurities can be superconducting at low temperatures [@AG61; @ZM70; @Zit70; @MZ71; @Shi73; @Mat77]. The problem of dealing with the two effects together is complicated, because the magnetic impurities have a disruptive effect on the BCS superconducting state and the Kondo singlet formation leads to a breaking of the Cooper pairs. For a recent review on this topic we refer to [@BVZ06] and references therein. Here we address a particular aspect of the problem which has not so far received much attention: the effect of the superconductivity on the local spectral properties of the impurity. As in earlier studies, we take the BCS superconductor as a fixed reference system and take as a model for the impurity an interacting Anderson model. We employ the numerical renormalisation group method (NRG), which is a reliable approach to calculate low temperature spectral functions. From earlier studies of this model, we know that if the interaction $U$ at the impurity site is weak, the ground state is dominated by the superconducting behaviour and the singlet is predominantly a superconducting one. However, if there is a strong repulsion at the impurity site, such that single occupation is favoured, we have a situation where a single spin is coupled to the superconducting medium. If the superconducting gap $\Delta_{\rm sc}$ is very small then, similarly to the case with a normal, metallic bath, the ground state is a singlet, more specifically a Kondo singlet.
If this gap is increased, however, it is not possible to form a Kondo singlet, due to the lack of states in the vicinity of the Fermi level, and the ground state becomes a doublet ($S=1/2$), corresponding to an unscreened spin at the impurity site. This ground state transition at zero temperature is an example of a quantum phase transition which occurs for a level crossing that depends on a system parameter [@sachdev]. The relevant energy scales for this singlet-doublet transition to occur in the Kondo regime are the Kondo temperature $T_{\rm K}$ and the superconducting gap $\Delta_{\rm sc}$. There have been numerical renormalisation group (NRG) studies for the Kondo model [@SSSS92; @SSSS93] and Anderson model [@YO00] with superconducting bath. In these works the estimate for the ground state transition is given by $T_{\rm K}/\Delta_{\rm sc}\simeq 0.3$, i.e. for $T_{\rm K}/\Delta_{\rm sc}> 0.3$ we have a singlet ground state ($S=0$) whilst for $T_{\rm K}/\Delta_{\rm sc}< 0.3$ the ground state is a doublet. We can also consider the transition for a fixed value of $\Delta_{\rm sc}$ and increasing values of the local interaction $U$. In this case, as $U$ increases in the local moment regime, $T_{\rm K}$ decreases until the singlet to doublet transition occurs at a critical value $U=U_{c}$. Due to the proximity effect there is an induced symmetry breaking on the impurity site. As a consequence localised excited states (LES) inside the superconducting gap can be induced at the impurity site. Such states are well known from superconductor-normal-superconductor (SNS) junctions and are usually called Andreev bound states. For a weak on-site interaction the ground state of the system is usually a superconducting singlet ($S=0$) and the LES is an $S=1/2$ excitation. It is found that at the ground state transition the bound state energy of the LES becomes zero as measured from the centre of the gap. This is related to the fact that the level crossing occurs there. 
In recent years detailed measurements on quantum dot structures have enabled one to probe strong correlation effects [@GSMAMK98; @COK98]. In these experiments a quantum dot is coupled to two leads, which can be superconducting. In such situations finite voltage induced currents [@RBT95; @BNS02; @BBNBBS03; @VBMSY04] and Josephson currents [@DNBFK06], induced by a phase difference, were observed experimentally. For a theoretical description of this situation it is important to characterise the Andreev bound states in the gap accurately. Much of the more recent theoretical work [@RA99; @Mat01; @VMY03; @SE04; @CLKB04; @OTH04; @TOH07] focuses on a quantum dot embedded in two superconducting baths with different (complex) superconducting order parameters. These situations with two channels and with Josephson or nonequilibrium currents will, however, not be covered in this paper. For the analysis presented here, which focuses on the spectral properties of locally correlated electrons in the superconducting bath, we use the NRG approach. We start by outlining some of the details of the NRG calculation with a superconducting medium in section 2. We also describe an analysis of the Andreev bound states in the gap in terms of renormalised parameters, and discuss the limit of a large gap. In section 3 we present results first for the model with particle-hole symmetry. For low energies within the superconducting gap we calculate the position and weight of the LES and also give the values for the induced anomalous on-site correlation. We also present singlet-doublet ground state phase diagrams for the symmetric and non-symmetric cases. The study is based on numerical renormalisation group (NRG) calculations, which are capable of describing the full parameter range from weak to strong coupling reliably. There have been a number of NRG studies of this situation in the past [@SSSS92; @SSSS93; @YO00; @CLKB04].
However, the dynamic response functions have not been addressed in a satisfactory way. Here we present a thorough study of ground state and spectral properties, which will also be of interest for cases where the AIM is used as an effective model for superconductivity in the dynamical mean field theory (DMFT) framework. The Anderson model with superconducting medium {#sec:aimsc} ============================================== In the following we consider the Anderson impurity model (AIM) in the form $$H=H_{d}+H_{\rm mix}+H_{\rm sc}. \label{aimsc}$$ The local part $H_{d}$, which describes an impurity or quantum dot, is given as usual by $$H_{ d}= \sum_{{\sigma}}({\varepsilon}_{d}+\frac12U) { c^{\dagger}_{d,\sigma}} { c^{}_{d,\sigma}} + \frac12U\left(\sum_{\sigma}{ c^{\dagger}_{d,{\sigma}}} { c^{}_{d,{\sigma}}}-1\right)^2$$ with the impurity level ${\varepsilon}_{d}$ and an on-site interaction with strength $U$. Also the mixing term has the usual form, $$\begin{aligned} H_{\rm mix} =\sum_{{{\bm k}},{\sigma}}V({ c^{\dagger}_{{{\bm k}},\sigma}}{ c^{}_{d,\sigma}} + {\mathrm{h.c.}}).\end{aligned}$$ We define $\Gamma=\pi V^2\rho_c$ as the energy scale for hybridisation, where $\rho_c=1/2D$ is the constant band density of states of a flat band without superconducting symmetry breaking. The superconducting medium is given in a BCS mean field form $$\begin{aligned} H_{\rm sc}=\sum_{{{\bm k}},{\sigma}}{\varepsilon}_{{{\bm k}}} { c^{\dagger}_{{{\bm k}},\sigma}} { c^{}_{{{\bm k}},\sigma}} - \Delta_{\rm sc}\sum_{{{\bm k}}}[{ c^{\dagger}_{{{\bm k}},\uparrow}} { c^{\dagger}_{-{{\bm k}},\downarrow}} +{\mathrm{h.c.}}], \label{hscham}\end{aligned}$$ where $\Delta_{\rm sc}$ is the isotropic superconducting gap parameter, which is taken to be real for simplicity. In equation (\[hscham\]) the summation runs over all ${{\bm k}}$ in a wide band. Another energy scale $\omega_{\rm D}$, the Debye cutoff in BCS theory, could enter at this stage to restrict the summation.
As shown in reference [@SSSS92] with a scaling argument, this effect does not alter the results substantially and merely leads to slightly different parameters. The choice here corresponds to $\omega_{\rm D}=D$, which was also assumed in earlier work [@SSSS92; @YO00]. In appendix A we derive the equation for the non-interacting local $d$-site Green’s function matrix of the system (\[freegfctscimp\]). The numerical renormalisation group (NRG) approach -------------------------------------------------- For the NRG approach we have to derive a discrete form of the Hamiltonian, which can be diagonalised conveniently in a renormalisation group scheme descending to lower energies. This is done in a fashion analogous to that for a metallic medium, as described in [@Wil75; @KWW80a]. Essentially, there are three steps which only affect $H_{\rm mix}$ and $H_{\rm sc}$:\ (1) mapping to a one-dimensional problem, (2) logarithmic discretisation and (3) basis transformation. We obtain $$\begin{aligned} H_{\rm mix}/D= \sqrt{\frac{2\Gamma}{\pi D}}\sum_{{\sigma}} (f^{\dagger}_{0{\sigma}}{ c^{}_{d,\sigma}}+ {\mathrm{h.c.}}), \end{aligned}$$ and $$\begin{aligned} H^N_{\rm sc}/D=\sum^{N}_{{\sigma},n=0}\gamma_{n+1} (f^{\dagger}_{n{\sigma}}f_{n+1,{\sigma}} + {\mathrm{h.c.}}) -\frac{\Delta_{\rm sc}}{D}\sum^{N}_{n=0}(f^{\dagger}_{n\uparrow}f^{\dagger}_{n,\downarrow} + {\mathrm{h.c.}}) \label{hscnrg}\end{aligned}$$ where the parameters $\gamma_n$ have the usual form [@hewson]. For more details we refer to earlier work [@SSSS92; @YO00]. The iterative diagonalisation scheme is set up in the same way as in the standard NRG case. Due to the anomalous term in the superconducting bath $H^N_{\rm sc}$ the charge $Q$ is not a good quantum number of the system. Thus eigenstates can only be characterised in terms of the spin quantum number $S$. The coefficients $\gamma_n$ fall off with $n$, but the second term in (\[hscnrg\]) does not.
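As a quick numerical illustration (not part of the original calculation), the non-interacting discretised bath (\[hscnrg\]) can be diagonalised directly in Bogoliubov-de Gennes (BdG) form. The sketch below uses the simplified asymptotic form $\gamma_{n+1}\propto\Lambda^{-n/2}$ for the hopping coefficients in place of the exact Wilson expressions (an assumption for illustration only):

```python
import numpy as np

def wilson_chain_with_pairing(Lambda=1.8, Delta=0.02, N=30):
    """Diagonalise the non-interacting discretised superconducting bath.

    The hoppings are taken in the simplified form gamma ~ Lambda^{-n/2};
    the exact Wilson coefficients differ by smooth O(1) factors.
    """
    gamma = [0.5 * (1 + 1 / Lambda) * Lambda ** (-n / 2) for n in range(N)]
    # hopping matrix of the (N+1)-site chain
    h = np.zeros((N + 1, N + 1))
    for n in range(N):
        h[n, n + 1] = h[n + 1, n] = gamma[n]
    # BdG matrix in the Nambu basis (f_{n,up}, f^dag_{n,down}): the uniform
    # pairing term -Delta sum_n (f^dag_up f^dag_down + h.c.) enters as an
    # off-diagonal block proportional to the identity.
    I = np.eye(N + 1)
    bdg = np.block([[h, -Delta * I], [-Delta * I, -h]])
    return np.linalg.eigvalsh(bdg)

energies = wilson_chain_with_pairing()
min_excitation = float(np.min(np.abs(energies)))
```

Since every eigenvalue of the hopping matrix $\varepsilon$ becomes a BdG excitation $\pm\sqrt{\varepsilon^2+\Delta_{\rm sc}^2}$, the check confirms that no single-particle excitation of the discretised bath lies inside the gap.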
Thus the superconducting gap becomes a dominating energy scale for large $n$ and a relevant perturbation. It does not make sense to continue NRG iterations down to energies much below this scale, as there are no continuum states anymore in the gap. Therefore, we stop the NRG procedure at an iteration $N=N_{\rm max}$, such that the typical energy scale $\Lambda^{-(N_{\rm max}-1)/2}$ is not much smaller than the superconducting gap $\Delta_{\rm sc}$. In practice the number of NRG iterations $N$ is between 20 and 50, depending on the magnitude of the gap, where we chose $\Lambda=1.8$ in all cases. We usually keep 800 states and the $A_{\Lambda}$ factor [@KWW80a] is taken into account in the calculations. The NRG approach constitutes a reliable non-perturbative scheme to calculate $T=0$ ground state properties of a local interacting many-body problem. By putting together information obtained from different iterations, dynamic response functions can also be obtained [@hewson]. Here we calculate these spectral functions in the approach [@PPA06; @WD06pre] based on the complete Anders-Schiller basis [@AS05]. The Green’s function of the interacting problem is given by the Dyson equation (\[scgreenfct\]), which involves the self-energy matrix $\underline \Sigma(\omega)$. In appendix B we describe how the diagonal part of the self-energy $\Sigma(\omega)=\Sigma_{11}(\omega)$ and the offdiagonal part of the self-energy $\Sigma^{\rm off}(\omega)=\Sigma_{21}(\omega)$ can be calculated from dynamic response functions in the NRG calculation, in analogy to the method described in reference [@BHP98]. The Andreev bound states {#sec:andbs} ------------------------ The denominator of the $d$-site Green’s function, equation (\[scgreenfct\]), can vanish inside the gap ${\left | \omega \right |}<\Delta_{\rm sc}$. As the imaginary part of the self-energy is zero in the gap, this leads to excitations with infinite lifetime there.
They correspond to the localised excited states (LES) or Andreev bound states. For the non-interacting case they are determined by the equation $D(\omega)=0$ \[cf. eq. (\[scdet\])\], $$\label{noniaandbs} \omega^2-{\varepsilon}_d^2-\Gamma^2+\frac{2\omega^2\Gamma}{E(\omega)}=0,$$ where the function $E(\omega)$ is given in equation (\[funcEom\]). The terms in equation (\[noniaandbs\]) are functions of $\omega^2$, so if $E^0_b$ is a solution so is $-E^0_b$. In general, in the interacting case we have to analyse the equation $$\fl \Big[\omega-{\varepsilon}_d+\frac{\omega\Gamma}{E(\omega)}-\Sigma(\omega)\Big] \Big[\omega+{\varepsilon}_d+\frac{\omega\Gamma}{E(\omega)}+\Sigma(-\omega)^*\Big] -\Big[\frac{\Gamma\Delta_{\rm sc}}{E(\omega)}-\Sigma^{\rm off}(\omega)\Big] \Big[\frac{\Gamma\Delta_{\rm sc}}{E(\omega)}-\Sigma^{\rm off}(-\omega)^*\Big]=0. \label{intactBE}$$ Once the self-energies are calculated it is possible to solve this equation iteratively. Here, we will develop a simplified description by using a low energy expansion of the self-energy. First note that in the gap, ${\left | \omega \right |}<\Delta_{\rm sc}$, ${\mathrm{Im}}\Sigma(\omega)={\mathrm{Im}}\Sigma^{\rm off}(\omega)=0$. We expand the real part of the diagonal self-energy $\Sigma(\omega)$ to first order around $\omega=0$, which is motivated by the Fermi liquid expansions for the normal metallic case and justified by the numerical results for the low frequency behaviour. The offdiagonal self-energy is approximated simply by the real constant $\Sigma^{\rm off}(0)$. This approximation for the self-energy is easy to justify if the gap is a small parameter, such that it only covers small values of $\omega$. The main objective is to present a simplified picture for the analysis of the Andreev bound states in the interacting system. We do not expect to be able to describe the system near the quantum phase transition accurately like this, and other limitations will be seen in the results later.
Hence, we find instead of (\[intactBE\]) the simpler equation $$\label{iaandbs} \omega^2-\tilde{\varepsilon}_d^2-\tilde\Gamma^2-z^2\Sigma^{\rm off}(0)^2 +\frac{2\tilde\Gamma[\omega^2+\Delta_{\rm sc}z\Sigma^{\rm off}(0)]}{E(\omega)}=0 ,$$ where renormalised parameters $\tilde{\varepsilon}_d=z[{\varepsilon}_d+\Sigma(0)]$ and $\tilde \Gamma=z\Gamma$ were introduced. As usual $z^{-1}=1-\Sigma'(0)$. Renormalised parameters for the analysis of the Andreev bound states were also considered in reference [@VMY03; @YMV03]. The definition here corresponds to the renormalised perturbation theory framework for the AIM introduced in [@Hew93]. The form of the equations (\[noniaandbs\]) and (\[iaandbs\]) is very similar and both can be easily solved numerically to give the bound state solutions $\omega=E^{\alpha}_b=\alpha E_b$, $\alpha=\pm$. Due to the additional offdiagonal correlations induced by the self-energy term $\Sigma^{\rm off}(0)$, a simple interpretation of the interacting theory based on using renormalised parameters $\tilde{\varepsilon}_d$, $\tilde \Gamma$ in equation (\[noniaandbs\]) for the non-interacting theory is, however, not possible. Based on the same idea we can give approximate expressions for the weights of the bound states $w_b^{\alpha}$ by expanding the diagonal part of the Green’s function around $\omega=E^{\alpha}_b$. We can write the retarded Green’s function in the gap near the bound states $\omega\simeq\pm E_b$ as $$G(\omega)=\frac{w^{-}_b}{\omega-E_b^-+i\eta}+\frac{w^{+}_b}{\omega-E_b^++i\eta}.$$ Using the above approximation for the self-energy the weights are found to be $$\label{weightsbs} w_b^{\alpha}=\frac{z}2 E(E_b)^2\frac{E(E_b)(1+\alpha\frac{\tilde{\varepsilon}_d}{E_b})+\tilde\Gamma} {E(E_b)^2(E(E_b)+2\tilde\Gamma)+\tilde\Gamma(E_b^2+\Delta_{\rm sc}z\Sigma^{\rm off}(0))}.\label{wts}$$ In a more sophisticated approximation one could consider an expansion of the self-energies around the bound state energies $E_b$ rather than $\omega=0$. 
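For the non-interacting condition (\[noniaandbs\]) the bound state energy can be found numerically in a few lines. The sketch below (illustrative only) assumes the standard BCS form $E(\omega)=\sqrt{\Delta_{\rm sc}^2-\omega^2}$ for the function of equation (\[funcEom\]); since the left-hand side of (\[noniaandbs\]) is negative at $\omega=0$ and diverges to $+\infty$ at the gap edge, and increases monotonically in between, a single bisection on $(0,\Delta_{\rm sc})$ suffices:

```python
import math

def bound_state_energy(eps_d, Gamma, Delta, tol=1e-9):
    """Solve w^2 - eps_d^2 - Gamma^2 + 2 w^2 Gamma / E(w) = 0 on (0, Delta)
    by bisection, assuming E(w) = sqrt(Delta^2 - w^2)."""
    def f(w):
        E = math.sqrt(Delta * Delta - w * w)
        return w * w - eps_d * eps_d - Gamma * Gamma + 2 * w * w * Gamma / E

    lo, hi = 0.0, Delta * (1 - 1e-12)   # f(lo) < 0, f(hi) > 0 brackets the root
    while hi - lo > tol * Delta:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# symmetric case with Delta_sc / Gamma << 1: the bound state sits
# close to the gap edge, consistent with E_b -> Delta_sc quoted below
Eb = bound_state_energy(eps_d=0.0, Gamma=0.2 / math.pi, Delta=0.005)
```

The interacting equation (\[iaandbs\]) has the same structure and can be treated by the same bisection once $\tilde{\varepsilon}_d$, $\tilde\Gamma$ and $\Sigma^{\rm off}(0)$ are known.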
Various things can be inferred from expression (\[weightsbs\]). First we note that in the particle-hole symmetric case, $\tilde {\varepsilon}_d=0$, $w_b^{+}=w_b^{-}=w_b$. The weights are proportional to the renormalisation factor $z$. Since $z$ shows a similar behaviour to that in the metallic lead case, the weights decrease with increasing interaction $U$ according to (\[weightsbs\]). One can easily see that for bound state energies close to the gap, ${\left | E_b \right |}\to\Delta_{\rm sc}$, the weights go to zero, $w_b^{\alpha}\to 0$. One finds [@YO00] that for small $U/\pi\Gamma$ and $\Delta_{\rm sc}/\Gamma\ll 1$ we have $E_b\to\Delta_{\rm sc}$, and also for large $U/\pi\Gamma$ the bound state energy is close to the gap. Therefore the overall behaviour for $w_b$ is given in such a case by $w_b\to0$ for small $U$, then an increase with $U$ to a maximum and a decay again for large $U$ \[cf. figure \[boundstates\] later\]. At the ground state transition, where $E_b=0$, the weight shows a discontinuity, and from equation (\[weightsbs\]) this requires a jump of the self-energy as function of $U$. The limit of large gap {#sec:limlargegap} ---------------------- In order to obtain some analytical results it is useful to consider the case where the superconducting gap is a large parameter, $\Delta_{\rm sc}\to \infty$ [@VMY03; @OTH04; @TOH07; @ACZ00]. Then the problem essentially reduces to a localised model with an anomalous on-site term which is of the order of the hybridisation $\Gamma$. We will write it in the form $$H^{\rm \infty}_{\rm d}=\sum_{{\sigma}}\xi_d ({ c^{\dagger}_{d,\sigma}}{ c^{}_{d,\sigma}}-\frac12) - \Gamma[{ c^{\dagger}_{d,\uparrow}} { c^{\dagger}_{d,\downarrow}} +{\mathrm{h.c.}}]+\frac U2\Big(\sum_{\sigma}n_{d,\sigma}-1\Big)^2, \label{scsingsiteham}$$ where $\xi_d={\varepsilon}_d+U/2$.
Without interaction this Hamiltonian can be diagonalised by a Bogoliubov transformation and the excitation energies $E_d=\sqrt{\xi_d^2+\Gamma^2}$ are found, which lie in the gap since $\Gamma\ll\Delta_{\rm sc}$, as assumed initially. This gives a direct picture of the emergence of the Andreev bound states for large $\Delta_{\rm sc}$. We can discuss the ground state crossover from the singlet to the doublet state in terms of the single site Hamiltonian (\[scsingsiteham\]). First note that the $S=1/2$ (doublet) states, ${\! | \!\!\uparrow\ \!\! \rangle}$ and ${\! | \!\!\downarrow\ \!\! \rangle}$, are eigenstates of (\[scsingsiteham\]) with zero energy. The $S=0$ singlet states, the empty site ${\! | 0\ \!\! \rangle}$ and the doubly occupied site ${\! | \!\!\uparrow\downarrow\ \!\! \rangle}$, are not eigenstates of (\[scsingsiteham\]). However, the linear combinations in the “BCS-form”, $${\! | \Psi_1\ \!\! \rangle}=u_d\,{\! | 0\ \!\! \rangle}+v_d\,{\! | \!\!\uparrow\downarrow\ \!\! \rangle}, \qquad {\! | \Psi_2\ \!\! \rangle}=v_d\,{\! | 0\ \!\! \rangle}-u_d\,{\! | \!\!\uparrow\downarrow\ \!\! \rangle}, \label{wffct}$$ are eigenstates with eigenvalues $E_1=-E_d+U/2$ and $E_2=E_d+U/2$, respectively. The coefficients $u_d,v_d$ are given by $$u_d^2=\frac12\Big(1+\frac{\xi_d}{E_d}\Big), \qquad v_d^2=\frac12\Big(1-\frac{\xi_d}{E_d}\Big).$$ The ground state is therefore a singlet as long as $E_1<0$ and a doublet otherwise. The condition $E_1=0$, or $$\frac{\xi_d^2 }{U^2}+\frac{\Gamma^2 }{U^2}=\frac14, \label{delinfphasbound}$$ therefore defines the phase boundary for the transition. It is a semicircle in the $(\xi_d/U)$-$(\Gamma/U)$-plane with radius $1/2$, which is shown in figure \[phasediag\] later. What this phase boundary looks like for a finite gap $\Delta_{\rm sc}$ will be investigated in section \[sec:scawphsym\], when we look at the situation away from particle-hole symmetry.
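The level crossing is easy to verify numerically. In the even (singlet) sector the large-gap Hamiltonian reduces to a $2\times2$ matrix in the basis $\{|0\rangle,|\!\uparrow\downarrow\rangle\}$. The sketch below (illustrative only) writes the local term as $\xi_d(\hat n_d-1)$, which makes the doublet states have zero energy, consistent with the eigenvalues $E_{1,2}=\mp E_d+U/2$ quoted above:

```python
import numpy as np

def even_sector_energies(xi_d, Gamma, U):
    """Energies of the S=0 sector of the large-gap local Hamiltonian.

    The pairing term -Gamma mixes |0> and |up,down>, which share the
    charging energy U/2, while xi_d enters with opposite signs."""
    H = np.array([[-xi_d + U / 2, -Gamma],
                  [-Gamma,         xi_d + U / 2]])
    return np.linalg.eigvalsh(H)   # U/2 -+ sqrt(xi_d^2 + Gamma^2)

def ground_state_is_singlet(xi_d, Gamma, U):
    # the doublet states |up>, |down> sit at zero energy
    return even_sector_energies(xi_d, Gamma, U)[0] < 0.0

# E_1 = -sqrt(xi_d^2 + Gamma^2) + U/2 vanishes exactly on the semicircle
# xi_d^2 + Gamma^2 = (U/2)^2 of the phase-boundary condition above
```

At particle-hole symmetry ($\xi_d=0$) the singlet wins precisely for $\Gamma>U/2$, reproducing the semicircular boundary.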
In the case of particle-hole symmetry $\xi_d=0$ and condition (\[delinfphasbound\]) reduces to $\Gamma=U/2$. Having established the formalism and the most important relations, we will in the next section present results for the spectral behaviour of the symmetric AIM with superconducting bath with a finite gap parameter. Results ======= In this section we present results for the local spectral properties. The diagonal and offdiagonal Green’s functions are calculated within the NRG framework usually from the Lehmann representation, $$\rho_{d}(\omega)=\frac1Z\sum_{m,n}|{ \langle m | c_{d}^{\dagger} | n \rangle}|^2\delta[\omega-(E_m-E_n)] (\e^{-\beta E_m}+\e^{-\beta E_n}), \label{speclehman}$$ and similarly for the offdiagonal Green’s function. As in this procedure the discrete excitations for the spectral peaks in the Green’s functions have to be broadened, it is not straightforward to obtain in this way the sharp spectral gap at ${\left | \omega \right |}=\Delta_{\rm sc}$ expected for $T=0$. As detailed in appendix B, we can, however, determine the self-energy matrix from the one-particle Green’s function and the higher $F$-Green’s function \[cf. eq. (\[SigF\])\]. Then we can use the exact expression for the non-interacting Green’s function $\underline {G}^0_d(\omega)$ in equation (\[freegfctscimp\]), which includes a sharp spectral gap, and the Dyson matrix equation (\[scgreenfct\]) to calculate the diagonal and offdiagonal Green’s functions, $G(\omega)$ and $G^{\rm off}(\omega)$ respectively. This is the way the Green’s functions are calculated for the region outside the gap, ${\left | \omega \right |}>\Delta_{\rm sc}$. Inside the gap, ${\left | \omega \right |}<\Delta_{\rm sc}$, we have extracted the weights $w_b$ and positions $E^{\alpha}_b$ of the delta-function peaks for the Andreev bound states from the NRG excitation data for the Green’s function, i.e. directly from the lowest spectral excitation (SE) in equation (\[speclehman\]).
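The broadening issue mentioned above can be made concrete: in a plain Lehmann-sum evaluation each discrete excitation is smeared with a kernel that preserves its weight but washes out sharp features such as the gap edge. A minimal sketch (Python; the log-Gaussian kernel and the width parameter $b$ are common NRG choices, not taken from this paper):

```python
import numpy as np

def log_gaussian_broaden(omega, excitations, b=0.5):
    """Smear discrete positive-frequency excitations (E_i, w_i) into a
    smooth density on the grid `omega`. The log-Gaussian kernel is
    normalised so that each peak retains its total weight w_i."""
    rho = np.zeros_like(omega)
    for E, w in excitations:
        kernel = (np.exp(-b ** 2 / 4.0) / (b * E * np.sqrt(np.pi))
                  * np.exp(-(np.log(omega / E) / b) ** 2))
        rho += w * kernel
    return rho

omega = np.logspace(-6, 0, 4000)                  # logarithmic grid, as in NRG
peaks = [(1e-3, 0.3), (1e-2, 0.5), (1e-1, 0.2)]   # (energy, weight), made up
rho = log_gaussian_broaden(omega, peaks)
# trapezoidal check that the total spectral weight is conserved
total_weight = float(np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(omega)))
```

Because each kernel has a width proportional to its energy, a delta peak at the gap edge inevitably leaks spectral weight into the gap, which is why the self-energy route described above is used there instead.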
These delta-functions are represented by an arrow in the plots. Altogether the diagonal spectral function $\rho(\omega)=-{\mathrm{Im}}G(\omega)/\pi$ can then be written in the form $$\rho(\omega)=\sum_{\alpha=\pm}w_b\delta(\omega- E^{\alpha}_b)+\rho_{\rm cont}(\omega), \label{diagspecfct}$$ where $\rho_{\rm cont}(\omega)$ is the continuum part for ${\left | \omega \right |}>\Delta_{\rm sc}$. The offdiagonal part of the spectrum $\rho^{\rm off}(\omega)=-{\mathrm{Im}}G^{\rm off}(\omega)/\pi$ has a similar general form as the diagonal part, $$\rho^{\rm off}(\omega)=\sum_{\alpha=\pm}\bar w^{\alpha}_b\delta(\omega- E^{\alpha}_b)+\rho^{\rm off}_{\rm cont}(\omega), \label{offdiagspecfct}$$ where the weights $\bar w^{\alpha}_b$ can have positive and negative values. For half filling the spectrum $\rho^{\rm off}(\omega)$ is an asymmetric function of $\omega$. Symmetric model --------------- We first focus on the particle-hole symmetric model, ${\varepsilon}_d=-U/2$, where the ratio $U/\pi\Gamma$ and the parameter $\Delta_{\rm sc}$ are the relevant energy scales. ### Spectral functions for small gap In figure \[diagspecdel0.005\] we show the spectral function (\[diagspecfct\]) for $\Delta_{\rm sc}=0.005$ for the diagonal Green’s function at the impurity site for a number of different values of $U$. Here and in the following we take a fixed value for the hybridisation, $\pi\Gamma=0.2$. All quantities can be thought of as being scaled by half the band width $D=1$. 
![The spectral density $\rho(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.005$ and $\pi\Gamma= 0.2$.[]{data-label="diagspecdel0.005"}](figures_ch5/specdifU0.005.eps "fig:"){width="45.00000%"} ![The spectral density $\rho(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.005$ and $\pi\Gamma= 0.2$.[]{data-label="diagspecdel0.005"}](figures_ch5/speczoomdifU0.005.eps "fig:"){width="45.00000%"} In the plot on the left hand side we give the spectrum over the full energy range. When the interaction is increased, spectral weight is shifted to higher energies as the atomic limit peaks at $\pm U/2$ develop. We also observe the beginning of the formation of a Kondo resonance at low frequencies. For larger $U$ the Kondo resonance becomes narrower, but its formation is suppressed in the very low frequency regime because the spectral density vanishes in the gap region $-\Delta_{\rm sc}<\omega<\Delta_{\rm sc}$. This is not visible on the scale used in the left hand panel of figure \[diagspecdel0.005\]. In the right hand panel of figure \[diagspecdel0.005\] we give an enlarged plot of the gap region, which shows the delta function contributions from the Andreev bound states, where the arrows give the positions of the bound states $E^{\pm}_b$ and their heights indicate the spectral weight $w_b$. It can be seen that the position of the bound state changes when we increase the interaction. The weight first increases and then decreases as a function of $U$, which corresponds to the feature that was interpreted earlier using equation (\[wts\]). It is generally of interest to see how much spectral weight is transferred from the continuum to the bound states, and an overview of this is given in figures \[ddexp\] (right) and \[hfphasedia\] later.
Note that the largest value of $U$ shown is greater than the critical $U_c$ for the singlet-doublet transition ($U_c/\pi\Gamma\simeq 3.2$). In the high energy spectrum there is no significant change to be seen in the behaviour; however, at low energies we observe the crossing of the bound state energies at $\omega=0$ at $U_c$. ![The spectral density $\rho^{\rm off}(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.005$ and $\pi\Gamma= 0.2$.[]{data-label="offdiagspecdel0.005"}](figures_ch5/specoffdifU0.005.eps "fig:"){width="45.00000%"} ![The spectral density $\rho^{\rm off}(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.005$ and $\pi\Gamma= 0.2$.[]{data-label="offdiagspecdel0.005"}](figures_ch5/specoffzoomdifU0.005.eps "fig:"){width="45.00000%"} In figure \[offdiagspecdel0.005\] we show the offdiagonal spectral function (\[offdiagspecfct\]) for $\Delta_{\rm sc}=0.005$ for a number of different values of $U$. In the plot on the left hand side we show the behaviour for the continuum part outside the gap. Notice that the frequency range only extends up to $\omega=\pm 0.1$. We can see a peak close to $\omega=\pm \Delta_{\rm sc}$, which is suppressed for larger $U$ and changes sign towards the singlet-doublet transition. The behaviour of the bound state peaks in the offdiagonal spectrum is displayed on the right hand side of the figure. We can see similar features as observed before in the diagonal part, i.e. the weight first increases with $U$ and then decreases. If we follow the excitations with the weight of the same sign we can see that at the singlet-doublet transition the bound state levels cross at $\omega=0$. ### Bound state behaviour A more detailed analysis of the behaviour of the bound state as a function of $U/\pi\Gamma$ and the gap in the medium $\Delta_{\rm sc}$ is presented in figure \[boundstates\].
On the left hand side we plot the bound state energies $\pm E_b$ and on the right hand side the corresponding weights $w_b$. ![Bound state energies $E_b$ (left) and weights $w_b$ (right) for various $U/\pi \Gamma$ and $\Delta_{\rm sc}$. Both quantities have been scaled by the corresponding value of $\Delta_{\rm sc}$; $\pi\Gamma= 0.2$.[]{data-label="boundstates"}](figures_ch5/EbUdeppiGamma0.2difdelsc.eps "fig:"){width="45.00000%"} ![Bound state energies $E_b$ (left) and weights $w_b$ (right) for various $U/\pi \Gamma$ and $\Delta_{\rm sc}$. Both quantities have been scaled by the corresponding value of $\Delta_{\rm sc}$; $\pi\Gamma= 0.2$.[]{data-label="boundstates"}](figures_ch5/weightbUdeppiGamma0.2difdelsc.eps "fig:"){width="45.00000%"} We can see that in the non-interacting case the bound state energy for the cases with small gap ($\Delta_{\rm sc}=0.001,0.01$) is very close to $\pm\Delta_{\rm sc}$ and decreases to zero with increasing interaction. For a critical value $U_c$ the nature of the ground state changes from a singlet ($S=0$) to a doublet ($S=1/2$), and at this point $E_b=0$. For this transition we can think of the positive solution $E^+_b$ and the negative solution $E^-_b$ for the bound states as crossing at $\omega=0$. When the interaction is increased further, ${\left | E^{\pm}_b \right |}$ becomes finite again and increases with $U$. The larger the gap $\Delta_{\rm sc}$, the smaller the critical value $U_c$ for this ground state transition becomes. In the case where $\Delta_{\rm sc}$ is of the order of $\Gamma$ (as can be seen for the case $\Delta_{\rm sc}=0.06$), the bound state energy $E_b$ lies in the middle of the gap already for the non-interacting case, but otherwise shows a similar behaviour as described above. On the right hand side of figure \[boundstates\] the weight $w_b$ of these bound states can be seen. We have marked the position $U_c$ of the singlet-doublet crossover point by a symbol on the $x$-axis.
The two curves for the gap values $\Delta_{\rm sc}=0.001$ and $\Delta_{\rm sc}=0.01$ have a maximum for some intermediate value of $U$, which is smaller than the critical $U_c$ for the ground state transition. This behaviour can be understood from the explicit equation (\[weightsbs\]) derived earlier. For the other curve ($\Delta_{\rm sc}=0.06$) the weight is maximal for the non-interacting case. In all cases the weight becomes very small for large $U$. Note that we plot the weight scaled by the gap parameter, $w_b/\Delta_{\rm sc}$, and therefore the absolute values are larger for the cases with larger superconducting gap. At the singlet-doublet transition we can see discontinuous behaviour as the weight changes sharply. This is a feature of the zero temperature calculation, where the matrix elements in the Lehmann sum (\[speclehman\]) change their values discontinuously when the levels cross on increasing $U$, such that the nature of the ground state changes. It can be seen in the anomalous correlation $\langle d_{\uparrow}d_{\downarrow}\rangle$ in figure \[ddexp\] later as well. For finite temperature this discontinuity becomes smooth. ### Spectral functions for larger gap In figure \[diagspecdel0.02\] we show for comparison the spectral function of the diagonal Green’s function at the impurity site for a larger gap, $\Delta_{\rm sc}=0.02$, for a number of different values of $U$.
![The spectral density $\rho(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.02$ and $\pi\Gamma= 0.2$.[]{data-label="diagspecdel0.02"}](figures_ch5/specdifU0.02.eps "fig:"){width="45.00000%"} ![The spectral density $\rho(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.02$ and $\pi\Gamma= 0.2$.[]{data-label="diagspecdel0.02"}](figures_ch5/speczoomdifU0.02.eps "fig:"){width="45.00000%"} The overall picture on the left is similar to the case in figure \[diagspecdel0.005\] with the smaller gap. Due to the larger gap the formation of the central Kondo resonance is completely suppressed, but the high energy spectrum is as before. From the behaviour within the gap (right side in figure \[diagspecdel0.02\]) we can see that the bound state position $E^{\pm}_b$ goes to zero for smaller $U$ values than in the case $\Delta_{\rm sc}=0.005$, and hence the ground state transition occurs for smaller $U_c$ for the larger gap ($U_c/\pi\Gamma\simeq 2.03$). This was analysed in detail in figure \[boundstates\] above. For the values of $U$ shown the spectral weight of the bound states $w_b$ decreases with increasing $U$. The weight $w_b$ of the peaks in the gap has been scaled differently in figures \[diagspecdel0.005\] and \[diagspecdel0.02\], so that their height should not be compared directly. The spectral function of the offdiagonal Green’s function at the impurity site (\[offdiagspecfct\]) for this value of the gap, $\Delta_{\rm sc}=0.02$, is shown in figure \[offdiagspecdel0.02\] for a number of different values of $U$. 
![The spectral density $\rho^{\rm off}(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.02$ and $\pi\Gamma= 0.2$.[]{data-label="offdiagspecdel0.02"}](figures_ch5/specoffdifU0.02.eps "fig:"){width="45.00000%"} ![The spectral density $\rho^{\rm off}(\omega)$ for various values of $U$ for the whole energy regime (left) and the region in the gap (right); $\Delta_{\rm sc}=0.02$ and $\pi\Gamma= 0.2$.[]{data-label="offdiagspecdel0.02"}](figures_ch5/specoffzoomdifU0.02.eps "fig:"){width="45.00000%"} For larger frequencies outside of the gap (left) we can see a peak near $\omega=\Delta_{\rm sc}$, whose height is reduced on increasing $U$. At larger frequencies we find that the tails develop a broad peak for larger values of $U$. This has not been observed in the case with the smaller gap shown in figure \[offdiagspecdel0.005\]. Also a sign change of the low energy peak is found as before. The behaviour near and in the gap (right) can be understood as before, where in this case we have shown two values of $U$ with a singlet ground state and two with a doublet ground state. ### Analysis of bound states with renormalised parameters In section \[sec:aimsc\] we have discussed how the bound state energy, which so far was deduced from the spectral excitations (SE), can also be calculated from the bound state equation (BE) (\[intactBE\]). The latter was derived by expanding the self-energy to first order. It involves the renormalised parameters $\tilde{\varepsilon}_d$, $\tilde \Gamma$ and the constant value of the offdiagonal self-energy $\Sigma^{\rm off}(0)$. In figure \[compboundstates\] we compare the bound state energies calculated by these two methods for two values of the gap $\Delta_{\rm sc}=0.005$ (left) and $\Delta_{\rm sc}=0.06$ (right). 
![Bound state energies $E_b$ as calculated from the spectral excitations (SE) and from the bound state equation (BE) (\[intactBE\]) with renormalised parameters for $\Delta_{\rm sc}=0.005$ (left) and for $\Delta_{\rm sc}=0.06$ (right) for various $U/\pi\Gamma$; $\pi\Gamma= 0.2$ is fixed.[]{data-label="compboundstates"}](figures_ch5/comprpEbUdeppiGamma0.2delsc0.005.eps "fig:"){width="45.00000%"} ![Bound state energies $E_b$ as calculated from the spectral excitations (SE) and from the bound state equation (BE) (\[intactBE\]) with renormalised parameters for $\Delta_{\rm sc}=0.005$ (left) and for $\Delta_{\rm sc}=0.06$ (right) for various $U/\pi\Gamma$; $\pi\Gamma= 0.2$ is fixed.[]{data-label="compboundstates"}](figures_ch5/comprpEbUdeppiGamma0.2delsc0.06.eps "fig:"){width="45.00000%"} We can see that for values of $U<U_c$ the agreement is excellent in both cases. However, when $U\ge U_c$ we find less accurate values with the method based on the bound state equation (BE) with renormalised parameters. Since the method to calculate the bound state energy from the NRG spectral excitations (SE) is very accurate, we expect the inaccuracies to lie in the BE method. Indeed, closer inspection of the numerical results for the diagonal and off-diagonal self-energies reveals that the linear and constant approximation made in section \[sec:andbs\] to derive the bound state equation with renormalised parameters (\[intactBE\]) becomes less applicable for $U\ge U_c$. The self-energy displays additional features there. In section \[sec:aimsc\] we have also derived an expression (\[weightsbs\]) for the weights $w_b$ of the bound states in the gap. It can be expressed in terms of the renormalised parameters $\tilde{\varepsilon}_d$, $\tilde \Gamma$, the offdiagonal self-energy $\Sigma^{\rm off}(0)$ and the bound state energy $E_b$.
In figure \[compweights\] we compare the weights calculated from the spectral excitations (SE) with the ones from the bound state equation (BE) analysis with renormalised parameters. We show the results for the same parameters $\Delta_{\rm sc}=0.005$ (left) and $\Delta_{\rm sc}=0.06$ (right). ![Weights $w_b$ for the Andreev bound states as calculated from the spectral excitations (SE) and from the equation (\[weightsbs\]) with renormalised parameters for $\Delta_{\rm sc}=0.005$ (left) and for $\Delta_{\rm sc}=0.06$ (right) for various $U/\pi\Gamma$; $\pi\Gamma= 0.2$ is fixed.[]{data-label="compweights"}](figures_ch5/comprpweightbUdeppiGamma0.2delsc0.005.eps "fig:"){width="45.00000%"} ![Weights $w_b$ for the Andreev bound states as calculated from the spectral excitations (SE) and from the equation (\[weightsbs\]) with renormalised parameters for $\Delta_{\rm sc}=0.005$ (left) and for $\Delta_{\rm sc}=0.06$ (right) for various $U/\pi\Gamma$; $\pi\Gamma= 0.2$ is fixed.[]{data-label="compweights"}](figures_ch5/comprpweightbUdeppiGamma0.2delsc0.06.eps "fig:"){width="45.00000%"} We can see for both cases that the overall behaviour of the weights as a function of $U$ is described reasonably well by equation (\[weightsbs\]). It is, however, clearly visible that the agreement between the SE and BE values is much better in the singlet regime for $U<U_c$. This is similar to what was observed for the bound state energies $E_b$ in figure \[compboundstates\], and the reason is the same. The discontinuity for the weight is not reproduced by the approximation based on equation (\[weightsbs\]). As can be seen from that equation this would require a sudden change in the self-energy as a function of $U$, which was not found with sufficient accuracy in the present calculation. This can partly be attributed to the broadening procedure involved and to the inaccuracies when calculating the numerical derivative.
### Anomalous expectation value and phase diagram The anomalous expectation value $\langle d_{\uparrow}d_{\downarrow}\rangle$ is an indicator for the strength of the proximity effect of the superconducting medium at the impurity site and quantifies the induced on-site superconducting correlations. In the following figure \[ddexp\] we show the dependence of $\langle d_{\uparrow}d_{\downarrow}\rangle$ on the interaction $U/\pi\Gamma$ for the same values of $\Delta_{\rm sc}$ as in figure \[boundstates\]. The values are scaled by the gap $\Delta_{\rm sc}$. ![Left: Anomalous expectation values $\langle d_{\uparrow}d_{\downarrow}\rangle$ as a function of $U/\pi \Gamma$ for various $\Delta_{\rm sc}$. The values are scaled by the gap $\Delta_{\rm sc}$; $\pi\Gamma= 0.2$. Right: The total weight of the bound states $w$ in the gap as calculated from the spectral excitations as a function of $U/\pi\Gamma$ for various $\Delta_{\rm sc}/\pi\Gamma$.[]{data-label="ddexp"}](figures_ch5/ddexpUdeppiGamma0.2difdelsc.eps "fig:"){width="45.00000%"} ![Left: Anomalous expectation values $\langle d_{\uparrow}d_{\downarrow}\rangle$ as a function of $U/\pi \Gamma$ for various $\Delta_{\rm sc}$. The values are scaled by the gap $\Delta_{\rm sc}$; $\pi\Gamma= 0.2$. Right: The total weight of the bound states $w$ in the gap as calculated from the spectral excitations as a function of $U/\pi\Gamma$ for various $\Delta_{\rm sc}/\pi\Gamma$.[]{data-label="ddexp"}](figures_ch5/weightallUdeltdelsc2.eps "fig:"){width="45.00000%"} We see that as a general trend $\langle d_{\uparrow}d_{\downarrow}\rangle$ decreases for increasing on-site interaction. This is expected since the superconducting correlations are suppressed by the repulsive interaction. We have marked the ground state transition with a symbol on the $x$-axis, and we see that $\langle d_{\uparrow}d_{\downarrow}\rangle$ changes discontinuously in magnitude and sign there. 
This is characteristic of this zero temperature quantum phase transition. The sign change is due to a phase change of $\pi$ of the local order parameter which occurs at the transition as discussed in reference [@BVZ06]. In the situation of infinite gap in the medium, which was discussed in section \[sec:limlargegap\], $\langle d_{\uparrow}d_{\downarrow}\rangle$ merely drops to zero at the transition point and is zero in the doublet ground state. At finite temperature the behaviour becomes continuous. An overview of the transfer of spectral weight from the continuum to the bound states is shown in figure \[ddexp\] (right). There we plot the total weight $w=w_b^+ +w_b^-$ as a function of $U/\pi\Gamma$ for four selected values of $\Delta_{\rm sc}/\pi\Gamma$ ranging from 0.005 to 1. The curves are similar to those in figure \[boundstates\] and show the discontinuity at the ground state transition. Here the values are not scaled by $\Delta_{\rm sc}$. We can see that the smaller $U$ and the larger $\Delta_{\rm sc}$ are, the more spectral weight is found in the bound states. In the extreme case of $\Delta_{\rm sc}\to 0$ we have $w=0$, and for large gap, $\Delta_{\rm sc}\to \infty$, and small $U$ equation (\[weightsbs\]) gives $w\to 1$. The tendency to both of these limiting cases can be inferred from figure \[ddexp\] (right) and we can see that, for instance, for $\Delta_{\rm sc}=\pi\Gamma$ already about 80% of the spectral weight is in the bound states. Summarising the behaviour for different parameters, we present a phase diagram for singlet and doublet states for the symmetric model in the following figure \[hfphasedia\]. ![Phase diagram for singlet and doublet ground-state as a function of $\Delta_{\rm sc}/\pi \Gamma$ and $U/\pi \Gamma$, where the full line with large dots describes the phase boundary. The dotted line corresponds to $U/\Gamma=2$, which shows the singlet-doublet transition for $\Delta_{\rm sc}\to\infty$.
The dashed line gives the transition as $T_{\rm K}/\Delta_{\rm sc}\simeq 0.3$ with $T_{\rm K}$ given in equation (\[yoshiokatk\]). As a background colour we have included the amount of spectral weight transferred to the bound states; the discontinuous behaviour at the singlet-doublet ground state transition is slightly blurred in the interpolated representation.[]{data-label="hfphasedia"}](figures_ch5/phdiahfandweights.eps){width="65.00000%"} For small $U$ the ground state is always a singlet. It can become a doublet when $U/\pi \Gamma$ is increased. The critical $U_c$ for the transition decreases with increasing value of the gap $\Delta_{\rm sc}$ as can be seen in the diagram. In the limit $\Delta_{\rm sc}\to \infty$, the critical interaction is given by $U_c/\pi\Gamma=2/\pi$, which is shown with a dotted vertical line in the figure. As mentioned in the Introduction there have been estimates of the phase boundary for the singlet and doublet ground state in the strong coupling regime [@SSSS92; @YO00] as $T_{\rm K}/\Delta_{\rm sc}\simeq 0.3$. In this case the Kondo temperature is given as in equation (3.9) in reference [@YO00], $$\label{yoshiokatk} T_{\rm K}=0.182 U\sqrt{\frac{8\Gamma}{\pi U}}\mathrm{e}^{-\pi U/8\Gamma}.$$ We have added a dashed line representing this result which agrees with the results presented here in the strong coupling regime, but starts to deviate for smaller values of $U$. As a background colour we have included in figure \[hfphasedia\] how much spectral weight $w$ is transferred to the bound states (the value of $w$ is given by the colour bar at the top of the figure). As noted before in figure \[ddexp\] (right) we can see generally that the weight is maximal in the region of large gap and small on-site repulsion $U$. At $\Delta_{\rm sc}\to 0$ the ground-state is a singlet for any value of $U$ as the Kondo effect always leads to a screened impurity spin in a singlet formation.
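Equation (\[yoshiokatk\]) is simple enough to evaluate directly. The following sketch (Python, purely illustrative, with $\pi\Gamma=0.2$ fixed as in the figures) shows how rapidly $T_{\rm K}$ falls with $U$, which is why the strong-coupling criterion $T_{\rm K}/\Delta_{\rm sc}\simeq 0.3$ places the boundary at quickly shrinking gap values as $U$ grows:

```python
import math

def kondo_temperature(U, Gamma):
    """Equation (3.9) of Yoshioka and Ohashi, as quoted in the text:
    T_K = 0.182 * U * sqrt(8*Gamma/(pi*U)) * exp(-pi*U/(8*Gamma))."""
    return (0.182 * U * math.sqrt(8 * Gamma / (math.pi * U))
            * math.exp(-math.pi * U / (8 * Gamma)))

Gamma = 0.2 / math.pi  # pi*Gamma = 0.2 fixed, as in the figures

# T_K decays exponentially in U, so T_K ~ 0.3*Delta_sc marks the
# singlet-doublet boundary at rapidly decreasing gap values:
for U_over_piGamma in (2, 3, 4):
    U = U_over_piGamma * 0.2
    print(U_over_piGamma, kondo_temperature(U, Gamma))
```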
For finite gap the nature of the singlet ground state can differ depending on the magnitude of $U$. It can be a singlet corresponding to an s-wave pair like in the wave function given in equation (\[wffct\]), which is a superposition of zero and double occupation. This is the natural singlet ground state for a BCS superconductor. In the strong coupling regime we can, however, also have a screened local spin, i.e. a Kondo singlet. The wave function has a different form then and consists rather of a singly occupied impurity state coupled to the spins of the medium as a many-body state. In our NRG calculations it is not easy to distinguish clearly this different nature of the singlet ground states and draw a definite line to separate them. We can, however, get an indication for what is favoured from the two particle response functions in the spin and in the charge channel. In figure \[suscdel0.005\] we show the imaginary part of the dynamic charge and spin susceptibility, $\chi_c(\omega)$ and $\chi_s(\omega)$, for $\Delta_{\rm sc}=0.005$ and a series of values for the interaction $U$. ![The imaginary part of the dynamic charge (left) and spin (right) susceptibility for various values of $U$; $\Delta_{\rm sc}=0.005$ and $\pi\Gamma= 0.2$. The scale on both axes is the same such that the results can be compared well.[]{data-label="suscdel0.005"}](figures_ch5/bp_specdifUdelsc0.005.eps "fig:"){width="45.00000%"} ![The imaginary part of the dynamic charge (left) and spin (right) susceptibility for various values of $U$; $\Delta_{\rm sc}=0.005$ and $\pi\Gamma= 0.2$.
The scale on both axes is the same such that the results can be compared well.[]{data-label="suscdel0.005"}](figures_ch5/ss_specdifUdelsc0.005.eps "fig:"){width="45.00000%"} We can see that the peaks in the charge susceptibility exceed the ones in the spin susceptibility for zero and weak interaction, indicating the dominance of the symmetry breaking in the charge channel, and a ground state of superconducting singlet nature. However, for $U/\pi\Gamma>1$ the spin susceptibility develops a large and narrow peak at low frequency. This signals the importance of the spin fluctuations and low energy spin excitations and indicates a ground state with a screened spin. In contrast, the decreasing peaks in the charge susceptibility for large $U$ are consistent with the suppression of the on-site superconducting correlations. Away from particle-hole symmetry {#sec:scawphsym} -------------------------------- So far we have considered the special situation of particle-hole symmetry, ${\varepsilon}_d=-U/2$. In this section we will briefly discuss a few aspects that change in the situation away from particle-hole symmetry. Let us consider the case where for a given gap $\Delta_{\rm sc}$, on-site interaction $U$, and hybridisation $\Gamma$, the ground-state of the system is a doublet at half filling, $\xi_d=0$. When $\xi_d$ is increased, we find that a transition to a singlet state can occur at a certain value $\xi_d^c$. This is illustrated in the following figure \[boundstatesasy\], where we have plotted the bound state energy $E_b$ for fixed $\Delta_{\rm sc}=0.01$, two values of $U/\pi\Gamma=3,5$ and a series of values of the on-site energy scaled by $U$, $\xi_d/U$. As before we have $\pi\Gamma= 0.2$.
![The dependence of the bound state energies $E_b$ (left) and weights $w_b$ (right) on $\xi_d/U$ for $\Delta_{\rm sc}=0.01$ and $U/\pi\Gamma=3,5$; $\pi\Gamma= 0.2$ is fixed.[]{data-label="boundstatesasy"}](figures_ch5/EbxideppiGamma0.2difU_delsc0.01.eps "fig:"){width="47.00000%"} ![The dependence of the bound state energies $E_b$ (left) and weights $w_b$ (right) on $\xi_d/U$ for $\Delta_{\rm sc}=0.01$ and $U/\pi\Gamma=3,5$; $\pi\Gamma= 0.2$ is fixed.[]{data-label="boundstatesasy"}](figures_ch5/weightbxideppiGamma0.2difU_delsc0.01.eps "fig:"){width="45.00000%"} The critical interaction for the ground state transition for this case at half filling is $U_c/\pi\Gamma\simeq 2.6$, such that both cases possess a doublet ground state for $\xi_d=0$. We can see that with increasing asymmetry $\xi_d$ the bound state energy ${\left | E_b \right |}$ first decreases towards zero and then increases again in the singlet regime for $\xi_d>\xi_d^c$. As in the symmetric case the singlet-doublet transition is accompanied by ${\left | E_b \right |}=0$. The weights $w_b^{\pm}$ for these bound states are shown on the right hand side of figure \[boundstatesasy\]. Away from particle-hole symmetry the weight $w_b^+$ for the positive bound state energy $E_b^+$ and the weight $w_b^-$ for the negative one $E_b^-$ are not equal, as was already pointed out below equation (\[weightsbs\]). We can see that the weights $w_b^{\pm}$ start to assume different values when $\xi_d$ is increased from 0. At the ground state transition the values change discontinuously, similar to what was observed in the half-filled case. If we follow both the positive weight $w_b^+$ and the negative $w_b^-$ separately, the weights cross at the transition point. If, however, we think of the bound states as crossing at zero, i.e. $w_b^+\leftrightarrow w_b^-$ at the transition, a more direct connection can be deduced from the results shown.
In the singlet phase there is a maximum for both the positive and the negative bound state weight, more pronounced for $w_b^+$. Also in the asymmetric case it is possible to calculate the bound state position $E_b$ from equation (\[iaandbs\]) and the weights from equation (\[weightsbs\]) employing the renormalised parameters. We do not show the plots here, but note that the results resemble figures \[compboundstates\] and \[compweights\] in that they give good agreement in the singlet regime, but show deviations for parameters where the ground state is a doublet. In the following figure \[phasediag\] (left) we show the dependence of the anomalous expectation value $\langle d_{\uparrow}d_{\downarrow}\rangle$ on the asymmetry scaled by the interaction $\xi_d/U$ for the same value of $\Delta_{\rm sc}$ as in figure \[boundstatesasy\]. ![Left: Anomalous expectation values $\langle d_{\uparrow}d_{\downarrow}\rangle$ for various $U/\pi \Gamma$, $\Delta_{\rm sc}=0.01$ and $\pi\Gamma= 0.2$. Right: Phase diagram showing the regions for singlet and doublet ground state as dependent on $\Gamma/U$ and $\xi_d/U$ for different values of the gap $\Delta_{\rm sc}$. The full semicircular line corresponds to the phase boundary for $\Delta_{\rm sc}=\infty$ as discussed in equation (\[delinfphasbound\]).[]{data-label="phasediag"}](figures_ch5/ddexpxideppiGamma0.2difU_delsc0.01.eps "fig:"){width="45.00000%"} ![Left: Anomalous expectation values $\langle d_{\uparrow}d_{\downarrow}\rangle$ for various $U/\pi \Gamma$, $\Delta_{\rm sc}=0.01$ and $\pi\Gamma= 0.2$. Right: Phase diagram showing the regions for singlet and doublet ground state as dependent on $\Gamma/U$ and $\xi_d/U$ for different values of the gap $\Delta_{\rm sc}$.
The full semicircular line corresponds to the phase boundary for $\Delta_{\rm sc}=\infty$ as discussed in equation (\[delinfphasbound\]).[]{data-label="phasediag"}](figures_ch5/phasediag.eps "fig:"){width="45.00000%"} The values for $\langle d_{\uparrow}d_{\downarrow}\rangle$ are scaled by the gap $\Delta_{\rm sc}$. For the values of $U$ shown, at half filling the system has a doublet ground state and $\langle d_{\uparrow}d_{\downarrow}\rangle$ is negative. At first it does not vary much when $\xi_d$ is increased, but at the transition to the singlet ground state we find, as in the half-filled case, a jump to a positive value and $\langle d_{\uparrow}d_{\downarrow}\rangle$ increases to a saturation value on further increasing $\xi_d$. This value is smaller for larger $U$, similar to what has been found in the symmetric case. On the right hand side of figure \[phasediag\] we present a global phase diagram of the parameter regimes for singlet and doublet ground states for the non-symmetric case. This representation in the $\Gamma/U$-$\xi_d/U$-plane is motivated by the result for the phase boundary for the case $\Delta_{\rm sc}\to \infty$ derived in section \[sec:limlargegap\], equation (\[delinfphasbound\]). The semicircle corresponding to this case is shown in the figure together with the phase boundaries for some finite values of the gap $\Delta_{\rm sc}$. These are seen to have a similar form, but the boundary moves to smaller values of $\Gamma/U$ as $\Delta_{\rm sc}/\pi\Gamma$ decreases. Note that the parameters on the line on the $x$-axis, to which the phase boundary contracts in the limit $\Gamma\to 0$ or $U\to\infty$, possess a doublet ground state for ${\left | \xi_d \right |}/U<1/2$.
This study is motivated by the experimental situations of impurities in superconductors and nanoscale quantum dot systems with superconducting leads. In the local spectral functions we found that the low energy spectrum is dominated by the superconducting gap, and we saw that the lowest excitations in these cases are Andreev bound states within the gap region. For higher energies the spectrum resembles the form usually found in a metallic bath with broadened atomic limit peaks for large $U/\pi\Gamma$. The formation of the Kondo resonance, whose width is proportional to $T_{\rm K}$, is in direct competition with the superconducting spectral gap of magnitude $\Delta_{\rm sc}$. Therefore, depending on the ratio of these parameters a screened Kondo singlet or an unscreened local moment is observed. The lowest spectral excitations, the Andreev bound states within the gap region, change position and weight according to the other parameters. These have been analysed in detail in both the symmetric and the asymmetric model. We have given a simple interpretation of their position and weight in terms of renormalised parameters. It turned out that the assumptions for the definition of these were satisfied better in the singlet ground state regime. The reason for this should be the subject of further investigation. In the quantum dot systems currents have been observed involving multiple Andreev processes [@BNS02; @BBNBBS03]. It is expected that a quantitative understanding of these currents requires accurate information about the weight and position of the Andreev bound states, which has been provided here. To study the experimental situation in detail and to describe the differential conductance dependent on the local bound state behaviour can be the subject of a separate publication, where the details of the experimental setup are also taken into account more carefully.
The behaviour of the ground state of the system, which can be a spin singlet or a doublet, is summarised in the two phase diagrams in figures \[hfphasedia\] and \[phasediag\]. For the overlapping parameter ranges our results for the ground state and the locally excited states are in agreement with earlier NRG studies [@SSSS92; @SSSS93; @YO00]. Differences can be seen in the spectral representation of the bound states in the gap. Here we report delta function peaks, whereas an earlier study [@CLKB04] presented broadened peaks. The method of calculating spectral functions and the self-energy used and explained in the appendix of this paper will be relevant for extensions of the calculation to the lattice model within the dynamical mean field theory framework. There an effective Anderson impurity model could be used to study the phases with superconducting symmetry breaking for instance in the attractive Hubbard model. We would like to acknowledge helpful discussions with R. Bulla and Hyun-Jung Lee. JB is grateful for the hospitality at Osaka City University, where this work was initiated, and to SFB 484 at the University of Augsburg, where this work was finalised. JB thanks the Gottlieb Daimler- and Karl Benz-Foundation, EPSRC, the DAAD and JSPS for financial support, and AO acknowledges the support by the Grant-in-Aid for Scientific Research for JSPS. We also wish to thank W. Koller and D. Meyer for their earlier contributions to the NRG program. Relevant Green’s functions ========================== For the Green’s functions it is convenient to work in Nambu space, ${\bm C}^{\dagger}_{d}=({ c^{\dagger}_{d,\uparrow}},{ c^{}_{d,\downarrow}})$, with $2\times2$ matrices. 
The relevant retarded Green’s functions are then $$\fl \underline {G}_d(\omega)= {\langle\!\langle {\bm C}_{d};{\bm C}^{\dagger}_{d} \rangle\!\rangle}_{\omega}= \left(\begin{array}{c c} {\langle\!\langle { c^{}_{d,\uparrow}};{ c^{\dagger}_{d,\uparrow}} \rangle\!\rangle}_{\omega} & {\langle\!\langle { c^{}_{d,\uparrow}};{ c^{}_{d,\downarrow}} \rangle\!\rangle}_{\omega} \\ {\langle\!\langle { c^{\dagger}_{d,\downarrow}};{ c^{\dagger}_{d,\uparrow}} \rangle\!\rangle}_{\omega} & {\langle\!\langle { c^{\dagger}_{d,\downarrow}};{ c^{}_{d,\downarrow}} \rangle\!\rangle}_{\omega} \end{array}\right) =\left(\begin{array}{c c} G_{11}(\omega) & G_{12}(\omega) \\ G_{21}(\omega) & G_{22}(\omega) \end{array}\right).$$ In the NRG approach we calculate $G_{11}$ and $G_{21}$ directly and infer $G_{22}(\omega)=-G_{11}(-\omega)^*$, which follows from $G_{A,B}^{\rm ret}(\omega)=-G_{B,A}^{\rm adv}(-\omega)$ and $G_{A,B}^{\rm ret/adv}(\omega)=-G_{A^{\dagger},B^{\dagger}}^{\rm ret/adv}(-\omega)^*$ for fermionic operators $A$, $B$. Similarly, we can find $G_{12}(\omega)=G_{21}(-\omega)^*$. In the derivation one has to be careful and include a sign change for up down spin interchange in the corresponding operator combination. In the non-interacting case we can deduce the $d$-site Green’s function matrix exactly. To do so rewrite the term $H_{\rm sc}$ by introducing the vector of operators and the symmetric matrix $$\label{cknambu} {\bm C}_{{{\bm k}}}:= \left(\begin{array}{c} \! { c^{}_{{{\bm k}},\uparrow}} \\ \! { c^{\dagger}_{-{{\bm k}},\downarrow}} \end{array}\right), \qquad A_{{{\bm k}}}:= \left(\begin{array}{cc} \! {\varepsilon}_{{{\bm k}}} & \! -\Delta_{\rm sc}\\ \! -\Delta_{\rm sc} & \! 
-{\varepsilon}_{{{\bm k}}} \end{array}\right).$$ Then $H_{\rm sc}$ can be written as $$H_{\rm sc}=\sum_{{{\bm k}}}{\bm C}_{{{\bm k}}}^{\dagger}A_{{{\bm k}}}{\bm C}_{{{\bm k}}}.$$ The matrix Green’s function in the superconducting lead is then given by $\underline {g}_{{{\bm k}}}(i\omega_n)=(i\omega_n {\mathbbm{1}}_2-A_{{{\bm k}}})^{-1}$, $$\underline {g}_{{{\bm k}}}(i\omega_n)^{-1} =i\omega_n{\mathbbm{1}}_{2}-{\varepsilon}_{{{\bm k}}}\tau_3 +\Delta_{\rm sc}\tau_1, \label{freescbandgfct}$$ where $\tau_i$ are Pauli matrices. It follows that $$\underline {g}_{{{\bm k}}}(i\omega_n) =\frac{i\omega_n{\mathbbm{1}}_{2}+{\varepsilon}_{{{\bm k}}}\tau_3 -\Delta_{\rm sc}\tau_1}{(i\omega_n)^2-({\varepsilon}_{{{\bm k}}}^2+\Delta_{\rm sc}^2)}.$$ In the wide band limit with a constant density of states the hybridisation term takes the form $$V^2 \frac1N\sum_{{{\bm k}}}\underline {g}_{{{\bm k}}}(i\omega_n)= -\Gamma\frac{i\omega_n{\mathbbm{1}}_{2}+\Delta_{\rm sc}\tau_1} {E(i\omega_n)}.$$ We are mostly interested in the limit of zero temperature here, and the function in the denominator $E(z)$ after analytic continuation reads $$\label{funcEom} E(\omega)= \left\{ \begin{array}{c c} -i{\rm sgn}(\omega)\sqrt{\omega^2-\Delta_{\rm sc}^2} & {\rm for} \; |\omega|>\Delta_{\rm sc} \\ \sqrt{\Delta_{\rm sc}^2-\omega^2} & {\rm for}\; |\omega|<\Delta_{\rm sc} \end{array}\right . 
.$$ In the non-interacting case for $T=0$, we have therefore $$\underline {G}^0_d(\omega)^{-1}=\omega{\mathbbm{1}}_{2}-{\varepsilon}_{d}\tau_3+ \Gamma\frac{\omega{\mathbbm{1}}_{2}+\Delta_{\rm sc}\tau_1} {E(\omega)}.$$ The Green’s function is obtained by matrix inversion, which yields $$\underline {G}^0_d(\omega)= \frac{1}{D(\omega)} \Big[\omega\Big(1+\frac{\Gamma}{E(\omega)}\Big){\mathbbm{1}}_{2} -\frac{\Gamma\Delta_{\rm sc}}{E(\omega)}\tau_1 +{\varepsilon}_{d}\tau_3\Big] , \label{freegfctscimp}$$ where the determinant, $D(\omega):=\det(\underline {G}^0_d(\omega)^{-1})$ is given by $$D(\omega)= \omega^2 \Big[1+\frac{\Gamma}{E(\omega)}\Big]^2 -\frac{\Gamma^2\Delta_{\rm sc}^2}{E(\omega)^2} -{\varepsilon}_{d}^2. \label{scdet}$$ The full Green’s function matrix $\underline {G}_d(\omega)^{-1}$ at the impurity site is given by the Dyson matrix equation $$\underline {G}_d(\omega)^{-1}= \underline G_0^{-1}(\omega)- \underline \Sigma(\omega), \label{scgreenfct}$$ where we have introduced the self-energy matrix $\underline \Sigma(\omega)$. Self-energy using the higher $F$-Green’s function ================================================= As described by Bulla et al. [@BHP98] there is a method to calculate the self-energy employing a higher $F$-Green’s function, and it can also be used for the case with superconducting bath. In order to derive the equations of motions for the correlation functions, the identity $$\omega{\langle\!\langle A;B \rangle\!\rangle}_{\omega}+{\langle\!\langle [H,A],B \rangle\!\rangle}_{\omega}={ \langle [A,B]_{\eta} \ \!\! \rangle}$$ ($\eta=+$ for fermions) is useful. 
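As a concrete check of these expressions, the sketch below (Python, with illustrative parameter values, not fitted to the figures) implements $E(\omega)$ from (\[funcEom\]) and verifies that the closed form (\[freegfctscimp\]) with the determinant (\[scdet\]) agrees with a direct inversion of $\underline{G}^0_d(\omega)^{-1}$:

```python
import cmath

def E(w, delta_sc):
    """Equation (funcEom): imaginary outside the gap, real inside."""
    if abs(w) > delta_sc:
        sgn = 1.0 if w > 0 else -1.0
        return -1j * sgn * cmath.sqrt(w * w - delta_sc * delta_sc)
    return cmath.sqrt(delta_sc * delta_sc - w * w)

def G0_inv(w, eps_d, Gamma, delta_sc):
    """omega*1 - eps_d*tau_3 + Gamma*(omega*1 + delta_sc*tau_1)/E(omega)."""
    e = E(w, delta_sc)
    a = w * (1.0 + Gamma / e)
    b = Gamma * delta_sc / e
    return [[a - eps_d, b], [b, a + eps_d]]

def G0_closed(w, eps_d, Gamma, delta_sc):
    """Closed form (freegfctscimp) with determinant D from (scdet)."""
    e = E(w, delta_sc)
    a = w * (1.0 + Gamma / e)
    b = Gamma * delta_sc / e
    D = a * a - b * b - eps_d * eps_d
    return [[(a + eps_d) / D, -b / D], [-b / D, (a - eps_d) / D]]

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Illustrative numbers: a frequency outside the gap
w, eps_d, Gamma, delta_sc = 0.05, 0.1, 0.2 / cmath.pi, 0.02
direct = inv2(G0_inv(w, eps_d, Gamma, delta_sc))
closed = G0_closed(w, eps_d, Gamma, delta_sc)
```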
The calculation taking into account all offdiagonal terms yields the following matrix equation $$\underline G_0^{-1}(\omega) \underline {G}_d(\omega)-U\underline F(\omega)={\mathbbm{1}}_2, \label{eomoff}$$ with the matrix of higher Green’s functions $\underline F(\omega)$, $$\underline F(\omega)= \left(\begin{array}{c c} F_{11}(\omega) & F_{12}(\omega) \\ F_{21}(\omega) & F_{22}(\omega) \end{array}\right).$$ We have introduced the matrix elements $F_{11}(\omega)={\langle\!\langle { c^{}_{d,\uparrow}}n_{\downarrow};{ c^{\dagger}_{d,\uparrow}} \rangle\!\rangle}_{\omega}$, $F_{12}(\omega)={\langle\!\langle { c^{}_{d,\uparrow}}n_{\downarrow};{ c^{}_{d,\downarrow}} \rangle\!\rangle}_{\omega}$, $F_{21}(\omega)=-{\langle\!\langle { c^{\dagger}_{d,\downarrow}}n_{\uparrow};{ c^{\dagger}_{d,\uparrow}} \rangle\!\rangle}_{\omega}$ and $F_{22}(\omega)=-{\langle\!\langle { c^{\dagger}_{d,\downarrow}}n_{\uparrow};{ c^{}_{d,\downarrow}} \rangle\!\rangle}_{\omega}$. In the NRG we calculate $F_{11}$ and $F_{21}$ and the others follow from $F_{12}(\omega)=-F_{21}(-\omega)^*$ and $F_{22}(\omega)=F_{11}(-\omega)^*$. We can define the self-energy matrix by $$\underline \Sigma(\omega)= U \underline F(\omega)\underline {G}_d(\omega)^{-1}. \label{SigF}$$ The properties of the Green’s function and the higher $F$-Green’s function lead to the relations $\Sigma_{12}(\omega)=\Sigma_{21}(-\omega)^*$ and $\Sigma_{22}(\omega)=-\Sigma_{11}(-\omega)^*$ for the self-energies. We can therefore calculate the diagonal self-energy $\Sigma(\omega)=\Sigma_{11}(\omega)$ and the offdiagonal self-energy $\Sigma^{\rm off}(\omega)=\Sigma_{21}(\omega)$ and deduce the other two matrix elements from them. With the relation (\[SigF\]) between $\underline G$, $\underline F$ and $\underline \Sigma$ the Dyson equation (\[scgreenfct\]) is recovered from (\[eomoff\]). 
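This recovery can be confirmed numerically: starting from an invertible $\underline G_0^{-1}$ and a matrix $\underline F$ (the $2\times2$ entries below are arbitrary illustrative numbers, not physical data), solving (\[eomoff\]) for $\underline G$ and forming $\underline\Sigma$ via (\[SigF\]) reproduces the Dyson equation (\[scgreenfct\]):

```python
def mul(m, n):
    """Product of 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def lincomb(m, n, s):
    """Elementwise m + s*n."""
    return [[m[i][j] + s * n[i][j] for j in range(2)] for i in range(2)]

ONE = [[1.0, 0.0], [0.0, 1.0]]
U = 0.5
G0_inv = [[1.0 + 0.1j, 0.2], [0.2, -1.0 + 0.1j]]  # illustrative numbers
F = [[0.3, 0.1], [0.1, -0.3]]                     # illustrative numbers

# (eomoff): G0^{-1} G - U F = 1  implies  G = G0 (1 + U F)
G = mul(inv2(G0_inv), lincomb(ONE, F, U))
# (SigF): Sigma = U F G^{-1}
Sigma = [[U * x for x in row] for row in mul(F, inv2(G))]
# Dyson equation (scgreenfct): G^{-1} = G0^{-1} - Sigma
lhs = inv2(G)
rhs = lincomb(G0_inv, Sigma, -1.0)
```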
Therefore, the Green’s function can be calculated from the free Green’s function as given in (\[freegfctscimp\]) and the self-energy as calculated from (\[SigF\]). This scheme will be useful for applications of dynamical mean field theory with superconducting symmetry breaking, where the self-energy matrix has to be calculated accurately to find a self-consistent solution. References {#references .unnumbered} ==========
--- abstract: 'One part of Sylow’s famous theorem in group theory states that the number of Sylow $p$-subgroups of a finite group is always congruent to $1$ modulo $p$. Conversely, Marshall Hall has shown that not every positive integer $n\equiv 1\pmod{p}$ occurs as the number of Sylow $p$-subgroups of some finite group. While Hall’s proof relies on deep knowledge of modular representation theory, we show by elementary means that no finite group has exactly $35$ Sylow $17$-subgroups.' author: - 'Benjamin Sambale[^1]' title: Pseudo Sylow numbers --- Introduction ============ Every student of abstract algebra encounters at some point one of the most fundamental theorems on finite groups: \[sylow\] Let $G$ be a group of finite order $n=p^am$ where $p$ is a prime not dividing $m$. Then the number of subgroups of $G$ of order $p^a$ is congruent to $1$ modulo $p$. In particular, there is at least one such subgroup. The subgroups described in \[sylow\] are called *Sylow $p$-subgroups* of $G$. Apart from Sylow’s original proof [@Sylow] from 1872, a number of different proofs appeared in the literature and they are presented in the survey article [@Waterhouse]. More recently, a very elementary proof by Robinson [@RobinsonSylow] of the last part of \[sylow\] appeared in the *Monthly*. Fixing $p$, it is natural to ask if every positive integer $n\equiv 1\pmod{p}$ is a *Sylow $p$-number*, i.e., $n$ is the number of Sylow $p$-subgroups of some finite group. Certainly, $n=1$ is a Sylow $p$-number for the trivial group $G=\{1\}$ and every prime $p$. Moreover, every odd $n$ is a Sylow $2$-number for the dihedral group of order $2n$. This is the symmetry group of the regular $n$-gon and the Sylow $2$-subgroups are in one-to-one correspondence with the reflections. For odd primes $p$ the question is more delicate.
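The dihedral claim is easy to check by brute force. The sketch below (Python, our own illustration, not part of the original argument) realizes the dihedral group as permutations of $\{0,\dots,n-1\}$ and, for odd $n$, counts the Sylow $2$-subgroups by counting involutions, since each subgroup of order $2$ is generated by exactly one involution:

```python
def dihedral(n):
    """Dihedral group of order 2n as permutation tuples of {0,...,n-1}:
    rotations i -> i + k (mod n) and reflections i -> k - i (mod n)."""
    rotations = [tuple((i + k) % n for i in range(n)) for k in range(n)]
    reflections = [tuple((k - i) % n for i in range(n)) for k in range(n)]
    return rotations + reflections

def num_sylow_2(n):
    """For odd n the Sylow 2-subgroups have order 2, so counting them
    amounts to counting involutions (the n reflections)."""
    identity = tuple(range(n))
    return sum(1 for g in dihedral(n)
               if g != identity
               and tuple(g[g[i]] for i in range(n)) == identity)
```

For odd `n` every nontrivial rotation has odd order, so the involutions are exactly the `n` reflections, in line with the one-to-one correspondence stated above.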
Philip Hall [@Hallgroups] observed that in *solvable* groups the prime factorization of a Sylow $p$-number $n=p_1^{a_1}\cdots p_s^{a_s}$ satisfies $p_i^{a_i}\equiv 1\pmod{p}$ for $i=1,\ldots,s$. For example, no solvable group has exactly six Sylow $5$-subgroups. Nevertheless, the symmetry group of the dodecahedron of order $120$ does have six Sylow $5$-subgroups which can be identified with the $5$-fold rotations of the six axes. About forty years later Marshall Hall [@MHall] reduced the determination of the Sylow $p$-numbers to *simple* groups. (Recall that simple groups are like prime numbers in that they have only two normal subgroups: the trivial group and the whole group.) More precisely, he showed that every Sylow $p$-number is a product of prime powers $q^t\equiv 1\pmod{p}$ and Sylow $p$-numbers of (nonabelian) simple groups. Conversely, every such product is in fact a Sylow $p$-number, which can be seen by taking suitable direct products of affine groups and simple groups. Since nowadays the extremely complicated classification of the finite simple groups is believed to be complete (see [@CFSG]), one can in principle determine the Sylow $p$-numbers by going through the list of simple groups (see [@OEIS]). M. Hall instead used Brauer’s sophisticated theory of $p$-blocks of defect $1$ (a part of modular representation theory) to show that *not* every positive integer $n\equiv 1\pmod{p}$ is a Sylow $p$-number. More precisely, he constructed such *pseudo* Sylow $p$-numbers for every odd prime $p$ (for instance $n=22$ works for $p\in\{3,7\}$). In the present paper we are content to provide only one such number which is a special case of [@MHall Theorem 3.1]: \[17\] No finite group has exactly $35$ Sylow $17$-subgroups. The Fermat prime $17$ is chosen to make the proof as easy as possible. Apart from Sylow’s theorem we only use first principles of group actions. It seems that such an elementary proof has not appeared in the literature so far.
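P. Hall's observation for solvable groups is easy to turn into a computational test. In the sketch below (Python; the function names are ours), `solvable_sylow_possible` checks whether every maximal prime-power factor of $n$ is congruent to $1$ modulo $p$, which is necessary for $n$ to be a Sylow $p$-number of a *solvable* group. Note that it already rules out $n=35$, $p=17$ for solvable groups, so the substance of Theorem A concerns nonsolvable groups:

```python
def prime_power_factors(n):
    """Maximal prime-power factors of n, e.g. 35 -> [5, 7], 12 -> [4, 3]."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            q = 1
            while n % d == 0:
                n //= d
                q *= d
            factors.append(q)
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def solvable_sylow_possible(n, p):
    """P. Hall's necessary condition on the Sylow p-numbers of
    solvable groups: each maximal prime-power factor of n is
    congruent to 1 modulo p."""
    return all(q % p == 1 for q in prime_power_factors(n))
```

For instance, `solvable_sylow_possible(6, 5)` is false, matching the dodecahedron example: six Sylow $5$-subgroups occur, but only in a nonsolvable group.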
Lastly, we remark that Frobenius [@FrobeniusSylow2] has extended Sylow’s theorem to the following: If a prime power $p^a$ divides the order of a finite group $G$, then the number $n_{p^a}$ of subgroups of order $p^a$ in $G$ is congruent to $1$ modulo $p$. Moreover, if $p^{a+1}$ divides $|G|$, it is known by work of P. Hall [@HallFrobenius Lemma 4.61 and Theorem 4.6] that $n_{p^a}$ is congruent to $1$ or $1+p$ modulo $p^2$. In particular, the number of $17$-subgroups of a fixed order of any finite group is never $35$. This gives rise to *pseudo Frobenius numbers* which are those positive integers $n$ congruent to $1$ or $1+p$ modulo $p^2$ such that no finite group has exactly $n$ subgroups of order $p^a$ for some $a\ge 0$. The existence question of pseudo Frobenius numbers will be resolved in a different paper (see [@SambaleFrob]). Proof of Theorem A ================== We assume that the reader is familiar with elementary group theory as it is given for example in [@Lang Chapter I]. In order to introduce notation we review a few basic facts. In the following $G$ is always a finite group with identity $1$. Let $\Omega$ be a finite nonempty set. Then the permutations of $\Omega$ form the *symmetric group* ${\operatorname{Sym}}(\Omega)$ with respect to the composition of maps. Let $S_n:={\operatorname{Sym}}(\{1,\ldots,n\})$ be the symmetric group of *degree* $n$. The even permutations in $S_n$ form the *alternating group* $A_n$ of degree $n$. Recall that $|S_n|=n!$ and $|S_n:A_n|=2$ for $n\ge 2$. An *action* of $G$ on $\Omega$ is a map $$\begin{aligned} G\times\Omega&\to\Omega,\\ (g,\omega)&\mapsto {^g\omega}\end{aligned}$$ such that $^1\omega=\omega$ and $^{gh}\omega={^g({^h\omega})}$ for all $\omega\in\Omega$ and $g,h\in G$. Every action determines a group homomorphism $\sigma:G\to{\operatorname{Sym}}(\Omega)$ which sends $g\in G$ to the permutation $\omega\mapsto{^g\omega}$ of $\Omega$. We call ${\operatorname{Ker}}(\sigma)$ the *kernel* of the action. 
If ${\operatorname{Ker}}(\sigma)=1$, we say that $G$ acts *faithfully* on $\Omega$. For $\omega\in\Omega$, the set $^G\omega:=\{{^g\omega}:g\in G\}$ is called the *orbit* of $\omega$ under $G$. Finally, the *stabilizer* of $\omega$ in $G$ is given by $G_\omega:=\{g\in G:{^g\omega}=\omega\}\le G$. \[orbstab\] For $g\in G$ and $\omega\in\Omega$ we have $$|^G\omega|=|G:G_\omega|.$$ It is easy to check that the map $G/G_\omega\to {^G\omega}$, $gG_\omega\mapsto{^g\omega}$ is a well-defined bijection (see [@Lang Proposition I.5.1]). The most relevant action in the situation of Sylow’s theorem is the action of $G$ on itself by *conjugation*, i.e., $^gx:=gxg^{-1}$ for $g,x\in G$. Then the orbits are called *conjugacy classes* and the stabilizer of $x$ is the *centralizer* ${\operatorname{C}}_G(x):=\{g\in G:gx=xg\}$. Conjugation also induces an action of $G$ on the set of subgroups of $G$. Here the stabilizer of $H\le G$ is the *normalizer* ${\operatorname{N}}_G(H):=\{g\in G:gH=Hg\}$. Clearly, ${\operatorname{N}}_G(H)$ acts by conjugation on $H$ and the corresponding kernel is $${\operatorname{C}}_G(H):=\{g\in G:gh=hg\ \forall h\in H\}\unlhd{\operatorname{N}}_G(H).$$ The action on the set of subgroups can be restricted onto the set ${\operatorname{Syl}}_p(G)$ of Sylow $p$-subgroups, since conjugation preserves order. Then the following supplement to Sylow’s theorem implies that this action of $G$ has only one orbit on ${\operatorname{Syl}}_p(G)$. \[sylow2\] Let $P\in{\operatorname{Syl}}_p(G)$. Then every $p$-subgroup of $G$ is conjugate to a subgroup of $P$. In particular, all Sylow $p$-subgroups of $G$ are conjugate. See [@Lang Theorem I.6.4]. The Propositions \[orbstab\] and \[sylow2\] imply that $\lvert{\operatorname{Syl}}_p(G)\rvert=|G:{\operatorname{N}}_G(P)|$ for any $P\in{\operatorname{Syl}}_p(G)$. Hence, by Lagrange’s theorem, the number of Sylow $p$-subgroups of $G$ divides $|G|$ (see [@Lang Proposition I.2.2]). 
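Proposition \[orbstab\] can be sanity-checked for the conjugation action in a small case: in $S_4$ the conjugacy class of a transposition has $6$ elements, its centralizer has order $4$, and $6\cdot 4=24=|S_4|$. A short sketch (our own encoding):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S4 = list(permutations(range(4)))
x = (1, 0, 2, 3)  # the transposition swapping the first two points

# Orbit of x under conjugation (its conjugacy class) and its stabilizer
# (its centralizer).
orbit = {compose(compose(g, x), inverse(g)) for g in S4}
centralizer = [g for g in S4 if compose(g, x) == compose(x, g)]

# Orbit-stabilizer: |orbit| * |stabilizer| = |G|.
assert len(orbit) * len(centralizer) == len(S4)
assert len(orbit) == 6 and len(centralizer) == 4
```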
Moreover, the *$p$-core* $${\operatorname{O}}_p(G):=\bigcap_{P\in{\operatorname{Syl}}_p(G)}P$$ of $G$ lies in the kernel of the conjugation action of $G$ on ${\operatorname{Syl}}_p(G)$. Our next ingredient is a lesser-known result by Brodkey [@Brodkey]. \[brodkey\] Suppose that $G$ has abelian Sylow $p$-subgroups. Then there exist $P,Q\in{\operatorname{Syl}}_p(G)$ such that $P\cap Q={\operatorname{O}}_p(G)$. Choose $P,Q\in{\operatorname{Syl}}_p(G)$ such that $|P\cap Q|$ is as small as possible. Since $P$ and $Q$ are abelian, it follows that $P\cap Q\unlhd P$ and $P\cap Q\unlhd Q$. This means that $P$ and $Q$ are Sylow $p$-subgroups of $N:={\operatorname{N}}_G(P\cap Q)$. Now let $S\in{\operatorname{Syl}}_p(G)$ be arbitrary. By Proposition \[sylow2\] applied in $N$, there exists $g\in N$ such that $^gS\cap N={^g(S\cap N)}\le P$. We conclude that $$^gS\cap Q={^gS}\cap N\cap Q\le P\cap Q.$$ By the choice of $P$ and $Q$, we have equality $P\cap Q={^gS}\cap Q\le{^gS}$. Conjugating by $g^{-1}$ on both sides yields $P\cap Q={^{g^{-1}}(P\cap Q)}\le S$. Since $S$ was arbitrary, we obtain $P\cap Q\le{\operatorname{O}}_p(G)\le P\cap Q$ as desired. For the proof of Theorem A we need three more specific lemmas. \[centalt\] Let $p$ be an odd prime and let $\sigma$ be a product of two disjoint $p$-cycles in $A_{2p}$. Then $$\lvert{\operatorname{C}}_{A_{2p}}(\sigma)\rvert=p^2.$$ Although the claim can be proved with the orbit-stabilizer theorem, we prefer a more direct argument. First observe that $\sigma$ is in fact an even permutation and therefore lies in $A_{2p}$. Without loss of generality, we may assume that $\sigma=(1,\ldots,p)(p+1,\ldots,2p)$. Then $\langle(1,\ldots,p),(p+1,\ldots,2p)\rangle\le{\operatorname{C}}_{A_{2p}}(\sigma)$ and we obtain $\lvert{\operatorname{C}}_{A_{2p}}(\sigma)\rvert\ge p^2$. For the converse inequality, let $\tau\in{\operatorname{C}}_{S_{2p}}(\sigma)$. There are (at most) $2p$ choices for $\tau(1)$. For $i=2,\ldots,p$ we have $\tau(i)=\tau(\sigma^{i-1}(1))=\sigma^{i-1}(\tau(1))$.
Thus, after $\tau(1)$ is fixed, there are only $p$ possibilities for $\tau(p+1)$ left. Again $\tau(p+i)=\sigma^{i-1}(\tau(p+1))$ for $i=2,\ldots,p$. Altogether there are at most $2p^2$ choices for $\tau$ and we obtain $\lvert{\operatorname{C}}_{S_{2p}}(\sigma)\rvert\le 2p^2$. Observe that $$\tau:=(1,p+1)(2,p+2)\ldots(p,2p)\in{\operatorname{C}}_{S_{2p}}(\sigma),$$ but since $p$ is odd we have $\tau\notin A_{2p}$. Hence, ${\operatorname{C}}_{A_{2p}}(\sigma)\subsetneq{\operatorname{C}}_{S_{2p}}(\sigma)$ and Lagrange’s theorem yields $\lvert{\operatorname{C}}_{A_{2p}}(\sigma)\rvert\le\lvert{\operatorname{C}}_{S_{2p}}(\sigma)\rvert/2\le p^2$. \[NC\] Assume that $G$ has a Sylow $p$-subgroup $P$ of order $p$. Then ${\operatorname{N}}_G(P)/{\operatorname{C}}_G(P)$ is cyclic of order dividing $p-1$. As we have remarked after Proposition \[orbstab\], ${\operatorname{N}}_G(P)$ acts by conjugation on $P$ with kernel ${\operatorname{C}}_G(P)$. By the first isomorphism theorem, ${\operatorname{N}}_G(P)/{\operatorname{C}}_G(P)$ is isomorphic to a subgroup of ${\operatorname{Sym}}(P)$. Since conjugation induces automorphisms on $P$, we may even regard ${\operatorname{N}}_G(P)/{\operatorname{C}}_G(P)$ as a subgroup of the automorphism group ${\operatorname{Aut}}(P)$. By hypothesis, $P\cong{\mathbb{Z}}/p{\mathbb{Z}}$ and we obtain ${\operatorname{Aut}}(P)\cong({\mathbb{Z}}/p{\mathbb{Z}})^\times$. Now a standard fact in algebra states that $({\mathbb{Z}}/p{\mathbb{Z}})^\times$ is cyclic of order $p-1$ (see [@Lang Theorem IV.1.9]). The claim follows with Lagrange’s theorem. \[cyc2\] If $G$ has a cyclic Sylow $2$-subgroup $P$, then there exists a unique $N\unlhd G$ such that $|G:N|=|P|$. Since this is a common exercise in many textbooks (see [@IsaacsAlgebra Exercise 6.10] for instance), we only sketch the proof. We argue by induction on $|P|=2^n$. For $n=0$ the claim holds with $N=G$. Thus, let $n\ge 1$.
It is easy to see that $G$ acts faithfully on itself by multiplication on the left, i.e., $^gx:=gx$ for $g,x\in G$. Hence, we may regard $G$ as a subgroup of $S_{|G|}$. Doing so, every nontrivial element of $G$ is a permutation without fixed points. Let $x$ be a generator of $P$. Then $x$ is a product of $|G|/2^n$ disjoint $2^n$-cycles. In particular, $x$ is an odd permutation and $H:=G\cap A_{|G|}$ is a normal subgroup of $G$ of index $2$. Moreover, $P\cap H$ is a cyclic Sylow $2$-subgroup of $H$. By induction there exists a unique $N\unlhd H$ with $|H:N|=2^{n-1}$. For $g\in G$ we have $gNg^{-1}\unlhd gHg^{-1}=H$ and $|H:gNg^{-1}|=|H:N|$. The uniqueness of $N$ shows that $N=gNg^{-1}\unlhd G$ and $$|G:N|=|G:H||H:N|=2^n.$$ Finally, if $M\unlhd G$ with $|G:M|=2^n$, then $M\unlhd H$ and the uniqueness of $N$ gives $M=N$. Let $G$ be a minimal counterexample. For ease of notation let $p=17$ and $n=2p+1=35$. **Step 1:** $G$ acts faithfully on ${\operatorname{Syl}}_p(G)$ and $G\le A_n$.\ Let $K\unlhd G$ be the kernel of the conjugation action of $G$ on ${\operatorname{Syl}}_p(G)$. We show that $$\begin{aligned} \gamma:{\operatorname{Syl}}_p(G)&\to{\operatorname{Syl}}_p(G/K),\\ P&\mapsto PK/K\end{aligned}$$ is a bijection. For $P\in{\operatorname{Syl}}_p(G)$, the isomorphism theorems show that $PK/K\cong P/P\cap K$ is a $p$-group, and $|G/K:PK/K|=|G:PK|$ divides $|G:P|$ and thus is not divisible by $p$ (see [@Lang p. 17]). Hence, $PK/K$ is a Sylow $p$-subgroup of $G/K$. By Proposition \[sylow2\], every Sylow $p$-subgroup of $G/K$ has the form $(gK)(PK/K)(gK)^{-1}=gPg^{-1}K/K$ for some $g\in G$. Hence, $\gamma$ is surjective. To show injectivity, let $P,Q\in{\operatorname{Syl}}_p(G)$ such that $PK/K=QK/K$. Then $PK=QK$. Since $K$ acts trivially on ${\operatorname{Syl}}_p(G)$, $P$ is the only Sylow $p$-subgroup of $PK$ and $Q$ is the only Sylow $p$-subgroup of $QK$. Hence, $P=Q$ and $\gamma$ is injective.
It follows that $G/K$ has exactly $n$ Sylow $p$-subgroups and by the choice of $G$ we must have $K=1$. Therefore, $G$ acts faithfully on ${\operatorname{Syl}}_p(G)$ and we may regard $G$ as a subgroup of $S_n$. Since $A_n$ contains every element of odd order, every Sylow $p$-subgroup of $G$ lies in $A_n$. Consequently, $G\cap A_n$ is also a counterexample and we obtain $G\le A_n$ by minimality of $G$. In the following we fix $P\in{\operatorname{Syl}}_p(G)$. **Step 2:** $|P|=p$.\ Since $|S_n|=n!$ is not divisible by $p^3$, Step 1 and Lagrange’s theorem already imply $|P|\le p^2$. In particular, $P$ is abelian (see [@Lang Exercise I.24]). Since ${\operatorname{O}}_p(G)$ lies in the kernel of the conjugation action on ${\operatorname{Syl}}_p(G)$, Step 1 also yields ${\operatorname{O}}_p(G)=1$. By Brodkey’s proposition there exists $Q\in{\operatorname{Syl}}_p(G)$ such that $P\cap Q=1$. Since $Q$ is the only Sylow $p$-subgroup of ${\operatorname{N}}_G(Q)$, we conclude that ${\operatorname{N}}_P(Q)\le P\cap Q=1$. The orbit-stabilizer theorem applied to the conjugation action of $P$ on ${\operatorname{Syl}}_p(G)$ yields $$|P|=|P:{\operatorname{N}}_P(Q)|=|^PQ|\le n<p^2.$$ Hence, $|P|=p$. **Step 3:** $|G|=5\cdot 7\cdot 17$.\ By construction, ${\operatorname{N}}_G(P)$ is the stabilizer of $P$ and Step 1 implies $$P\le{\operatorname{C}}_G(P)\le{\operatorname{N}}_G(P)\le S_{n-1}\cap A_n=A_{2p}.$$ It follows easily from Step 2 that $P$ has orbits of size $1$, $p$ and $p$ on ${\operatorname{Syl}}_p(G)$. Hence, $P$ is generated by a product of two disjoint $p$-cycles in $A_{2p}$. Now it follows from Step 1 and Lemma \[centalt\] that $$P\le {\operatorname{C}}_G(P)={\operatorname{C}}_{A_{2p}}(P)\cap G=P.$$ Consequently, ${\operatorname{N}}_G(P)/P={\operatorname{N}}_G(P)/{\operatorname{C}}_G(P)$ is cyclic of order dividing $p-1=2^4$ by Lemma \[NC\]. Since $|G:{\operatorname{N}}_G(P)|=n$ is odd, ${\operatorname{N}}_G(P)$ contains a Sylow $2$-subgroup of $G$.
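The counting facts used in Steps 2 and 3 can be confirmed by machine: the exponent of $17$ in $35!$ is $2$ by Legendre’s formula, the orbit sizes satisfy $1+17+17=35$, and Lemma \[centalt\] can be brute-forced in its smallest case $p=3$. A sketch (our own encoding):

```python
from itertools import permutations

def vp_factorial(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    s, q = 0, p
    while q <= n:
        s += n // q
        q *= p
    return s

# Step 2: |S_35| = 35! is divisible by 17^2 but not by 17^3.
assert vp_factorial(35, 17) == 2
# Step 3: the P-orbits on the 35 Sylow subgroups have sizes 1, 17, 17.
assert 1 + 17 + 17 == 35

# Lemma [centalt] in its smallest case p = 3, sigma = (1,2,3)(4,5,6).
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def is_even(p):
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return inversions % 2 == 0

sigma = (1, 2, 0, 4, 5, 3)  # (1,2,3)(4,5,6) on 0-indexed points
cent_S6 = [t for t in permutations(range(6))
           if compose(t, sigma) == compose(sigma, t)]
cent_A6 = [t for t in cent_S6 if is_even(t)]
assert len(cent_S6) == 2 * 3**2  # the 2p^2 upper bound is attained in S_6
assert len(cent_A6) == 3**2      # |C_{A_6}(sigma)| = p^2, as the lemma claims
```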
In particular, the Sylow $2$-subgroups of $G$ are cyclic and Lemma \[cyc2\] yields a normal subgroup $N\unlhd G$ of order $|P||G:{\operatorname{N}}_G(P)|=5\cdot 7\cdot 17$. Since every Sylow $p$-subgroup of $G$ is contained in $N$, we have $G=N$ by minimality of $G$. **Step 4:** Contradiction.\ By Sylow’s theorem, $G$ has a unique Sylow $5$-subgroup $T\unlhd G$. Then $PT$ is a subgroup of $G$ of order $5\cdot 17$. Again by Sylow’s theorem, $PT$ has only one Sylow $p$-subgroup. In particular $P\unlhd PT$ and $T\le{\operatorname{N}}_G(P)$. This gives the contradiction $|G:{\operatorname{N}}_G(P)|<n$. It is possible to modify the proof above to construct more pseudo Sylow numbers. In fact, the first three steps work more generally whenever $\lvert{\operatorname{Syl}}_p(G)\rvert=2p+1$. One ends up with a group of odd order which must be solvable according to the celebrated Feit–Thompson theorem [@FeitThompson]. Then the factorization property by P. Hall mentioned in the introduction implies that $2p+1$ is a prime power. Unfortunately, the long proof of the Feit–Thompson theorem is even more challenging than the methods used by M. Hall. We invite the interested reader to show by elementary means that no finite group has exactly $15$ Sylow $7$-subgroups. Acknowledgment {#acknowledgment .unnumbered} ============== The author is supported by the German Research Foundation (project SA 2864/1-1). [10]{} M. Aschbacher, *The status of the classification of the finite simple groups*, Notices Amer. Math. Soc. **51** (2004), 736–740. J. S. Brodkey, *A note on finite groups with an abelian [S]{}ylow group*, Proc. Amer. Math. Soc. **14** (1963), 132–133. W. Feit and J. G. Thompson, *Solvability of groups of odd order*, Pacific J. Math. **13** (1963), 775–1029. F. G. Frobenius, *Verallgemeinerung des Sylow’schen Satzes*, Sitzungsber. Preuß. Akad. Wiss. **1895** (1895), 981–993. M. Hall, *On the number of [S]{}ylow subgroups in a finite group*, J. Algebra **7** (1967), 363–371. P.
Hall, *A [N]{}ote on [S]{}oluble [G]{}roups*, J. London Math. Soc. **3** (1928), 98–105. P. Hall, *On a [T]{}heorem of [F]{}robenius*, Proc. London Math. Soc. (2) **40** (1935), 468–501. I. M. Isaacs, *Algebra: a graduate course*, Graduate Studies in Mathematics, Vol. 100, American Mathematical Society, Providence, RI, 2009. S. Lang, *Algebra*, Graduate Texts in Mathematics, Vol. 211, Springer-Verlag, New York, 2002. OEIS Foundation, Inc., *The On-Line Encyclopedia of Integer Sequences, Numbers n such that for all finite groups G and all primes p, the number of Sylow p-subgroups of G does not equal n*, <https://oeis.org/A130751>. G. R. Robinson, *Cauchy’s theorem and [S]{}ylow’s theorem from the cyclic case*, Amer. Math. Monthly **118** (2011), 448–449. B. Sambale, *Pseudo Frobenius numbers*, to appear in Expo. Math., [DOI:10.1016/j.exmath.2018.10.003](https://doi.org/10.1016/j.exmath.2018.10.003). M. L. Sylow, *Théorèmes sur les groupes de substitutions*, Math. Ann. **5** (1872), 584–594. W. C. Waterhouse, *The early proofs of [S]{}ylow’s theorem*, Arch. Hist. Exact Sci. **21** (1980), 279–290. [^1]: Fachbereich Mathematik, TU Kaiserslautern, 67653 Kaiserslautern, Germany, <[email protected]>
--- abstract: | We present a method that self-consistently tracks the growth of supermassive black holes (BHs) and the feedback from active galactic nuclei (AGN) in cosmological, hydrodynamical simulations. Our model is a substantially modified version of the one introduced by [@spri05] implemented in a significantly expanded version of the [gadget III]{} code, which contains new prescriptions for star formation, supernova feedback, radiative cooling and chemodynamics. We simulate the growth of BHs from an initial seed state via Eddington-limited accretion of the surrounding gas, and via mergers with other BHs. Because cosmological simulations at present lack both the resolution and the physics to model the multiphase interstellar medium, they tend to strongly underestimate the Bondi-Hoyle accretion rate. To allow low-mass BHs to grow, it is therefore necessary to increase the predicted Bondi-Hoyle rates in star-forming gas by large factors, either by explicitly multiplying the accretion rate by a numerical correction factor, or using an unresolved, subgrid model for the gas close to the BH. We explore the physical regimes where the use of such multiplicative factors is reasonable, and through this introduce a new prescription for gas accretion by BHs. Feedback from AGN is modeled by coupling a fraction of the rest-mass energy of the accreted gas thermally into the surrounding medium. We describe the implementation as well as the limitations of the model in detail and motivate all the changes relative to previous work. We demonstrate how general physical considerations can be used to choose many of the parameters of the model and demonstrate that the fiducial model reproduces observational constraints. 
We employ a large suite of cosmological simulations, in which the parameters of the BH model are varied away from their fiducial values, to investigate the robustness of the predictions for the cosmic star formation history and the redshift zero cosmic BH density, BH scaling relations, and galaxy specific star formation rates. We find that the freedom introduced by the need to increase the predicted accretion rates by hand, the standard procedure in the literature, is the most significant source of uncertainty. Our simulations demonstrate that supermassive BHs are able to regulate their growth by releasing a fixed amount of energy for a given halo mass, independent of the assumed efficiency of AGN feedback, which sets the normalization of the BH scaling relations. Regardless of whether BH seeds are initially placed above or below the BH scaling relations, they grow onto the same scaling relations. AGN feedback efficiently suppresses star formation in high-mass galaxies. author: - | C. M. Booth$^{1}$[^1] and Joop Schaye$^{1}$\ $^{1}$Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, the Netherlands title: 'Cosmological simulations of the growth of supermassive black holes and feedback from active galactic nuclei: method and tests' --- \[firstpage\] Cosmology: Theory – Galaxies: Active – Galaxies: Evolution – Galaxies: Formation – Hydrodynamics – Galaxies: Quasars: General Introduction {#sec:intro} ============ Over the past decades a growing body of observational and theoretical evidence has suggested that supermassive black holes (SMBHs; ${\rm m}_{{\rm BH}}>10^6{\rm M}_{\odot}$) exist in the centers of all galaxies with spheroids [e.g. @korm95; @ferr00] and that the properties of these SMBHs are tightly correlated with the properties of the spheroid in which they reside. 
For example, the mass of the SMBH is found to be tightly correlated with the bulge stellar mass or luminosity [@magg98; @mclu02; @hari04; @laor01], stellar velocity dispersion [@gebh00; @merr01; @trem02], and galaxy concentration, as measured by the Sérsic index [@grah07]. Some recent work has demonstrated that these correlations may be understood in terms of a black hole (BH) fundamental plane, relating BH mass, galaxy effective radius, stellar velocity dispersion and stellar mass. Here, the mass of the SMBH essentially tracks the binding energy of the stellar bulge [@marc03; @feol05; @alle07; @hopk07], although other authors argue that the appearance of a fundamental plane is actually due to biasing caused by the presence of galaxies with bars [@grah08]. The exact mechanisms leading to the tight observed coupling between galaxy spheroidal components and central active galactic nuclei (AGN) are not yet fully understood, although it has long been recognized that the formation mechanisms of SMBHs [e.g. @silk98] and stars [e.g. @deke86] are most likely self-regulating. These results suggest that the same processes that shape galaxy spheroids also act on the central BHs. Correlations between AGN activity and other processes provide other clues about the mechanisms that lead to the buildup of the SMBH population. There is evidence that there exists a link between galactic star formation and accretion onto a central AGN: in a global sense, the evolution of the cosmic star formation rate [e.g. @mada96] and the luminosity density of quasars are tightly correlated [@boyl98]. Additionally, on the scale of individual objects it has been found that the most powerful narrow line AGN are preferentially found in galaxies that appear to have undergone a recent starburst phase [@kauf03]. The massive BHs present in the centres of galaxies are likely to have started their lives as seed BHs. 
The typical masses of seed BHs remain somewhat uncertain, and depend upon the mechanism by which they form. Plausible mechanisms include the collapse of population III stars, giving rise to BHs with masses in the range $10^2{\rm M}_{\odot}<{m_{{\rm BH}}}<10^3{\rm M}_{\odot}$ [e.g. @mada01; @isla03; @schn02], and direct collapse of matter in high redshift, low angular momentum haloes, which may give rise to seed BHs with masses $\sim10^5\,{\rm M}_\odot$ [e.g. @loeb94; @brom03; @bege06; @dijk08; @volo09]. These seed mass BHs can then grow either by mergers with other BHs [e.g. @isla03], or through accretion of gas and/or stars. Accretion of matter onto central BHs, accompanied by the release of a fraction of the rest mass energy of the fuel, has long been recognized as one of the most likely mechanisms to power AGN [@salp64], and a coupling between the accretion history of an AGN and the gas dynamics of the bulge provides a plausible mechanism by which AGN and bulge properties could become strongly coupled [e.g. @silk98]. For example, it has been suggested that the central BHs grow until they release sufficient energy to unbind the gas that feeds them from the host galaxy [@fabi99]. Bursts of AGN activity then expel gas from galaxies, and the AGN remains quiescent until stellar mass loss replenishes the galaxy’s gas reservoir [@ciot01]. A theoretical link between galaxy mergers and both galaxy-scale starburst events and active AGN phases has been well established and modelled. Galaxy mergers have long been recognized as a mechanism by which gas can potentially be channeled to the centre of a galaxy [@toom72], and N-body simulations of galaxy mergers confirmed and extended this picture by showing that the asymmetrical gravitational potential present during mergers is capable of funneling gas efficiently to the center of a galaxy [@miho94], where it may be accreted by a SMBH.
Hydrodynamical simulations of galaxy mergers [@barn91; @barn96; @miho96; @kapf05], and numerical models of AGN growth [@kapf05; @spri05a] predict that these merger events are indeed responsible for the rapid growth of AGN. Recent numerical studies [e.g. @mici07] have indicated that both BH mergers and gas accretion are important processes in forming the population of BHs that we observe in the local universe. Thus, it seems that we can paint a coherent picture in which emission by AGN, galaxy mergers and the growth of supermassive BHs are closely intertwined. As such the study of any one of these processes requires an understanding of all of them. For this reason detailed studies of the co-evolution of the AGN and galaxy populations in a cosmological context often resort to numerical techniques. Early theoretical studies of the galaxy-AGN connection relied upon dark matter halo merger rates, without any separate treatment of galaxy formation processes [@efst88; @haeh93]. Later work expanded upon this groundwork by incorporating AGN feedback into semi-analytic modelling of galaxy formation [e.g. @kauf00; @catt01; @bens03; @gran04; @crot06; @bowe06; @lago08; @baug06]. Semi-analytic models indicate that feedback from AGN is necessary in order to build up a red-sequence of galaxies. Although a combination of photo-heating by reionization and supernova feedback can suppress star formation in low mass haloes – bringing the galaxy luminosity function in line with observations at the low mass end – models without AGN feedback face the problem that the reheated gas in massive haloes would eventually cool, giving rise to an excessive number of bright galaxies [e.g. @bowe06] as compared to the local universe. A more computationally challenging approach is to simulate galaxies hydrodynamically, with additional sub-grid modelling of the growth and energy feedback from AGN. Numerical hydrodynamic simulations of galaxy mergers containing AGN [e.g. 
@spri05; @dima05; @hopk06; @robe06] have shown that the presence of a central AGN can significantly alter the structure of merger remnants, particularly by expelling a hot halo of diffuse, low-angular momentum gas from the center of the remnant. More recent numerical studies have revealed that dissipation and dry mergers are likely to play a fundamental role in shaping the co-evolution of BHs and galaxies [@hopk08]. Hydrodynamic simulations of full cosmological volumes [@dima08; @sija07; @crof08; @okam08] have probed the effect of AGN on a cosmologically representative set of galaxies and showed that the inclusion of AGN physics into galaxy formation simulations allows us to match many of the observed properties of galaxies in the local universe. The modelling of an AGN population in this manner is both computationally very expensive and subject to very many, as yet poorly understood, numerical effects. As such, studies of this type must take care to test the robustness of the models to all physical and numerical parameters. The focus of the current work is to present and test a new model for the co-evolution of BHs and galaxies. We note that nearly all BH models published thus far in the literature employ the star formation and supernova feedback models of @spri03 (hereafter SH03), and the model for BH growth and AGN feedback of @spri05[^2] (hereafter S05). Throughout this paper we highlight similarities and differences between our approach and that used previously in the literature. The primary difference between our models and previous work is that we employ a different parametrization of the process of gas accretion onto BHs as well as a different implementation of AGN feedback. We show that changes to the BH accretion model can lead to profound differences in galaxy properties, global star formation rates and BH demographics.
We examine both the global properties of the simulation, such as the integrated star formation rate and cosmic BH density, and consider the properties of individual galaxies, including specific star formation rates, and the BH fundamental plane. We quantify how uncertainties in our numerical model and all of our parameter choices affect the reliability of our results. We find that changes in the numerical model that generates seed mass BHs, and in the model that distributes feedback energy into the ISM do not strongly affect our results. However, the accretion model is found to be of crucial importance in understanding our results. Throughout we assume a flat $\Lambda$CDM cosmology with the cosmological parameters: $\{\Omega_m,\Omega_b,\Omega_\Lambda,\sigma_8,n_s,h\}=\{0.238,0.0418,0.762,0.74,0.951,0.73\}$, as determined from the WMAP 3-year data [@sper07] and consistent[^3] with the WMAP 5-year data [@koma08]. Where necessary, observational results have been scaled to our chosen cosmology, and the stellar initial mass function (IMF) assumed in observational analyses has been scaled to the Chabrier IMF used in our simulations. The paper is structured as follows: In Sec. \[sec:method\] we introduce our simulation set, and describe briefly the sub-grid physics modules that are not directly related to BHs. In Sec. \[sec:met-bh\] we describe in detail our model for BH formation, growth and feedback and we motivate our choices for numerical parameters in Sec. \[sec:pars\]. In Sec. \[sec:results\] we present simulation results, including a comparison with redshift zero observational data and an investigation into the severity of uncertainties introduced by different parameter choices. Finally, in Sec. \[sec:discussion\] we discuss and summarize our findings. In a companion work we investigate in detail the interplay between feedback from AGN and other feedback processes, including winds driven by Type II supernovae and mass loss from the stellar population.
Numerical simulations {#sec:method} ===================== In this section we introduce the numerical techniques used in our simulations and provide a brief overview of the sub-grid physics modules that are not directly related to BH growth or AGN feedback. We have carried out a suite of cosmological simulations using Smoothed-Particle Hydrodynamics (SPH) [@lucy77; @ging77; @mona92], employing a significantly extended version of the parallel PMTree-SPH code [gadget III]{} [@spri05a; @spri01], a Lagrangian code used to calculate gravitational and hydrodynamic forces on a particle-by-particle basis. The initial particle positions and velocities are set at $z=127$ using the Zel’dovich approximation to linearly evolve positions from an initially glass-like state. The production simulations used in this study are run in boxes of size 50 comoving Mpc/$h$, and contain $256^3$ particles of both gas and dark matter. Comoving gravitational softenings are set to $1/25$ of the mean comoving inter-particle separation down to $z=2.91$, below which we switch to a fixed proper scale of 2 kpc$/h$. The production simulations have gas particle masses of $8.64\times10^{7}\,{\rm M}_\odot/h$. The boxes are evolved all the way to redshift zero. In addition to hydrodynamic forces we treat star formation, supernova feedback, radiative cooling, chemodynamics, black hole accretion, and AGN energy feedback in these simulations. Star formation is tracked in the simulations following the prescription of @scha08. Gas with densities exceeding a critical density for the onset of the thermo-gravitational instability (hydrogen number densities $n_{{\rm H}}=10^{-2}-10^{-1}$ cm$^{-3}$) is expected to be multiphase and to form stars [@scha04]. 
Because we lack both the physics and the resolution to model the cold interstellar gas phase, we impose an effective equation of state (EOS) with pressure $P\propto \rho^{\gamma_{{\rm eff}}}$ for densities $n_{{\rm H}} >n_{{\rm H}}^*$ where $n_{{\rm H}}^*=0.1$ cm$^{-3}$, normalised to $P/k=10^3$ cm$^{-3}$K at the threshold. We use $\gamma_{{\rm eff}}=4/3$ for which both the Jeans mass and the ratio of the Jeans length to the SPH kernel are independent of the density, thus preventing spurious fragmentation due to a lack of numerical resolution [@scha08]. As described in [@scha08], gas on the effective EOS is allowed to form stars at a pressure-dependent rate that reproduces the observed Kennicutt-Schmidt law [@kenn98] by construction, renormalised by a factor[^4] 1/1.65 to account for the fact that it assumes a Salpeter IMF whereas we use a Chabrier IMF. Energy injection due to supernovae is included through kinetic feedback. We employ the prescription of [@dall08], which is a variation of the SH03 recipe for kinetic feedback. In this prescription core-collapse supernovae locally inject kinetic energy and kick gas particles into winds. The feedback is specified by two parameters: Firstly, the initial mass-loading $\eta=\dot{m}_{\rm w}/\dot{m}_*$, which describes the initial amount of gas put into the wind, $\dot{m}_{\rm w}$, as a function of the local SFR, $\dot{m}_*$, and secondly the wind velocity, $v_{{\rm w}}$. We use $\eta=2$ and $v_{\rm w}=600$ km/s, which corresponds to 40% of the total amount of supernova energy. In contrast with the models of SH03, the kinetic energy is injected *locally* to every star formation event and wind particles are *not* temporarily decoupled from the hydrodynamics when they are put into the wind. 
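Three of the numbers quoted in this section can be checked independently; the physical constants, the Jeans scaling $M_J\propto c_s^3\rho^{-1/2}$, and the supernova budget ($10^{51}$ erg per supernova, $\approx 1.8\times 10^{-2}$ core-collapse supernovae per solar mass formed for a Chabrier IMF) are standard values we assume here, not taken from the paper:

```python
import math
from fractions import Fraction

G = 6.674e-11          # m^3 kg^-1 s^-2
MPC = 3.0857e22        # m
M_SUN = 1.989e30       # kg

# (1) Gas particle mass: m_gas = Omega_b * rho_crit * (50 Mpc/h)^3 / 256^3.
# In units of M_sun/h all factors of h cancel (computed here with h = 1).
H0 = 100 * 1e3 / MPC                        # 100 km/s/Mpc in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)    # kg/m^3
rho_crit_msun = rho_crit * MPC**3 / M_SUN   # ~2.775e11 h^2 M_sun/Mpc^3
m_gas = 0.0418 * rho_crit_msun * 50.0**3 / 256**3
assert abs(m_gas / 8.64e7 - 1) < 0.01       # matches the quoted 8.64e7 M_sun/h

# (2) Jeans mass scaling: with c_s^2 ~ P/rho ~ rho^(gamma-1) and
# M_J ~ c_s^3 rho^(-1/2), one gets M_J ~ rho^((3*gamma - 4)/2).
gamma_eff = Fraction(4, 3)
assert (3 * gamma_eff - 4) / 2 == 0         # Jeans mass independent of density

# (3) Wind energy: kinetic energy per unit mass of stars formed,
# 0.5 * eta * v_w^2, compared to the assumed supernova budget.
e_wind = 0.5 * 2.0 * (600e3)**2             # J per kg of stars formed
e_sn = 1.8e-2 * 1e44 / M_SUN                # J per kg of stars formed
fraction = e_wind / e_sn
assert 0.35 < fraction < 0.45               # ~40%, as stated in the text
```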
As described in [@wier09], we follow the timed release of 11 different elements from massive stars (Type II supernovae and stellar winds) and intermediate mass stars (Type Ia supernovae and asymptotic giant branch stars), assuming a Chabrier initial mass function spanning the range 0.1 to 100 ${\rm M}_\odot$. Radiative cooling was implemented following [@wier08][^5]. In brief, net radiative cooling rates are computed element-by-element in the presence of the cosmic microwave background and a [@haar01] model for the UV/X-ray background radiation from quasars and galaxies. The contributions of the eleven elements hydrogen, helium, carbon, nitrogen, oxygen, neon, magnesium, silicon, sulphur, calcium, and iron are interpolated as a function of density, temperature, and redshift from tables that have been precomputed using the publicly available photo-ionization package [CLOUDY]{}, last described by [@ferl98], assuming the gas to be optically thin and in (photo-)ionization equilibrium. The black hole model {#sec:met-bh} ==================== We now provide a detailed description of our models for BH formation and accretion (Sec. \[sec:met-accr\]), BH mergers (Sec. \[sec:met-merger\]) and energy feedback from AGN (Sec. \[sec:met-feed\]). Throughout this section we highlight and justify the aspects of our model that differ from previous works. We also introduce all the relevant parameters. In Sec. \[sec:pars\] we motivate our choices for these parameters. Black hole formation and accretion {#sec:met-accr} ---------------------------------- Plausible BH seed formation mechanisms lead to the creation of BHs with masses in the range $10-10^5\,{\rm M}_\odot$, whereas SMBHs in the local Universe have masses of up to $10^9\,{\rm M}_\odot$ (see Sec \[sec:intro\] for a discussion). To understand the origin of the redshift zero BH population we therefore need to model how BHs can grow to the sizes observed in present-day galaxies. 
Over the past decades a picture has emerged in which SMBHs are embedded in dense stellar systems in the centres of galaxies and increase their masses primarily by the accretion of gas [e.g. @bege78]. BHs may also grow by mergers with other BHs, or by the disruption and capture of stars [e.g. @lynd69]. The capture of stars has been put forward as an explanation for ultra-luminous X-ray sources [see e.g. @fabb06 for a review]. However, we neglect this process in the current work, and instead focus on how BHs can accrete gas from their surroundings. The model presented in this section is a substantially modified version of the model introduced by S05 and employed in almost all of the large-scale numerical simulations of AGN growth thus far available in the literature (see Table \[tab:allparlist\] for an overview). Because cosmological simulations have neither the resolution nor the necessary physics to simulate the formation of the seed BHs that eventually grow into SMBHs, it is assumed that low-mass seed BHs are produced sufficiently regularly that every halo above a certain threshold mass contains one such object at its centre. Following [@dima08], we regularly run a friends-of-friends group finder with a linking length equal to 0.2 times the initial mean inter-particle spacing [@davi85] on all of the dark matter particles during the simulation. We do so at times spaced evenly in log expansion factor, $a$, such that $\Delta a=0.02a$, which corresponds to a proper time of $\sim$250 Myr ($\sim 70\,{\rm Myr}$) at redshift zero (three) for our cosmology. When a halo grows above some threshold mass, $m_{{\rm halo,min}}$, and does not already contain a BH, its most gravitationally bound baryonic particle is converted into a collisionless BH particle. The initial mass of these BHs is usually chosen to be well below the resolution limit of our cosmological simulations (see Sec.
\[sec:pars\]), and as such we need to employ sub-grid models to follow the BH. Although we convert the entire particle into a black hole particle, the mass of the seed BH ($m_{{\rm seed}}$) associated with this particle is usually initially significantly less than the particle mass ($m_{{\rm seed}} \ll m_{{\rm g}}$; where ${m_{{\rm g}}}$ is the simulation gas particle mass). We therefore store the mass of the subgrid BH separately. For the gravitational interactions, other than BH accretion, the full mass of the particle (${m_{{\rm g}}}$) is used, but for calculating the BH-specific processes we use the sub-grid BH mass ($m_{{\rm BH}}$). We now discuss in more detail the manner in which we track the growth of the BH. BH particles are collisionless sink particles that contain a sub-grid BH, initially of mass $m_{{\rm seed}}$, chosen to be well below the observed mass of BHs in haloes of this size. From their initial seed mass, BHs may grow via one of two processes: mergers with other BHs and accretion of surrounding ambient gas. We now treat each of these processes in turn. In our models BHs accrete from the surrounding ambient gas phase at a rate proportional to that given by the Bondi-Hoyle-Lyttleton [@bond44; @hoyl39] formula $$\dot{m}_{{\rm accr}}=\alpha\frac{4\pi G^2 {m_{{\rm BH}}}^2 \rho}{(c_{s}^2+v^2)^{3/2}}\,, \label{eq:bhl}$$ where ${m_{{\rm BH}}}$ is the mass of the BH, $c_{s}$ and $\rho$ are the sound speed and gas density of the local medium, $v$ is the velocity of the BH relative to the ambient medium, and $\alpha$ is a dimensionless efficiency parameter. The factor $\alpha$ did not appear in the original analyses of @bond44 and @hoyl39, but was introduced by S05 as a numerical correction factor, to compensate for the limitations of the numerical simulations. 
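Eq. \[eq:bhl\] translates directly into code. The sketch below (cgs units; the function name and defaults are ours, not from the simulation code) makes the modified Bondi-Hoyle-Lyttleton rate and its scalings explicit:

```python
# Hedged sketch of the modified Bondi-Hoyle-Lyttleton rate (Eq. eq:bhl).
import math

G = 6.674e-8  # cm^3 g^-1 s^-2, gravitational constant

def bondi_hoyle_rate(m_bh, rho, c_s, v, alpha=1.0):
    """Accretion rate (g/s) onto a BH of mass m_bh (g) moving at speed v
    (cm/s) through gas of density rho (g/cm^3) and sound speed c_s (cm/s).
    alpha is the dimensionless efficiency factor discussed in the text."""
    return alpha * 4.0 * math.pi * G**2 * m_bh**2 * rho / (c_s**2 + v**2) ** 1.5
```

The quadratic dependence on $m_{{\rm BH}}$ and the linear dependences on $\rho$ and $\alpha$ follow immediately from the formula.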
The assumption that BHs grow via Bondi-Hoyle accretion is reasonable even if they are in reality fed by accretion discs that are far smaller than the resolution limit of our simulations as long as the latter grow by Bondi-Hoyle accretion. However, we will see that very large factors of $\alpha$ are required for low-mass BHs to grow, in which case one cannot claim to be simulating Bondi-Hoyle accretion. The amount of accreted mass is related to the rate of growth of the BH by[^6] $\dot{m}_{{\rm BH}}=\dot{m}_{{\rm accr}}(1-\epsilon_{\rm r})$, where $\epsilon_{\rm r}$ is the radiative efficiency of a BH, which we always assume to be 10%, the mean value for the radiatively efficient [@shak73] accretion onto a Schwarzschild BH. In order to resolve Bondi-Hoyle accretion onto a BH we need to resolve the Bondi-Hoyle radius ($r_{\rm b}$), defined as [e.g. @edga04]: $$r_{{\rm b}}=\frac{G{m_{{\rm BH}}}}{c_s^2}\approx 0.042\Big(\frac{M_{{\rm BH}}}{10^6\,{\rm M_\odot}}\Big)\Big(\frac{c_s}{10\,{\rm km/s}}\Big)^{-2}\,{\rm kpc}.$$ Comparing this to the Jeans length, $$L_{\rm J}\sim \sqrt{\frac{c_s^2}{G\rho}}\sim\frac{GM_{{\rm J}}}{c_{{\rm s}}^2}\,,$$ where $M_{{\rm J}}$ is the Jeans mass, we see that $r_{{\rm b}}\sim L_{{\rm J}}$ if ${m_{{\rm BH}}}\sim M_{{\rm J}}$ and $r_{{\rm b}}\gg L_{{\rm J}}$ if ${m_{{\rm BH}}}\gg M_{{\rm J}}$. Hence, any simulation that resolves the Jeans scales will also resolve accretion onto black holes of mass ${m_{{\rm BH}}}> {m_{{\rm g}}}$. We can then parametrize accretion onto a BH in two different ways: [**1. Density-Independent Accretion Efficiency:**]{} Most AGN models in the literature use a constant value of $\alpha = 10^2$ [e.g. @spri05; @dima05; @sija07; @dima08; @bhat08; @colb08; @crof08; @joha09 see also Table \[tab:allparlist\]]. 
Although most authors do not motivate or even mention [^7] their choice of $\alpha$, we note that values much greater than unity can be justified in one of two ways: firstly by noting that the Bondi-Hoyle accretion rate depends strongly upon the local ISM sound speed. Galaxy formation simulations currently have neither the resolution nor the physics to self-consistently track the properties of the cold-phase of the ISM, and as such the temperature of the gas accreted by the AGN may be overestimated by orders of magnitude. Hence, we can justify very large values of $\alpha$ in star forming gas. Secondly, in low-resolution simulations we do not resolve the Jeans scale, even in single-phase gas, so the density of the gas at the Bondi radius is underestimated, allowing us to again justify large values of $\alpha$. We call models that use a fixed value of $\alpha$ constant-$\alpha$ models. Constant-$\alpha$ models are parametrized by a constant multiplicative factor in the Bondi-Hoyle accretion rate, $\alpha_0$. An alternative way of increasing accretion rates is to employ a subgrid model for the unresolved ISM properties, and use this to artificially boost the ISM densities local to the BHs. We discuss this further in the following sections. We will show in Sec. \[sec:bhg\] that the use of a constant-$\alpha$ model has a profound effect on the ability of BHs to grow, and that changes in this very poorly constrained parameter can lead to large changes in the global properties of the simulation such as the global density in BHs. We emphasize that values of $\alpha \gg 1$ imply the assumption that the simulation predictions for the gas density and temperature are sufficiently wrong that the Bondi-Hoyle accretion rate is underestimated by two orders of magnitude. Although this assumption can be justified for high-density gas, it does mean that the value of $\alpha$ is in fact more important than the predicted densities and temperatures. 
One could therefore argue that models of this kind do not really simulate Bondi-Hoyle accretion. Cosmological simulations can, however, already model Bondi-Hoyle accretion of low-density gas, and it hence makes sense to use an accretion model for which the “fudge factor” $\alpha$ becomes unity in the regime where the simulations are reliable. We therefore introduce a new class of black hole accretion models in which the value of $\alpha$ depends on the local gas density, while keeping the number of free parameters fixed. [**2. Density-Dependent Accretion Efficiency:**]{} The assumptions used to justify large values of $\alpha$ in simulations similar to ours break down when two conditions are satisfied: firstly, the local gas density must be lower than required for the formation of a cold (i.e. $T\ll 10^4$ K) phase, and secondly, the simulation must resolve the Jeans scale of the single-phase gas. Our highest resolution simulations (as well as many published AGN simulations) do resolve the Jeans scale at the star formation threshold, and so, in contrast to most published AGN models, we choose to parametrize the accretion efficiency parameter as a function of density $$\label{eq:beta} \alpha=\left\{ \begin{array}{cc} 1 & {\rm if}\,n_{{\rm H}}<n_{{\rm H}}^* \\ \Big(\frac{n_{{\rm H}}}{n_{{\rm H}}^*}\Big)^\beta & \textrm{otherwise.}\end{array}\right.$$ Here, the accretion efficiency ($\alpha$) becomes unity for densities lower than the critical value required for the formation of a cold interstellar gas phase ($n_{{\rm H}}^*=0.1\,{\rm cm}^{-3}$; see Sec. \[sec:method\]). As discussed above, provided the simulations resolve the Jeans scale, the Bondi radius will be resolved for BHs with ${m_{{\rm BH}}}\ge {m_{{\rm g}}}$, which means that values of $\alpha \gg 1$ are unphysical for such BHs.
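The piecewise definition of $\alpha$ in Eq. \[eq:beta\] is simple to implement. The helper below is our own illustrative sketch, with the fiducial slope $\beta=2$ as the default:

```python
# Density-dependent accretion efficiency alpha of Eq. (eq:beta):
# unity below the cold-phase threshold n_H^*, a power law above it.
N_H_STAR = 0.1  # cm^-3, threshold for the formation of a cold ISM phase

def accretion_efficiency(n_H, beta=2.0):
    """Return alpha as a function of hydrogen number density n_H (cm^-3)."""
    if n_H < N_H_STAR:
        return 1.0
    return (n_H / N_H_STAR) ** beta
```

Multiplying the Bondi-Hoyle rate by this factor recovers unmodified Bondi-Hoyle accretion at low densities and a steeply boosted rate in star-forming gas.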
We then choose to parametrize our lack of knowledge about both the physical properties of the multiphase ISM and the rate at which it accretes onto the central AGN using a power-law of the gas density, with slope $\beta$. This constant-$\beta$ model, which has the same number of free parameters (one) as the constant-$\alpha$ models used in previous work, allows us to correctly describe accretion in the physical regime where it is resolved by our simulations and to introduce a reasonable scaling when it is not. We will show in Sec. \[sec:res-pareffects\] that the change from a constant-$\alpha$ to a constant-$\beta$ model can have a profound effect on the growth of BHs, particularly for low mass galaxies. We call models of this type constant-$\beta$ models. A second approach to boosting accretion rates, which operates in a similar manner to the constant-$\beta$ models, is to make use of a subgrid model for the unresolved subgrid physics not encapsulated by the simulations. For example, @pelu07 use the star formation and supernova feedback models of SH03 to estimate the amount of time that a BH spends in dense, molecular clouds, and @okam08 use a subgrid model in which drag due to stellar radiation on a clumpy ISM can give rise to large accretion rates onto a central BH. We note that differing implementations of the subgrid model can lead to large differences in the properties of the ISM and for the purposes of this work we emphasize that the functional form (Eq. \[eq:beta\]), as well as the value for $\beta$, are ad-hoc. Any function for which $\alpha \rightarrow 1$ at gas densities for which the simulations are reliable and for which $\alpha \gg 1$ at higher densities would do. We chose to use a simple power-law dependence because it satisfies these constraints, is continuous and uses only one free parameter. We will investigate the effect of changing $\beta$ in the following sections. 
In the limit that $\beta \rightarrow 0$ the behaviour of the constant-$\beta$ model will tend towards that of a constant-$\alpha$ model with $\alpha_0=1$, and in the limit that $\beta \rightarrow \infty$ the model tends towards behaviour where the accretion is pure Bondi-Hoyle in non star-forming gas, and always Eddington limited in gas with densities above the star formation threshold. We caution that this prescription is not suitable for simulations that resolve the relevant physics at densities exceeding $n_{\rm H}^\ast$. Because we have changed the density-dependence of the accretion rate, we cannot claim to be simulating Bondi-Hoyle accretion. Values of $\alpha\gg 1$ are, however, motivated by the Bondi-Hoyle formula. Moreover, for $n_{\rm H} > n_{\rm H}^\ast$ the density should be interpreted as the mass-weighted mean density of the unresolved, multiphase medium, smoothed on the scale of the spatial resolution of the simulation, whereas the density appearing in the Bondi-Hoyle formula applies to a single gas phase. Since the accretion rate-weighted mean density (which we can only compute if we know the mass distribution of the multiphase gas as a function of density and temperature) is unlikely to be proportional to this effective density, there is no reason to keep the Bondi-Hoyle scaling. For this reason, and because $\alpha\gg 1$ implies the assumption that the predicted densities and temperatures are greatly in error, we argue that constant-$\alpha$ prescriptions with $\alpha \gg 1$ can no more claim to be modeling Bondi-Hoyle accretion than constant-$\beta$ prescriptions. We prefer the latter since it allows us to get the right answer in the regime where the simulations are reliable, that is, at sufficiently low densities. Finally, we note that both accretion models use the Bondi-Hoyle scaling of the accretion rate with the mass of the BH, $\dot{m}_{\rm accr} \propto m_{\rm BH}^2$. 
In common with the models of S05 we limit the accretion rate to the Eddington rate: $$\dot{m}_{{\rm Edd}}=\frac{4\pi G {m_{{\rm BH}}}m_{{\rm p}}}{\epsilon_{{\rm r}} \sigma_{{\rm T}} c}\,,$$ where $m_{{\rm p}}$ is the proton mass and $\sigma_{\rm T}$ is the Thomson cross section for scattering of free electrons. Because $\dot{m}_{{\rm Edd}}\propto {m_{{\rm BH}}}$ whereas $\dot{m}_{{\rm accr}} \propto {m_{{\rm BH}}}^2$, Eddington limited accretion tends to be more important for more massive BHs. Following S05, we allow BH particles to stochastically swallow neighbouring baryonic particles with a probability $$p_i=\left\{ \begin{array}{ll} ({m_{{\rm BH}}}-m_{{\rm part}})\rho^{-1}W(r_{{\rm BH}}-r_i,h_{{\rm BH}}) & \textrm{if } {m_{{\rm BH}}}>m_{{\rm part}} \\ 0 & \textrm{otherwise}\end{array}\right.$$ where $\rho$ is the local gas density, ${m_{{\rm BH}}}$ is the mass of the sub-grid black hole, $m_{{\rm part}}$ is the mass of the particle containing the sub-grid BH and $W(r_{{\rm BH}}-r_i,h_{{\rm BH}})$ is the SPH kernel, evaluated between the positions of the BH and gas particle $i$. The BH smoothing length, $h_{{\rm BH}}$, is chosen such that within a distance $h_{{\rm BH}}$ from the BH there are $N_{{\rm ngb}}=48$ neighbours, the same number of neighbours as we used in our SPH calculations. This process ensures that the mass of the BH particle always closely tracks $m_{{\rm BH}}$. When the mass of the BH particle is smaller than or of the same order of magnitude as the simulation mass resolution, the black hole does not dominate the local dynamics and may wander from the centre of mass of its parent halo due to numerical effects. Conservation of momentum from accreted ISM gas can lead to similar effects. In order to avoid this we employ the same scheme as in the models of S05.
At every timestep the gravitational potential energy is calculated at the position of each of the BH’s neighbouring gas particles and the BH particle is repositioned on top of the particle with the minimum potential energy. In order to prevent the BH from being dragged by a minimum-potential particle with a large relative velocity, we only perform this process if the relative velocity between the BH and its most-bound gas particle neighbour is less than 0.25 $c_s$, where $c_s$ is the local sound speed. This process ensures that the location of the BH particle always tracks the centre of mass of its parent halo very closely. This procedure is halted after the mass of the SMBH becomes greater than ten times the initial gas particle mass in the simulation because by this point the BH dominates the dynamics in the centre of the halo. Black hole mergers {#sec:met-merger} ------------------ Galaxy mergers are thought to be one of the major processes driving the evolution of galaxies. When galaxies merge it is expected that their central BHs will eventually also merge. Indeed, the build-up of BHs through mergers may play an important part in the growth of SMBHs. Similarly to S05 we have implemented BH merging as follows. When any two BHs pass within a distance $h_{{\rm BH}}$ of each other with a relative velocity smaller than the circular velocity at a distance $h_{{\rm BH}}$ ($v_{{\rm rel}}<\sqrt{Gm_{{\rm BH}}/h_{{\rm BH}}}$, where $h_{{\rm BH}}$ and ${m_{{\rm BH}}}$ are the smoothing length and mass of the most massive BH in the pair respectively) then they are allowed to merge. This velocity criterion is necessary in order to prevent BHs from merging during a fly-through encounter of two galaxies, as this could lead to BHs being quickly removed from their host galaxies due to momentum conservation. 
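The merger criterion just described amounts to a distance test plus a velocity test against the circular velocity at the smoothing length. A hedged sketch (cgs units; the function is our own illustration, not the simulation code):

```python
# Illustrative BH-BH merger test: merge only if the pair is closer than the
# smoothing length of the more massive BH AND slower than the circular
# velocity at that distance, to avoid fly-through mergers.
import math

G = 6.674e-8  # cm^3 g^-1 s^-2

def may_merge(dist, v_rel, h_bh, m_bh):
    """dist, h_bh in cm; v_rel in cm/s; m_bh (g) and h_bh belong to the
    more massive BH of the pair. Returns True if the pair may merge."""
    v_circ = math.sqrt(G * m_bh / h_bh)
    return dist < h_bh and v_rel < v_circ
```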
This velocity scale is somewhat different from that employed by S05, who used the local sound speed, $c_s$, as the relevant velocity scale, arguing that the sound speed represents a simple measure of the characteristic velocity scale of the galaxies, and hence gives a simple measure of the velocity scale at which BHs will be able to merge. However, because AGN input large amounts of energy into their surroundings, it is not necessarily true that the sound speed local to the AGN reflects the depth of the potential well. The BH merging rate estimated from our simulations likely represents an upper limit to the true merger rate as our simulations do not have the resolution required to resolve the formation of the tight BH binaries that are a prerequisite for their eventual coalescence [@call08]. Since it is not yet fully understood how long it takes to harden a BH binary [@maki04], we assume that the merging process is instantaneous. Energy feedback from black holes {#sec:met-feed} -------------------------------- The precise mechanism by which energy emitted from a BH is coupled to the surrounding medium is as yet unknown, but plausible mechanisms include radiation pressure on free electrons (which gives rise to the classical Eddington limit), Compton heating of the infalling gas [e.g. @ciot01; @wang06b], photoionization pressure [@buff74; @cowi78] and radiation pressure on dust grains [e.g. @murr05]. Regardless of the precise coupling mechanism, there is a catalogue of observational evidence indicating that energy output from AGN can drive galactic outflows. For example, absorbers seen in X-rays show evidence of outflow [@laor97] and broad absorption line systems show evidence of outflows at very high velocity [e.g. @poun03]. Although these observations indicate that high velocity outflows are present around some AGN, they do not tell us how much mass (and hence how much energy) is present in the outflow. 
Estimates of the mass outflow rate in the winds are highly uncertain. Some studies [e.g. @chel08] imply that the actual rate of mass outflow is only a small fraction of the bolometric luminosity of the AGN sources, while other studies [e.g. @arav02; @arav08] suggest large mass outflow rates in quasar driven winds. In our models BHs inject a fixed fraction of the rest mass energy of the gas they accrete into the surrounding medium. The feedback is implemented thermally, that is: energy is deposited into the surrounding gas by increasing its internal energy, as opposed to the kinetic feedback used to inject supernova energy, which is deposited by kicking the gas particles (see Sec. \[sec:method\]). The fraction of the accreted rest mass energy that is injected is assumed to be independent of both the environment and the accretion rate. We thus do not differentiate between quasar mode and radio mode feedback as in the models of [@sija07]. In a future work we will consider how spatially distributed AGN heating mechanisms affect the cosmological evolution of galaxies. The amount of energy returned by a BH to its surrounding medium in a timestep $\Delta t$ is given by $$\label{eq:epsilon} E_{{\rm feed}}={\epsilon_{{\rm f}}}\epsilon_{\rm r} \dot{m}_{{\rm BH}} c^2 \Delta t\,,$$ where ${\epsilon_{{\rm f}}}$ is the efficiency with which a BH couples the radiated energy into its surroundings – a free parameter in our simulations – and $c$ is the speed of light. Only the product of $\epsilon_{{\rm r}}$ and ${\epsilon_{{\rm f}}}$ is important in calculating the amount of energy feedback in our model. Because our sub-grid model for SF relies on an effective EOS and does not include a (semi-)analytic sub-grid model for the multiphase ISM, our energy distribution mechanism is different from that in S05. 
In contrast, because we prefer to minimize the use of semi-analytic models within our hydrodynamical simulations, our models rely only on an effective EOS and leave the distribution of the mass over unresolved gas phases undefined. We therefore need to make two changes to the EOS model of [@scha08] that was used in our simulations without AGN feedback. Firstly, in the original models, once gas was identified as star-forming it was forced to remain on the EOS until its density dropped below the critical density for star formation, $n_{{\rm H}}^*$, until it turned into a star particle, or until it was kicked into the wind. We change this by allowing strongly heated gas to leave the EOS. This is implemented numerically by taking gas that is heated by more than 0.5 dex above the EOS in a single time step off the EOS (i.e. it is no longer star-forming and its pressure is no longer constrained to lie on the EOS). Gas is placed back onto the EOS if its temperature falls back below 0.5 dex above the EOS temperature corresponding to its density. By checking SFRs, both globally and for individual objects, and by comparing gas distributions on the $\rho-T$ plane, we have verified that making this change to our EOS model has a negligible effect on the results in a simulation that does not include AGN feedback. A second possible change to the AGN model would have been to treat the effective EOS as a lower limit to the gas temperature. We tested this and again found the differences in our results to be negligible. We choose to use the first procedure in order to facilitate direct comparisons between the simulations containing AGN and those run earlier in the project. Secondly, in order to ensure that the thermal feedback from BHs is not immediately radiated away it is necessary to impose a *minimum heating temperature*.
BHs store feedback energy until they have accumulated enough energy to increase the temperature of ${n_{{\rm heat}}}$ of their neighbours by an amount $\Delta T_{{\rm min}}$, i.e. $$E_{{\rm crit}}=\frac{{n_{{\rm heat}}}{m_{{\rm g}}}k_{{\rm B}} \Delta T_{{\rm min}}}{(\gamma-1)\mu m_{{\rm H}}}\,,$$ where $E_{{\rm crit}}$ is the critical energy for a heating event to be triggered and $\mu$ is the mean molecular weight of the gas (we assume $\mu=0.58$, appropriate for a fully ionized gas of primordial composition). The internal energy of the heated gas is instantaneously increased by an amount $E_{{\rm crit}}$. This implementation of quasar mode feedback is similar to the radio mode feedback introduced by @sija07. If $\Delta T_{{\rm min}}$ is set too low then the cooling time of the AGN heated gas will remain very short, and the energy will be efficiently radiated away. If ${n_{{\rm heat}}}\Delta T_{{\rm min}}$ is set too high then the threshold energy for a heating event, and hence the time period between AGN heating events, will become very large. In particular, a time interval larger than the Salpeter time would prevent the BH from regulating its growth. Finally, we note that the energy is deposited into the ambient gas isotropically, equally distributed to a random fraction $n_{{\rm heat}}/N_{{\rm ngb}}$ of the BH’s neighbours. If, on a given timestep, a BH accretes more energy than necessary to heat $n_{{\rm heat}}$ particles by $\Delta T_{{\rm min}}$, then the process is repeated until the BH has distributed all of its energy, so individual gas particles may be heated by an amount $\Delta T_{{\rm min}}$ multiple times on a given timestep. Parameter choices {#sec:pars} ================= Both the mechanism by which BHs grow and the efficiency of their thermal feedback can be changed drastically by changing the values of the parameters of the AGN model.
In this section we discuss how parameter values are chosen to minimize unphysical numerical effects whilst simultaneously requiring that the global properties of the BH distribution satisfy various observational constraints. For quick reference, Table \[tab:parlist\] contains a full list of the parameters that control the behaviour of the BH growth and AGN feedback model, along with their fiducial values.

| Parameter | Fiducial value | Description |
|---|---|---|
| $m_{{\rm seed}}$ | $0.001\,{m_{{\rm g}}}$ | Initial mass of the sub-grid BH |
| $m_{{\rm halo,min}}$ | $100\,m_{{\rm DM}}$ | Minimum halo mass into which BH seeds may be placed |
| $\epsilon_{\rm r}$ | 0.1 | Radiative efficiency of the BH accretion discs |
| $\epsilon_{\rm f}$ | 0.15 | Fraction of energy emitted by BHs that couples into the ambient gas |
| ${n_{{\rm heat}}}$ | 1 | Number of neighbouring particles heated per feedback event |
| $\Delta T_{{\rm min}}$ | $10^8$ K | Amount by which BH feedback heats surrounding gas |
| $\alpha_0$ | 100 | Normalization of the Bondi-Hoyle accretion efficiency (Eq. \[eq:bhl\]) *in constant-$\alpha$ models* |
| $\beta$ | 2 | Slope of the Bondi-Hoyle accretion efficiency (Eq. \[eq:beta\]) *in constant-$\beta$ models* |

\[tab:parlist\] Because it is difficult to discuss the effect of each parameter in isolation, we will first discuss the general properties of the BH model (e.g. growth mechanisms, feedback efficiency) and use each of these general themes to motivate our fiducial choices for the parameters of the AGN model. Black hole growth {#sec:bhg} ----------------- We allow BHs to grow by two processes: mergers with other BHs and accretion of gas. The BH accretion time-scale $t_{{\rm accr}}\equiv {m_{{\rm BH}}}/\dot{m}_{{\rm BH}}$ is, for Bondi-Hoyle accretion (Eq. \[eq:bhl\]), proportional to ${m_{{\rm BH}}}^{-1}$. Therefore, depending upon choices for various parameters, the initial growth of BHs can proceed in one of two different ways.
Firstly, in the regime where the time scales over which gas accretion operates are very long, BHs may grow primarily by mergers with other (seed mass) BHs until $m_{{\rm BH}}$ is large enough for the accretion rate to become appreciable. In this regime black holes initially grow at a rate governed by the integrated mass of seed BHs they collide with. Secondly, seed mass BHs may have accretion rates large enough for the BHs to experience runaway growth until their accretion rate is limited by feedback processes. ![BH growth times (${m_{{\rm BH}}}/\dot{m}_{{\rm BH}}$) as a function of the ambient gas density under various accretion models, all for a BH mass of $10^6\,{\rm M}_\odot$. The normalization of all black lines scales as ${m_{{\rm BH}}}^{-1}$. Lines are shown for both a constant-$\alpha$ accretion model (solid, black/grey line) and for constant-$\beta$ accretion models (all other black/grey lines). The solid, red line shows the Salpeter time (the growth time for a BH accreting at the Eddington rate), and represents the lower limit on the BH growth time in the simulations. The grey section of each line represents the region where the accretion rate is greater than the Eddington rate. The vertical dotted line shows the star formation density threshold, $n_{\rm H}^*=10^{-1}$ cm$^{-3}$. Above this density the gas follows the effective equation of state defined by Eq. \[eq:effeos\]. For lower densities we have assumed the gas to be isothermal (only in this figure, not in the simulations). 
Note that the constant-$\alpha$ accretion model predicts that a $10^6\,{\rm M}_\odot$ BH will be growing at an Eddington limited rate even in gas with density at the star-formation threshold (0.1 cm$^{-3}$).[]{data-label="fig:bhga"}](alpha.eps){width="8.3cm"} ![The gas density above which the accretion rate onto a black hole becomes Eddington limited as a function of BH mass, assuming that gas with density above the critical density for star formation, $n_{\rm H}^*=10^{-1}$ cm$^{-3}$ has properties governed by the effective EOS and that gas below the critical density follows an isothermal EOS. The thick, black line shows the behaviour of a constant-$\alpha$ accretion model, and all other lines show how models with a constant-$\beta$ accretion rate behave. The grey line shows the critical density for star formation in our simulations. Except for very low BH masses, the constant-$\alpha$ model becomes Eddington limited at much lower gas densities than the constant-$\beta$ models.[]{data-label="fig:rhoedd"}](rhoedd.eps){width="8.3cm"} We can estimate the growth rate of BHs by noting that in our star formation model we impose an effective equation of state on star-forming gas, and as such can immediately calculate the local ISM pressure, $P$, (and hence $c_s=\sqrt{\gamma P/\rho}$) from $$P= P_{{\rm crit}}\Big(\frac{n_{{\rm H}}}{n_{{\rm H}}^*}\Big)^{\gamma_{{\rm eff}}}\,, \label{eq:effeos}$$ where $n_{{\rm H}}^*=0.1\,{\rm cm}^{-3}$ and $P_{{\rm crit}}$ are the critical threshold density and pressure for star formation respectively (see Sec. \[sec:method\]). Fig. \[fig:bhga\] shows BH growth times as a function of the ambient gas density for both constant-$\alpha$ and constant-$\beta$ models, assuming here that the BH is of mass $10^6\,{\rm M}_\odot$. For the purposes of this plot we assume that when gas densities are below the star-formation threshold the EOS is isothermal, but note that in the simulations we calculate the pressure self-consistently. 
Following other authors, we set $\alpha_0=100$ for the constant-$\alpha$ model. For the constant-$\beta$ lines we set $\beta=[1,2,4]$. The horizontal, red line in this plot shows the growth time of a BH that is accreting at the Eddington rate (i.e. the Salpeter time). The Salpeter time depends only upon physical constants and the BH radiative efficiency, such that $$t_{{\rm Salpeter}}\equiv \frac{{m_{{\rm BH}}}}{\dot{m}_{{\rm Edd}}}=\frac{\epsilon_{\rm r}\sigma_{\rm T}c}{4\pi G m_{\rm p}}=4.5\times10^{7}\Big(\frac{\epsilon_{\rm r}}{0.1}\Big)\,{\rm yr}\,.$$ It is immediately clear from Fig. \[fig:bhga\] that the choice of accretion model strongly affects the local density at which BH growth becomes Eddington limited, with black holes accreting in the constant-$\alpha$ model becoming Eddington limited at densities 1-2 orders of magnitude lower than the same black hole accreting in the constant-$\beta$ model. From the simulations we find that, for our chosen value of $m_{{\rm halo,min}}$ (=$100\,m_{{\rm DM}}$; see Sec. \[sec:bhdem\]) and at high redshift, typical birth densities of BHs are $\sim 10-100$ times the star formation threshold. Hence, in the regimes of interest, in constant-$\alpha$ models all BHs of mass $> 10^5\,{\rm M}_\odot$ grow initially at close to the Eddington rate[^8] (for a seed mass of $10^5\,{\rm M}_\odot$ and typical initial gas densities of $10^1-10^2$ cm$^{-3}$ the initial Eddington ratio is 0.037-0.37) until feedback effects reduce the local gas density to values below the threshold for star formation. From Fig. \[fig:bhga\] we can see that for a BH of mass $10^6\,{\rm M}_\odot$ the accretion rate remains Eddington limited until the local gas density falls to $n_{{\rm H}} \la 10^{-2}$ cm$^{-3}$. Fig. \[fig:rhoedd\] presents this information in a slightly different manner by showing the density above which a BH’s accretion rate is Eddington limited as a function of BH mass for the same accretion models as in Fig. \[fig:bhga\].
Again, it is clear that the gas density below which the accretion rate depends on the density, and can thus more easily be regulated by feedback from the AGN, has a strong dependence on the accretion model used. Take, for example, the case of a BH of mass $10^8\,{\rm M}_{{\rm \odot}}$: in a constant-$\alpha$ model this BH will grow at the Eddington rate until it can reduce its local density to below $10^{-3.5}$ cm$^{-3}$, more than two orders of magnitude below the star-formation threshold. This is well within the regime where Bondi-Hoyle accretion is resolved in the simulations. In the constant-$\alpha$ model with $\alpha_0=100$ the growth of BHs therefore proceeds as follows: seed mass BHs (typical seed masses are in the range $10^3-10^5\,{\rm M}_\odot$ in our simulations) grow exponentially by Eddington limited accretion until feedback from the BH has decreased the local ISM density to the point that growth is no longer Eddington limited, and further energy output from the AGN can decrease the accretion rate. For BHs with masses greater than $10^6\,{\rm M}_\odot$ self-regulation can only occur at densities orders of magnitude below the star formation threshold (Fig. \[fig:rhoedd\]). In this regime we resolve Bondi-Hoyle accretion, invalidating the assumption used to justify large values of $\alpha$ in the first place. In contrast, BHs in simulations that employ a constant-$\beta$ model are not necessarily Eddington limited from birth. Taking again the case of a $10^6\,{\rm M}_\odot$ BH, and our fiducial value of $\beta=2$, we see that the BH can decrease its accretion rate at a much higher gas density, and as such the period of Eddington limited growth will be much shorter than for the constant-$\alpha$ model. The difference in the gas density below which AGN accretion rates are no longer Eddington limited can lead to large differences in the properties of low mass galaxies and BH growth in small haloes.
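As a numerical sanity check on the time-scales used in this subsection, the short sketch below (cgs constants; the helper names are ours) evaluates the Eddington rate and the Salpeter time, which for $\epsilon_{\rm r}=0.1$ comes out at $\approx 4.5\times 10^{7}$ yr:

```python
# Eddington accretion rate and Salpeter time, evaluated from cgs constants.
import math

G = 6.674e-8          # cm^3 g^-1 s^-2, gravitational constant
M_P = 1.673e-24       # g, proton mass
SIGMA_T = 6.652e-25   # cm^2, Thomson cross-section
C = 2.998e10          # cm/s, speed of light
YR = 3.156e7          # s per year

def eddington_rate(m_bh, eps_r=0.1):
    """Eddington accretion rate (g/s) for a BH of mass m_bh (g)."""
    return 4.0 * math.pi * G * m_bh * M_P / (eps_r * SIGMA_T * C)

def salpeter_time_yr(eps_r=0.1):
    """Salpeter time m_BH / mdot_Edd in years (independent of m_BH)."""
    return eps_r * SIGMA_T * C / (4.0 * math.pi * G * M_P) / YR
```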
Efficient thermal feedback {#sec:met-therm}
--------------------------

As discussed in Sec. \[sec:met-feed\], it is necessary to impose a minimum heating temperature in order to prevent BHs from heating their surroundings before they have generated enough energy for thermal feedback to become efficient. Two parameters control how efficient this feedback may be: the number of neighbours to heat (${n_{{\rm heat}}}$) and the temperature to which the neighbours are heated ($\Delta T_{{\rm min}}$). Two competing effects control our choices for these parameters. If $\Delta T_{{\rm min}}$ is too low then AGN heated gas will retain a low temperature and therefore also a short cooling time (an analogous problem to the overcooling of supernova heated gas in early cosmological simulations [@katz96], which is, however, usually attributed to an overestimate of the gas density). In this regime the energy will be immediately radiated away, making AGN feedback ineffective. Conversely, if $\Delta T_{{\rm min}}$ or $n_{{\rm heat}}$ are set too high then the time scale over which BHs generate enough energy to heat ${n_{{\rm heat}}}$ of their neighbours by an amount $\Delta T_{{\rm min}}$ will become longer than the dynamical time in the vicinity of the BH (or, for Eddington limited growth, the Salpeter time), leading to spurious growth as BHs are unable to self-regulate. The choice of the minimum heating temperature, $\Delta T_{{\rm min}}$, is motivated by the fact that we wish to choose the minimum value (and so minimum time between heating events) for which the cooling time of the heated gas is long enough that the energy is not immediately radiated away. In practice it was found that $\Delta T_{{\rm min}}=10^{8}$ K is the minimum temperature for which BH feedback has a sufficient effect on galaxy clusters. We return to this point in Sec. \[sec:res-pareffects\].
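The role of $\Delta T_{{\rm min}}$ can be illustrated with a rough cooling-time estimate. The sketch below keeps only thermal bremsstrahlung (the dominant coolant above $\sim10^{7}$ K); the cooling-function coefficient and the particle fractions are textbook approximations, not values from our cooling tables:

```python
import math

K_B = 1.381e-16   # Boltzmann constant [erg/K]
YR = 3.156e7      # one year [s]

def t_cool_yr(T, n_H):
    """Rough cooling time [yr] of hot ionized gas at temperature T [K] and
    hydrogen number density n_H [cm^-3], keeping only thermal bremsstrahlung,
    Lambda ~ 2.4e-27 sqrt(T) erg cm^3/s.  Assumes a fully ionized,
    near-primordial plasma with n_tot ~ 2.3 n_H and n_e ~ 1.2 n_H."""
    lam = 2.4e-27 * math.sqrt(T)
    e_th = 1.5 * 2.3 * n_H * K_B * T          # thermal energy density [erg/cm^3]
    return e_th / (1.2 * n_H * n_H * lam) / YR
```

At $n_{\rm H}=0.1$ cm$^{-3}$, gas heated to $10^8$ K has a cooling time of several $10^8$ yr, whereas $t_{\rm cool}\propto\sqrt{T}$ in this regime (and line cooling becomes important at lower temperatures), so a smaller temperature jump would radiate the injected energy away far more quickly.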
The second parameter, ${n_{{\rm heat}}}$, is calibrated by noting that although ideally we would like to allow AGN to heat gas instantaneously, our finite resolution forces us to store energy until the feedback is effective, hence introducing a delay to AGN heating. We can minimise the effect of this delay by noting that the numerically imposed time between heating events should be lower than the typical time-scales of dynamical processes that affect AGN feedback. If ${n_{{\rm heat}}}$ is set too high then it is possible that the amount of time taken for a BH to generate enough energy to perform a heating event would be large enough that we see spurious growth. We can quantify this effect by calculating the mean time between heating events for BHs of different masses in different density environments. This is demonstrated in Fig. \[fig:bhg\], which shows the mean time between heating events as a function of ${m_{{\rm BH}}}/{m_{{\rm g}}}$ for models with both constant-$\beta$ and constant-$\alpha$ accretion rates. Plotting the heating time as a function of this mass ratio allows us to make this plot in a resolution independent manner. Fig. \[fig:bhg\] assumes $\Delta T_{{\rm min}}=10^8$ K and ${n_{{\rm heat}}}=1$, but all lines can be shifted vertically in proportion with the quantity $\Delta T_{{\rm min}}{n_{{\rm heat}}}$. To make the time between feedback events as small as possible we should choose $n_{{\rm heat}}$ as small as possible, but it is not immediately obvious that a BH will be able to regulate its growth if we heat only a small number of its neighbours per event. However, we found from numerical tests (Sec. \[sec:res-pareffects\]) that $n_{{\rm heat}}=1$ is sufficient for BH feedback to be effective, and so $n_{{\rm heat}}=1$ is the parameter value used in our fiducial simulations. The dependence of our results on the two parameters $\Delta T_{{\rm min}}$ and $n_{{\rm heat}}$ will be discussed in Sec.
\[sec:res-pareffects\]. The next model parameter is ${\epsilon_{{\rm f}}}$, the efficiency with which energy radiated from the BH is coupled to the ISM. The parameter ${\epsilon_{{\rm f}}}$ sets the normalizations of the global BH density and the BH-galaxy scaling relations. We therefore tune ${\epsilon_{{\rm f}}}$ after setting all of the other parameters in order to match the redshift zero observations (Sec. \[sec:res-pareffects\], Fig. \[fig:params\_bhdens\]d) and find that a value of $\epsilon_{{\rm f}}=0.15$ provides a good match to the observations.

![image](bhg.ps){width="\textwidth"}

Black hole seed mass and minimum halo mass {#sec:bhdem}
------------------------------------------

Our initial choice for the halo mass into which we insert a BH is motivated by the fact that we wish for every resolved halo with $m_{{\rm halo}} \gg m_{{\rm seed}}$ to contain a seed BH. We therefore choose to place BH seeds into haloes of a constant particle number. Using $m_{{\rm halo,min}}=100\,m_{{\rm DM}}$ ensures that haloes containing BHs are always well defined [e.g. @diem07]. The choice of a constant particle number halo mass also has the advantage that if we change the simulation mass resolution, BHs will still be placed into the smallest allowable mass of dark matter haloes without the need to tune any parameters. Note, however, that this prescription will have to be changed for simulations that have sufficient resolution for $100\,m_{{\rm DM}}$ to be comparable to or smaller than the minimum halo mass expected to be able to form seed mass BHs. Given the minimum halo mass into which we place BH seeds, we must ensure that the integrated mass in seed BHs generated between redshifts $z=\infty$ and zero is much smaller than the observed cosmic BH density.
We can obtain an upper limit on the cumulative cosmic density of BH seeds by taking the redshift zero dark matter halo mass function, $f(m)\equiv dn/dm$, and assuming that *all* collapsed mass was assembled through mergers of haloes of the minimum mass, $m_{{\rm halo,min}}$: $$\rho_{{\rm seed}}(m_{{\rm halo,min}})<\frac{m_{{\rm seed}}}{m_{{\rm halo,min}}}\int^\infty_{m_{{\rm halo,min}}}mf(m)dm\,.$$ This quantity is plotted for a number of values of the seed BH mass in Fig. \[fig:seeddens\], where the two vertical grey lines represent the masses of haloes of 100 DM particles (=$m_{{\rm halo,min}}$) in our fiducial simulations run at the same mass resolutions as the OWLS runs of Schaye et al. (in preparation). Given choices for $\Delta T_{{\rm min}}$ and ${n_{{\rm heat}}}$, we can use Fig. \[fig:bhg\] to place a lower limit on the BH seed mass, $m_{{\rm seed}}$. Here, we show the time between heating events as a function of BH mass, for both constant-$\alpha$ and constant-$\beta$ models. The times between heating events for BHs accreting at the Eddington accretion rate are shown as red lines in each panel. We now note that we require BH heating to occur regularly in high density environments. In particular, in order for a BH to be able to effectively self-regulate its own growth, we require that the numerically imposed minimum duty-cycle, $\Delta t_{{\rm heat}}$, is less than the Salpeter time, the characteristic growth time for black holes accreting at the Eddington rate. It is clear from Fig. \[fig:bhg\] by comparing the BH Salpeter time (blue line) to the BH duty cycle that in high density environments ($n_{{\rm H}}>10^2\,{\rm cm}^{-3}$) this condition is satisfied only if the BH mass is greater than $10^{-3}{m_{{\rm g}}}$. This provides a minimum allowed seed mass in our models. However, in addition to ensuring that BHs grow in a physical manner and that their feedback can be effective, we must also satisfy various observational constraints.
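The origin of the $10^{-3}\,{m_{{\rm g}}}$ threshold can be sketched analytically: one heating event requires an energy $n_{{\rm heat}}\,m_{{\rm g}}\,3k_{{\rm B}}\Delta T_{{\rm min}}/(2\mu m_{{\rm H}})$, while an Eddington limited BH banks feedback energy at a rate ${\epsilon_{{\rm f}}}\epsilon_{{\rm r}}\dot{m}_{{\rm Edd}}c^2$. The mean molecular weight below is an assumed value for ionized gas:

```python
K_B = 1.381e-23   # Boltzmann constant [J/K]
M_H = 1.673e-27   # hydrogen mass [kg]
C = 2.998e8       # speed of light [m/s]
MU = 0.6          # assumed mean molecular weight of ionized gas

def duty_cycle_over_salpeter(m_bh_over_m_g, dT=1e8, n_heat=1,
                             eps_f=0.15, eps_r=0.1):
    """Mean time between heating events, in units of the Salpeter time, for a
    BH accreting at the Eddington rate:
    dt/t_Sal = n_heat * (m_g/m_BH) * (3 k_B dT / 2 mu m_H c^2) / (eps_f eps_r)."""
    e_per_kg = 1.5 * K_B * dT / (MU * M_H)   # heating energy per kg of gas [J/kg]
    return n_heat * e_per_kg / (eps_f * eps_r * C**2) / m_bh_over_m_g

def min_seed_ratio(dT=1e8, n_heat=1, eps_f=0.15, eps_r=0.1):
    """Smallest m_BH/m_g for which the duty cycle is shorter than t_Salpeter."""
    return duty_cycle_over_salpeter(1.0, dT, n_heat, eps_f, eps_r)
```

Setting the resulting duty cycle equal to the Salpeter time gives a minimum mass ratio of order $10^{-3}$, consistent with the value read off Fig. \[fig:bhg\], and the threshold scales linearly with $\Delta T_{{\rm min}}{n_{{\rm heat}}}$ as stated earlier.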
Most fundamentally, it is known that the present day cosmic BH density is $(4.2\pm 1.1)\times 10^5$ M$_{\odot}$/Mpc$^3$ [@shan04] or $(4.22^{+1.75}_{-1.22})\times 10^5$ M$_{\odot}$/Mpc$^3$ [@marc04], although we caution that a more accurate consideration of the effects of cosmology may lead to a slightly higher determination of the BH density [@grah07b]. In order that we do not violate these observational constraints in the presence of substantial BH growth through accretion, we require the $z=0$ global seed density to be much smaller than the observed BH mass density. We now ensure that – given all of our other parameter choices – $m_{{\rm seed}}=10^{-3}\,{m_{{\rm g}}}$ does not violate this constraint on the global BH density. For our simulations $10^{-3}\,{m_{{\rm g}}}$ corresponds to a BH seed mass of $1.2\times10^5\,{\rm M}_{\odot}$. It is clear from Fig. \[fig:seeddens\] that the maximum possible contribution of the seed BH mass to the cosmic density is at least a factor of 10 less than the redshift zero observations, which we indicate by the grey, horizontal shaded region. We will see in Fig. \[fig:params\_bhdens\] that, as expected, the actual contributions of seed BHs to the total cosmic density are much smaller than this value. An additional observational constraint is placed by the well-defined relation between the mass of a BH and the mass of the bulge component of a galaxy, ${m_{{\rm BH}}}\approx 0.006\,m_{{\rm bulge}}$ [@magg98]. In simulations it is more convenient to work with the relation between BH mass and dark matter halo mass, investigated by [@ferr02], who found that in halos of $10^{12}\,{\rm M}_{\odot}$ the ratio $m_{{\rm BH}}/m_{{\rm halo}}\sim 10^{-5}$. This provides a second, related constraint on the mass of seed BHs: we wish to place them below this relationship so that they can subsequently grow on to the observed redshift zero relation[^9]. 
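The seed placement arithmetic implied by these constraints is straightforward. A sketch, using WMAP3-like density parameters as placeholder values (the precise cosmology adopted in the simulations may differ slightly):

```python
# Assumed cosmological density parameters (WMAP3-like; illustrative values).
OMEGA_M = 0.238
OMEGA_B = 0.0418

def seed_to_halo_ratio(seed_frac=1e-3, n_dm_particles=100):
    """m_seed / m_halo,min = (seed_frac / n_dm) * (m_g / m_DM),
    where m_g/m_DM = Omega_b / (Omega_m - Omega_b) at our resolution."""
    return (seed_frac / n_dm_particles) * OMEGA_B / (OMEGA_M - OMEGA_B)
```

The resulting ratio of $\sim2\times10^{-6}$ sits safely below the observed $m_{{\rm BH}}/m_{{\rm halo}}\sim10^{-5}$.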
For our parameter choices ($m_{{\rm seed}}=10^{-3}\,{m_{{\rm g}}}$ and $m_{{\rm halo,min}}=100\,m_{{\rm DM}}$) $$\frac{m_{{\rm seed}}}{m_{{\rm halo,min}}}=\frac{10^{-3}}{100}\Big(\frac{\Omega_b}{\Omega_m-\Omega_b}\Big)=2.1\times10^{-6}\,,$$ where the last equality assumes our chosen cosmology. This ratio is indeed much smaller than the observed value. Finally, we note that our fiducial choice of seed mass, $m_{{\rm seed}}=10^{-3}\,{m_{{\rm g}}}$, will need to be modified for simulations that have sufficient resolution for this value to be below the expected BH seed masses.

![Maximum possible contribution to the cosmic BH density from seed mass BHs, as a function of the minimum dark matter halo mass, $m_{{\rm halo,min}}$, assuming the dark matter mass function of Reed et al. (2006). Each black line corresponds to a different BH seed mass, as indicated in the legend. The horizontal, grey shaded region shows the observed cosmic black hole density at redshift $z=0$ (Shankar et al. 2004), and the vertical line indicates $m_{{\rm halo,min}}$ for our fiducial simulation, which has a DM mass resolution of $8.64\times 10^7\,{\rm M}_\odot$. For our fiducial BH seed mass of $m_{{\rm seed}}=1.2\times10^5\,{\rm M}_\odot$ we see that the maximum possible contribution of seed mass BHs to the global BH density is much lower than the $z=0$ observations.[]{data-label="fig:seeddens"}](seed_density.ps){width="8.3cm"}

Comparison with previous work {#sec:met-pars}
-----------------------------

Through the arguments in the previous sections we were able to specify values for all of our model parameters. These fiducial parameter values are summarised in Table \[tab:parlist\]. The AGN model developed in this paper is a modification of that introduced in S05, and used thereafter in a large number of works. As such it is instructive to compare our parameter choices with those employed in other studies, as collected in Table \[tab:allparlist\].
We turn our attention first to the AGN feedback efficiency, ${\epsilon_{{\rm f}}}$. The value used in the present study (${\epsilon_{{\rm f}}}=0.15$) is significantly higher than that used in previous published studies, which all assume $\epsilon_{{\rm f}}=0.05-0.1$. We can account for this difference if we note that, unlike the other studies, we do not employ the SH03 subgrid model for the ISM. Use of a different subgrid model for the unresolved ISM is likely to lead to differences in the amount of radiative losses, as the effective density and temperature of the ISM differ significantly between the two models. We note that apart from differences in the ISM model and AGN heating mechanisms, the strength of the AGN feedback depends only on the parameter combination ${\epsilon_{{\rm f}}}\epsilon_{{\rm r}}$, and that there is significant leeway in the value of $\epsilon_{{\rm r}}$. All studies presented in Table \[tab:allparlist\] assume $\epsilon_{{\rm r}}=10\%$, but values close to $\epsilon_{{\rm r}}=20\%$ are possible for thin-disc accretion on to a Kerr BH [@yu02; @thor74]. Recent observational determinations of $\epsilon_{\rm r}$ span the full range of allowable values: $\epsilon_{{\rm r}}=30-35\%$ [@wang06], $\epsilon_{{\rm r}}=15\%$ [@elvi02; @yu08], $\epsilon_{{\rm r}}=7-8\%$ [@cao08; @mart08] depending upon the specific assumptions and models used in each study. The ratio of the minimum halo mass to the seed mass is similar in all of the cosmological studies, with the exception of the work of [@khal08], who performed zoomed cosmological simulations of an individual object. In order to avoid numerical issues when ${m_{{\rm BH}}}<{m_{{\rm g}}}$, these authors forced the BH to accrete very quickly at early times before artificially halting its accretion until the stellar mass of the halo becomes large enough that the BH lies on the observed ${m_{{\rm BH}}}-m_{{\rm *}}$ relation. 
We see also from Table \[tab:allparlist\] that the minimum halo mass ($m_{{\rm halo, min}}$) is consistent between all of the published cosmological studies to within a factor of 5 ($1-5\times10^{10}\,{\rm M}_\odot$). We will show in Sec. \[sec:res-pareffects\] that, for the constant-$\alpha$ accretion model assumed in previous studies, AGN feedback affects all haloes into which seed mass black holes are placed, and that changing $m_{{\rm halo,min}}$ by a factor of ten has a large effect on the BH properties of the less massive haloes in the simulation. Care should therefore be taken when comparing results of different simulations. The two areas where our work differs most from previous models are the BH accretion model and the SN feedback model. We turn our attention first to the SN model and note that almost all previous studies employ the work of SH03, whereas we use the method of @dall08. Contrary to SH03, SN winds in our model are local and not hydrodynamically decoupled from the surrounding gas. Hydrodynamical decoupling of supernova heated gas, as implemented in the SH03 model, guarantees that when gas is kicked it is able to escape the ISM (although it may subsequently return). On the other hand, if the hydrodynamic forces are taken into account, the gas can remain confined by pressure forces, and if it does manage to escape it may drag along much of its neighbouring gas. It was shown in [@dall08] that in high-resolution simulations of individual disc galaxies this change fundamentally alters the structure of the galactic disc and that hydrodynamically coupled winds generate galactic outflows with properties broadly comparable with observations. We demonstrate in a companion work that SN feedback has a large effect on the properties of the AGN population and, as such, it is important to investigate a number of SN feedback prescriptions. The second area where our model differs significantly from others in the literature is in the accretion model.
In the nomenclature of this paper, all of the previous AGN models are constant-$\alpha$ accretion models in which $\alpha_0=100-300$. We show in later sections that the accretion model represents one of the most crucial elements of an AGN model, and that all results are very sensitive to the way in which BHs are allowed to accrete. We show in this work that the results derived from simulations depend on aspects of the BH model that are homogeneous between the studies that have thus far been published. The present work – which is carried out using different techniques and parametrizations for much of the sub-grid modelling – therefore provides a way to investigate the robustness of the models.

| Study (1) | ${\epsilon_{{\rm f}}}$ (2) | $\epsilon_{{\rm r}}$ (3) | $\alpha_0$ (4) | $\beta$ (5) | $\frac{m_{{\rm seed}}}{{\rm M}_{\odot}}$ (6) | $\frac{m_{{\rm halo,min}}}{M_{\odot}}$ (7) | $N_{{\rm halo,min}}$ (8) | $\frac{m_{{\rm seed}}}{m_{{\rm halo,min}}}$ (9) | Type (10) | SF Model (11) | Wind Model (12) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Springel et al. (2005) | 0.05 | 0.1 | 100 | 0 | $10^5$ | n/a | n/a | n/a | Iso. | SH03 | SH03 |
| Robertson et al. (2006) | 0.05 | 0.1 | 100 | 0 | $10^5$ | n/a | n/a | n/a | Iso. | SH03 | SH03 |
| Sijacki et al. (2007) | 0.05 | 0.1 | 100 | 0 | $10^5$ | $5\times 10^{10}$ | 2260 | $2\times 10^{-6}$ | Zoom | SH03 | SH03 |
| Sijacki et al. (2007) | 0.05 | 0.1 | 100 | 0 | $10^5$ | $5\times 10^{10}$ | 285-606 | $2\times 10^{-6}$ | Cosmo. | SH03 | SH03 |
| Johansson et al. (2008) | 0.05 | 0.1 | 100 | 0 | $10^5$ | n/a | n/a | n/a | Iso. | SH03 | SH03 |
| Di Matteo et al. (2008) | 0.05 | 0.1 | 100 | 0 | $10^5$ | $1\times 10^{10}$ | 36-363 | $1\times 10^{-5}$ | Cosmo. | SH03 | SH03 |
| Khalatyan et al. (2008) | 0.1 | 0.1 | 300 | 0 | n/a | n/a | 1463 | n/a | Zoom | SH03 | SH03 |
| This study | 0.15 | 0.1 | 1 | 2 | $10^{-3}m_{\rm g}$ | 100$m_{{\rm DM}}$ | 100 | $2\times 10^{-6}$ | Cosmo. | SD08 | DS08 |

\[tab:allparlist\]

Simulation results {#sec:results}
==================

In this section we first introduce the simulation set used in this paper (Sec. \[sec:simlist\]) before demonstrating that the fiducial simulations reproduce redshift zero observational results and quantifying how robust our model is to poorly constrained parameter choices (Sec. \[sec:res-pareffects\]). To begin with, however, we show for illustrative purposes the gas density in a 3 Mpc/$h$ thick slice from our 100 Mpc/$h$ ($L100N512$) simulation at redshift zero (Fig. \[fig:pretty\]). Each panel represents a successive factor of five zoom. The bottom-left panel shows a region 800 kpc/$h$ across, centered on a $3\times10^{7}\,{\rm M}_\odot$ BH, contained in a halo with a stellar mass of $3\times 10^{10}\,{\rm M}_\odot$ and a dark matter mass of $2\times 10^{12}\,{\rm M}_\odot$. Circles in this plot represent the locations of BHs with the area of each circle proportional to the logarithm of the mass of the BH.

![image](pretty_plot.eps){width="\textwidth"}

Simulation list {#sec:simlist}
---------------

In order to explore parameter space we have run a large number of smaller simulations, the details of which are summarised in Table \[tab:sims\]. Simulation names are of the form *LxxxNyyy*, where *xxx* represents the simulation box size in comoving Mpc/$h$ and *yyy* is the cube root of the initial number of dark matter and gas particles. For example, the simulation denoted $L100N512$ refers to a comoving simulation volume of 100 Mpc/$h$, which contains $512^3$ dark matter particles and an equal number of baryonic particles.
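The naming convention above is regular enough to parse mechanically; a small helper (the regex and function are ours, purely illustrative, not part of the simulation pipeline):

```python
import re

# 'L' + 3-digit box size + 'N' + particle count per side + optional suffix.
_NAME_RE = re.compile(r"^L(\d{3})N(\d+?)([A-Z][A-Z0-9]*)?$")

def parse_run_name(name):
    """Split 'LxxxNyyySUFFIX' into (box size in Mpc/h, particles per side, suffix)."""
    m = _NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a recognised run name: {name}")
    box, n, suffix = m.groups()
    return int(box), int(n), suffix or ""
```

For example, `parse_run_name("L050N256NOAGN")` separates the 50 Mpc/$h$ box size, the $256^3$ particle count and the `NOAGN` variation suffix.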
Simulations for which one of the parameters was changed from its default value are denoted by appending a descriptive suffix to the end of the simulation name. For example, simulations without AGN feedback are named *LxxxNyyyNOAGN*, and correspond to the simulations denoted *REF\_LxxxNyyy* in the OWLS project (Schaye et al., in preparation).

| Identifier (1) | $\frac{L}{{\rm Mpc}/h}$ (2) | $N_{{\rm gas}}$ (3) | $z_{{\rm end}}$ (4) | $\alpha_0$ (5) | $\beta$ (6) | ${\epsilon_{{\rm f}}}$ (7) | $\frac{m_{{\rm seed}}}{{m_{{\rm g}}}}$ (8) | $\frac{m_{{\rm halo,min}}}{m_{{\rm DM}}}$ (9) | $\frac{\Delta T_{{\rm heat}}}{10^8\,{\rm K}}$ (10) | ${n_{{\rm heat}}}$ (11) |
|---|---|---|---|---|---|---|---|---|---|---|
| *L050N256* | 50.0 | $256^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | 1 | 1 |
| *L050N256VLOEPS* | 50.0 | $256^3$ | 0 | 1 | 2 | **0.0375** | 0.001 | 100 | 1 | 1 |
| *L050N256LOEPS* | 50.0 | $256^3$ | 0 | 1 | 2 | **0.075** | 0.001 | 100 | 1 | 1 |
| *L050N256HIEPS* | 50.0 | $256^3$ | 0 | 1 | 2 | **0.3** | 0.001 | 100 | 1 | 1 |
| *L050N256VHIEPS* | 50.0 | $256^3$ | 0 | 1 | 2 | **0.6** | 0.001 | 100 | 1 | 1 |
| *L050N256HIHALO* | 50.0 | $256^3$ | 0 | 1 | 2 | 0.15 | 0.001 | **1000** | 1 | 1 |
| *L050N256HISEED* | 50.0 | $256^3$ | 0 | 1 | 2 | 0.15 | **0.01** | 100 | 1 | 1 |
| *L050N256LOSEED* | 50.0 | $256^3$ | 0 | 1 | 2 | 0.15 | **0.0001** | 100 | 1 | 1 |
| *L050N256HINHEAT* | 50.0 | $256^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | 1 | **10** |
| *L050N256B0* | 50.0 | $256^3$ | 0 | 1 | **0** | 0.15 | 0.001 | 100 | 1 | 1 |
| *L050N256B1* | 50.0 | $256^3$ | 0 | 1 | **1** | 0.15 | 0.001 | 100 | 1 | 1 |
| *L050N256B4* | 50.0 | $256^3$ | 0 | 1 | **4** | 0.15 | 0.001 | 100 | 1 | 1 |
| *L050N256A100B0* | 50.0 | $256^3$ | 0 | **100** | **0** | 0.15 | 0.001 | 100 | 1 | 1 |
| *L050N256A1000B0* | 50.0 | $256^3$ | 0 | **1000** | **0** | 0.15 | 0.001 | 100 | 1 | 1 |
| *L050N256T7* | 50.0 | $256^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | **0.1** | 1 |
| *L050N256NOAGN* | 50.0 | $256^3$ | 0 | – | – | – | – | – | – | – |
| *L100N128* | 100.0 | $128^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | 1 | 1 |
| *L100N128NOAGN* | 100.0 | $128^3$ | 0 | – | – | – | – | – | – | – |
| *L100N256* | 100.0 | $256^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | 1 | 1 |
| *L100N256NOAGN* | 100.0 | $256^3$ | 0 | – | – | – | – | – | – | – |
| *L100N512* | 100.0 | $512^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | 1 | 1 |
| *L100N512NOAGN* | 100.0 | $512^3$ | 0 | – | – | – | – | – | – | – |
| *L025N128* | 25.0 | $128^3$ | 0 | 1 | 2 | 0.15 | 0.001 | 100 | 1 | 1 |

\[tab:sims\]

The effects of changing the model parameters {#sec:res-pareffects}
--------------------------------------------

We now turn our attention to the effect of varying each of the parameters of our AGN model away from those selected in Sec. \[sec:pars\]. In order to quantify the effect of different aspects of the AGN model on the results of our simulation, we split the model parameters into four separate categories: the accretion model ($\alpha_0$; $\beta$); the seed generation model ($m_{{\rm halo, min}}$; $m_{{\rm seed}}$); the feedback efficiency (${\epsilon_{{\rm f}}}$); and the heat distribution model ($\Delta T_{{\rm min}}$; $n_{{\rm heat}}$). We look separately at the effects of changes in each of these parameter sets, and additionally consider two purely numerical effects: the simulation mass resolution and box size. For each set of simulations we make four diagnostic plots: in Fig. \[fig:params\_sfr\] we show the cosmic SFR density as a function of redshift; Fig. \[fig:params\_bhdens\] shows the evolution of the global BH density, and the cumulative BH density present in seed-mass BHs (grey curves); Fig. \[fig:params\_mbhmhalo\] shows the redshift zero ${m_{{\rm BH}}}-M_{{\rm *}}$ and $m_{{\rm BH}}-\sigma$ relations. We associate BHs with gravitationally bound objects by identifying bound substructures in the simulation using the algorithm [subfind]{} [@spri01b; @dola08].
We note that in this plot we show *total* halo stellar mass as a function of BH mass, as opposed to the observations, where only the bulge stellar mass is calculated. This means that all curves can be shifted slightly to the left. Finally, Fig. \[fig:params\_ssfr\] shows the median specific SFR (SSFR) in bins of stellar mass. In this plot, the grey lines represent results from simulations that do not include AGN feedback. In figures \[fig:params\_ssfr\] and \[fig:params\_mbhmhalo\] the vertical lines represent the halo stellar masses at which 50% and 90% of haloes contain BHs massive enough to have performed at least one heating event. It is immediately clear from Fig. \[fig:params\_mbhmhalo\] that the ${m_{{\rm BH}}}-\sigma$ relation is much more robust to changes in parameters than the ${m_{{\rm BH}}}-M_*$ relation. Each set of simulations is compared to our fiducial simulation (*L050N256*), which uses the model parameters that were justified in Sec. \[sec:pars\]. To aid comparison between the different simulation sets, the fiducial simulation appears in every plot as a solid, black curve. Details of all the simulations discussed in this section appear in Table \[tab:sims\]. We now discuss each simulation set in turn.

![image](params_sfr.eps){width="80.00000%"} ![image](params_bhdens.eps){width="80.00000%"} ![image](params_mbhmhalo.eps){width="80.00000%"} ![image](params_ssfr.eps){width="80.00000%"}

### The effect of box size and mass resolution

We consider first the effect of changing the box size at a constant resolution by comparing models *L100N512*, *L050N256* and *L025N128*. The size of the simulation box has a negligible effect on both the star formation rate density (Fig. \[fig:params\_sfr\]a) and, for $z<4$, on the global mass of BHs (Fig. \[fig:params\_bhdens\]a). Because the properties of individual BHs are set by local physical processes, increasing the box size does nothing to the scaling relations (Fig.
\[fig:params\_mbhmhalo\]a), except for allowing us to probe the mass function up to larger halo masses. The same holds true for the SSFRs of individual objects (Fig. \[fig:params\_ssfr\]a). We now assess the impact of numerical resolution on our results. We compare simulations at three different resolutions (*L100N512*, *L100N256*, and *L100N128*) both with and without AGN feedback. Simulation $L100N128$ has a dark matter particle mass of $3.54\times10^{10}\,{\rm M}_{\odot}$, a factor of 64 worse than that used in *L050N256*, our lowest resolution production simulation. We concentrate first on the star formation history in these simulations (Fig. \[fig:params\_sfr\]b). At high redshift ($z>4$), the star formation rate is governed by numerical resolution. At low redshift ($z<2$) the two highest resolution simulations (*L100N512* and *L100N256*) have star formation rates that differ by $\sim0.2\,$dex, indicating that this quantity is not yet fully resolved. We see by comparing simulations with and without AGN that the factor by which AGN feedback decreases the global SFR is the same in both simulations. However, in the lowest resolution simulation (*L100N128*; blue, dashed line), the AGN are largely ineffective at decreasing the global SFR. We now turn our attention to the BH properties. The $z<1$ integrated BH density (Fig. \[fig:params\_bhdens\]b) is virtually indistinguishable between runs *L100N512* and *L100N256*. This occurs because the global black hole density is dominated by the massive BHs, which are well resolved in both simulations. *L100N128* underpredicts the redshift zero BH density by a factor of two. There are two reasons for this. Firstly, in this simulation seed mass BHs are placed only in objects with masses larger than $3.54\times10^{12}\,{\rm M}_\odot$, which means that we neglect a large population of important black holes.
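The quoted factor of 64 between the particle masses of *L100N128* and *L050N256* is a quick check on the scaling $m_{{\rm particle}}\propto(L/N)^3$ at fixed cosmology:

```python
def particle_mass_ratio(box1_mpc, n1, box2_mpc, n2):
    """Ratio of particle masses of run 1 to run 2; m ∝ (L/N)^3 at fixed cosmology."""
    return ((box1_mpc / n1) / (box2_mpc / n2)) ** 3
```

This gives exactly 64 for (100 Mpc/$h$, $128^3$) versus (50 Mpc/$h$, $256^3$), so the $3.54\times10^{10}\,{\rm M}_\odot$ particles of *L100N128* correspond to $\sim5.5\times10^{8}\,{\rm M}_\odot$ at the *L050N256* resolution.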
Secondly, we see that the first BHs in this simulation start growing only at low redshift ($z\approx4$), and as such, may not have had enough time to grow onto the observed BH scaling relations. This picture is borne out by the properties of individual BHs. Fig. \[fig:params\_mbhmhalo\]b shows that for $\sigma>100\,{\rm km/s}$ the BH properties are well converged, whereas at the lower mass end of the scaling relations the resolution becomes important. The lowest resolution simulation underpredicts the BH scaling relations at all masses. If we now examine the SSFRs of individual objects (Fig. \[fig:params\_ssfr\]b), looking first at simulations that do not contain AGN (*L100N512NOAGN* and *L100N256NOAGN*; grey, dotted and solid lines respectively) we see that the SSFR is almost converged above a stellar mass of $10^{10.5}\,{\rm M}_\odot$. If we now consider the effect of adding AGN feedback to these simulations, we see that the stellar mass at which the simulations with and without AGN diverge from one another has not yet converged, indicating that results on the scale of individual objects in these simulations may remain affected by numerical resolution. We note that the redshift zero global SFR is affected by resolution to a lesser degree than the SSFR. This is likely to be because the total galaxy stellar mass is an integral over all time of the galaxy star formation rates, so the strong resolution effects at high redshift (Fig. \[fig:params\_sfr\]b) persist in the redshift zero population. We thus conclude that, because local processes govern the size of individual BHs, the simulation box size is unimportant when discussing BH scaling relations. However, the limited simulation mass resolution means that the stellar masses in our simulated galaxies are not yet fully resolved. 
We also conclude that BH properties, such as the integrated cosmic mass density and BH scaling relations, are converged in all of our production simulations, but decreasing the mass resolution by a factor of 64 places us in the regime where no AGN properties are resolved. Therefore, we can conclude that the global results we present in this paper are robust to changes in numerical resolution, but that predictions involving the stellar properties of individual objects should be treated with caution.

### The seed model {#sec:seedmodel}

Two parameters control the way that seed BHs are generated: the BH seed mass ($m_{{\rm seed}}$) and the halo mass above which BH seeds are inserted into haloes ($m_{{\rm halo,min}}$). For the minimum halo mass we chose to use a fixed number of particles, $m_{{\rm halo,min}}=100m_{{\rm DM}}$, rather than a fixed halo mass because this ensures that BHs are always placed into well defined haloes, regardless of the numerical resolution. Consideration of the AGN feedback process then allowed us to place a minimum value on the possible mass of BH seeds ($10^{-3}{m_{{\rm g}}}$) such that the average time between heating events for a BH accreting at the Eddington rate is shorter than the Salpeter time, making it possible for the BH to regulate its growth. We also noted that the requirement that the total mass in seed mass BHs generated in a simulation be much lower than the observed redshift zero cosmic BH density does not allow the seed mass to be much larger than the minimum possible value. We thus chose $m_{{\rm seed}}=10^{-3}\,m_{{\rm g}}$ as our fiducial value. We now examine how changes in these two parameters affect our results. By comparing our base model (*L050N256*) to a simulation in which BH seeds are only placed into haloes that are ten times more massive (*L050N256HIHALO*; green dot-dashed curves) we can say that most of our results are insensitive to the choice of $m_{{\rm halo,min}}$.
The global star formation rate density in the simulations (Fig. \[fig:params\_sfr\]c) is virtually unaffected by changing $m_{{\rm halo,min}}$ from $10^2\,m_{{\rm DM}}$ to $10^3\,m_{{\rm DM}}$, probably because the SFR of galaxies is only affected by AGN in the most massive haloes ($M_*\ga 10^{11}{\rm M}_\odot$; Fig. \[fig:params\_ssfr\]). The cosmic BH density is also insensitive to $m_{{\rm halo,min}}$. This follows because in the fiducial model, the cumulative seed BH mass makes up only $\sim2\%$ of the redshift zero BH density, so removing some seeds has a negligible effect on the global density. We now turn our attention to the BH scaling relations (Fig. \[fig:params\_mbhmhalo\]c), where we see that the BHs grow on to the observed scaling relations at a higher mass, but that for $M_*\ga 10^{11}\,{\rm M}_\odot$ or $\sigma>100$ km/s the results are almost identical to the fiducial simulation. These are the BHs that are energetically important and we can therefore conclude that results derived from our AGN simulations are insensitive to uncertainties in the minimum halo mass that contains a BH. Large changes in the mass of the seed BHs (*L050N256HISEED*, *L050N256LOSEED*) can lead to more significant changes in the global properties of the simulation. If the initial seed mass is lowered by a factor of ten then by redshift zero the global BH density is slightly greater than in the fiducial case (Fig. \[fig:params\_bhdens\]c). This occurs because initially smaller BHs take longer to grow onto the BH scaling relations both because they need to grow more and because they grow more slowly (see Fig. \[fig:bhga\]). As a result of this the galaxy potential well is deeper by the time that the BH begins to heat gas, and so it requires more energy input for the BH to stop its own growth. Because it takes longer for AGN feedback to become effective, the global SFR is higher (Fig. \[fig:params\_sfr\]c). 
The same argument can be made in reverse for high seed masses and explains why increasing the BH seed mass slightly decreases the global star formation rate (Fig. \[fig:params\_sfr\]c). We can draw the same conclusions from examining the SSFR of individual objects (Fig. \[fig:params\_ssfr\]c). Changing the seed mass has virtually no effect on the SFR of the most massive objects, and only affects haloes near the threshold mass at which AGN feedback begins to become important. The cumulative mass of seed BHs in the simulation *L050N256HISEED* is within a factor of four of the redshift zero black hole density. Although this does not violate our constraint that the total mass in seed BHs should be less than the observed redshift zero value, the constraint is only barely satisfied, leaving very little room for AGN in our simulations to grow. Additionally, a large value for the initial seed mass means that we place seed BHs initially *above* the BH scaling relations (Fig. \[fig:params\_mbhmhalo\]c). Nevertheless, they still grow onto the same scaling relations. We note that all our global results are a very weak function of the seed mass. For example, changing the seed mass by a factor of 100 changes the global SFR by no more than a factor of 2.5. Other uncertainties, particularly those introduced by our lack of knowledge about the way in which BHs accrete, lead us to conclude that the specifics of the seed model are relatively unimportant. ### The feedback efficiency The efficiency with which a BH’s radiated energy is coupled to the ambient gas, ${\epsilon_{{\rm f}}}$, is treated as a free parameter in our simulations. We show in this section that this parameter controls the normalization of the total mass in BHs and of the BH scaling relations [see also @dima05]. In our simulations ${\epsilon_{{\rm f}}}$ was tuned to match the $z=0$ ${m_{{\rm BH}}}-m_{{\rm halo}}$ relation and the redshift zero cosmic black hole density. A value of ${\epsilon_{{\rm f}}}=0.15$ was found to work well.
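If BH growth is perfectly self-regulated, the final BH mass should scale as $1/{\epsilon_{{\rm f}}}$. This scaling can be checked directly against the redshift zero BH density ratios measured in the runs with modified feedback efficiencies (a quick sketch; the numbers are the values reported for the *EPS* runs, and the 10 per cent tolerance is our own choice):

```python
# Factors by which eps_f differs from the fiducial value in the runs
# L050N256VHIEPS, HIEPS, LOEPS and VLOEPS, and the corresponding z=0 BH
# density ratios relative to the fiducial run (values from the text).
eps_factors = [4.0, 2.0, 0.5, 0.25]
rho_ratios = [0.24, 0.53, 1.93, 4.03]

# Exact rho_BH proportional to 1/eps_f would make each product equal 1.
for f, r in zip(eps_factors, rho_ratios):
    assert abs(f * r - 1.0) < 0.1  # holds to better than 10 per cent
```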
Changing ${\epsilon_{{\rm f}}}$ from its fiducial value has no discernible effect on the global star formation rate (Fig. \[fig:params\_sfr\]d), or on the SSFRs of individual objects (Fig. \[fig:params\_ssfr\]d), but the total redshift zero BH density is inversely proportional to the feedback efficiency (Fig. \[fig:params\_bhdens\]d). Most strikingly, simulations *L050N256VHIEPS*, *L050N256HIEPS*, *L050N256LOEPS*, and *L050N256VLOEPS* have values of $\epsilon_{{\rm f}}$ that differ by factors of 4, 2, 1/2, and 1/4 from the fiducial run, and the ratios of their $z=0$ BH densities to that of the fiducial simulation are 0.24, 0.53, 1.93 and 4.03, respectively. The fact that the final mass of BHs is directly proportional to $1/{\epsilon_{{\rm f}}}$ implies that in our models any given BH grows until it has injected a specific amount of energy per unit halo mass, at which point it is able to reduce its local density and to effectively self-regulate its growth. As demonstrated in @boot09 this remarkable result agrees qualitatively with the ideas presented in e.g. @fabi99 and @silk98, where BHs grow until they can expel gas from the galaxy, at which point they enter a quiescent phase. The redshift zero global BH density (Fig. \[fig:params\_bhdens\]d) and the normalization of the BH scaling relations (Fig. \[fig:params\_mbhmhalo\]d) are both directly proportional to $1/{\epsilon_{{\rm f}}}$, and so we can use these observations to constrain the value of ${\epsilon_{{\rm f}}}$ that allows our model to be consistent with redshift zero observations. We employ a value of ${\epsilon_{{\rm f}}}=0.15$ in the present study, but note that we will show elsewhere that the parameters of the AGN model must be tuned in conjunction with the SN feedback prescription. ### The heating mechanism The resolution of cosmological simulations is too poor to resolve the scales on which AGN inject energy into the ISM and as such we are forced to make some assumptions.
It is therefore important to verify that the results obtained from our simulations do not depend strongly on the implementation of the energy coupling mechanism. Two parameters control how energy is distributed from the AGN to its surroundings: the minimum temperature increase and the number of neighbouring SPH particles to heat ($\Delta T_{{\rm min}}=10^{8}$ K and ${n_{{\rm heat}}}=1$). The fiducial value of the minimum temperature increase was constrained by two competing effects. Firstly, if $\Delta T_{{\rm min}}$ is too low, the cooling time of heated gas will be so short that the gas will be able to radiate away its energy before having a dynamical effect. On the other hand, if $\Delta T_{{\rm min}}$ is set too high, then the amount of energy needed to perform a heating event will be so high that the numerically determined time between heating events exceeds the Salpeter time-scale, making it impossible for the BH to regulate its growth. We found that $\Delta T_{{\rm min}}=10^8$ K provides a good compromise between these two effects. Similarly, to minimize the numerically controlled duty cycle we set $n_{{\rm heat}}=1$ in our fiducial models. Here we assess the impact of distributing feedback energy in different ways. If we change either the temperature to which gas is heated from $10^8$ K to $10^7$ K (*L050N256T7*) or the number of black hole neighbours affected by each feedback event from 1 to 10 (*L050N256HINHEAT*), we see small changes in the cosmic star formation history (Fig. \[fig:params\_sfr\]e) and in the cosmic BH density (Fig. \[fig:params\_bhdens\]e). We note, however, that the results are relatively insensitive to these parameters, and that changing the value of either $\Delta T_{{\rm min}}$ or ${n_{{\rm heat}}}$ by an order of magnitude affects both BH properties and SFRs by only $\sim0.3$ dex. 
We thus conclude that as long as the feedback efficiency (${\epsilon_{{\rm f}}}$) is calibrated such that the redshift zero black hole relations are satisfied, and that the minimum heating temperature is sufficient for feedback to be effective, our other results are robust to the precise way in which this energy is coupled to the ISM. ### The accretion model Because numerical simulations cannot resolve the properties of the multi-phase ISM, we can justify the use of high values for the multiplicative factor, $\alpha$, in the Bondi-Hoyle accretion rate (see Eq. \[eq:bhl\]), at least for high-density gas (Sec. \[sec:pars\]). This provides a motivation for the constant-$\alpha$ accretion model, similar to those previously published, in which BHs accrete with a very high efficiency. In this paper we have introduced a new class of models that treat accretion of low-density gas differently. Noting that the ISM will not contain a cold ($T\ll 10^4$ K) phase at low pressures, and that Bondi-Hoyle accretion onto BHs more massive than the local Jeans mass is resolved whenever the Jeans scales are resolved, we argue that $\alpha$ must asymptote to unity in low-density environments. We then parametrize our ignorance about the state of the multiphase ISM and the precise mechanism by which AGN accrete by introducing a parameter, $\beta$, that describes a power-law dependence of the accretion rate on the local gas density (Eq. \[eq:beta\]). The growth of a black hole depends upon the accretion model used. In the constant-$\alpha$ model (Bondi-Hoyle with $\alpha_0=100$) the growth is Eddington limited unless the gas in the immediate surroundings of the BH has a density that is much lower than is typical of the ISM, e.g. as a result of feedback from the BH. In the constant-$\beta$ models, however, much larger densities are required for the accretion rate to become Eddington limited.
Efficient growth is only possible if the BH’s local density is enhanced by dynamical processes. BHs can only decrease their accretion rate (and hence regulate their own growth) when densities are low enough that accretion rates are no longer Eddington limited. The density at which BH accretion rates become Eddington limited depends on the accretion model (Fig. \[fig:rhoedd\]). In the constant-$\alpha$ model with $\alpha_0=100$ BHs self-regulate at densities below the density at which a multi-phase medium is expected to form (Fig. \[fig:rhoedd\]). This has a large effect on the physical properties of the galaxy and, because the simulations resolve Bondi-Hoyle accretion in this regime, invalidates the assumptions used to justify the use of $\alpha_0=100$ in the first place. We now consider how changes in the accretion model affect the properties of the simulated galaxy population. Increasing $\beta$ from 2 to 4 (simulation *L050N256B4*) depresses the global star formation rate somewhat (Fig. \[fig:params\_sfr\]f) because the $\beta$ parameter controls the gas density at which the accretion rate for a given BH mass becomes Eddington limited (Fig. \[fig:rhoedd\]). Hence, for a given value of $\beta$ there is a critical value of the local gas density above which a BH can switch on and begin to grow rapidly. For larger values of $\beta$ this occurs at a lower density, and hence in smaller haloes. This manifests itself in the BH scaling relations by changing the minimum galaxy mass at which the BHs grow onto the observed scaling relations. A lower value of $\beta$ therefore increases the global star formation rate by allowing BHs to grow only in more massive haloes. Increasing $\beta$ above 4 does not have a large effect on any of our results, as for any large value of $\beta$, the BH accretion model behaves in such a way as to be Eddington limited in star-forming gas (Fig.
\[fig:rhoedd\]), and the difference in behaviour between a $\beta=4$ model and a $\beta=\infty$ model is very small. Any physical process that is strongly dependent on the local density is affected by numerical resolution, because higher resolution simulations allow the formation of higher density regions. For this reason we caution that the stellar mass at which BHs grow onto the observed relation is affected by numerical resolution, and so the value of $\beta$ may need to be tuned to different values for simulations with mass resolutions significantly different to those presented here. A comparison of the effect of different $\beta$ models on the SSFR of haloes (Fig. \[fig:params\_ssfr\]f) supports this picture. The stellar mass above which the behaviour of each AGN model diverges from the behaviour of the simulation without AGN depends upon the value of $\beta$. In the constant-$\alpha$ model with $\alpha_0=100$ (*L050N256A100B0*) BHs grow in an Eddington limited manner from their birth, and as such suppress star formation in all haloes that contain a BH. This is visible in Fig. \[fig:params\_ssfr\]f, in which the constant-$\alpha$ simulation deviates from the simulation without AGN in haloes with a low stellar mass. The constant-$\alpha$ simulations efficiently suppress star formation in every halo that contains a BH and as such the integrated SFR is more than an order of magnitude lower than in the other simulations. The same effect is present in the large simulation volume, but is less pronounced because seed mass BHs are placed only into larger haloes, where both accretion models are capable of suppressing SF. This result implies that in order for a simulation that employs a constant-$\alpha$ accretion model to reproduce the observed break in galaxy properties at $\log (M_*)\sim10.5$ [e.g. @kauf03], the resolution must be tuned such that BH seeds are placed into haloes of the correct size[^10]. 
We now consider reducing the value of $\alpha_0$ in the constant-$\alpha$ models from 100 to 1 (*L050N256A100B0* compared to *L050N256B0*; a constant-$\beta$ model with $\beta=0$ is equivalent to a constant-$\alpha$ model with $\alpha_0=1$), which amounts to removing the numerical fudge factor from the BH accretion rates. In this case BHs are unable to grow in all but the most massive haloes (Fig. \[fig:params\_mbhmhalo\]f), and so the halo SSFRs are virtually the same as for the simulation without AGN feedback in all haloes up to a stellar mass of $10^{12}\,{\rm M}_\odot$ (Fig. \[fig:params\_ssfr\]f). This, in turn, means that the global SFR density in the $\alpha_0=1$ simulation is very similar to that in the simulation run without AGN (Fig. \[fig:params\_sfr\]f). The global BH density at redshift zero is actually higher than that in the fiducial simulation (Fig. \[fig:params\_bhdens\]f). This occurs because the BHs that do grow in the $\alpha_0=1$ simulation grow very late, and so find themselves in a deeper potential well. In order to self-regulate they then need to output more energy and hence grow even more than in the other simulations. The global BH density is therefore by itself not a good indicator of how efficiently AGN are able to suppress star formation. We now consider increasing $\alpha_0$ by an order of magnitude relative to the usual value (*L050N256A1000B0* compared to *L050N256A100B0*). We already know that the density below which BHs can self-regulate depends strongly on the value of $\alpha_0$ (Fig. \[fig:rhoedd\]), and it follows that the value of $\alpha_0$ will have a strong effect on the physical conditions in the centres of haloes. In the case of $\alpha_0=1000$ the global BH density is similar to the $\alpha_0=100$ case (Fig. \[fig:params\_bhdens\]f), whereas the global SFR and galaxy SSFRs are much lower than in any other simulation (Fig. \[fig:params\_sfr\]f and Fig.
\[fig:params\_ssfr\]f), and the $m_{{\rm BH}}-m_{{\rm stellar}}$ relation is shifted significantly to the left as the BHs effectively shut down star formation in all haloes. However, the $m_{{\rm BH}}-\sigma$ relation is not as strongly affected by the very effective AGN feedback. This is likely due to the fact that the stellar velocity dispersion tracks the depth of the galaxy potential well, which contains a large contribution from dark matter, whereas the galaxy stellar mass is sensitive to the details of the feedback processes operating inside the galaxy. We see that changing the value of the free parameter $\alpha_0$ can have profound effects on the simulated galaxy population, even if the evolution in the global mass density of BHs is barely affected by the parameter change. We conclude this section by noting that predictions from AGN feedback models of this type are sensitive to the accretion model that is used. More generally, it is not clear that the Bondi-Hoyle rate is the correct accretion rate to use in the case of a BH accreting from a hot halo [@krum06]. We find that a different parametrization of the accretion rate leads to profound differences in the star formation histories and speed of BH growth and therefore caution the reader that the AGN accretion mechanism represents a significant source of uncertainty in all our results. Discussion and conclusions {#sec:discussion} ========================== We have presented and tested a method to incorporate SMBHs into cosmological, smoothed particle hydrodynamics simulations. The method, which is a substantially modified version of the one introduced by S05, self-consistently describes the mass growth of BHs and feedback from AGN. Here we consider growth through mergers with other BHs as well as through accretion of gas. The AGN feedback in our model is thermal and local to the BH. 
Although we also use the SPH code [gadget III]{}, our code differs from that of S05 in many ways, including the use of different models for star formation, feedback from SN, radiative cooling and stellar evolution. Particularly relevant for AGN feedback is the fact that, contrary to S05, we do not make use of a subgrid model to describe the different phases of the ISM. Following S05, we make use of subgrid BHs to allow BH masses to be small compared with the masses of the particles containing the BHs. We note, however, that while this approach allows one to significantly extend the range of BH masses in the simulation, the accretion radius will only be resolved if the BH mass exceeds the local Jeans mass. Unfortunately, this is generally not the case if the BH mass is small compared with the particle mass. The AGN model is specified by seven main parameters (Table \[tab:parlist\]). Two parameters describe the BH seed generation mechanism: the BH seed mass, $m_{\rm seed}$, and the halo mass into which seed BHs are placed, $m_{\rm halo,min}$. Two parameters describe the amount of energy that is coupled back to the ISM per unit accreted mass: $\epsilon_{\rm r}$, the radiative efficiency of a BH accretion disk, is the fraction of the rest mass energy accreted by the BH that is radiated by the AGN and $\epsilon_{\rm f}$ is the fraction of the radiated energy that is coupled thermally to the ISM. For a given BH accretion rate, the rate at which energy is injected into the ISM depends only on the product $\epsilon_{\rm r}\epsilon_{\rm f}$. However, we do need to specify the radiative efficiency $\epsilon_{\rm r}$ since the mass growth of the BH is proportional to $(1-\epsilon_{\rm r})$. 
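The split between $\epsilon_{\rm r}$ and $\epsilon_{\rm f}$ amounts to simple per-step bookkeeping, which can be sketched as follows (an illustrative function in SI units, not the simulation code):

```python
# Energy/mass bookkeeping implied by eps_r and eps_f.
C = 2.998e8  # speed of light, m/s

def accretion_step(mdot_acc, dt, eps_r=0.1, eps_f=0.15):
    """For gas accreted at mdot_acc (kg/s) over dt (s), return the BH
    mass growth (kg) and the feedback energy coupled to the ISM (J)."""
    dm_acc = mdot_acc * dt
    dm_bh = (1.0 - eps_r) * dm_acc           # growth is prop. to (1 - eps_r)
    e_fb = eps_f * eps_r * dm_acc * C**2     # rate depends only on eps_r*eps_f
    return dm_bh, e_fb
```

As stated above, the injected energy depends only on the product $\epsilon_{\rm r}\epsilon_{\rm f}$, while the mass growth requires $\epsilon_{\rm r}$ separately.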
A further two parameters control the numerical implementation of the injection of energy into the ISM by AGN: the number of neighbouring gas particles heated by each BH heating event, ${n_{{\rm heat}}}$, and the minimum amount by which their temperature is increased, $\Delta T_{{\rm min}}$. Because we let BHs store feedback energy until they have saved enough to heat ${n_{{\rm heat}}}$ particles by $\Delta T_{{\rm min}}$ degrees Kelvin, these last two parameters together determine the AGN duty cycle for a fixed accretion rate. Finally, we require one additional parameter that controls how BHs accrete gas, and we describe two different models for this process. The gas accretion rate is assumed to scale as the Bondi-Hoyle accretion rate, evaluated at the location of the BH and on the scales resolved by the simulation. We do not, however, allow the accretion rate to exceed the Eddington rate. The Bondi-Hoyle accretion rate predicted by the simulation will underestimate the true rate if the density is underestimated or if the temperature is overestimated. A lack of numerical resolution may result in an underestimate of the gas density, which motivated S05 to multiply the Bondi-Hoyle rate predicted by the simulation by a constant factor $\alpha_0=100$. We call models of this type constant-$\alpha$ models. We noted that cosmological simulations lack not only the resolution, but also the physics to model the cold ISM phase. For example, our simulations use a polytropic effective equation of state for densities at which the gas is expected to be multiphase. Hence, they will miss the cold component for which the Bondi-Hoyle accretion rate would be highest. This will lead us to strongly underestimate the Bondi-Hoyle rate in high density gas. Although cosmological simulations cannot yet resolve cold, interstellar gas, many do resolve the Jeans scales at densities low enough for the ambient ultraviolet radiation to suppress cooling below $10^4\,$K. 
Specifically, any simulation that resolves the Jeans scales in the gas surrounding a BH particle, will also resolve the Bondi-Hoyle radius if the BH mass exceeds the local Jeans mass. Hence, at sufficiently low densities the Bondi-Hoyle accretion rate is modeled correctly and multiplying it by $\alpha_0=100$ would result in a large overestimate of the accretion rate. We therefore introduced a second class of models in which the Bondi-Hoyle accretion rate is multiplied by a factor $(n_{\rm H}/n_{\rm H}^\ast)^\beta$ for densities $n_{\rm H} > n_{\rm H}^\ast$, where $n_{\rm H}^\ast$ is the density above which the gas is expected to be multiphase (we take $n_{\rm H}^\ast=0.1~{\rm cm}^{-3}$). We refer to this class of models as constant-$\beta$ models. Note that both constant-$\alpha$ and constant-$\beta$ models use a single free parameter. Because we have changed the density-dependence of the accretion rate, we cannot claim to be simulating Bondi-Hoyle accretion, even though the changes are motivated by the Bondi-Hoyle formula and even though we do retain the Bondi-Hoyle scaling with the BH mass. We argued, however, that this is also true for constant-$\alpha$ models because the use of values $\alpha_0 \gg 1$ implies that the densities and temperatures predicted by the simulations are greatly in error. The parameters $\alpha_0$ and $\beta$, used in the constant-$\alpha$ and constant-$\beta$ models respectively, control the ambient gas density at which the BH accretion rate becomes Eddington limited (Fig. \[fig:rhoedd\]). Because the maximum densities sampled by the simulation increase with halo mass (both because more massive haloes are resolved with more particles and because the central pressure increases with the depth of the potential), these parameters effectively set the halo mass above which BHs can grow efficiently. We set $\beta=2$, which we find results in efficient BH growth in haloes with stellar masses $\ga 10^{10.5}\,{\rm M}_\odot$ in these simulations. 
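The two boost factors can be summarized in a few lines (a sketch under the stated parameter values; constants in SI and function names are our own):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_P = 1.673e-27      # proton mass, kg
SIGMA_T = 6.652e-29  # Thomson cross-section, m^2
C = 2.998e8          # speed of light, m/s

def alpha_constant(n_h, alpha0=100.0):
    """Constant-alpha model: a density-independent fudge factor."""
    return alpha0

def alpha_beta(n_h, n_star=0.1, beta=2.0):
    """Constant-beta model: (n_H/n_H*)^beta above the multiphase
    threshold n_H* = 0.1 cm^-3, asymptoting to unity below it."""
    return (n_h / n_star) ** beta if n_h > n_star else 1.0

def accretion_rate(m_bh, rho, c_s, v, n_h, boost, eps_r=0.1):
    """Boosted Bondi-Hoyle rate, capped at the Eddington rate (SI)."""
    mdot_bondi = 4.0 * math.pi * G**2 * m_bh**2 * rho / (c_s**2 + v**2) ** 1.5
    mdot_edd = 4.0 * math.pi * G * m_bh * M_P / (eps_r * C * SIGMA_T)
    return min(boost(n_h) * mdot_bondi, mdot_edd)
```

Because the boost enters multiplicatively, the two classes of model differ only in the ambient density at which the rate saturates at the Eddington limit, which is what sets the halo mass above which BHs can grow efficiently.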
Using a constant-$\alpha$ prescription with $\alpha_0=100$ implies that, in the absence of AGN feedback, the accretion rate is essentially always Eddington limited. Because the accretion is efficient even at relatively low gas densities, AGN feedback is in that case important in all haloes that exceed $m_{\rm halo,min}$. For this class of models the halo mass above which AGN can suppress star formation is thus in effect a free parameter ($m_{\rm halo,min}$). For BHs with masses greater than $10^6\,{\rm M}_\odot$ self-regulation can only occur at densities orders of magnitude below the star formation threshold (Fig. \[fig:rhoedd\]). In this regime we resolve Bondi-Hoyle accretion, invalidating the assumption used to justify large values of $\alpha$ in the first place. Constant-$\alpha$ models therefore underestimate the gas density required for self-regulation and will thus overestimate the suppression of the star formation rate. Having chosen a prescription for gas accretion, we then derive values for the other model parameters. Because each BH stores its feedback energy until it suffices to heat $n_{\rm heat}$ neighbours by $\Delta T_{\rm min}$, we are faced with a numerically determined duty cycle (for a given accretion rate). In order for the BH to be able to regulate its growth, we require the time between heating events, $\Delta t_{\rm heat} \propto n_{\rm heat} \Delta T_{\rm min}$, to be as small as possible and certainly smaller than the Salpeter time if the accretion is Eddington limited. We use $\Delta T_{{\rm min}}=10^{8}\,$K, which ensures the temperature of the heated gas is high enough that the injected thermal energy is not just radiated away, and ${n_{{\rm heat}}}=1$, which minimizes $\Delta t_{\rm heat}$. 
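These choices can be checked with a back-of-the-envelope estimate. Assuming a fully ionized gas with mean molecular weight $\mu\approx0.6$ (our assumption) and dropping order-unity factors such as $(1-\epsilon_{\rm r})$, the smallest BH-to-gas-particle mass ratio for which the time between heating events stays below the Salpeter time is:

```python
K_B = 1.381e-23  # Boltzmann constant, J/K
M_H = 1.673e-27  # hydrogen mass, kg
C = 2.998e8      # speed of light, m/s

def min_bh_to_gas_ratio(n_heat=1, dT=1e8, mu=0.6, eps_f=0.15, eps_r=0.1):
    """Smallest m_BH/m_g for which dt_heat < t_Salpeter.

    Energy per heating event: E = n_heat * m_g * (3/2) k_B dT / (mu m_H).
    At Eddington, energy is injected at a rate eps_f * L_Edd and
    t_Salpeter ~ eps_r * m_BH * c^2 / L_Edd, so
    dt_heat / t_Salpeter = E / (eps_f * eps_r * m_BH * c^2).
    """
    u = n_heat * 1.5 * K_B * dT / (mu * M_H)  # J per kg of heated gas
    return u / (eps_f * eps_r * C**2)

print(f"m_BH/m_g > {min_bh_to_gas_ratio():.1e}")
```

For the fiducial parameters this evaluates to of order $10^{-3}$, consistent with the requirement, quoted earlier, that the subgrid BH mass exceed roughly 0.1 per cent of the gas particle mass.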
Because $\Delta t_{\rm heat}$ decreases with the ratio of the mass of the BH to that of the heated gas particle, the requirement that $\Delta t_{\rm heat}$ is smaller than the Salpeter time for Eddington-limited accretion implies that the (subgrid) BH mass must exceed 0.1 per cent of the gas particle mass (Fig. \[fig:bhg\]). Hence, we set $m_{\rm seed}=10^{-3}\,{m_{{\rm g}}}$. We set $m_{{\rm halo,min}}=100m_{{\rm DM}}$ in order to ensure that seed BHs are placed only into well-defined haloes. These parameter values will obviously need to be increased if they result in seed and/or minimum halo masses that are lower than expected physically, as may be the case for simulations that use a much higher resolution than is used here. We assume the standard value $\epsilon_{\rm r}=0.1$ for the radiative efficiency and tune $\epsilon_{\rm f}$ to match the redshift zero ${m_{{\rm BH}}}-m_{{\rm halo}}$ relation and cosmic BH density. A value of ${\epsilon_{{\rm f}}}=0.15$ was found to provide a good match to the observations. Having specified the AGN model, we then analysed the results from a large suite of cosmological simulations chosen to investigate the sensitivity of the predictions to the model parameters. For this purpose we compared the predictions for the cosmic SF history, the cosmic BH density, the redshift zero BH scaling relations (both the $M_{\rm BH}-M_\ast$ and the $M_{\rm BH}-\sigma$ relations), and the redshift zero galaxy specific star formation rates (SSFRs). We demonstrated that the fiducial model provides good agreement with both the observed mass density in BHs (Fig. \[fig:params\_bhdens\]) and the BH scaling relations (Fig. \[fig:params\_mbhmhalo\]), and that the inclusion of AGN feedback in the simulations effectively suppresses star formation in galaxies with stellar masses greater than $10^{10.5}\,{\rm M}_\odot$ (Fig. \[fig:params\_ssfr\]).
We will discuss the comparison between the simulated global SFR density and observations elsewhere, but for now we note that the SN feedback parameters of our models were tuned such that a simulation without AGN feedback has a peak SFR density that is in good agreement with observations. As such, adding an extra source of feedback energy inevitably results in an underestimate of the SFR density. In order to achieve a good match with observations the properties of the SN model must be tuned in conjunction with those of the AGN model. However, the focus of our study was not to match observations, but to explore the dependence of the results on the parameters of the model. Our main conclusions are summarised below: - Regardless of whether BH seeds are initially placed above or below the BH scaling relations, they grow onto the same relations. - Because the global BH density is dominated by massive BHs and because AGN feedback is only important in high-mass haloes, uncertainties in the seed model employed do not lead to significant changes to the global properties of our simulations. Changing the initial seed mass by two orders of magnitude changes the global star formation rate by only a factor of 2.5. The assumed seed generation model can, however, affect galaxy properties at around the galaxy mass where AGN first begin to become energetically important. At higher masses galaxy properties are largely insensitive to the initial distribution of BH seeds. - As discussed more comprehensively in @boot09, the normalization of both the global BH density and the BH scaling relations is nearly exactly inversely proportional to the AGN feedback efficiency, $\epsilon_{\rm r}\epsilon_{\rm f}$. Most strikingly, changing the efficiency by a factor 16 does not give rise to any discernible changes in the global SF history. These results imply that the total amount of thermal energy injected by AGN is conserved when the feedback efficiency is changed. 
These results can be explained if BHs grow until they have generated enough energy to regulate the accretion rate and if the required amount of energy depends on the depth of the potential well on scales that are larger than the radius on which the BH dominates. - Changing the way in which the thermal energy from the AGN is distributed in the gas surrounding the BH has little effect on our results, as long as the gas is heated to a temperature that is high enough for its cooling time to become long. Hence, thermal feedback can be efficient in cosmological simulations that do not resolve the multiphase ISM. - Cosmological simulations currently underestimate the Bondi-Hoyle accretion rate in dense gas because they lack both the resolution and the physics to model the dense, cold phase of the ISM. It is therefore necessary to increase the predicted accretion rate by a fudge factor, either by explicitly multiplying the accretion rate by a numerical correction factor or by employing a subgrid model for the unresolved gas physics to artificially boost accretion rates in star-forming gas. Using a multiplicative factor that asymptotes to unity in the regime where the simulation is able to model the relevant physics (our constant-$\beta$ model) gives different results from using a high factor throughout (the constant-$\alpha$ model) as has been used in most previous work. In general, the density above which BHs are able to accrete efficiently depends upon the accretion model used (Fig. \[fig:bhga\]). Because higher mass haloes contain higher density gas, the accretion model determines the halo mass above which AGN feedback becomes effective. Until the simulations are able to resolve Bondi-Hoyle accretion in a multiphase ISM, the predictions of the models will remain subject to significant uncertainty. - The ${m_{{\rm BH}}}-\sigma$ relation is more robust than the ${m_{{\rm BH}}}-M_*$ relation to changes in the model parameters (Fig. 
\[fig:params\_mbhmhalo\]), and so provides a test of the numerical model that is less affected by uncertainties in numerical parameters than the Magorrian relation. This is likely due to the fact that the stellar velocity dispersion tracks the depth of the galaxy potential well, which contains a large contribution from dark matter, whereas the galaxy stellar mass is sensitive to the details of the feedback processes operating inside the galaxy. In summary, we have presented and tested a new model for the self-consistent growth of BHs and feedback from AGN in cosmological simulations. In future work we will discuss the interaction between AGN feedback and other physical processes, and show that the results obtained from an AGN model depend also on other processes such as SN feedback and stellar mass loss. Acknowledgments {#acknowledgments .unnumbered} =============== We are very grateful to Volker Springel for generously providing us with a copy of his code. We would also like to thank him as well as Richard Bower, Ian McCarthy and the members of the OWLS collaboration for useful discussions. The simulations presented here were run on the Cosmology Machine at the Institute for Computational Cosmology in Durham as part of the Virgo Consortium research programme, on Stella, the LOFAR BlueGene/L system in Groningen, and on Huygens, the Dutch national supercomputer. This work was supported by Marie Curie Excellence Grant MEXT-CT-2004-014112 and by an NWO Vidi grant. Aller M. C., Richstone D., 2007, ApJ, 665, 120 Arav N., Korista K. T., de Kool M., 2002, ApJ, 566, 699 Arav N., Moe M., Costantini E., Korista K. T., Benn C., Ellison S., 2008, ApJ, 681, 954 Barnes J. E., Hernquist L. E., 1991, ApJ, 370, L65 Barnes J. E., Hernquist L. E., 1996, ApJ, 471, 115 Baugh C. M., 2006, Reports of Progress in Physics, 69, 3101 Benson A. J., Bower R. G., Frenk C. S., Lacey C. G., Baugh C. M., Cole S., 2003, ApJ, 599, 38 Begelman M., Rees M.
J., 1978, MNRAS, 185, 847 Begelman M., Volonteri M., Rees M. J., 2006, MNRAS, 370, 289 Bhattacharya S., di Matteo T., Kosowsky A., MNRAS, 389, 34 Bondi H., Hoyle F., 1944, MNRAS, 104, 273 Booth C. M., Schaye J., astro-ph/????.???? Bower R. G., Benson A. J., Malbon R., Helly J. C., Frenk C. S., Baugh C. M., Cole S., Lacey C. G., 2006, MNRAS, 370, 645 Boyle B. J., Terlevich R. J., 1998, MNRAS, 293, L49 Bromm V., Loeb A., 2003, ApJ, 596, 34 Bruzual G., Charlot S., 2003, MNRAS, 344, 1000 Buff J., McCray R., 1974, ApJ, 189, 147 Callegari S., Mayer L., Kazantzidis S., Colpi M., Governato F., Quinn T., Wadsley J., 2008, astro-ph/0811.0615 Cao X., Li F., 2008, MNRAS, 390, 561 Cattaneo A., 2001, MNRAS, 324, 128 Chelouche D., 2008, astro-ph/0812.3621 Ciotti L., Ostriker J. P., 2001, ApJ, 551, 131 Colberg J. M., di Matteo T., 2008, MNRAS, 387, 1163 Cowie L. L., Ostriker J. P., Stark A. A., 1978, ApJ, 226, 1041 Croft R. A. C., Di Matteo T., Springel V., Hernquist L., 2008, astro-ph/0803.4003 Croton D. J., 2006, MNRAS, 365, 11 Dekel A., Silk J., 1986, ApJ, 303, 39 Dalla Vecchia C., Schaye J., 2008, MNRAS, 387, 1431 Davis M., Efstathiou G., Frenk C. S., White S. D. M., 1985, ApJ, 292, 371 Dijkstra M., Haiman Z., Mesinger A., Wyithe S., 2008, astro-ph/0810.0014 Di Matteo T., Springel V., Hernquist L., 2005, Nature, 433, 604 Di Matteo T., Colberg J., Springel V., Hernquist L., 2008, ApJ, 676, 33 Diemand J., Kuhlen M., Madau P., 2007, ApJ, 667, 859 Dolag K., Borgani S., Murante G., Springel V., 2008, astro-ph/0808.3401 Edgar R., 2004, NewAR, 48, 843 Efstathiou G., Rees M. J., 1988, MNRAS, 230, 5 Elvis M., Risaliti G., Zamorani G., 2002, ApJL, 565, L75 Fabian A. C., 1999, MNRAS, 308, L39 Fabbiano G., 2006, ARA&A, 44, 323 Ferland G. J., Korista K. T., Verner D. A., Ferguson J. W., Kingdon J. B., Verner E.
M., 1998, PASP, 110, 761 Feoli A., Mele D., 2005, International Journal of Modern Physics D, 14, 1861 Ferrarese L., Merritt D., 2000, ApJL, 539, L9 Ferrarese L., 2002, ApJ, 578, 90 Gebhardt K. et al., 2000, ApJ, 539, L13 Gingold, R. A., & Monaghan, J. J. 1977, MNRAS, 181, 375 Graham A. W., Driver S. P., 2007, ApJ, 666, 77 Graham A. W., Driver S. P., 2007, MNRAS, 380, L15 Graham A. W., 2008, ApJ, 680, 143 Granato G. L., De Zotti G., Silva L., Bressan A., Danese L., 2004, ApJ, 600, 580 Greene J., Ho L. C., Barth A. J., 2008, astro-ph/08101972 Haardt F., Madau P., 2001, in Neumann D. M., Tran J. T. V., eds, Clusters of Galaxies and the High Redshift Universe Observed in X-rays Modelling the UV/X-ray cosmic background with CUBA Haehnelt M. G., Rees M. J., 1993, MNRAS, 263, 168 Hopkins P. F., Hernquist L., Cox T. J., Di Matteo T., Robertson B., Springel V., 2006, ApJS, 163, 1 Hopkins P. F., Hernquist L., Cox T. J., Robertson B., 2007, ApJ, 669, 67 Hopkins P. F., Hernquist L., Cox T. J., Keres D., Stijn W., astro-ph/0807.2868 Hoyle F., Lyttleton R. A., 1939, Proc. Cambridge Philos. Soc., 35, 405 Haring N., Rix H. W., 2004, ApJL, 604, 89 Islam R. R. Tayloe J. E., Silk J., 2003, MNRAS, 340, 647 Johansson P. H., Naab T., Burkert A., 2009, ApJ, 690, 802 Kapferer W., Knapp A., Schindler S., Kimeswenger S., van Kampen E., 2005, AAP, 438, 87 Katz N., Weinberg D. H., Hernquist L., 1996, ApJS, 105, 19 Kauffmann G., Haehnelt M., 2000, MNRAS, 311, 576 Kauffmann G. et al., 2003, MNRAS, 346, 1055 Kauffmann G., Heckman T. , astro-ph/0812.1224 Kennicutt R. C., 1998, ApJ, 498, 541 Khalatyan A., Cattaneo A., Schramm M., Gottloeber S., 2008, astro-ph Komatsu E. et al., 2008, astro-ph/0803.0547 Kormendy J., Richstone D., 1995, ARA&A, 33, 581 Krumholz M. R., McKee C. F., Klein R. I., ApJ, 638, 369 Lagos C. D. P., Cora S. A., Padilla N. D., 2008, MNRAS, 388, 587 Laor A., Fiore F., Elvis M., Wilkes B. J., 1997, ApJ, 477, 93 Laor A., 2001, ApJ, 553, 677 Loeb A., Rasio F. 
A., 1994, ApJ, 432, 52 Lucy, L. B., 1977, AJ, 82, 1013 Lynden-Bell D., 1969, Nature, 223, 690 Madau P., Ferguson H. C., Dickinson M. E., Giavalisco M., Steidel C. C., Fruchter A., 1996, MNRAS, 283, 1388 Madau P., Rees M. J., 2001, ApJ, 551, L27 Magorrian J. et al., 1998, AJ, 115, 2285 Makino J., Funato Y., 2004, ApJ, 602, 93 Marconi A., Hunt L. K., 2003, ApJL, 589, L21 Marconi A. et al., 2004, MNRAS, 351, 169 Merritt D., Ferrarrese L., 2001, ApJ, 547, 140 Martinez-Sansigre A., Taylor A. M., 2008, astro-ph/0810.3920 McLure R. J., Dunlop J. s., 2002, MNRAS, 331, 795 Micic M., Holley-Bockelmann K., Sigurdsson S., Abel T., 2007, MNRAS, 380, 1533 Mihos J. C., Hernquist L., 1994, ApJ, 425, L13 Mihos J. C., Hernquist L., 1996, ApJ, 464, 641 Monaghan J. J.,1992, ARA&A, 30, 543 Murray N., Quataert E., Thompson T. A., 2005, ApJ, 618, 569 Okamoto T., Nemmen R. S., Bower R. G.,2008, MNRAS, 385, 161 Pelupessy F. I., Di Matteo T., Ciardi B., 2007, ApJ, 665, 107 Pounds K. A., King A. R., Page K. L., O’Brien P. T., 2003, MNRAS, 346, 1025 Reed D. S., Bower R., Frenk C. S., Jenkins A., 2006, MNRAS, 374, 2 Robertson B., Hernquist L., Cox T. J., Di Matteo T., Hopkins P. F., Martini P., Springel V., 2006, ApJ, 641, 90 Salpeter E., 1964, ApJ, 140, 796 Schneider R., Ferrara A., Natarajan P., Omukai K., 2002, ApJ, 571, 30 Schaye J., 2004, ApJ, 609, 667 Schaye J., Dalla Vechia C., 2008, MNRAS, 383, 1210 Shakura N. I., Syunyaev R. A., 1973, AAP, 24, 337 Shankar F., Salucci P., Granato G. L., De Zotti G., Danese L.,2004, MNRAS, 354, 1020 Silk J., Rees M. J., 1998, AAP, 3 31, L1 Sijacki D., Springel V., di Matteo T., Hernquist L., 2007, MNRAS, 380, 877 Spergel D. N., 2007, ApJS, 170, 377 Springel V., Yoshida N., White S. D. M., 2001,NewA, 6, 79 Springel V., White S. D. 
M., Tormen G., Kauffmann G., 2001,MNRAS, 328, 726 Springel V., Hernquist L., 2003, MNRAS, 339, 289 (SH03) Springel V., Di Matteo T., Hernquist L., 2005, MNRAS, 361, 776 (S05) Springel V., 2005, MNRAS, 364, 1105 Thorne K., 1974, ApJ, 191, 507 Toomre A., Toomre J., 1972, ApJ, 178, 623 Tremaine S. et al., 2002, ApJ, 574, 740 Wang J. M., Chen Y. M., Ho L. C. McLure R. J., 2006, ApJ, 642, L111 Wang J. M., Chen Y. M., Hu C., 2006, ApJ, 637, L85 Wiersma R. P. C., Schaye J., Smith B. D., 2009, MNRAS, 393, 99 Wiersma R. P. C., Schaye J., Theuns T., Dalla Vecchia C., Tornatore L., MNRAS, submitted, arXiv:0902.1535 Volonteri M., Natarajan P., astro-ph/0903.2262 Yu Q., Tremaine S., 2002, MNRAS, 335, 965 Yu Q., Lu Y., 2008, astro-ph/0808.3777 \[lastpage\] [^1]: E-mail: [email protected] (CMB) [^2]: Although see [@okam08] for a different approach. [^3]: Our value of $\sigma_8$ is 1.6$\sigma$ lower than allowed by the WMAP 5-year data. [^4]: This normalization factor is calculated from the asymptotic ratio of the numbers of ionizing photons predicted from models of stellar populations with a constant SFR [@bruz03]. [^5]: We used their equation (3) rather than (4) and [cloudy]{} version 05.07 rather than 07.02. [^6]: Note that S05 neglected the $(1-\epsilon_{\rm r})$ term and used $\dot{m}_{{\rm BH}}=\dot{m}_{{\rm accr}}$. [^7]: [@spri05; @dima05; @sija07; @dima08; @bhat08; @colb08] all do not discuss or mention the value of $\alpha$ that they used, but we have been informed by V. Springel that they assumed $\alpha=100$. @khal08 state explicitly that in their models $\alpha=300$ and justify this by noting that when the density of the ISM is smoothed on the scale of the computational resolution, the recovered densities are much lower than would be expected on the scale of the Bondi radius. Using similar reasoning, [@joha09] reach a similar conclusion and set $\alpha=100$. 
[^8]: Note that the choice of sub-grid ISM model is a significant source of additional uncertainty in the accretion rates. For example, the use of the SH03 sub-grid multi-phase ISM can give rise to differences of almost an order of magnitude in the accretion rates of seed mass black holes due to the use of different effective equations of state. [^9]: In some BH seed generation scenarios, for example the direct collapse of matter in haloes we expect BH seeds to reside initially above the observed scaling relations. We will show in Sec. \[sec:seedmodel\] that in our models BHs grow onto the BH scaling relations regardless of whether they are initially placed above or below the relations. [^10]: Because the accretion rate also depends on the assumed effective EOS, this may not be true for all constant-$\alpha$ models used in the literature. In the models of S05, which employ an EOS that is initially stiffer than our $\gamma_{\rm eff}=4/3$, the BH growth time has a local minimum at $n_{{\rm H}}=n_{{\rm H}}^*$ which suppresses AGN growth relative to our constant-$\alpha$ model (Springel, private communication).
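The footnotes above turn on the $\alpha$-boosted Bondi-Hoyle accretion rate and the Eddington cap. A minimal sketch of that prescription follows; the numerical values and function names are illustrative, not the calibrated parameters of the simulations:

```python
import math

G = 6.674e-8  # gravitational constant, cgs units

def bondi_rate(m_bh, rho, cs, v, alpha=100.0):
    """alpha-boosted Bondi-Hoyle-Lyttleton accretion rate (g/s), cgs units.

    The dimensionless factor alpha compensates for the fact that simulations
    underestimate the gas density on the (unresolved) scale of the Bondi
    radius; alpha = 100 is one of the values quoted in the footnotes.
    """
    return alpha * 4.0 * math.pi * G**2 * m_bh**2 * rho / (cs**2 + v**2)**1.5

def eddington_rate(m_bh, eps_r=0.1):
    """Eddington accretion rate (g/s) for radiative efficiency eps_r."""
    sigma_t = 6.652e-25            # Thomson cross-section, cm^2
    m_p, c = 1.673e-24, 2.998e10   # proton mass, speed of light
    return 4.0 * math.pi * G * m_bh * m_p / (eps_r * sigma_t * c)

# A seed BH of 1e5 Msun sitting in cold, dense gas: the accretion rate is
# the boosted Bondi estimate capped at the Eddington limit.
m_bh = 1e5 * 1.989e33
mdot = min(bondi_rate(m_bh, rho=1.673e-22, cs=1e6, v=0.0), eddington_rate(m_bh))
```

Because the rate scales as $m_{\rm BH}^2$ at fixed gas properties, growth is slow for small seeds and runs away once the BH is massive, which is why the choice of $\alpha$ matters most in the early, unresolved phase.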
--- abstract: 'Static and dynamic magnetic properties of a ferrimagnetic \[Fe(35Å)/Gd(50Å)\]$_{12}$ superlattice were investigated in a wide $4-300$ K temperature range using magneto-optical Kerr effect (MOKE) and ferromagnetic resonance (FMR) techniques. The multilayer structure was sputtered on a transparent glass substrate which made it possible to perform MOKE measurements on both Fe and Gd terminated sides of the superlattice. These experiments allowed us to detect a transition between field-aligned and canted magnetic states on both sides of the film and to distinguish between the bulk and surface twisted phases of the superlattice. As a result, the experimental $H-T$ magnetic phase diagram of the system was obtained. FMR studies at frequencies $7-36$ GHz demonstrated a complex evolution of absorption spectra as temperature decreased from room down to 4 K. Two spectral branches were detected in the sample. Theoretical simulations show that the observed spectral branches correspond to different types of inhomogeneous resonance modes in the multilayer with non-uniform magnetization precession inside Gd layers.' address: - 'P.L. Kapitza Institute for Physical Problems RAS, 119334 Moscow, Russia' - 'Institute of Solid State Physics RAS, 142432 Chernogolovka, Moscow region, Russia' - 'M.N. Mikheev Institute of Metal Physics UB RAS, 620137 Ekaterinburg, Russia' - 'Ural Federal University, 620002 Ekaterinburg, Russia' author: - 'A.B. Drovosekov' - 'A.O. Savitsky' - 'D.I. Kholin' - 'N.M. Kreines' - 'V.V. Proglyado' - 'M.V. Ryabukhina' - 'E.A. Kravtsov' - 'V.V. 
Ustinov' title: | Twisted magnetization states and inhomogeneous resonance modes\ in a Fe/Gd ferrimagnetic multilayer --- Fe/Gd multilayer ,ferrimagnetics ,magnetic properties ,ferromagnetic resonance 68.65.Ac ,75.70.Cn ,75.50.Gg ,76.50.+g Introduction ============ Layered structures based on transition (TM) and rare-earth (RE) ferromagnetic (FM) metals, like Fe/Gd, are model ferrimagnetic systems demonstrating a rich magnetic phase diagram with complex types of magnetic ordering [@Cam2015; @Cam1993]. Due to the antiferromagnetic (AFM) coupling at Fe-Gd interfaces and the essentially different Curie temperatures of Fe and Gd (for bulk materials, $T_\mathrm{C}^\mathrm{Fe}=1043$ K and $T_\mathrm{C}^\mathrm{Gd}=293$ K), a so-called “compensation point” $T_\mathrm{comp}$ can exist in the system. At $T=T_\mathrm{comp}$ the magnetic moments of the Fe and Gd layers are equal to each other and the total magnetization of the system vanishes. Below $T_\mathrm{comp}$, the magnetic moment of the Gd subsystem exceeds that of the Fe subsystem, while above $T_\mathrm{comp}$ the opposite situation takes place. As a result, in weak fields applied in the film plane, a collinear magnetic phase is realized with the Fe magnetization vector oriented parallel (at $T > T_\mathrm{comp}$) or antiparallel (at $T < T_\mathrm{comp}$) to the field direction. As the magnetic field increases and exceeds some critical value, such field-aligned phases become unstable and a transition to a canted magnetic state occurs. Moreover, due to the relatively weak exchange stiffness of Gd, the external magnetic field induces an essentially non-uniform distribution of magnetization inside the Gd layers (twisted state). The above-discussed complex behaviour of the Fe/Gd system was described theoretically by Camley *et al.*, using the mean-field approach \[3–5\], and observed experimentally by different techniques in a number of works \[6–9\].
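The compensation point can be illustrated with a toy estimate: find the temperature at which the Fe and Gd layer moments balance. The power-law $M(T)$ and every parameter below are hypothetical placeholders for illustration, not the mean-field model used later in the paper:

```python
def layer_moment(M0, Tc, beta, t, T):
    """Moment per unit area of one layer: a crude power-law stand-in
    M(T) = M0*(1 - T/Tc)**beta for the magnetization, times thickness t."""
    return M0 * (1.0 - T / Tc) ** beta * t if T < Tc else 0.0

# Hypothetical parameters: magnetizations in emu/cm^3, thicknesses in cm.
FE = dict(M0=1700.0, Tc=1043.0, beta=0.37, t=35e-8)   # 35 A Fe layer
GD = dict(M0=2000.0, Tc=200.0,  beta=0.37, t=50e-8)   # 50 A Gd layer, reduced Tc

def compensation_temperature(T_lo=1.0, T_hi=199.0, tol=1e-8):
    """Bisect for the temperature where the Gd moment stops exceeding Fe's."""
    diff = lambda T: layer_moment(T=T, **GD) - layer_moment(T=T, **FE)
    while T_hi - T_lo > tol:          # Gd dominates at T_lo, Fe at T_hi
        T_mid = 0.5 * (T_lo + T_hi)
        T_lo, T_hi = (T_mid, T_hi) if diff(T_mid) > 0 else (T_lo, T_mid)
    return 0.5 * (T_lo + T_hi)
```

With these placeholder numbers the crossing lies well below $T_\mathrm{C}^\mathrm{Gd}$; the actual sample's $T_\mathrm{comp}\approx90$ K follows only from the full mean-field treatment.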
At the same time, it was predicted theoretically that an even more complicated situation arises when a finite Fe/Gd superlattice is considered. In this case, two types of twisted magnetic states are possible in the system: surface twist and bulk twist [@LePage1990]. Starting from the field-aligned state in a weak external field, an increase of the field first distorts the collinear state near the superlattice surface (at $H=H_\mathrm{s}$). At higher fields ($H>H_\mathrm{b}$) the bulk twisted state is realized. Fig.\[magn\_profiles\] schematically represents the corresponding magnetization distributions calculated for different field values at $T>T_\mathrm{comp}$ [@Drov2017]. It is important to note that the surface twist phase arises at the outermost layer of the superlattice when its magnetization is directed opposite to the applied field. Thus, the surface twist phase arises on the Gd-terminated side of the superlattice at $T>T_\mathrm{comp}$ and on the Fe-terminated side at $T<T_\mathrm{comp}$. ![image](Fig1){width="90.00000%"} Direct experimental observation of such surface twisted states is difficult since it requires simultaneously probing the bulk and surface magnetic states of the superlattice. A few works were devoted to this problem. Haskel *et al.* [@Hask2003] demonstrated surface twist effects in an Fe-terminated \[Fe/Gd\]$_{15}$/Fe multilayer, using grazing-incidence x-ray magnetic circular dichroism. Kravtsov *et al.* [@Kra2009] used simultaneous refinement of polarized neutron and resonant x-ray magnetic reflectometry data to directly obtain magnetization depth profiles in a \[Fe/Gd\]$_5$ multilayer. In both cases, the complexity of the methods used makes it difficult to study in detail the stability regions of the bulk and surface twisted phases as a function of temperature and magnetic field.
The magneto-optical Kerr effect (MOKE) is a relatively simple and sensitive method of obtaining direct information about the surface magnetic state of a multilayer. The penetration depth of visible light into a metal is about $\sim100$ Å, which is comparable with the typical thickness of the individual layers in the superlattice. Thus, the MOKE signal provides information about the magnetization in several upper layers of the superlattice. Hahn *et al.* [@Hahn1995] used MOKE to study surface magnetic states in a \[Fe/Gd\]$_{15}$ structure. Since their samples were sputtered on non-transparent Si substrates, the authors compared MOKE signals from superlattices terminated by Fe and Gd layers. The difference between the MOKE curves of the two samples was explained by a surface magnetic twist arising when the surface layer magnetization is oriented opposite to the field direction. In our previous work [@Drov2017] we studied static magnetization curves of a \[Fe/Gd\]$_{12}$ multilayer. Comparing the experimental data with mean-field calculations, we found indications of field-dependent phase transitions between field-aligned, surface-twisted and bulk-twisted states. However, static magnetometry provides only the net magnetic moment of the entire multilayer, and the surface effects are manifested too weakly. In this work we use MOKE to obtain more precise knowledge of the surface magnetic states in the superlattice. The investigated \[Fe/Gd\]$_{12}$ multilayer is grown on a transparent glass substrate, which allows direct probing of the magnetic states on both sides of the structure. As a result, we determine the stability regions of the bulk and surface twisted states in the superlattice as functions of temperature and magnetic field. The experimental phase diagram is compared with calculations based on the mean-field model [@Drov2017].
Studies of magnetization dynamics in RE/TM systems attract attention due to the recent idea of using such materials to realize ultrafast magnetic switching, promising for potential applications in magnetic storage devices \[14–16\]. A number of works were devoted to investigations of ferromagnetic resonance (FMR) in TM/Gd multilayers \[17–26\]. Room-temperature studies \[17–20\] demonstrated the importance of spin pumping into the RE metal for explaining the large FMR line width in TM/RE systems. Several groups reported line broadening and a shift of the absorption peak to lower fields on cooling the system below room temperature. Such behaviour was observed for Co/Gd [@Patrin2006; @Demirtas2010], Py/Gd [@Khod2017], Fe/Gd [@Drov2017] and Fe/Cr/Gd [@Drov2015; @Drov2018] multilayers. In most of the cited works only one “high-temperature” resonance peak was detected. This peak became much weaker or even disappeared as the temperature decreased below $T_\mathrm{C}^\mathrm{Gd}$, an effect that was explained by non-local damping mechanisms in the system [@Drov2017; @Khod2017]. In a short letter [@Svalov2001], Svalov *et al.* reported the experimental observation of a second absorption peak below $T_\mathrm{C}^\mathrm{Gd}$ in a Co/Gd multilayer. Similar behaviour was observed in our previous works for the Fe/Gd system [@Drov2017]. Theoretical simulations showed that the observed absorption peaks corresponded to different types of inhomogeneous resonance modes in the multilayer. In this work we perform a more detailed investigation of the temperature evolution of the resonance spectra in the Fe/Gd superlattice. In contrast to the work [@Drov2017], here we pay special attention to the transformation of the spectra in the vicinity of $T_\mathrm{C}^\mathrm{Gd}$. In particular, we note that the behaviour of the high-temperature resonance peak is strongly dependent on the pumping frequency.
To explain this result and identify the observed resonance modes, the experimental data are compared with model calculations based on Landau-Lifshitz equations describing the magnetization dynamics in the system. Sample and experimental details =============================== The \[Fe(35Å)/Gd(50Å)\]$_{12}$ superlattice was prepared on a glass substrate using a high-vacuum magnetron sputtering technique. Two chromium layers with thicknesses of 50 Å and 30 Å served as buffer and cap layers, respectively. X-ray diffraction studies performed in [@Drov2017] demonstrated a well-defined layered structure of the sample with an interfacial root-mean-square roughness of about 1–2 atomic monolayers. Magnetic properties of the multilayer were studied using MOKE and FMR techniques in the $4-300$ K temperature range in magnetic fields up to 10 kOe applied in the film plane. Longitudinal MOKE studies of the surface magnetization were performed on both sides of the film, using a 635 nm semiconductor laser. In our experimental geometry the MOKE signal was proportional to the component of magnetization parallel to the applied field. FMR measurements were carried out using a conventional field-sweep technique on a laboratory-developed transmission-type spectrometer at different frequencies in the range $7-36$ GHz. ![MOKE curves measured at 155 K from two sides of the film: 1) from the glass substrate side (Fe-terminated side of the superlattice) and 2) from the film surface (Gd-terminated side of the superlattice). Comparing the curves, different types of magnetic ordering can be identified.[]{data-label="Kerr"}](Fig2){width="\columnwidth"} ![MOKE curves obtained at different temperatures on Gd-terminated (a) and Fe-terminated (b) sides of the superlattice.
Black arrows show transitions from the field-aligned to the canted state of the surface magnetization.[]{data-label="Kerr-T"}](Fig3){width="\columnwidth"} Results and discussion ====================== Magneto-optical Kerr effect --------------------------- Static magnetometry of the investigated sample performed in [@Drov2017] showed that the Gd layers had a reduced Curie temperature, $T_\mathrm{C}^\mathrm{Gd}\approx200$ K, compared with the bulk value of 293 K. The system demonstrated a compensation point at $T_\mathrm{comp}\approx90$ K. Test MOKE experiments on Fe and Gd thin films showed that both Fe and Gd layers should contribute to the total Kerr effect of the combined Fe/Gd layered system. Under our experimental conditions, the MOKE signal from Gd is comparable with that from Fe (about two times smaller at low temperature) but has the opposite sign. Thus, we expect different signs of MOKE for Gd- and Fe-aligned states in the investigated multilayer. Fig. \[Kerr\] shows the experimental MOKE hysteresis loops measured at $T=155$ K from two sides of the superlattice. For both curves, a flat part in the region of weak fields means that the magnetic moment of the outermost layer remains collinear to the external field. The positive sign of the MOKE signal at $H>0$ indicates the Fe-aligned state. At some higher field the MOKE signal decreases, indicating that the magnetization of the outermost layer begins to rotate. Note that on the Gd-terminated side this rotation starts at a weaker field ($H=H_\mathrm{s}$) than on the Fe-terminated side ($H=H_\mathrm{b}$). Thus, we can conclude that in magnetic fields $H_\mathrm{s}<H<H_\mathrm{b}$ the surface twist state is realized on the Gd-terminated side of the superlattice. In higher fields $H>H_\mathrm{b}$, a transformation to the bulk twisted phase occurs. A similar analysis of the MOKE curves was performed for different temperatures in the range 4–300 K (see Fig. \[Kerr-T\]) and the resulting phase diagram of the system was obtained (Fig. \[Phases\]).
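The critical fields entering the phase diagram are read off each MOKE curve as the field where the signal departs from its weak-field plateau. A minimal sketch of such an estimate (the function name, the 2% threshold, and the synthetic curve are our illustrative choices, not the authors' actual procedure):

```python
import numpy as np

def critical_field(H, moke, rel_tol=0.02):
    """Return the smallest field at which the MOKE signal deviates from its
    weak-field plateau by more than rel_tol of the full signal range.

    H    : 1D array of increasing field values (one branch of the loop)
    moke : 1D array of the corresponding MOKE signal
    """
    H, moke = np.asarray(H), np.asarray(moke)
    plateau = moke[:max(3, len(moke) // 20)].mean()   # average the first few points
    span = moke.max() - moke.min()
    off = np.abs(moke - plateau) > rel_tol * span
    return H[np.argmax(off)] if off.any() else None   # argmax gives first True

# Synthetic curve: flat Fe-aligned plateau up to 2 kOe, then rotation sets in.
H = np.linspace(0.0, 6.0, 601)
signal = np.where(H < 2.0, 1.0, 1.0 - 0.4 * (H - 2.0))
```

Applying `critical_field` to the Gd- and Fe-terminated curves at each temperature would yield estimates of $H_\mathrm{s}$ and $H_\mathrm{b}$, respectively; hysteresis at low temperature smears this threshold, as noted in the text.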
At $T>T_\mathrm{C}^\mathrm{Gd}$ we observe simple rectangular hysteresis loops without any signs of possible phase transitions. At lower temperatures the shape of the MOKE curves changes. The compensation point $T_\mathrm{comp}\approx90$ K can be clearly detected as the temperature where an inversion of the hysteresis loop occurs (Fig. \[Kerr-T\]), i.e. opposite orientations of the Fe magnetization are realized in weak fields above and below $T_\mathrm{comp}$. It is also clearly seen that at $T>T_\mathrm{comp}$ the rotation of magnetization starts in weaker fields on the Gd-terminated side of the superlattice. On the contrary, at $T<T_\mathrm{comp}$ this rotation begins in weaker fields on the Fe-terminated side of the multilayer. Unfortunately, in the low-temperature region the increasing hysteresis smears the phase transitions and prevents accurate determination of the critical fields. As a result, the experimental error increases. Nevertheless, the observed behaviour is in agreement with the theoretical prediction that the surface twist phase arises on the side of the superlattice where the magnetization of the outermost layer is directed opposite to the applied field. In the phase diagram of Fig. \[Phases\], the experimental stability regions of the different phases are compared with the result of mean-field calculations (see [@Drov2017] for details). We note a good agreement between the experiment and the model. ![Resulting $H-T$ phase diagram of the investigated Fe/Gd superlattice. Points are obtained from MOKE data on two sides of the multilayer. Lines are calculations within the mean-field approach [@Drov2017]. The dashed line corresponds to the situation when the Gd magnetization vanishes in the middle of the Gd layer.[]{data-label="Phases"}](Fig4){width="\columnwidth"} ![image](Fig5){width="\textwidth"} ![image](Fig6){width="\textwidth"} Ferromagnetic resonance ----------------------- Fig.
\[spectra\] demonstrates the temperature evolution of the experimental resonance spectra at several different frequencies. At room temperature, one relatively narrow ($\Delta H\sim100$ Oe) absorption peak is observed. As the temperature decreases, this “high-temperature” (HT) peak broadens and its position changes. We note that the direction of the line shift depends on the frequency. At high frequencies ($f\gtrsim12$ GHz) the HT peak shifts towards lower fields (Fig. \[spectra\]b,c). The same behaviour was observed earlier for different types of TM/Gd multilayers \[21–23\]. This effect can be qualitatively described by considering a strongly coupled layered ferrimagnet (see the Appendix at the end of this paper), so this behaviour can be considered “normal”. In our case, however, another situation takes place at low frequencies ($f\lesssim12$ GHz). Here we observe a shift of the HT peak towards higher fields (Fig. \[spectra\]a). This result is opposite to the behaviour reported in the previous works \[21–23\] and clearly contradicts the simple approximation of strongly coupled FM layers. At all frequencies under study, the HT peak disappears below $T\approx160$ K. At the same time a second “low-temperature” (LT) peak arises in the region of high fields. As the temperature decreases, this peak shifts towards lower fields and becomes more pronounced. At high frequencies it can be clearly detected down to the lowest temperature; however, at $f=7.65$ GHz it again disappears below $T\approx60$ K (Fig. \[spectra\]). Fig. \[fvsH\]a demonstrates the resulting experimental frequency-*vs*-field $f(H)$ dependencies at different temperatures. At room temperature the $f(H)$ curve for the HT-mode can be qualitatively described by a Kittel-like equation for a FM film (see Appendix, Eq. (A1)). However, at lower temperatures the shape of the $f(H)$ curve changes strongly and the simple Kittel formula clearly becomes inapplicable.
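The breakdown of the uniform-precession picture can be made concrete with a toy eigenproblem: linearize the Landau-Lifshitz equation with Gilbert damping around the field-aligned state for a chain of exchange-coupled macrospins and diagonalize the resulting matrix. This is a deliberately simplified ferromagnetic chain with illustrative parameters, not the mean-field sublayer model of [@Drov2017]:

```python
import numpy as np

# Illustrative parameters (fields in kOe; gamma in GHz/kOe for g ~ 2)
gamma, alpha = 2.8, 0.01
H, M4pi, Hex = 2.0, 10.0, 50.0     # applied field, 4*pi*M, exchange field
N = 20                             # number of "atomic" sublayers in the chain

# Linearized torques T = A @ x for x = (u_1..u_N, v_1..v_N), the small
# transverse deviations around the field-aligned equilibrium.
A = np.zeros((2 * N, 2 * N))
for i in range(N):
    nb = [j for j in (i - 1, i + 1) if 0 <= j < N]
    h_i = H + 0.5 * Hex * len(nb)          # static + equilibrium exchange field
    A[i, N + i] = -gamma * (h_i + M4pi)    # u_dot couples to v (with demag)
    A[N + i, i] = gamma * h_i              # v_dot couples to u
    for j in nb:                           # transverse exchange coupling
        A[i, N + j] += 0.5 * gamma * Hex
        A[N + i, j] -= 0.5 * gamma * Hex

# Gilbert damping mixes u_dot and v_dot:  P @ x_dot = A @ x
P = np.eye(2 * N)
P[:N, N:] += alpha * np.eye(N)
P[N:, :N] -= alpha * np.eye(N)
lam = np.linalg.eigvals(np.linalg.solve(P, A))

# Each mode: lam = -delta + i*omega, with frequency f = omega (GHz here, since
# gamma already absorbs 2*pi) and quality factor Q = omega / (2*delta).
modes = sorted((l for l in lam if l.imag > 1e-9), key=lambda l: l.imag)
f0 = modes[0].imag                          # lowest, quasi-uniform mode
Q0 = modes[0].imag / (2.0 * abs(modes[0].real))
```

For the lowest mode the exchange terms cancel and the chain reproduces the Kittel frequency $\gamma\sqrt{H(H+4\pi M)}$; the higher eigenmodes are the exchange-split, non-uniform analogues of the inhomogeneous branches analysed in the paper.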
This means that the approximation of uniform magnetization precession within the structure is not valid. Taking into account the large exchange stiffness of the Fe layers and the strong coupling at the Fe-Gd interface, we can suppose that inhomogeneous precession occurs inside the Gd layers. To describe such inhomogeneous resonance modes theoretically we use the approach of the work [@Drov2017]. To model the non-uniform magnetization precession inside the Gd layers, they are divided into elementary “atomic” sublayers coupled with each other. The static magnetization in each sublayer is calculated using the mean-field model, while the dynamics is described by Landau-Lifshitz equations (LLE) with relaxation terms. For the relaxation terms, we consider Gilbert damping in the Fe and Gd layers as well as diffusion-type damping in Gd (see [@Drov2017] for details and model parameters). As a result, we calculate the complex eigenfrequencies of the system $\omega=\omega^\prime+i\omega^{\prime\prime}$. The corresponding eigenvectors represent the depth profiles of the magnetization precession in the superlattice. The damping of the calculated resonance modes can be characterized by the quality factor (Q-factor) $Q=\omega^\prime/2\omega^{\prime\prime}$. The larger the Q-factor, the more intense the expected resonance peak. Following our previous work [@Drov2017], we consider only the modes with in-phase precession of the Fe layers and perform the modelling for one period of the superlattice. Fig. \[fvsH\]b demonstrates the resulting calculated dependencies $f(H)$ at different temperatures (for illustrative purposes, only the modes with $Q>0.5$ are shown). The model predicts the existence of two spectral branches with different types of magnetic precession inside the Gd layers (Fig. \[fvsH\]c). The HT-mode has a gap in the spectrum at low temperatures and corresponds to strongly non-uniform precession inside the Gd layers. The LT-mode is quasi-uniform. Its frequency vanishes at $H=H_\mathrm{b}$, i.e.
at the phase transition from the field-aligned to the twisted magnetic state. In general, the behaviour of the calculated curves $f(H)$ qualitatively reproduces the experimental dependencies, except in the temperature region $\approx200-225$ K (i.e. slightly above $T_\mathrm{C}^\mathrm{Gd}$) and in weak magnetic fields $H\lesssim1.5$ kOe. Above $T=225$ K the model predicts a crossing of two spectral branches. One branch, with an increasing dependence $f(H)$, corresponds to predominant precession of the Fe layers. This branch has a large Q-factor and is observed experimentally. The second branch, with a decreasing dependence $f(H)$, corresponds to predominant precession of the inner part of the Gd layers. This branch has a small Q-factor and is not observed experimentally. Below $T=225$ K the model predicts the repulsion of these two crossing modes. As a consequence, a gap in the spectrum opens. Experimentally, however, such a gap arises only at $T\lesssim180$ K (Fig. \[fvsH\]a,b). Despite this discrepancy, Fig. \[fvsH\]b helps to understand the different behaviour of the HT peak at frequencies below and above $f\approx12$ GHz, i.e. the different directions of the line shift on cooling the system below room temperature (Fig. \[spectra\]). The critical value of 12 GHz corresponds to the frequency where the mode repulsion sets in. Fig. \[Hres-T\] shows the resulting experimental and calculated temperature dependencies of the resonance fields $H_\mathrm{res}(T)$ at different frequencies. It can be seen that the experimental and theoretical curves demonstrate not only qualitative but also a certain quantitative agreement. The noticeable discrepancy observed for the HT-mode at 17.2 GHz below $T\approx230$ K is connected with the above-discussed inadequate description of the mode-repulsion region. ![Temperature dependencies of the resonance field at different frequencies. Points are experimental data, lines are calculations.
Solid, dashed, and dotted lines correspond to different Q-factors of the resonance modes.[]{data-label="Hres-T"}](Fig7){width="\columnwidth"} It is interesting to note that at low frequency ($f=7.65$ GHz) the model predicts the existence of a minimum in the $H_\mathrm{res}(T)$ dependence for the LT-mode. This minimum is connected with the fact that the LT-mode frequency vanishes at $H=H_\mathrm{b}$ (i.e. $H_\mathrm{res}\rightarrow H_\mathrm{b}$ when $f\rightarrow0$). Since $H_\mathrm{b}$ turns to zero at $T_\mathrm{comp}$, we could expect a minimum of $H_\mathrm{res}(T)$ at this temperature. Experimentally, however, we did not manage to detect the absorption line below $T_\mathrm{comp}$. The reason for this may be the large damping of the corresponding resonance mode. Indeed, our calculations show that the Q-factor of the LT-mode decreases below $T_\mathrm{comp}$ at $f=7.65$ GHz (see Fig. \[Hres-T\]). To summarize, we achieved a reasonable agreement between the experiment and the model calculations. The model describes many features of the experimental spectra and helps to identify the types of the observed resonance modes. The main discrepancy between the experiment and the model arises in the vicinity of $T_\mathrm{C}^\mathrm{Gd}$, where the calculated spectra are very sensitive to the magnetic parameters of the system and can be strongly influenced by structural inhomogeneities of the real superlattice. Conclusion ========== In this work we demonstrated the realization of non-collinear magnetic states and inhomogeneous magnetization dynamics in a Fe/Gd artificial layered ferrimagnet. We have shown that both static and dynamic properties of the system are described by taking into account an essentially non-uniform magnetization distribution inside the Gd layers. Using the magneto-optical Kerr effect, we defined the regions of stability of the surface and bulk twisted states of the investigated multilayer.
The resulting experimental $H-T$ phase diagram is in good agreement with calculations based on the mean-field model. The ferromagnetic resonance spectra obtained in this work reveal a complex temperature evolution with two spectral branches that cannot be explained in terms of uniform magnetic precession within the superlattice. The performed theoretical simulations of the magnetization dynamics in the system show that the observed resonance modes correspond to different types of inhomogeneous precession inside the Gd layers. Finally, we would like to emphasize that nanostructured ferrimagnets provide the possibility of studying such complex magnetic phenomena under easily achievable experimental conditions: in magnetic fields up to 1 T and at microwave frequencies. Traditional ferrimagnetic crystals would require magnetic fields and frequencies that are several orders of magnitude larger. In this respect, the artificial structures can be considered as suitable model objects for experimental investigations of non-collinear magnetic phases and inhomogeneous magnetization dynamics in ferrimagnets. Acknowledgments {#acknowledgments .unnumbered} =============== The work is partially supported by the Russian Foundation for Basic Research (grants No.16-02-00061, No.18-37-00182), by the Ministry of Education and Science of the Russian Federation (grant No.14-Z-50.31.0025), and by the Basic Research Program of the Presidium of the Russian Academy of Sciences. Research in Ekaterinburg was performed in terms of the State assignment of the Federal Agency of Scientific Organizations of the Russian Federation (theme “Spin” No. AAAA-A18-188 020290104-2). FMR frequency of a strongly coupled layered ferrimagnet ======================================================= Let us consider two FM layers with different magnetic moments $\mu_1 > \mu_2$. We suppose that these layers are strongly AFM coupled (the exchange coupling is infinitely strong).
In this case, the magnetic field **H** applied in the film plane aligns $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_2$ parallel and antiparallel to the field direction, respectively. Considering the Zeeman and demagnetizing energies of both layers, the total energy of the system can be written as $$E=-\mathbf{H}\left(\boldsymbol{\mu}_1+\boldsymbol{\mu}_2\right) + 2\pi\left[ \frac{\left(\boldsymbol{\mu}_1 \cdot \mathbf{z}\right)^2}{V_1} + \frac{\left(\boldsymbol{\mu}_2 \cdot \mathbf{z}\right)^2}{V_2} \right],$$ where **z** is a unit vector normal to the film plane, and $V_1$ and $V_2$ are the volumes of the layers. Taking into account that $-\boldsymbol{\mu}_2\upuparrows\boldsymbol{\mu}_1$, the energy expression can be rewritten in the form $$E=-\mathbf{H}\boldsymbol{\mu} + 2\pi\frac{\mu_1^2V/V_1+\mu_2^2V/V_2}{(\mu_1-\mu_2)^2} \cdot \frac{\left(\boldsymbol{\mu} \cdot \mathbf{z}\right)^2}{V},$$ where $\boldsymbol{\mu}=\boldsymbol{\mu}_1+\boldsymbol{\mu}_2$ and $V=V_1+V_2$. This now has the form of the magnetic energy of a single FM film with a modified demagnetizing factor. Thus, the FMR frequency of the system is defined by a modified Kittel formula $$\omega=\gamma_\mathrm{eff} \sqrt{H\left(H+4\pi M_\mathrm{eff} \right)}, \label{A1}$$ where $$4\pi M_\mathrm{eff}=4\pi\frac{\mu_1^2/V_1+\mu_2^2/V_2}{\mu_1-\mu_2}, \label{A2}$$ and $\gamma_\mathrm{eff}$ is the net gyromagnetic ratio of the two coupled layers [@Wan1953] $$\gamma_\mathrm{eff}=\frac{\mu_1-\mu_2}{\mu_1/\gamma_1-\mu_2/\gamma_2}, \label{A3}$$ where $\gamma_1$ and $\gamma_2$ are the gyromagnetic ratios of the individual layers. If $\gamma_1\approx\gamma_2$, Eqs. (A1), (A2) predict an increasing FMR frequency as $\mu_2$ increases. This behaviour is opposite to the case of an amorphous or crystalline ferrimagnetic film, where the effective demagnetizing field is defined by the simple expression $4\pi M_\mathrm{eff}=4\pi(M_1-M_2)$, with $M_{1,2}$ the magnetizations of the FM sublattices [@Wan1953]. In this situation the FMR frequency decreases as $M_2$ increases.
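Eqs. (A1)–(A3) are straightforward to evaluate numerically. The sketch below (with arbitrary illustrative moments and volumes) contrasts the two conventions discussed above:

```python
import math

GAMMA = 2.8  # gyromagnetic ratio in MHz/Oe for g = 2

def f_layered(H, mu1, V1, mu2, V2, g1=GAMMA, g2=GAMMA):
    """FMR frequency (MHz) of two rigidly AFM-coupled layers, Eqs. (A1)-(A3).
    mu_i are layer moments (emu), V_i volumes (cm^3), H the in-plane field (Oe)."""
    m_eff4pi = 4.0 * math.pi * (mu1**2 / V1 + mu2**2 / V2) / (mu1 - mu2)  # Eq. (A2)
    g_eff = (mu1 - mu2) / (mu1 / g1 - mu2 / g2)                           # Eq. (A3)
    return g_eff * math.sqrt(H * (H + m_eff4pi))                          # Eq. (A1)

def f_crystal(H, M1, M2, g=GAMMA):
    """Same geometry for a bulk ferrimagnet: 4*pi*M_eff = 4*pi*(M1 - M2)."""
    return g * math.sqrt(H * (H + 4.0 * math.pi * (M1 - M2)))
```

For fixed $\mu_1$, increasing $\mu_2$ grows the numerator and shrinks the denominator of Eq. (A2), so `f_layered` rises with $\mu_2$, while `f_crystal` falls — the opposite trends noted in the text.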
It is important to note that the approximation (\[A1\])–(\[A3\]) is valid only when the exchange fields $H_{\mathrm{ex},i}$ acting on layers $i=1,2$ are much stronger than the corresponding demagnetizing fields $H_{\mathrm{ex},i}\gg4\pi M_i$ and the external field is far below the transition to the canted state: $H\ll|H_{\mathrm{ex},1}-H_{\mathrm{ex},2}|$ [@Gurevich]. [99]{} R. E. Camley, “Thermal Properties of Magnetic Multilayers and Nanostructures: Applications to Static and Dynamic Behavior” in *Magnetism of Surfaces, Interfaces, and Nanoscale Materials*, Handbook of Surface Science, Vol. 5, edited by R. E. Camley, Z. Celinski, and R. L. Stamps (Elsevier, North-Holland, 2015). R. E. Camley and R. L. Stamps, J. Phys. Cond. Mat. **5**, 3727 (1993). R. E. Camley, Phys. Rev. B **35**, 3608 (1987). R. E. Camley and D. R. Tilley, Phys. Rev. B **37**, 3413 (1988). R. E. Camley, Phys. Rev. B **39**, 12316 (1989). M. Sajieddine, Ph. Bauer, K. Cherifi, C. Dufour, G. Marchal, R. E. Camley, Phys. Rev. B **49**, 8815 (1994). N. Ishimatsu, H. Hashizume, S. Hamada, N. Hosoito, C. S. Nelson, C. T. Venkataraman, G. Srajer, and J. C. Lang, Phys. Rev. B **60**, 9596 (1999). Y. Choi, D. Haskel, R. E. Camley, D. R. Lee, J. C. Lang, G. Srajer, J. S. Jiang, and S. D. Bader, Phys. Rev. B **70**, 134420 (2004). E. Kravtsov, D. Haskel, S. G. E. te Velthuis, J. S. Jiang, and B. J. Kirby, Phys. Rev. B **79**, 134438 (2009). J. G. LePage and R. E. Camley, Phys. Rev. Lett. **65**, 1152 (1990). A. B. Drovosekov, N. M. Kreines, A. O. Savitsky, E. A. Kravtsov, M. V. Ryabukhina, V. V. Proglyado, and V. V. Ustinov, J. Phys. Cond. Mat. **29**, 115802 (2017). D. Haskel, G. Srajer, Y. Choi, D. R. Lee, J. C. Lang, J. Meersschaut, J. S. Jiang, and S. D. Bader, Phys. Rev. B **67**, 180406(R) (2003). W. Hahn, M. Loewenhaupt, Y. Y. Huang, G. P. Felcher, and S. S. P. Parkin, Phys. Rev. B **52**, 16041 (1995). S. Mangin, M. Gottwald, C-H. Lambert, D. Steil, V. Uhlíř, L. Pang, M. Hehn, S. Alebrand, M. Cinchetti, G.
Malinowski, Y. Fainman, M. Aeschlimann, and E. E. Fullerton, Nat. Mater. **13**, 286 (2014). R. Chimata, L. Isaeva, K. Kádas, A. Bergman, B. Sanyal, J. H. Mentink, M. I. Katsnelson, T. Rasing, A. Kirilyuk, A. Kimel, O. Eriksson, and M. Pereiro, Phys. Rev. B **92**, 094411 (2015). C. Xu, T. A. Ostler, and R. W. Chantrell, Phys. Rev. B **93**, 054302 (2016). C. Luo, Y. Yin, D. Zhang, S. Jiang, J. Yue, Y. Zhai, J. Du, and H. Zhai, J. Appl. Phys. **117**, 17D124 (2015). L. Sun, X. C. Zhao, Z. X. Kou, D. M. Ban, H. L. Yuan, E. Liu, Y. Zhai, and H. R. Zhai, J. Phys. D: Appl. Phys. **50**, 435003 (2017). L. Sun, W. Zhang, P. K. J. Wong, Y. Yin, S. Jiang, Z. Huang, Y. Zhai, Z. Yao, J. Du, Y. Sui, H. Zhai, J. Magn. Magn. Mater. **451**, 480 (2018). R. Bansal, N. Chowdhury, and P. K. Muduli, Appl. Phys. Lett. **112**, 262403 (2018). G. S. Patrin, V. O. Vas’kovskii, A. V. Svalov, E. V. Eremin, M. A. Panova, and V. N. Vasil’ev, J. Exp. Theor. Phys. **102**, 131 (2006). S. Demirtas, I. Harward, R. E. Camley, Z. Celinski, M. R. Hossu, A. R. Koymen, C. Yu, and M. J. Pechan, preprint arXiv:1002.4889 (2010). B. Khodadadi, J. B. Mohammadi, C. Mewes, T. Mewes, M. Manno, C. Leighton, and C. W. Miller, Phys. Rev. B **96**, 054436 (2017). A. B. Drovosekov, N. M. Kreines, A. O. Savitsky, E. A. Kravtsov, D. V. Blagodatkov, M. V. Ryabukhina, M. A. Milyaev, V. V. Ustinov, E. M. Pashaev, I. A. Subbotin, and G. V. Prutskov, J. Exp. Theor. Phys. **120**, 1041 (2015). A. B. Drovosekov, M. V. Ryabukhina, D. I. Kholin, N. M. Kreines, E. A. Manuilovich, A. O. Savitsky, E. A. Kravtsov, V. V. Proglyado, V. V. Ustinov, T. Keller, Yu. N. Khaydukov, Y. Choi, and D. Haskel, J. Exp. Theor. Phys., in press (2018). A. V. Svalov, J. M. Barandiarán, V. O. Vas’kovskiy, G. V. Kurlyandskaya, L. Lezama, N. G. Bebenin, J. Gutiérrez, and D. S. Schmool, Chin. Phys. Lett. **18**, 973 (2001). R. K. Wangsness, Phys. Rev. **91**, 1085 (1953). A. G. 
Gurevich, Magnetic Resonance in Ferrites and Antiferromagnets (Nauka, Moscow, 1973) \[in Russian\].
[**Damaskinsky E.V.$^{a}$, Sokolov M.A.$^{b}$**]{} [**On differential operators for bivariate Chebyshev polynomials [^1]**]{} $^a$ Math. Dept. Military Engineering Institute. VI(IT). Saint Petersburg and Petersburg Department of the Steklov Mathematical Institute, [email protected]. $^b$ Peter the Great St. Petersburg Polytechnic University and Military Telecommunications Academy (MTA), [email protected]. > [**Abstract**]{} We construct the differential operators for which bivariate Chebyshev polynomials of the first kind, associated with the simple Lie algebras $C_2$ and $G_2$, are eigenfunctions. [**1.**]{} In these notes, we obtain differential operators for which bivariate Chebyshev polynomials of the first kind, associated with the root systems of the simple Lie algebras $C_2$ and $G_2$, are eigenfunctions. For the case of bivariate Chebyshev polynomials associated with the Lie algebra $A_2$, such operators were obtained in the well-known work of Koornwinder [@K1]. Chebyshev polynomials in several variables are natural generalizations of the classical Chebyshev polynomials in one variable (see, for example, [@Ri]). The polynomials of the first kind can be defined in the following manner. Denote by $R$ an irreducible system of roots for a simple Lie algebra $L$. A system of roots is a set of vectors in $d$-dimensional Euclidean space $E^d$ with a scalar product $(.,.)$. This system is completely determined by a basis of simple roots $\alpha_i,\,i=1,..,d$ and by the group of reflections of $R$ called the Weyl group $W(R)$. The generating elements $w_i,\,i=1,..,d$ of the Weyl group act on any vector $x\in E^d$ according to the formula $$\label{0-1} w_i\,x=x-\frac{2(x,\alpha_i)}{(\alpha_i,\alpha_i)}\alpha_i.$$ In particular, if $x = \alpha_i$ we obtain from (\[0-1\]) $w_i\,\alpha_i =-\alpha_i$. A system of roots $R$ is closed under the action of the related Weyl group $W(R)$.
To any root $\alpha$ from the system $R$ corresponds the coroot $$\alpha^{\vee}=\frac{2\alpha}{(\alpha,\alpha)}.$$ For the basis of the simple coroots $\alpha^{\vee}_i,\,i=1,..,d$ one can define the dual basis of fundamental weights $\lambda_i,\,i=1,..,d$ $$(\lambda_i,\alpha^{\vee}_j)=\delta_{ij}$$ (we identify the dual space ${E^d}^*$ with $E^d$). The bases of roots and weights are related by the linear transformation $$\label{0-2} \alpha_i=\sum_j C_{ij}\lambda_j, \quad C_{ij}=\frac{2(\alpha_i,\alpha_j)}{(\alpha_j,\alpha_j)},$$ where $C$ is the Cartan matrix of the Lie algebra $L$. For any Lie algebra $L$ with the related system of roots $R$ and Weyl group $W(R)$, an orbit function $T_{\bf n}^{L}({\boldsymbol{\phi}})$ is defined as $$\label{0-3} T_{\bf n}^{L}({\boldsymbol{\phi}}) = \frac1{|W(R)|} \sum\limits_{w\in {\mbox{\footnotesize W(R)}}}e^{\rm i(\emph{w}\,{\bf n},{\boldsymbol{\phi}})}.$$ In the formula (\[0-3\]) $|W(R)|$ is the number of elements in the group $W(R)$, $\bf n$ is expressed in the basis of fundamental weights $\{\lambda_i\}$ and $ {\boldsymbol{\phi}}$ is expressed in the dual basis of coroots $\{\alpha^\vee_i\}$ $${\bf n}=\sum_{i=1}^d\,n_i\lambda_i \quad n_i\in Z, \quad {\boldsymbol{\phi}}= \sum_{i=1}^d\,\phi_i\alpha_i^\vee \quad \phi_i\in [0,2\pi).$$ Obviously $T_{\bf n}^{L}({\boldsymbol{\phi}})$ is a $W(R)$-invariant function, since $$T_{\tilde w\,\bf n}^{L}({\boldsymbol{\phi}})=T_{\bf n}^{L}({\boldsymbol{\phi}}), \,\,\forall \tilde w\,\in W(R).$$ Then we define the new variables $x_i$ (generalized cosines) by the relations $$\label{0-4} x_i\,=\,T_{{\bf e}_i}({\boldsymbol{\phi}}),\quad {\bf e}_i = (\overbrace{0,..,0}^{i-1},1,\overbrace{0,..,0}^{d-i}).$$ It is shown in the works [@K1; @K2; @H; @HW; @B; @KP] that the function $T_{\bf n}({\boldsymbol{\phi}})$ defined by the formula (\[0-3\]) with non-negative integers $n_i$ from ${\bf n} = (n_1,...,n_d)$ can be expressed in terms of the $x_i$.
This function gives us, up to a normalization, the multivariate Chebyshev polynomials $T_{n_1,...,n_d}$ of the first kind. [**2.**]{} The simplest example of the above construction is the classical Chebyshev polynomials associated with the Lie algebra $A_1$. The related Weyl group consists of the identity element $w_0$ and the reflection of the single positive root $w_1\lambda =-\lambda$. In this case the definition (\[0-3\]) gives $$\label{0-5} T_n({\phi})=\frac1{2}(e^{\rm i n\phi}+e^{-\rm i n\phi}) = \cos{n\phi},\quad x = T_1({\phi}) = \cos{\phi}.$$ To derive the differential operator(s) for which the classical polynomials of the first kind $T_n(x)$ are eigenfunctions, we first write out the differential equation for $\cos{n\phi}$ $$\label{0-6} \frac{{{\rm{d}}}^2\cos{n\phi}}{{{\rm{d}}}\phi^2} + n^2\cos{n\phi} = 0.$$ It follows from (\[0-6\]) that the desired operator in terms of the angle variable $\phi$ has the form $$\label{0-7} L^{(A_1)}(\phi)=\frac{{{\rm{d}}}^2}{{{\rm{d}}}\phi^2}.$$ Changing the variable $\cos{\phi} \rightarrow x$ in (\[0-7\]) we obtain the well-known operator in terms of $x$ $$\label{0-8} L^{(A_1)}(x) = (1-x^2)\frac{{{\rm{d}}}^2}{{{\rm{d}}}x^2}-x\frac{{{\rm{d}}}}{{{\rm{d}}}x}.$$ [**3.**]{} Now we turn to the generalized cosine associated with the Lie algebra $A_2$. At the first step we find the orbit function related to the algebra $A_2$. The root system of this algebra has two fundamental roots $\alpha_1,\,\alpha_2$ and includes the positive root $\alpha_1 + \alpha_2$, together with the negatives of all these roots.
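The one-variable statement (\[0-6\])–(\[0-8\]) is easy to verify symbolically; a minimal sympy check (any fixed degree $n$ works):

```python
import sympy as sp

x = sp.symbols('x')
n = 5                                   # any fixed degree works
T = sp.chebyshevt(n, x)                 # classical Chebyshev polynomial T_n(x)

# L^{(A_1)}(x) T_n = -n^2 T_n, i.e. (1 - x^2) T_n'' - x T_n' + n^2 T_n = 0
residual = (1 - x**2) * sp.diff(T, x, 2) - x * sp.diff(T, x) + n**2 * T
assert sp.simplify(residual) == 0
```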
The action of the generating elements $w_1,w_2$ of the Weyl group $W(A_2)$ on the fundamental roots is given by the formulas $$w_1\alpha_1=-\alpha_1,\quad w_1\alpha_2=\alpha_1+\alpha_2,\quad w_2\alpha_1=\alpha_1+\alpha_2, \quad w_2\alpha_2=-\alpha_2.$$ Taking into account (\[0-2\]) and the explicit form of the Cartan matrix $C(A_2)$ (see, for example, [@Hu]) we obtain the action of $w_1,w_2$ on the fundamental weights $$\label{0-9} w_1\lambda_1=\lambda_2-\lambda_1,\quad w_1\lambda_2=\lambda_2,\quad w_2\lambda_1= \lambda_1,\quad w_2\lambda_2=\lambda_1-\lambda_2.$$ The action of the other group elements on the fundamental weights is determined by their representation in terms of the generating elements $$\label{0-10} w_3=w_1w_2,\quad w_4=w_2w_1,\quad w_5=w_1w_2w_1,\quad w_0=e.$$ Using these formulas, the definition (\[0-3\]) and the notation $${\bf n} = m\lambda_1 + n\lambda_2,\quad {\boldsymbol{\phi}}=\phi\alpha^{\vee}_1+\psi\alpha^{\vee}_2$$ we find the $W(A_2)$-invariant function of two variables $$\begin{gathered} \label{0-11} T_{m,n}(\phi,\psi)=\\ e^{{\rm i}m\phi}e^{{\rm i}n\psi}+e^{{\rm i}m(\psi-\phi)}e^{{\rm i}n\psi}+e^{{\rm i}m\phi}e^{{\rm i}n(\phi-\psi)} +e^{{\rm i}m(\psi-\phi)}e^{-{\rm i}n\phi}+e^{-{\rm i}m\psi}e^{{\rm i}n(\phi-\psi)}+ e^{-{\rm i}m\psi}e^{-{\rm i}n\phi}.\end{gathered}$$ The normalization factor was omitted in (\[0-11\]) because it is not essential for our purpose. At the second step we find differential operators for which the orbit functions $T_{m,n}(\phi,\psi)$ for any $m,n$ are eigenfunctions $$L_N(T_{m,n})=E_{m,n}T_{m,n}.$$ The form of the orbit function implies that the action of the operator $L_N$ on each exponent from (\[0-11\]) must give the same eigenvalue $E_{m,n}$ for any $m,\,n$. For this reason we seek operators of the form $$\label{0-12} L_N^{(A_2)}(\phi ,\psi)=\sum_{k=0}^{N}a_{k}\frac{\partial^N}{\partial\phi^{(N-k)}\partial\psi^k},$$ with real constant coefficients $a_{k},\, k = 0,..,N$.
Let us act by the operator $L_N^{(A_2)}(\phi ,\psi)$ on $T_{m,n}$ and write out the chain of equalities of the coefficients at each exponent of (\[0-11\]) $$\sum_{k=0}^{N}a_{k}m^{N-k}n^k = \sum_{k=0}^{N}a_{k}(-m)^{N-k}(m+n)^k = \sum_{k=0}^{N}a_{k}(m+n)^{N-k}(-n)^k =$$ $$\sum_{k=0}^{N}a_{k}(-m-n)^{N-k}(m)^k = \sum_{k=0}^{N}a_{k}n^{N-k}(-m-n)^k = \sum_{k=0}^{N}a_{k}(-n)^{N-k}(-m)^k.$$ Some conclusions about the properties of the coefficients $a_{k}$ can be made directly from the form of the sums. For example, changing the summation index in the last sum of the chain $k\rightarrow N-k$ and comparing this sum with the first one, we conclude that $a_{k}=a_{N-k}$ for even $N$, and $a_{k}=-a_{N-k}$ for odd $N$. To calculate the coefficients $a_{k}$ explicitly it is necessary to solve the systems of equations which arise from equating the coefficients of the same monomials $m^pn^q$ in the above chain. It is convenient to reformulate this problem as the problem of calculating the vector $$V_{N+1} = (a_{0},a_{1},...,a_{N})$$ which is a common eigenvector with the eigenvalue $1$ of the matrices related to the equation systems under consideration. Consider for example the first equality from the chain. We can write the following equation $$\label{0-13} {\tt M}_1V_{N+1}= E_{N+1}V_{N+1} = V_{N+1},$$ where $E_{N+1}$ is the unit $(N+1)\times(N+1)$ matrix, and ${\tt M}_1$ is a lower triangular matrix of the same size with the nonzero matrix elements $$\label{0-14} ({{\tt M}_1})_{ij}= (-1)^{j+1}{{N+1-j}\choose {N+1-i}},\quad i,j = 1,..N+1,$$ where ${{j}\choose {i}}$ is the binomial coefficient. The equality of the first and third sums gives us the equation $$\label{0-15} {\tt M}_2V_{N+1}=V_{N+1},$$ where ${\tt M}_2$ is an upper triangular matrix with the nonzero matrix elements $$({{\tt M}_2})_{ij}= (-1)^{j+1}{{j-1}\choose {i-1}},\quad i,j = 1,..N+1.$$ In the same manner we obtain the matrices ${\tt M}_i,\,\,i=3,4,5,$ from the above equalities.
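These matrix conditions are easy to test numerically. A small numpy sketch for $N=2$, building ${\tt M}_1,{\tt M}_2$ from their matrix elements and checking both the $W(A_2)$ relations and the joint eigenvector $(1,1,1)$:

```python
import numpy as np
from math import comb

def M(N, elem):
    # assemble an (N+1) x (N+1) matrix from 1-indexed matrix elements
    return np.array([[elem(i, j, N) for j in range(1, N + 2)]
                     for i in range(1, N + 2)])

# matrix elements (0-14) for M1 and the analogous upper-triangular ones for M2
m1 = M(2, lambda i, j, N: (-1)**(j + 1) * comb(N + 1 - j, N + 1 - i))
m2 = M(2, lambda i, j, N: (-1)**(j + 1) * comb(j - 1, i - 1))
I = np.eye(3, dtype=int)

# Weyl-group relations of W(A2): the generators square to the identity
# and their product has order three
assert (m1 @ m1 == I).all() and (m2 @ m2 == I).all()
m3 = m1 @ m2
assert (m3 @ m3 @ m3 == I).all()

# the joint eigenvector with eigenvalue 1 gives the coefficients (1, 1, 1)
v = np.array([1, 1, 1])
assert (m1 @ v == v).all() and (m2 @ v == v).all()
```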
It can be easily checked that these matrices are related to ${\tt M}_1,{\tt M}_2$ by the following formulas $${\tt M}_3={\tt M}_1{\tt M}_2,\quad {\tt M}_4={\tt M}_2{\tt M}_1,\quad {\tt M}_5= {\tt M}_1{\tt M}_2{\tt M}_1,\quad {\tt M}_0=E_{N+1}.$$ Moreover, under the correspondence $w_i\sim {\tt M}_i$ we reproduce the multiplication table of the Weyl group $W(A_2)$ including the equalities $${\tt M}_1^2={\tt M}_2^2={\tt M}_5^2={\tt M}_3^3={\tt M}_4^3=E_{N+1},\quad {\tt M}_3^2={\tt M}_4,\,{\tt M}_4^2= {\tt M}_3.$$ It follows from the above that the homomorphism $w_i\rightarrow {\tt M}_i,\,i=0,..,5$, ${\tt M}_0=E_{N+1}$ realizes a faithful $(N+1)$-dimensional representation of the Weyl group $W(A_2)$. Since the matrices ${\tt M}_1$ and ${\tt M}_2$ are the images of the generators of the Weyl group $W(A_2)$, it is sufficient to calculate the joint eigenvectors of these two matrices. Joint solution of (\[0-13\]) and (\[0-15\]) in the cases $N=2,3$ gives us the following result $$\label{0-16} N=2,\quad V^{A_2}_{3} = (1,1,1),\quad N=3,\quad V_{4}^{A_2} = (2,3,-3,-2).$$ The related independent operators in the angle variables with their spectra have the forms $$\label{0-17} L^{A_2}_{3}=\partial^2_{\phi^2}+\partial^2_{\phi\psi}+ \partial^2_{\psi^2},\quad E^{A_2}_{3}(m,n)=m^2+mn+n^2,$$ $$\label{0-18} L^{A_2}_{4}=2\partial^3_{\phi^3}+3\partial^3_{\phi^2\psi}-3\partial^3_{\phi\psi^2}- 2\partial^3_{\psi^3}, \quad E^{A_2}_{4}(m,n)=2m^3+3m^2n-3mn^2-2n^3.$$ Higher-degree operators can be constructed as $$L = P(L^{A_2}_{3},L^{A_2}_{4})$$ where $P$ is any polynomial in two variables.
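Equivalently, the chain of equalities says that the polynomial $\sum_k a_k m^{N-k} n^k$ must be invariant under the six substitutions of $(m,n)$ read off from the exponents. A short sympy check of both solutions in (\[0-16\]):

```python
import sympy as sp

m, n = sp.symbols('m n')

# the six exponent pairs appearing in the chain of sums (the Weyl orbit of (m, n))
orbit = [(m, n), (-m, m + n), (m + n, -n), (-m - n, m), (n, -m - n), (-n, -m)]

def p(a, mm, nn):
    N = len(a) - 1
    return sum(ak * mm**(N - k) * nn**k for k, ak in enumerate(a))

for a in [(1, 1, 1), (2, 3, -3, -2)]:   # V_3^{A2} and V_4^{A2} from (0-16)
    for mm, nn in orbit:
        assert sp.expand(p(a, mm, nn) - p(a, m, n)) == 0
```

The invariant polynomials are exactly the spectra: $m^2+mn+n^2$ and $2m^3+3m^2n-3mn^2-2n^3$.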
[**4.**]{} At the last step it is necessary to replace the angle variables $(\phi,\psi)$ by $(x,y)$, which are defined according to the relation (\[0-4\]) as $$\label{2-11} x = \frac{1}{2}T_{1,0} = e^{i\phi}+e^{i(\psi-\phi)}+e^{-i\psi},$$ $$\label{2-12} y = \frac{1}{2}T_{0,1} = e^{i\psi}+e^{i(\phi-\psi)}+e^{-i\phi}.$$ This routine procedure in the case $N=2$ gives us the operator $$\label{2-13} L_{3}^{A_2}=(x^2-3y)\frac{\partial^2}{\partial x^2}+(xy-9)\frac{\partial^2}{\partial x\partial y}+(y^2-3x)\frac{\partial^2}{\partial y^2}+x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}.$$ The bivariate Chebyshev polynomials of the first kind associated with the Lie algebra $A_2$ are eigenfunctions of $L_{3}^{A_2}$ with eigenvalues defined by (\[0-17\]). The operator (\[2-13\]) was obtained for the first time by T. Koornwinder in the well-known work [@K1]. Our calculation method, presented above, is different from the method used in [@K1]. [**5.**]{} Here we use the same calculation scheme as above for the case of the polynomials associated with the Lie algebra $C_2$. The root system of the algebra $C_2$ has two fundamental roots $\alpha_1,\,\alpha_2$ and includes the positive roots $\alpha _1+\alpha_2,\,2\alpha _1+\alpha_2$, together with the negatives of all these roots.
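The operator (\[2-13\]) can be spot-checked symbolically on the lowest polynomials. Up to normalization, $T_{1,0}=x$ and $T_{0,1}=y$, while (\[0-11\]) gives $T_{1,1}=xy-3$ and $T_{2,0}=x^2-2y$; the eigenvalues below follow from (\[0-17\]):

```python
import sympy as sp

x, y = sp.symbols('x y')

def L(f):
    # Koornwinder's operator (2-13)
    return ((x**2 - 3*y) * sp.diff(f, x, 2)
            + (x*y - 9) * sp.diff(f, x, y)
            + (y**2 - 3*x) * sp.diff(f, y, 2)
            + x * sp.diff(f, x) + y * sp.diff(f, y))

def E(m, n):
    return m**2 + m*n + n**2            # spectrum (0-17)

# low-degree A2 Chebyshev polynomials (up to normalization)
cases = [(x, 1, 0), (y, 0, 1), (x*y - 3, 1, 1), (x**2 - 2*y, 2, 0)]
for f, mm, nn in cases:
    assert sp.expand(L(f) - E(mm, nn) * f) == 0
```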
The action of the generating elements $w_1,w_2$ of the Weyl group $W(C_2)$ on the fundamental roots and weights is given by the formulas $$w_1\alpha_1=-\alpha_1,\quad w_1\alpha_2=2\alpha_1+\alpha_2,\quad w_2\alpha_1=\alpha_1+\alpha_2, \quad w_2\alpha_2=-\alpha_2,$$ $$w_1\lambda_1=\lambda_2-\lambda_1,\quad w_1\lambda_2=\lambda_2,\quad w_2\lambda_1=\lambda_1, \quad w_2\lambda_2=2\lambda_1-\lambda_2.$$ The action of the other group elements on the fundamental weights is determined by their representation in terms of the generating elements $$\label{cc-1} w_3=w_1w_2,\quad w_4=w_2w_1,\quad w_5=w_1w_2w_1,\quad w_6=w_2w_1w_2,\quad w_7=(w_1w_2)^2,\quad e=w_0.$$ Using the above formulas we obtain the following $W(C_2)$-invariant orbit function $$\label{cc-3} \begin{split} T_{m,n}^{C_2}(\phi,\psi) &= e^{2\pi\ri(m\phi+n\psi)}+e^{2\pi\ri(m(\psi-\phi)+n\psi)}+ e^{2\pi\ri(m\phi+n(2\phi-\psi))}+e^{2\pi\ri(m(\psi-\phi)+n(-2\phi+\psi))}+ {}\\ &+ e^{2\pi\ri(m(\phi-\psi)+n(2\phi-\psi))}+e^{2\pi\ri(-m\phi+n(-2\phi+\psi))}+ e^{2\pi\ri(m(\phi-\psi)-n\psi)}+e^{2\pi\ri(-m\phi-n\psi)}. \end{split}$$ The action of the operator (\[0-12\]) on $T_{m,n}^{C_2}(\phi,\psi)$ produces coefficients at each exponent of (\[cc-3\]). The condition of equality of these coefficients gives us the following independent relations $$\sum_{k=0}^{N}a_{k}m^{N-k}n^k = \sum_{k=0}^{N}a_{k}(m)^{N-k}(-m-n)^k = \sum_{k=0}^{N}a_{k}(m+2n)^{N-k}(-n)^k =$$ $$\sum_{k=0}^{N}a_{k}(m+2n)^{N-k}(-m-n)^k = \sum_{k=0}^{N}a_{k}(-m)^{N-k}(-n)^k.$$ It follows from the equality of the first and last sums that the coefficients $a_{k}$ can be nonzero only for even $N$.
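As in the $A_2$ case, these equalities state that $\sum_k a_k m^{N-k}n^k$ must be invariant under the substitutions of $(m,n)$ read off from the exponents of (\[cc-3\]). A sympy sketch checking the quadratic solution $(1,2,2)$ and the quartic $(0,0,1,2,1)$ quoted in (\[0-19\]) below:

```python
import sympy as sp

m, n = sp.symbols('m n')

# exponent substitutions read off from (cc-3) (even N)
orbit = [(m, n), (m, -m - n), (m + 2*n, -n), (m + 2*n, -m - n), (-m, -n)]

def p(a, mm, nn):
    N = len(a) - 1
    return sum(ak * mm**(N - k) * nn**k for k, ak in enumerate(a))

for a in [(1, 2, 2), (0, 0, 1, 2, 1)]:  # quadratic and quartic solutions
    for mm, nn in orbit:
        assert sp.expand(p(a, mm, nn) - p(a, m, n)) == 0
```

Note that $(0,0,1,2,1)$ corresponds to the invariant $n^2(m+n)^2$, i.e. the spectrum of $L^{C_2}_{5b}$.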
In this case the matrix elements of the matrices ${\tt M}_i,\,\,i=1,2$ have the form $$({{\tt M}_1})_{ij}= (-1)^{j+1}{{N+1-j}\choose {N+1-i}},\quad ({{\tt M}_2})_{ij}= (-1)^{j+1}2^{j-i}{{j-1}\choose {i-1}},\quad i,j = 1,..N+1.$$ These matrices commute and are involutive: $$[{\tt M}_1,{\tt M}_2]=0,\quad{\tt M}_1^2={\tt M}_2^2=E_{N+1}.$$ Besides ${\tt M}_i,\,\,i=1,2$ there is only one independent matrix ${\tt M}_3$ $${\tt M}_3 ={\tt M}_1{\tt M}_2.$$ The coordinates $a_{k}$ of any joint eigenvector with unit eigenvalue of the matrices ${\tt M}_i,\,\,i=1,2$ give us the coefficients of the operator $L_N^{(C_2)}$ from (\[0-12\]). For the cases $N=2,4$ we obtain the following result $$\label{0-19} N=2,\quad V^{C_2}_{3} = (1,2,2),\quad N=4,\quad V_{5a}^{C_2} = (1,4,4,0,0),\quad V_{5b}^{C_2} = (0,0,1,2,1).$$ The related independent operators in the angle variables with their spectra have the forms $$\label{0-20} L^{C_2}_{3}=\partial^2_{\phi^2}+2\partial^2_{\phi\psi}+2\partial^2_{\psi^2}, \qquad E^{C_2}_{3}(m,n)=m^2+2mn+2n^2,$$ $$\label{0-21} L^{C_2}_{5a}=\partial^4_{\phi^4}+4\partial^4_{\phi^3\psi}+4\partial^4_{\phi^2\psi^2}, \qquad E^{C_2}_{5a}(m,n)=m^2(m+2n)^2,$$ $$\label{0-22} L^{C_2}_{5b}=\partial^4_{\phi^2\psi^2}+2\partial^4_{\phi\psi^3}+\partial^4_{\psi^4}, \qquad E^{C_2}_{5b}(m,n)=n^2(m+n)^2.$$ [**6.**]{} The transition from the angle coordinates to the Cartesian ones is given by the relations (see, for example, [@DKS]) $$\begin{aligned} \label{cc-6} x&=&\frac{1}{2}T_{1,0}^{C_2}=e^{2\pi\ri\phi}+e^{-2\pi\ri\phi}+ e^{2\pi\ri(\phi-\psi)}+ e^{-2\pi\ri(\phi-\psi)}, \label{cc-5}\\ y&=&\frac{1}{2}T_{0,1}^{C_2}=e^{2\pi\ri\psi}+e^{-2\pi\ri\psi}+e^{2\pi\ri(2\phi-\psi)}+ e^{-2\pi\ri(2\phi-\psi)}.\end{aligned}$$ For the case (\[0-20\]) we obtain $$\label{0-23} L^{C_2}(x,y) = (x^2-2y-8)\frac{\partial^2}{\partial x^2}+2x(y-4)\frac{\partial^2}{\partial x\partial y}+2(y^2+4y-2x^2)\frac{\partial^2}{\partial y^2}+x\frac{\partial}{\partial x}+2y\frac{\partial}{\partial y}.$$ [**7.**]{} To finish these brief
notes, we consider the case of the polynomials associated with the Lie algebra $G_2$. The root system of the algebra $G_2$ has two fundamental roots $\alpha_1,\,\alpha_2$ and includes the positive roots $\alpha _1+\alpha_2,\,2\alpha _1+\alpha_2,\,3\alpha _1+\alpha_2,\,3\alpha _1+2\alpha_2$, together with the negatives of all these roots. The action of the generating elements $w_1,w_2$ of the Weyl group $W(G_2)$ on the fundamental roots and weights is given by the formulas $$w_1\alpha_1=-\alpha_1,\quad w_1\alpha_2=3\alpha_1+\alpha_2,\quad w_2\alpha_1=\alpha_1+\alpha_2, \quad w_2\alpha_2=-\alpha_2,$$ $$w_1\lambda_1=\lambda_2-\lambda_1,\quad w_1\lambda_2=\lambda_2,\quad w_2\lambda_1=\lambda_1, \quad w_2\lambda_2=3\lambda_1-\lambda_2.$$ The action of the other group elements on the fundamental weights is determined by their representation in terms of the generating elements $$w_3=w_1w_2,\, w_4=w_2w_1,\, w_5=w_2w_1w_2,\, w_6=w_1w_2w_1,\, w_7=(w_1w_2)^2,$$ $$\!w_8\!=\!(w_2w_1)^2,\, w_9\!=\!w_2(w_1w_2)^2,\, w_{10}\!=\!w_1(w_2w_1)^2,\, w_{11}\!=\!(w_1w_2)^3,\, w_{0}\!=\!e.$$ Using these formulas and the definition (\[0-3\]) we obtain the following $W(G_2)$-invariant orbit function $$\begin{gathered} \label{4-01} T^{G_2}_{m,n} = e^{2\pi\ri(m\phi+n\psi)}+ e^{2\pi\ri(m(-\phi +\psi)+ n(-3\phi +2\psi))}+ e^{2\pi\ri(m(2\phi -\psi) + n(3\phi -\psi))}+ \\ e^{2\pi\ri(-(m\phi + n\psi))}+ e^{2\pi\ri(-(m(-\phi +\psi)+n(-3\phi +2\psi)))} + e^{2\pi\ri(-(m(2\phi -\psi)+n(3\phi -\psi)))} + \\ e^{2\pi\ri(m\phi +n(3\phi-\psi))}+ e^{2\pi\ri(m(-\phi + \psi)+n\psi)}+ e^{2\pi\ri(m(2\phi-\psi) + n(3\phi -2\psi))}+\\ e^{2\pi\ri(-(m\phi +n(3\phi-\psi)))} + e^{2\pi\ri(-(m(-\phi + \psi)+n\psi))} + e^{2\pi\ri(-(m(2\phi-\psi) + n(3\phi -2\psi)))}.\end{gathered}$$ The action of the operator (\[0-12\]) on $T_{m,n}^{G_2}(\phi,\psi)$ produces coefficients at each exponent of (\[4-01\]).
The condition of equality of these coefficients gives us the following independent relations $$\sum_{k=0}^{N}a_{k}m^{N-k}n^k = \sum_{k=0}^{N}a_{k}(m)^{N-k}(-m-n)^k = \sum_{k=0}^{N}a_{k}(m+3n)^{N-k}(-n)^k =$$ $$\sum_{k=0}^{N}a_{k}(2m+3n)^{N-k}(-m-n)^k = \sum_{k=0}^{N}a_{k}(m+3n)^{N-k}(-m-2n)^k =\sum_{k=0}^{N}a_{k}(2m+3n)^{N-k}(-m-2n)^k.$$ Equality of the first and the second sums gives us the matrix ${\tt M}_1$, which is the same as in the $A_2$ and $C_2$ cases (\[0-14\]). Equality of the first and the third sums gives us the matrix ${\tt M}_2$ $$({{\tt M}_1})_{ij}= (-1)^{j+1}{{N+1-j}\choose {N+1-i}},\quad ({{\tt M}_2})_{ij}= (-1)^{j+1}3^{j-i}{{j-1}\choose {i-1}},\quad i,j = 1,..N+1.$$ The remaining matrices are $${{\tt M}_3}={{\tt M}_1}{{\tt M}_2},\quad {{\tt M}_4}={{\tt M}_2}{{\tt M}_1},\quad {{\tt M}_5}= {{\tt M}_1}{{\tt M}_2}{{\tt M}_1}={{\tt M}_2}{{\tt M}_1}{{\tt M}_2}.$$ The coordinates $a_{k}$ of any joint eigenvector with unit eigenvalue of the matrices ${\tt M}_i,\,\,i=1,2$ give us the coefficients of the operator $L_N^{(G_2)}$ from (\[0-12\]). For the case $N=2$ we obtain the following result (there are no solutions for odd $N$) $$\label{g-1} N=2,\quad V^{G_2}_{3} = (1,3,3).$$ The related independent operator in the angle variables with its spectrum has the form $$\label{g-2} L^{G_2}_{3}=\partial^2_{\phi^2}+3\partial^2_{\phi\psi}+3\partial^2_{\psi^2}, \qquad E^{G_2}_{3}(m,n)=m^2+3mn+3n^2.$$ Calculations in the cases $N=4,6$ give us only $L^{G_2}_{5}=\left(L^{G_2}_{3}\right)^2,\,L^{G_2}_{7}=\left(L^{G_2}_{3}\right)^3$.
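The same invariance check as in the previous cases confirms (\[g-1\])–(\[g-2\]):

```python
import sympy as sp

m, n = sp.symbols('m n')

# exponent substitutions read off from (4-01)
orbit = [(m, n), (m, -m - n), (m + 3*n, -n),
         (2*m + 3*n, -m - n), (m + 3*n, -m - 2*n), (2*m + 3*n, -m - 2*n)]

a = (1, 3, 3)                           # V_3^{G2} from (g-1)
base = m**2 + 3*m*n + 3*n**2            # E_3^{G2}(m, n)
for mm, nn in orbit:
    q = sum(ak * mm**(2 - k) * nn**k for k, ak in enumerate(a))
    assert sp.expand(q - base) == 0
```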
[**8.**]{} The transition from the angle coordinates to the Cartesian ones is given by the relations $$\begin{aligned} \label{cc-6} x&=&\frac{1}{2}T_{1,0}^{G_2}=e^{2\pi\ri(\phi)}+ e^{2\pi\ri(-\phi +\psi)}+ e^{2\pi\ri(2\phi -\psi)}+ e^{2\pi\ri(-\phi)}+ e^{2\pi\ri(-(-\phi +\psi))} + e^{2\pi\ri(-(2\phi -\psi))},\\ y&=&\frac{1}{2}T_{0,1}^{G_2}= e^{2\pi\ri(\psi)}+ e^{2\pi\ri(-3\phi +2\psi)}+ e^{2\pi\ri(3\phi -\psi)}+ e^{2\pi\ri(-\psi)} + e^{2\pi\ri(3\phi -2\psi)} + e^{2\pi\ri(-3\phi +\psi)} .\end{aligned}$$ For the case (\[g-2\]) we obtain $$L^{G_2}(x,y) = (x^2-3x-y-12)\frac{\partial^2}{\partial x^2}+(3xy-6x^2+12y+36)\frac{\partial^2}{\partial x\partial y}+$$ $$+(3y^2+9y-3x^3+9xy+27x)\frac{\partial^2}{\partial y^2}+x\frac{\partial}{\partial x}+3y\frac{\partial}{\partial y}.$$ [99]{} T. H. Koornwinder, *Orthogonal polynomials in two variables which are eigenfunctions of two algebraically independent partial differential operators*, I - IV, Indag. Math. [**36**]{}(1), 48-66; [**36**]{}(4), 357-381 (1974). T. Rivlin, *The Chebyshev Polynomials*, Wiley-Interscience, New York, 1974. T. H. Koornwinder, *Two-variable analogues of the classical orthogonal polynomials*, pp. 435-495 in: *Theory and application of special functions*, R. A. Askey (ed.), Academic Press, 1975. G. J. Heckman, *Root systems and hypergeometric functions* II, Comp. Math. [**64**]{}, pp. 353-373, 1987. M. E. Hoffman and W. D. Withers, *Generalized Chebyshev polynomials associated with affine Weyl groups*, Trans. Am. Math. Soc. [**308**]{}, pp. 91-104, 1988. R. J. Beerends, *Chebyshev polynomials in several variables and the radial part of the Laplace–Beltrami operator*, Trans. Am. Math. Soc. [**328**]{}, pp. 770-814, 1991. A. Klimyk, J. Patera, [*Orbit functions*]{}, SIGMA [**2**]{}, 006 (2006). J. E. Humphreys, *Introduction to Lie Algebras and Representation Theory*, Springer Graduate Texts in Mathematics, 1972. E. V. Damaskinsky, P. P. Kulish, and M. A.
Sokolov, *On calculation of generating functions of Chebyshev polynomials in several variables*, J. Math. Phys. **56**, 063507 (2015). [^1]: The work is supported by RFBR under the grant 15-01-03148
--- abstract: 'In this paper, we propose an [*exact holographic mapping*]{} which is a unitary mapping from the Hilbert space of a lattice system in flat space (boundary) to that of another lattice system in one higher dimension (bulk). By defining the distance in the bulk system from two-point correlation functions, we obtain an emergent bulk space-time geometry that is determined by the boundary state and the mapping. As a specific example, we study the exact holographic mapping for $(1+1)$-dimensional lattice Dirac fermions and explore the emergent bulk geometry corresponding to different boundary states, including massless and massive states at zero temperature, and the massless system at finite temperature. We also study two entangled one-dimensional chains and show that the corresponding bulk geometry consists of two asymptotic regions connected by a worm-hole. The quantum quench of the coupled chains is mapped to dynamics of the worm-hole. In the end we discuss the general procedure of applying this approach to interacting systems, and other open questions.' author: - 'Xiao-Liang Qi' bibliography: - 'holography.bib' title: 'Exact holographic mapping and emergent space-time geometry' --- Introduction ============ In recent years, holographic duality, also known as the anti-de Sitter space/conformal field theory (AdS/CFT) correspondence[@maldacena1998; @witten1998; @witten1998b; @gubser1998], has attracted tremendous research interest in both high energy and condensed matter physics. This correspondence is defined as a duality between a $(d+1)$-dimensional conformal field theory defined on flat space and a $(d+2)$-dimensional quantum gravity theory defined on an AdS space background. In the known examples, the large-$N$ limit of the conformal field theory corresponds to the classical limit of the dual gravity theory.
A key reason for such a correspondence is that the conformal symmetry group of $(d+1)$-dimensional space (with Lorentz metric) is ${\rm SO}(d+1,2)$, which is identical to the isometry group of $(d+2)$-dimensional AdS space. This duality can be generalized to more general field theories without conformal symmetry, which are dual to bulk gravity theories on different space-time manifolds. The holographic duality is intrinsically related to the renormalization group (RG) flow of the boundary theory[@akhmedov1998; @boer2000; @skenderis2002], which is natural since the space-time dilatation is included in the conformal transformation group. The boundary flat space is mapped to the conformal boundary of the AdS space, and the emergent dimension perpendicular to the conformal boundary has the physical interpretation of an energy scale. The RG flow of the boundary coupling constants becomes the bulk equation of motion[@heemskerk2011; @lee2010; @lee2011; @lee2012]. Related to such ideas, B. Swingle[@swingle2012] has proposed a relation between holographic duality and the multiscale entanglement renormalization ansatz (MERA)[@vidal2007; @vidal2008]. MERA is a real-space renormalization procedure defined for quantum states, which represents a highly entangled many-body state, such as the ground state of a conformal field theory, by a tensor network, as is illustrated in Fig. \[fig1\] (a). Contraction of all tensors in this network defines an ansatz many-body wavefunction which can be used to approximate the ground state of the physical system. The network is viewed as a discretized version of the AdS space bulk theory[@swingle2012; @swingle2012b; @evenbly2011; @hartman2013], which is dual to the boundary CFT. This proposed correspondence provides a physical interpretation of the Ryu-Takayanagi formula[@ryu2006] which relates entanglement entropy to the minimal surface area in the bulk. The continuous generalization of MERA[@haegeman2013] and its relation to AdS/CFT[@nozaki2012] has also been discussed.
However, there are important differences between MERA and the AdS/CFT correspondence. In the former the bulk tensor network is classical even for a generic CFT, while in the latter the bulk is only classical when the boundary theory is in the large-$N$ limit. Although the network structure provides some information about the bulk geometry, this information is incomplete. In particular, the time direction metric is not explicitly encoded in the network, which makes it difficult to understand some interesting phenomena such as the correspondence between a finite temperature boundary system and a bulk black-hole geometry. ![(a) Network representing a MERA state (without disentanglers). (b) Network of the exact holographic mapping. Each node in the network stands for a unitary transformation $U$ which maps the states of two input sites $\ket{s_1,s_2}$ to one bulk state (red dot) labeled by $\ket{\alpha}$ and one auxiliary state $\ket{t}$. More details of the definition are given in Sec. \[sec:def\]. (c) A simplified representation of the EHM network in (b), in which all the unitary transformations in the same layer are combined together into one unitary mapping (grey triangle). The boundary theory (yellow square) is mapped to auxiliary degrees of freedom (blue arrow) and bulk states (red filled circle) in each step. After $N$ steps, the boundary theory is mapped to the bulk theory consisting of all red dots. []{data-label="fig1"}](holo1p.jpg){width="3.5in"} In this paper, we propose a generalization of MERA, the exact holographic mapping (EHM), which provides a more explicit and complete understanding of the bulk theory for a given boundary theory. Consider the network in Fig. \[fig1\] (b), in which each vertex represents a unitary mapping which maps two input sites into two output sites.
One of the two output sites (black) captures the low energy, or longer-range entangled, degree of freedom of the two input sites, which becomes the input of the next layer; the other output site (red) captures the high energy, or short-range entangled, degree of freedom, which is considered as a “bulk" degree of freedom. The net effect of all layers of unitary transformations is a unitary mapping from the Hilbert space of the boundary theory to that of the bulk theory, defined by qubits living on the red sites. As has been discussed in Ref. [@vidal2008], the MERA ansatz states are obtained by acting with the reverse mapping (the one from bulk to boundary) on a direct product state in the bulk. Here we propose to apply the unitary mapping to all states in the boundary system, which leads to a bulk theory that is exactly equivalent to the boundary theory. Properties of the boundary theory such as Hamiltonians and other operators, correlation functions and time-evolution can all be mapped to the bulk theory. Compared with the AdS/CFT correspondence, we can consider the bulk theory obtained in EHM as a bulk “matter field" living on the background of hyperbolic space, while MERA corresponds to the infinite mass limit of the bulk matter field. (It should be noted that Ref. [@aguado2011] has mapped generic states of exactly solvable quantum double models using MERA, which can be considered as a realization of EHM. The MERA ansatz is exact in these models, meaning that all eigenstates of the boundary Hamiltonian correspond to direct product states in the bulk theory.) Allowing the bulk matter field to have a nontrivial dynamics leads to important consequences in several different aspects. Firstly, this allows the bulk-boundary correspondence to be exactly defined for a generic boundary system, rather than for special ansatz states that can usually characterize the boundary physics only approximately.
The entanglement in the bulk state, which is short-ranged for a properly chosen unitary mapping, can be viewed as a description of the "residual entanglement" in the boundary theory that is not captured by the classical network itself. Secondly, the bulk state can be used to provide an independent definition of the bulk geometry. Assuming the bulk state is a massive state in the bulk, we can use its two-point correlation functions to define the distance between any pair of points in the bulk. When the EHM is chosen properly, such a definition of bulk distance can be interpreted as geodesic distance in a space-time manifold. In such a way, the bulk geometry corresponding to each given boundary system is not given by the network a priori, but is an "emergent" property determined by the boundary state and the EHM. In particular, since we have access to time-dependent correlation functions, we can probe not only the spatial geometry but also the space-time geometry. We illustrate the EHM with a $(1+1)$-dimensional lattice Dirac fermion example. We introduce an EHM for the free fermion and study the bulk space-time geometry for several different boundary states. For the massless Dirac fermion at zero temperature, the corresponding bulk state can be considered as a discretized version of a massive fermion on AdS$_{2+1}$. For the massless fermion at finite temperature, the bulk state is consistent with a Banados-Teitelboim-Zanelli (BTZ) black-hole geometry[@banados1992], although the same unitary mapping as in the zero temperature case is used. The bulk sites in the center of the network (the "infrared" region) correspond to the near-horizon region of the black hole, and the entanglement entropy between bulk sites also provides a possible microscopic origin of the black-hole entropy.
When the boundary state is a massive fermion at zero temperature, the corresponding bulk geometry is a space which effectively terminates at a certain radius, but by studying time-direction correlations one can see that the "infrared boundary" of this geometry is not a horizon, and that the infrared region does not carry entropy. As a more sophisticated example we study a quantum quench problem in which two coupled chains are entangled to form a massive ground state, and the coupling is turned off at a certain time. We show that the two entangled chains correspond to a bulk worm-hole geometry which connects two asymptotic AdS regions. After the quench, the worm-hole shrinks and then expands again, which can be compared with the holographic description of quantum quench processes studied in the literature[@takayanagi2010; @hartman2013]. Definition of the exact holographic mapping {#sec:def} =========================================== The exact holographic mapping is defined by multiplying a series of unitary transformations, as is illustrated in Fig. \[fig1\] (b). We start from a lattice model with an $n$-dimensional Hilbert space on each site, and with total number of sites $L=2^N$. For two sites with states labeled by $\ket{s_1s_2},~s_{1,2}=1,2,...,n$, a unitary operator $U_{12}$ is defined to map them to two output sites labeled by $a$ (standing for "auxiliary") and $b$ ("bulk"): $$U_{12}=\sum_{s_1,s_2,t,\alpha=1,2,...,n}U_{s_1s_2}^{t\alpha}\ket{t}_a\ket{\alpha}_b\bra{s_1s_2}$$ Here $\ket{\alpha}_b$ and $\ket{t}_a$ are basis states of the bulk site and the auxiliary site, respectively. The same transformation is carried out for all pairs of sites $2i-1$ and $2i$, which leads to a unitary transformation on the Hilbert space of the whole system: $$V_1=U_{12}\otimes U_{34}\otimes ...\otimes U_{2^N-1,2^N}$$ This is a mapping from a single chain with $2^N$ sites to two coupled chains, each with $2^{N-1}$ sites.
Then we can define $V_2$ in the same way for the Hilbert space of the $2^{N-1}$ auxiliary sites, which defines $2^{N-2}$ bulk sites and $2^{N-2}$ auxiliary sites in the second layer. Iterating this procedure $N$ times, as is illustrated in Fig. \[fig1\] (c), we obtain a unitary mapping $$M=V_NV_{N-1}...V_1$$ which maps the $2^N$ boundary sites to the same number of bulk sites. (In the last step, we have one bulk site and one auxiliary site, but both need to be viewed as bulk sites.) Here $V_2$ should be understood as $V_2\otimes \mathbb{I}$, where $V_2$ acts on the auxiliary states of the first layer and $\mathbb{I}$ is the identity operator acting on the bulk states of the first layer. The $V_n$ in each layer should be understood in the same way. For each choice of $U$ and each many-body state $\ket{\Phi}$ of the 1D chain, the unitary operator $M$ maps $\ket{\Phi}$ to a 2D many-body state $\ket{\Psi}=M\ket{\Phi}$ defined on the network shown in Fig. \[fig1\] (a). This mapping is defined for each state in the Hilbert space of the 1D chain, so that all operators, such as the Hamiltonian and the thermal density matrix, can also be mapped to the bulk system. If we take a direct product state $\ket{0}=\prod_{{\bf x}}\ket{0}_{\bf x}$ in the bulk, with ${\bf x}$ labeling the bulk sites, the corresponding boundary state $\ket{\Phi}=M^{-1}\ket{0}$ is an ansatz state of the ordinary MERA approach. The mapping can easily be generalized by adding disentanglers[@vidal2007; @vidal2008] in the same way as in MERA (unitary transformations on auxiliary sites that do not create new bulk sites), and/or by allowing $U$ to be different at different sites. Different from the MERA case, such modifications are not necessary for characterizing the boundary state, since the mapping is exact. Therefore in this paper we will focus on the simple choice described above, with the same $U$ at each vertex, which already leads to rich consequences.
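For the free-fermion case studied below, where $U$ acts as a single-particle basis rotation, the layered structure $M=V_NV_{N-1}...V_1$ can be sketched numerically. The following is our own illustration (function names are ours, and we restrict to the single-particle sector, where the rows of $M$ form the Haar wavelet basis mentioned later in the text):

```python
import numpy as np

def layer(n_sites):
    """Single-particle matrix of one EHM layer: each pair (c_{2i}, c_{2i+1})
    is rotated into an auxiliary mode a_i (rows 0..n/2-1, symmetric
    combination) and a bulk mode b_i (rows n/2..n-1, antisymmetric one)."""
    V = np.zeros((n_sites, n_sites))
    for i in range(n_sites // 2):
        V[i, 2*i] = V[i, 2*i + 1] = 1/np.sqrt(2)          # a_i
        V[n_sites//2 + i, 2*i] = 1/np.sqrt(2)             # b_i
        V[n_sites//2 + i, 2*i + 1] = -1/np.sqrt(2)
    return V

def ehm_matrix(N):
    """Full mapping M = V_N ... V_1 for a chain of 2**N sites.  Each V_n
    acts only on the auxiliary outputs of the previous layer and is the
    identity on the bulk modes already produced."""
    L = 2**N
    M = np.eye(L)
    n_aux = L
    for _ in range(N):
        Vn = np.eye(L)
        Vn[:n_aux, :n_aux] = layer(n_aux)
        M = Vn @ M
        n_aux //= 2
    return M

M = ehm_matrix(4)                        # 16-site chain, 4 layers
print(np.allclose(M @ M.T, np.eye(16)))  # the mapping is exactly unitary
```

The top row of $M$ is the fully averaged mode that survives all $N$ coarse-graining steps; the remaining rows are the bulk (wavelet) modes produced layer by layer.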
A key property of the EHM that we will study is that the nontrivial bulk state $\ket{\Psi}=M\ket{\Phi}$ obtained from the mapping provides a measure of the bulk geometry. The belief behind this "geometrical" point of view is that, by an appropriate choice of $U$, the bulk system can be gapped even if the boundary system is gapless. We do not have a proof of this statement for a generic boundary system, but the conjecture is supported by the fact that even a gapped bulk state on this hyperbolic geometry can provide sufficient quantum entanglement to characterize the boundary critical system. This is similar to the MERA case discussed in Refs. [@evenbly2011; @swingle2012]. With the assumption that we have mapped the boundary state to a massive bulk state, we can define the geodesic distance $d_{({\bf x},t_1),({\bf y},t_2)}$ between two bulk space-time points by the two-point correlation function. For a massive state the two-point function of an operator $O$ at long distance has the asymptotic form $$\left\langle O({\bf x},t_1)O({\bf y},t_2)\right\rangle\simeq C_0e^{-d_{({\bf x},t_1),({\bf y},t_2)}/\xi}$$ We use this equation as a [*definition*]{} of the distance function: $$d_{({\bf x},t_1),({\bf y},t_2)}=-\xi\log\frac{\left\langle O({\bf x},t_1)O({\bf y},t_2)\right\rangle}{C_0}\ \[distancedefgeneral\]$$ The correlation length $\xi$ and the constant $C_0$ depend on the operator chosen, so that this equation determines the distance only up to a constant and an overall scale. We have omitted a possible power-law prefactor multiplying the exponential decay, since the $\log$ of the correlation function is dominated by the term linear in $d$ in the long-distance limit. Naturally, one would like to define the distance in a way that is independent of the choice of operator $O$. A suitable choice is an appropriate upper bound of all two-point correlation functions.
For equal-time correlation functions, there is an obvious choice of such a bound, the mutual information, defined as $$I_{\bf xy}=S_{\bf x}+S_{\bf y}-S_{\bf xy}$$ Here $S_{\bf xy}=-{\rm Tr}\left(\rho_{\bf xy}\log\rho_{\bf xy}\right)$ is the von Neumann entropy of the sites ${\bf xy}$ with reduced density matrix $\rho_{\bf xy}$, and similarly $S_{{\bf x}({\bf y})}$ is the entropy of the single site ${\bf x}({\bf y})$. The spatial distance can then be defined by $$d_{{\bf x}{\bf y}}=-\xi\log\frac{I_{\bf xy}}{I_0}\ \[distancedef\]$$ When each site ${\bf x}$ has $D$ states, the entropy satisfies $S_{\bf x}\leq \log D$, so that $I_{\bf xy}\leq 2\log D$. If two sites have $I_{\bf xy}=2\log D$, they are maximally entangled with each other and not entangled with any other sites. According to Eq. (\[distancedef\]), ${\bf x}$ and ${\bf y}$ have minimal distance in this case. It is therefore natural to define the distance between such a maximally entangled pair to be $0$, which means $I_0=2\log D$. The geodesic distance was also related to mutual information in Ref. [@raamsdonk2010], but the mutual information discussed there was between different regions of the boundary system. It is also interesting to note that the distance definition (\[distancedef\]) may be related to the idea discussed recently in Ref. [@maldacena2013] that nonlocal quantum entanglement creates wormholes between far-away spatial regions. Free fermion example {#sec:freefermion} ==================== As an explicit example, we consider the $1+1$D lattice Dirac fermion with the following Hamiltonian: $$H=\sum_k c_{k}^\dagger\left[\sigma_x\sin k+\sigma_y\left(m+B(1-\cos k)\right)\right]c_k\ \[HDirac\]$$ Here $\sigma_x,\sigma_y$ are Pauli matrices, and the annihilation operator $c_k$ is a two-component spinor. In the long-wavelength limit $k\rightarrow 0$, the single-particle Hamiltonian is approximately $k\sigma_x+m\sigma_y$, which approaches the continuum Dirac model.
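Since the model is quadratic, the mutual information entering the distance definition above can be obtained from single-particle correlation matrices alone. The following is a minimal executable sketch (our own illustration: a spinless open-boundary tight-binding chain stands in for the Dirac model, and the entropy is evaluated with the standard correlation-matrix formula used later in the text):

```python
import numpy as np

def correlation_matrix(L):
    """Equal-time correlator <c_i^dag c_j> in the half-filled ground state of
    a spinless open-boundary tight-binding chain (a stand-in for the lattice
    Dirac model of the text)."""
    h = np.zeros((L, L))
    for i in range(L - 1):
        h[i, i+1] = h[i+1, i] = -1.0
    eps, U = np.linalg.eigh(h)
    occ = U[:, eps < 0]               # fill all negative-energy orbitals
    return occ @ occ.T

def entropy(C):
    """Entanglement entropy of the region encoded in the correlator block C,
    via the correlation-matrix (free-fermion) formula."""
    lam = np.linalg.eigvalsh(C)
    lam = lam[(lam > 1e-12) & (lam < 1 - 1e-12)]
    return -np.sum(lam*np.log(lam) + (1 - lam)*np.log(1 - lam))

C = correlation_matrix(64)
i, j = 0, 1
S_i, S_j = entropy(C[[i]][:, [i]]), entropy(C[[j]][:, [j]])
S_ij = entropy(C[np.ix_([i, j], [i, j])])
I_ij = S_i + S_j - S_ij               # mutual information entering the distance
print(0.0 < I_ij <= 2*np.log(2))
```

For spinless sites $D=2$, so $I_{\bf xy}$ is bounded by $2\log 2$, consistent with the bound quoted above.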
Now consider the unitary transformation $U$ given by the single-particle basis rotation $$\left(\begin{array}{c}a_i\\ b_i\end{array}\right)=\frac1{\sqrt 2}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}c_{2i-1}\\ c_{2i}\end{array}\right)\ \[holomapping\]$$ The spin index is omitted, since it is preserved by this transformation. The mapping preserves the quadratic nature of the Hamiltonian, and breaks translation symmetry by doubling the unit cell. The Hamiltonian in the transformed basis is $$H=H_a+H_b+H_{\rm int}$$ with $$H_a=\frac12\sum_ka_{k}^\dagger\left[\sigma_x\sin k+\sigma_y\left(2m+B(1-\cos k)\right)\right]a_{k}$$ $$H_b=\frac12\sum_kb_{k}^\dagger\left[-\sigma_x\sin k+\sigma_y\left(2m+B(3+\cos k)\right)\right]b_{k}$$ $$H_{\rm int}=\frac12\sum_k\left\{a_k^\dagger\left[i\sigma_x(1-\cos k)-iB\sigma_y\sin k\right]b_k+{\rm h.c.}\right\}$$ For the critical point at $m=0$, we see that $H_a$ has the same form as the original Hamiltonian except for a rescaling of the bandwidth by $\frac 12$. Since $H_a$ is the input for the next layer, the low-energy Hamiltonians of the auxiliary degrees of freedom in each layer are all related by a rescaling. The same holds for the bulk Hamiltonian $H_b$. In this sense the Hamiltonian of the low-energy degrees of freedom $a_k$ is at the "fixed point" of the EHM defined by $U$. Iterating the transformation above, $$\left(\begin{array}{c}a^{(n+1)}_i\\ b^{(n+1)}_i\end{array}\right)=\frac1{\sqrt 2}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}a^{(n)}_{2i-1}\\ a^{(n)}_{2i}\end{array}\right),~~a^{(0)}_i=c_i\ \[holomapping2\]$$ leads to a bulk Hamiltonian $H_b=\sum_{\bf x,y}b_{\bf x}^\dagger h_{\bf xy}b_{\bf y}$.[^1] To distinguish bulk and boundary we will use $i$ to label boundary sites and ${\bf x}=(x,n)$ to label bulk sites. Here $n$ labels the layer index and $x$ labels the sites in each layer. The bulk operators are related to the boundary ones by a unitary transformation $$b_{\bf x}=\sum_i\phi_i^*({\bf x})c_i$$ The detailed expression of the matrix element $\phi_i^*({\bf x})$ is given in the appendix. The basis wavefunctions $\phi_i^*({\bf x})$ are in fact a known basis, the Haar wavelets[@haar1910]. (It is interesting to note that wavelets have been applied to the renormalization group, which might be considered as a classical analog of the EHM approach.[@battle1992; @best2000]) To understand the properties of the bulk theory, we study the bulk correlation functions.
For the free-fermion system studied here, we can use Wick's theorem to determine all correlation functions from the single-particle Green's function $$G_{{\bf xy}\alpha\beta}(\tau)=\left\langle T_\tau b_{{\bf x}\alpha}(\tau)b_{{\bf y}\beta}^\dagger(0)\right\rangle=\sum_k\phi_k^*({\bf x})\phi_k({\bf y})\left\langle T_\tau c_{k\alpha}(\tau)c_{k\beta}^\dagger(0)\right\rangle\ \[Greens\]$$ with $\phi_k({\bf x})$ the Fourier transform of $\phi_i({\bf x})$. We first study the spatial distance defined by the mutual information, Eq. (\[distancedef\]). The entropy $S_{\bf x}$ of a free-fermion state is determined by the formula[@peschel2003] $$S_{\bf x}=-{\rm tr}\left[G_{\bf xx}\log G_{\bf xx}+\left(1-G_{\bf xx}\right)\log\left(1-G_{\bf xx}\right)\right]$$ with $G_{\bf xx}=\left[G_{{\bf xx}\alpha\beta}(\tau\rightarrow 0^+)\right]$ the equal-time single-particle correlation matrix at site ${\bf x}$. The entropies of ${\bf y}$ and ${\bf xy}$ are computed in the same way. Fig. \[MIcritical\] (a) and (b) show the spatial distance between two points with the same $n$ and between two points with the same $x$. We see that the distance $d_{{\bf x}{\bf y}}$ scales like $$d_{(x,n),(y,n)}\propto\log\left|x-y\right|,~~d_{(x,n),(x,m)}\propto\left|n-m\right|$$ This is consistent with the geodesic distance between two points in AdS space in the limit $d\gg R$. To make a closer comparison, consider the metric of Euclidean AdS$_{2+1}$: $$ds^2=\left(1+\frac{\rho^2}{R^2}\right)dt^2+\frac{d\rho^2}{1+\rho^2/R^2}+\rho^2d\theta^2$$ Here $\theta\in[0,2\pi)$ is an angle variable. To compare the discrete network with the AdS space, we notice that the perimeter of the $n$-th layer is $2^{N-n}$, which should be identified with $2\pi \rho$. Therefore the point $(x,n)$ corresponds to $$\rho=\frac{2^{N-n}}{2\pi},~~\theta=\frac{2\pi\left(x+1/2\right)}{2^{N-n}}\ \[coordinatedef\]$$ in the AdS coordinates. In the expression for $\theta$ we have introduced a constant shift such that a point in the $n+1$-th layer sits in the middle of two sites of the $n$-th layer, as is shown in Fig. \[fig1\]. The AdS geodesic distance between two such points is $$d_{(x,n),(y,n)}=R\,{\rm arccosh}\left[1+\frac{\rho^2}{R^2}\left(1-\cos\theta_{xy}\right)\right]\simeq 2R\log\frac{2\rho\sin\left(\theta_{xy}/2\right)}{R},~~\theta_{xy}=\theta_x-\theta_y\ \[horizontald\]$$ By fitting formulas (\[horizontald\]) and (\[distancedef\]) we obtain $R\simeq 0.33$ and $\xi\simeq 0.11$, as is shown in Fig. \[MIcritical\] (a). Using this value of $R$ we can compute the distance $d_{(0,1),(0,n)}$ between two points separated vertically. As is shown in Fig.
\[MIcritical\] (b), the mutual information between the two sites $(0,1)$ and $(0,n)$ decays exponentially with their distance, although the slope gives a different correlation length $\xi$. The fact that $R<1$ tells us that the network can only characterize the large-scale geometry of AdS space, at length scales much larger than $R$. ![Distance between two points at equal time but different spatial locations ((a) and (b)), and between two points at the same spatial location but different times ((c) and (d)), for the critical system $T=m=0$. (a) and (b) show the distance between two points separated along the horizontal and the vertical direction, respectively. The $x$ axis is the geodesic distance between the two points in AdS space. The AdS radius $R$ and correlation length $\xi$ are obtained from the fit in (a). (c) and (d) show the distance between two points at the same spatial location with time difference $\tau$. This fit yields values of $R$ and $\xi$ independent of those extracted from the spatial correlation functions. In all panels, the circles are the numerical results and the lines are fits to the AdS geodesic distance. All numerical calculations in Secs. \[sec:freefermion\] and \[sec:finiteTm\] are done for a chain with $2^{17}$ sites.[]{data-label="MIcritical"}](figMIa.pdf "fig:"){width="1.8in"} ![](figMIb.pdf "fig:"){width="1.8in"} ![](figTauc.pdf "fig:"){width="1.8in"} ![](figTaud.pdf "fig:"){width="1.8in"} In addition to the spatial geometry, we can also use time-ordered correlation functions to study the space-time geometry. In principle one should use a properly defined upper bound of all correlation functions for a given pair of space-time points; however, such a generalization of the mutual information to space-time points is not known to us. We therefore consider the time-ordered single-particle Green's function $$C_{\bf x}(\tau)=\sum_\alpha G_{{\bf xx}\alpha\alpha}(\tau),\ \[TimeCor\]$$ with $G_{{\bf xy}\alpha\beta}$ defined in Eq. (\[Greens\]). For simplicity we work in imaginary time. The time-direction distance is defined by the asymptotic behavior of $C_{\bf x}(\tau)$: $$C_{\bf x}(\tau)=C_0e^{-d_{({\bf x},\tau),({\bf x},0)}/\xi}\ \[distancedef2\]$$ The distance defined by this equation can be compared with the geodesic distance in AdS space, $$d_{({\bf x},\tau),({\bf x},0)}=R\,{\rm arccosh}\left[\left(1+\frac{\rho^2}{R^2}\right)\cosh\frac{\tau}{R}-\frac{\rho^2}{R^2}\right]$$ Fitting this formula gives an independent determination of $R$ and $\xi$. As is shown in Fig. \[MIcritical\] (c) and (d), the numerical results fit well with $R\simeq 0.34$, close to the $R$ obtained from the spatial correlation functions. However, $\xi$ differs by roughly a factor of $2$. Such an anisotropy between the space and time directions is a consequence of the difference between the two distance definitions (\[distancedef\]) and (\[distancedef2\]), and of the different treatment of space and time in the EHM.
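The AdS fitting formulas used here are straightforward to tabulate. The sketch below implements them under our reconstruction of the coordinate identification $\rho=2^{N-n}/2\pi$, $\Delta\theta=2\pi(x-y)/2^{N-n}$ (the function names and parameter choices are ours), and checks the characteristic $2R\log|x-y|$ growth of the equal-time distance:

```python
import numpy as np

def ads_spatial_distance(x, y, n, N, R):
    """AdS_{2+1} geodesic distance between equal-time points (x,n) and (y,n),
    using rho = 2**(N-n)/(2*pi) and dtheta = 2*pi*(x-y)/2**(N-n)."""
    rho = 2**(N - n) / (2*np.pi)
    dtheta = 2*np.pi*(x - y) / 2**(N - n)
    return R*np.arccosh(1 + (rho/R)**2 * (1 - np.cos(dtheta)))

def ads_temporal_distance(tau, n, N, R):
    """Geodesic distance between (rho, 0) and (rho, tau) at the same angle."""
    rho = 2**(N - n) / (2*np.pi)
    return R*np.arccosh((1 + (rho/R)**2)*np.cosh(tau/R) - (rho/R)**2)

# doubling |x - y| in the log regime adds ~ 2 R log 2 to the distance
d1 = ads_spatial_distance(0, 100, 1, 17, 0.33)
d2 = ads_spatial_distance(0, 200, 1, 17, 0.33)
print(abs((d2 - d1) - 2*0.33*np.log(2)) < 1e-3)
```

The values $N=17$ and $R=0.33$ match the chain size and fitted AdS radius quoted in the text.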
Effect of finite temperature and finite mass {#sec:finiteTm} ============================================ A main advantage of the EHM is that we can go beyond the scale-invariant case and describe the space-time metric corresponding to a non-critical system, without having to adjust the network itself. We can apply the same mapping $M$ to a different boundary system, which then leads to a bulk system with different correlation functions. If we still define the bulk geodesic distance using the correlation functions in Eqs. (\[distancedef\]) and (\[distancedef2\]), we obtain a bulk geometry different from the AdS space. The two simplest ways to drive the system away from criticality are a finite mass $m\neq 0$ and a finite temperature $T>0$. Finite $T$ system with $m=0$ ---------------------------- We first study the system with $m=0$ and finite temperature $T>0$. The space and (Euclidean) time direction distances are computed in the same way as in the zero-temperature case, as is shown in Fig. \[fig:MIfiniteT\]. In the spatial direction, the distance between two sites $(x,n)$ and $(y,n)$ as a function of the coordinate distance $|x-y|$ shows a crossover from $\propto\log\abs{x-y}$ (the zero-temperature behavior) at short range to linear $\propto \abs{x-y}$ at long range, as is shown in Fig. \[fig:MIfiniteT\] (a). The inset shows the ratio $I_{\bf xy}(T)/I_{\bf xy}(0)$ between the finite-temperature and zero-temperature mutual information, which displays a crossover from $1$ (green region) to $0$ (blue region). This is qualitatively consistent with the behavior of the geodesic distance in a BTZ black-hole geometry. The black-hole metric in Euclidean time is given by $$ds^2=\frac{\rho^2-b^2}{R^2}dt^2+\frac{R^2}{\rho^2-b^2}d\rho^2+\rho^2d\theta^2$$ with $b$ the black-hole (horizon) radius. The distance between two points $(x,n)$ and $(y,n)$ at the same time is $$d_{(x,n),(y,n)}=2R\,{\rm arcsinh}\left[\frac{\rho}{b}\sinh\frac{b\,\theta_{xy}}{2R}\right],~~\theta_{xy}=\frac{2\pi(x-y)}{2^{N-n}}$$ However, it is difficult to fit the numerical results with this formula because we cannot assume $\rho=2^{N-n}/2\pi$ any more.
In the critical system, scale invariance dictates that $\rho$ scale in the same way as the lattice perimeter $2^{N-n}$, but at finite temperature $\rho$ is not determined [*a priori*]{}. To obtain $\rho$ we numerically calculate the distance in the time direction, $d_{({\bf x},\tau),({\bf x},0)}$, and fit it with the analytic geodesic distance $$d_{({\bf x},\tau),({\bf x},0)}=R\,{\rm arccosh}\left[\frac{\rho^2}{b^2}-\frac{\rho^2-b^2}{b^2}\cos\frac{2\pi\tau}{\beta}\right]\ \[geodesicT\]$$ with $\beta=1/T$. From the fit (Fig. \[fig:MIfiniteT\] (b)) we obtain the parameters $R,\xi,b$, and also the radius $\rho$ as a function of the vertical coordinate $n$, which is shown in Fig. \[fig:MIfiniteT\] (c). Interestingly, $\rho$ approaches $b$ exponentially in the IR limit. Physically, this behavior reflects the fact that in the IR limit (large $n$) the bandwidth of the bulk states decays exponentially, so that the time-direction correlation length increases exponentially. In other words, the time-direction correlation function decays more and more slowly at large $n$, as is shown in Fig. \[fig:MIfiniteT\] (c). ![(a) Spatial distance $d_{12}$ between two points $(j_1,n)$ and $(j_2,n)$ for different $n$. The distance increases with increasing $n$. The inset is a colorplot of the ratio $I_{12}(T)/I_{12}(0)$ of the mutual information of the finite-temperature system to that of the critical system, as a function of the horizontal and vertical coordinates. (b) Temporal distance $d_n(\tau)$ between two points $(j,n,0)$ and $(j,n,\tau)$ for different $n$. The distance decreases with increasing $n$. The dashed lines are fits to the analytic formula (\[geodesicT\]). (c) The radius $\rho$ as a function of $n$, obtained from the fit (red line with circles). The blue dashed line labels $\rho=b$ and the black dotted line shows the zero-temperature value $\rho=2^{N-n}/2\pi$. (d) Entropy per site as a function of $n$ at finite temperature (red circles) and zero temperature (blue line). The black dotted line shows the maximal entropy value $2\log 2$. All calculations are done for $T=0.005$. []{data-label="fig:MIfiniteT"}](finiteTa.pdf "fig:"){width="1.8in"} ![](finiteTb.pdf "fig:"){width="1.8in"} ![](finiteTc.pdf "fig:"){width="1.8in"} ![](finiteTd.pdf "fig:"){width="1.8in"} The behavior of $\rho$ tells us that the IR region of the network now maps to the near-horizon region of the BTZ black-hole geometry. This is an important difference from the MERA case, where a state with finite correlation length is simulated by a network truncated at finite depth[@evenbly2011]. An interesting relation to black-hole physics is obtained by studying the entanglement entropy of each bulk site with the rest of the system. Due to translation symmetry, $S_{(x,n)}=S_n$ is a function of the vertical coordinate $n$ only. The entropy $S_n$ for both $T=0$ and $T=0.005$ is shown in Fig. \[fig:MIfiniteT\] (d).
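The crossover from logarithmic to linear growth of the spatial distance can be checked directly against the BTZ geodesic formula. The snippet below is our reading of the $2R\,{\rm arcsinh}$ expression quoted in the text (the parameter values are arbitrary illustrations, not fitted ones); in the regime $b\,\Delta\theta\gg R$ the distance grows linearly with slope $b$:

```python
import numpy as np

def btz_spatial_distance(dtheta, rho, R, b):
    """Equal-time geodesic distance in the Euclidean BTZ geometry between two
    points at radius rho separated by angle dtheta, with horizon radius b."""
    return 2*R*np.arcsinh((rho/b)*np.sinh(b*dtheta/(2*R)))

R, b, rho = 0.33, 1.0, 1e4
# b*dtheta << R: AdS-like regime, distance is logarithmic in the separation
# b*dtheta >> R: linear regime, distance grows with slope b per unit dtheta
slope = btz_spatial_distance(6.0, rho, R, b) - btz_spatial_distance(5.0, rho, R, b)
print(abs(slope - b) < 0.01)
```

Taking $b\to 0$ at fixed $\rho$ recovers the pure-AdS logarithmic behavior, consistent with the zero-temperature limit.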
From this result we see that the entropy per site quickly approaches the maximal value $2\log 2$: each bulk site, except those near the boundary, is maximally entangled with the other sites, although the mutual information shows that the entanglement is only with nearby sites. It should be noticed that such a maximal entropy also shows that the state at the boundary is far from a MERA state defined by the same unitary mapping, since the latter would have a direct-product bulk state with $S_n=0$ at each site. At finite $T$, the IR region has a very long time-direction correlation length and a very short spatial correlation length, which is interpreted as the neighborhood of the black-hole horizon. The maximal entropy carried by each site in this region can be considered as the origin of the black-hole Bekenstein-Hawking entropy. Finite mass system with $T=0$ ----------------------------- Now we study the system with finite mass $m\neq 0$ and temperature $T=0$, which leads to a different bulk space-time with a characteristic scale set by $m$. As is shown in Fig. \[fig:MIfinitem\] (a), the spatial distance behaves similarly to that of the $T>0,~m=0$ case. In both cases, the spatial geometry has an IR cutoff scale, so that the distance between two points $(x,n)$ and $(y,n)$ interpolates between the $\log\abs{x-y}$ AdS behavior and the Euclidean $\abs{x-y}$ behavior. However, the time-direction distance clearly distinguishes the two systems. As is shown in Fig. \[fig:MIfinitem\] (b), in the long-time limit the distance $d_{({\bf x},\tau),({\bf x},0)}$ along the time direction increases linearly in $\tau$. This is simply a consequence of the exponential decay of correlation functions controlled by the mass $m$. In the IR region, the spatial correlations become very short-ranged, but the time-direction correlation length remains finite.
Compared with the finite temperature case, one can see that this space-time has a spatial cut-off scale, but at the cut-off scale (“end of the space”) the time direction remains finite. In other words, the different behavior of the IR region at finite $T$ and at finite $m$ shows that the IR boundary of the space-time is a light-like surface (the black hole horizon) in the former case, and a time-like surface in the latter case. Because the choice of a space-time with such an IR boundary is not unique, we have not fit the numerical results to a specific geometry. For example, one natural candidate metric to compare with is the confined space-time proposed in Ref. [@witten1998]. ![(a) Spatial distance $d_{12}$ between two points $(j_1,n)$ and $(j_2,n)$ for different $n$. The distance increases with increasing $n$. The inset is a colorplot of the ratio of the mutual information $I_{12}(m)$ of the massive system to $I_{12}(0)$ of the critical system, as a function of the horizontal and vertical coordinates. (b) Temporal distance $d_n(\tau)$ between two points $(j,n,0)$ and $(j,n,\tau)$ for different $n$. The distance decreases with increasing $n$. (c) Long-time behavior of $d_n(\tau)$ as a function of $n$ for $\tau=1580\gg 1/m$. (d) Entropy per site as a function of $n$, for the finite-mass system (red line with circles) and the massless system (blue dashed line). The black dotted line shows the maximal entropy value $2\log 2$. All calculations are done for $m=0.005, T=0$.[]{data-label="fig:MIfinitem"}](finiteMa.pdf "fig:"){width="1.8in"} ![](finitemb.pdf "fig:"){width="1.8in"} ![](finitemc.pdf "fig:"){width="1.8in"} ![](finiteMd.pdf "fig:"){width="1.8in"} Besides the time-direction metric, another interesting difference between the finite mass and finite temperature systems is the behavior of the bulk entanglement entropy. As shown in Fig. \[fig:MIfinitem\] (d), the entropy of each bulk site $S_{\bf x}=S_n$ in the UV region behaves similarly to that of the critical system (and the finite-$T$ zero-mass system), but in the IR region the entropy is suppressed. Physically this is a consequence of the fact that the mass remains constant during the mapping while the bandwidth decays exponentially in the IR limit. From the geometric point of view, this is again consistent with the fact that the finite-mass space-time terminates “smoothly” and there is no entropy accumulated in the neighborhood of the cut-off scale, in contrast to the finite-$T$ case. Worm-hole geometry and quantum quench process ============================================= One advantage of the EHM approach is that it can be applied to generic boundary states, so that it can also characterize time-dependent processes. As an interesting example, we study the quantum quench process in two coupled chains, which is mapped to a “worm-hole” geometry with two asymptotic AdS regions. ![(a) Schematic picture of the wormhole geometry. The green line illustrates a geodesic path between the two end points. (b) Distance $d_{n}^{12}$ between two sites with coordinate $(j,n)$ in the two layers. The main figure and the inset show the distance in linear scale and log scale, respectively. All the results in this section are for $2^{16}$ sites and $\lambda=0.05$. []{data-label="fig:bilayerMI"}](double1a.pdf "fig:"){width="1.8in"} ![](double1b.pdf "fig:"){width="1.8in"} Consider the Hamiltonian $$H=\sum_k\left(\begin{array}{cc}c_{k1}^\dagger & c_{k2}^\dagger\end{array}\right)\left(\begin{array}{cc}h_k&\lambda{\mathbb{I}}\\ \lambda{\mathbb{I}}&h_k\end{array}\right)\left(\begin{array}{c}c_{k1}\\ c_{k2}\end{array}\right)$$ with $h_k=\sigma_x\sin k+B(1-\cos k)\sigma_y$ the Hamiltonian of a critical chain. The $\lambda$ term is a hopping that couples the two chains. We apply the holographic mapping (\[holomapping\]) independently on the two chains. For $\lambda =0$ we obtain two decoupled AdS-like spaces, as analyzed above. For $\lambda \neq 0$, entanglement occurs between the two chains in the ground state. Consequently, the mutual information between the two corresponding bulk spaces is non-vanishing. According to our definition of distance (\[distancedef\]), this means that the distance between points in these two spaces is finite, [*i.e.*]{}, the bulk corresponding to the two coupled chains is now a connected topological space. To understand the bulk geometry we consider the distance between the sites $(j,n)$ in the two layers. The corresponding annihilation operators $b_{(j,n)}^{1,2}$ are superpositions of $c_{i1}$ and $c_{i2}$, respectively. The numerical results for the distance $d_{n}^{12}$ are shown in Fig. \[fig:bilayerMI\]. Here we define $d_n^{12}=\log \frac{I_{\rm max}}{I_{(j,n)}^{12}}$, with $I_{(j,n)}^{12}$ the mutual information between the two sites at position $(j,n)$, and $I_{\rm max}$ the maximal possible mutual information between two sites, $I_{\rm max}=2\,{\rm max}\,S_{\bf x}=4\log 2$. This choice of $I_{\rm max}$ means that we define the distance between two maximally entangled sites to be zero. From Fig. \[fig:bilayerMI\] one can see that in the UV limit (small $n$) the distance between the two sites scales linearly with $n$, while in the IR limit the distance decays exponentially.
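The mutual-information distance used here can be computed explicitly for any free-fermion (Gaussian) state from its two-point correlation matrix, since the entanglement entropy of a set of sites is fixed by the eigenvalues of the correlation matrix restricted to those sites. The following Python sketch illustrates the procedure on the ground state of a single half-filled tight-binding chain; the chain, sites, and separations are illustrative choices, not the parameters used in the text:

```python
import numpy as np

def entropy(C_block):
    """Entanglement entropy of a fermionic Gaussian state restricted to the
    modes of the correlation-matrix block C_block (natural logs)."""
    lam = np.clip(np.linalg.eigvalsh(C_block), 1e-12, 1 - 1e-12)
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))

# Ground state of a half-filled open tight-binding chain of L sites:
L = 32
H = -(np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))
eps, U = np.linalg.eigh(H)
occ = U[:, eps < 0]            # occupied single-particle orbitals
C = occ @ occ.conj().T         # C_ij = <c_i^dagger c_j>

def mutual_info(i, j):
    """I_ij = S_i + S_j - S_ij for single sites i and j."""
    return (entropy(C[np.ix_([i], [i])]) + entropy(C[np.ix_([j], [j])])
            - entropy(C[np.ix_([i, j], [i, j])]))

# d_ij = log(I_max / I_ij); here each site is a single mode, so
# I_max = 2 log 2 (the text's two-component sites give 4 log 2 instead).
I_max = 2 * np.log(2)
d = [np.log(I_max / mutual_info(15, 15 + r)) for r in (1, 3, 7)]
assert d[0] < d[2]             # mutual information decays, distance grows
```

The same routine, applied to the two-chain ground state, yields the $d_n^{12}$ curves discussed above.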
The (exponentially) vanishing distance suggests that the two UV regions are connected by a worm-hole, similar to the one obtained from analytic continuation of a black hole.[@israel1976] ![(a) Distance between two sites at position $(j,n)$ in the two layers as a function of radial coordinate $n$ and time. (b) The distance between two sites at the first layer $(j,1)$ as a function of $\log(t)$. (c) The entanglement entropy of the two sites at $(j,n)$ with the rest of the system as a function of radial coordinate $n$ and time. The blue region in (a) and (c) is the wormhole, where both the distance between the two sites and the net entropy of the two sites are exponentially small. []{data-label="fig:quench"}](double2a.pdf){width="3.1in"} ![](double2b.pdf){width="2.7in"} ![](double2c.pdf){width="3.1in"} In a recent work[@hartman2013], the behavior of quantum entanglement was studied in a quantum quench problem with the wormhole geometry as the initial state. We can consider a corresponding quantum quench problem in our system by turning off the coupling $\lambda$ at time $0$. Before the quench, the system is in the ground state $\ket{G(\lambda)}$ at finite $\lambda$. After the quench, $\lambda=0$ and the time evolution of the two chains is independent. The single-particle Green’s function can be obtained as $$G_{{\bf xy}p\alpha,q\beta}=\sum_k\phi_{\bf x}^*(k)\phi_{\bf y}(k)\left\langle c_{kp\alpha}(t_1)c_{kq\beta}^\dagger(t_2)\right\rangle,\qquad c_{kp}(t_1)=e^{-ih_kt_1}c_{kp}(0).$$ \[Greensdouble\] Here $p,q=1,2$ label the layers and $\alpha,\beta$ label the spin. It should be noticed that real time evolution, instead of imaginary Euclidean time, is considered here. Using the single-particle Green’s function we can define the space-time metric by Eqs. (\[distancedef\]) and (\[distancedef2\]) in the same way as in the static case. As an example of the correlation function, we study the time evolution of the distance between the two points at site ${\bf x}=(j,n)$. At time $t=0$ the distance $d_n^{12}(0)$ gives the worm-hole geometry shown in Fig. \[fig:bilayerMI\]. The time evolution $d_n^{12}(t)$ is shown in Fig. \[fig:quench\] (a). After the quench, the worm-hole shrinks quickly and then expands again. The shrinking of the wormhole corresponds to a thermalization of the excitations created by the quench, and the re-expansion of the wormhole is a dethermalization process. In a generic system this should only occur after a Poincaré recurrence time, which is exponentially long in the system size, but in the free electron system it occurs after a time $T_P=L/2v$, with $L=2^N$ the system size and $v$ the speed of light of the system.[@takayanagi2010] In our model $v=1$ and $T_P=2^{N-1}$. To see the time evolution clearly, in Fig.
\[fig:quench\] we change the time variable $t$ to $f=\log\frac{t}{T_P-t}$. For $t\ll T_P$, $f\simeq \log \frac{t}{T_P}$, so that Fig. \[fig:quench\] (a) tells us that the decrease of the wormhole size (blue region) is proportional to $\log t$. We also studied the distance between two points close to the boundary at $n=1$, which is also proportional to $\log t$. This result is different from the observation of Ref. [@hartman2013], where the geodesic distance between two boundary points increases linearly in $t$.[@maldacenaprivate] (The area of the minimal surface connecting two boundary regions is studied there, and for AdS$_{2+1}$ the minimal surface reduces to the geodesic line.) Physically, the linear-in-$t$ increase of distance in Ref. [@hartman2013] corresponds to an exponential decay of mutual information between the two points, while the $\log t$ dependence we obtain corresponds to a $1/t$ dependence of the mutual information. This difference possibly arises from the following difference between a free fermion theory and an interacting theory. In a free fermion system a single-particle excitation propagates in space but remains a single particle, while in an interacting theory the particle can decay into multiple other particles. Therefore, in the free fermion theory the mutual information between the two bulk sites “propagates” into a region of size $t$ (the speed of light is taken to be $1$); in other words, the remaining mutual information is proportional to $1/t$. In contrast, in an interacting theory the mutual information can “propagate” to any of the many-body states in the region of size $t$. Since there are $D^t$ states in this region, with $D=4$ the number of states at each site, the remaining mutual information is estimated as $D^{-t}$.
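The two decay laws translate directly into the two growth laws of the distance $d=\log(I_{\rm max}/I)$. A two-line numerical check (with purely illustrative prefactors) makes this explicit:

```python
import numpy as np

t = np.arange(1.0, 201.0)
I_max = 4 * np.log(2)            # maximal mutual information of two bulk sites

# Free fermions: I(t) ~ I_max / t  ->  d(t) = log(I_max / I) = log t
d_free = np.log(I_max / (I_max / t))
# Interacting theory: I(t) ~ I_max * D^{-t} with D = 4  ->  d(t) = t log D
d_int = np.log(I_max / (I_max * 4.0 ** (-t)))

assert np.allclose(d_free, np.log(t))      # logarithmic growth
assert np.allclose(d_int, t * np.log(4))   # linear growth
```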
Such a difference provides an example in which the geodesic distance we define by mutual information is inconsistent with the minimal surface area required by the Ryu-Takayanagi formula of entanglement entropy, although in the simpler case of a single chain they qualitatively agree. This is probably related to the fact that the distance defined by Eq. (\[distancedef\]) is generically different from the geometrical distance of the “classical” network we use to define $M$. More discussion about this will be given in Sec. \[sec:generals\]. Another quantity we calculate is the entanglement entropy of the two sites at $(j,n)$ with the other sites in the bulk. At $t=0$, in the IR region the two sites at $(j,n)$ are almost maximally entangled with each other, and the net entropy of the two sites $S^{12}_{(j,n)}$ almost vanishes. After the quench, the entanglement starts to delocalize, and the net entropy of the two sites increases quickly. As is shown in Fig. \[fig:quench\] (c), entropy is filled into the infrared region when the wormhole shrinks. During the dethermalization period, the entropy is removed. Some more general analysis of EHM ================================= In the three sections above, we have restricted our discussion to EHM in free fermion systems, for which the bulk properties can be computed exactly. The EHM can in principle be applied to more generic interacting systems, although the bulk or boundary properties cannot then be computed exactly in general. However, one can still understand some generic properties of the EHM, which we discuss in the following. Causal cone structure --------------------- An important feature of the MERA ansatz state is the existence of a causal cone structure[@vidal2008; @evenbly2013].
To compute the reduced density matrix of a boundary region for a MERA state (which determines all the physical observables in that region, such as the average energy), one does not need the information of the whole network, but only the network in a bulk region called the causal cone. The causal cone contains only $\sim \log L$ sites when the boundary system has $L$ sites. The causal cone structure is essential for the efficient calculation of physical quantities in the MERA state. Since the EHM is an exact mapping, the causal cone structure of the special MERA states does not directly apply. However, there is a generalized causal cone structure, as illustrated in Fig. \[fig:causalcone\]. Each tensor stands for the unitary transformation $U$ which maps the two incoming indices to one outgoing auxiliary index and one bulk index (blue line with a solid circle). One can draw a bulk region that has $A$ as its boundary and has only incoming arrows crossing it. For such a region, the inverse of the EHM maps the bulk states in this region to boundary degrees of freedom in $A$ and auxiliary degrees of freedom, without relying on any degrees of freedom outside this region. The causal cone $C_A$ is defined as the minimal one among all such bulk regions. Consider a boundary state $\ket{\Phi}$ which is related to a bulk state $\ket{\Psi}$ by the EHM, $\ket{\Phi}=M^{-1}\ket{\Psi}$. Now we want to obtain the reduced density matrix of a region $A$ on the boundary. $M$ consists of a sequence of unitary transformations. As is illustrated in Fig.
\[fig:causalcone\] (b) and (c), all the transformations outside the causal cone cancel each other in the partial trace, so that $\rho_A={\rm tr}_{\bar{A}}\ket{\Phi}\bra{\Phi}$ is determined by the reduced density matrix of the bulk state in the causal cone $C_A$: $$\rho_{C_A}={\rm tr}_{\overline{C_A}}\ket{\Psi}\bra{\Psi},\qquad \rho_{A}={\rm tr}_{\rm aux.}\left[M\kc{C_A}\rho_{C_A}M\kc{C_A}^\dagger\right].$$ Here $\bar{A}$ is the complementary set of $A$ in the boundary, and $\overline{C_A}$ is that of $C_A$ in the bulk. $M\kc{C_A}$ is the product of unitary transformations in the causal cone, which maps the bulk states in $C_A$ to auxiliary sites and boundary sites in $A$. The density matrix of $A$ is obtained by tracing out the auxiliary sites. ![image](causalcone1.jpg){width="5in"} ![image](causalcone2.jpg){width="5in"} Therefore we see that the computation of the boundary reduced density matrix still involves only $\sim \log L$ bulk sites. In general the bulk reduced density matrix $\rho_{C_A}$ cannot be obtained, which prevents us from making use of the causal cone structure. However, if we take suitable ansatz states in the bulk, such as a free fermion state or a tensor product state (TPS), it is possible to obtain $\rho_{C_A}$ and calculate the boundary reduced density matrix. It should be noticed that the boundary state can be interacting even if the bulk state is a free fermion state, if the mapping $V$ at each vertex of the network does not preserve the quadratic nature of the Hamiltonian. From this point of view, EHM with short-range entangled bulk states can be taken as a larger class of variational states, which generalizes MERA states and in general allows a better approximation to the boundary ground state. Comparison of EHM and the ordinary AdS/CFT duality -------------------------------------------------- A natural question is the relation of EHM to the ordinary AdS/CFT duality.
In particular, it has been proposed[@klebanov2002; @sezgin2005] that a free boson or free fermion $O(N)$ vector model is dual to the Vasiliev theory[@vasiliev1992], which contains interactions between an infinite number of high-spin fields, one at each spin. (For real bosons (Majorana fermions), only even-spin (odd-spin) fields are present.) If we believe that the free fermion in the continuum can be viewed as a continuum limit of the lattice Dirac fermion studied in this paper, there appears to be a contradiction, since EHM leads to a free fermion theory in the bulk rather than Vasiliev theory. One possible explanation of this contradiction is that there is no continuum limit of the bulk theory we obtained by EHM. However, there seems to be no principle that excludes the analog of EHM in continuum systems. Assuming such a mapping for the free fermion can be found, it will be a unitary transformation of the fermion field $$\eta_a\kc{X}=\int d^dy\,M(X|y)\psi_a(y).$$ \[continuousEHM\] Here $\psi_a(y)$ is a boundary fermion field and $\eta_a\kc{X}$ is a bulk fermion field, with $a=1,2,...,N$ an $O(N)$ index. $X$ and $y$ are bulk and boundary coordinates, respectively. The space of arbitrary field configurations on AdS$_{d+1}$ is much higher dimensional than that on R$^d$, but it may be possible to define a unitary mapping between the field configurations with a suitable UV cut-off, as is indicated by the lattice EHM. Assuming such a mapping is possible, we can obtain a quadratic fermion action $S_{\rm bulk}\kd{\eta_a,\bar{\eta}_a}$ in the bulk from the quadratic action of the boundary fermion $\psi_a(y)$. It should be noticed that the bulk theory obtained in this way contains all states of the free fermion system on the boundary, including the states that are not $O(N)$ invariant. This is the key difference from the Vasiliev theory, which only contains fields that correspond to $O(N)$ invariant single-trace operators on the boundary.
If we introduce the most generic $O(N)$ invariant single-trace source term for the bulk fermion, we can define $$Z\kd{J}=\int D\eta D\bar{\eta}\,e^{-S_{\rm bulk}\kd{\eta_a,\bar{\eta}_a}+\int dXdX'\,\bar{\eta}_a\kc{X}J\kc{X,X'}\eta_a\kc{X'}}.$$ This defines the action of the bilocal field $J\kc{X,X'}$ by $$S_{\rm eff}\kd{J}=-\log Z\kd{J}.$$ \[Seffbilocal\] This effective action encodes all $O(N)$ invariant correlation functions of the bulk fermion (and thus the boundary fermion). Although we have not worked out the procedure sketched above explicitly, we would like to make the conjecture that for a suitable choice of the mapping $M(X|y)$ defined in Eq. (\[continuousEHM\]), the action (\[Seffbilocal\]) reproduces the Vasiliev theory when the bilocal field $J\kc{X,X'}$ is expanded into different spin components. Physically, this conjecture means that the strong interaction in Vasiliev theory comes from the simple fact that we insist on studying the $O(N)$ singlet sector of a free fermion or free boson theory, while the well-defined propagating modes in this system are $O(N)$ vectors. This is similar to what happens when one tries to describe the particle-hole excitations of a fermion system in space-time dimensions higher than $2$: when there is no well-defined collective mode (such as spin waves), one ends up with a large number of boson fields interacting with each other. Another theory that EHM should be compared with is the theory of S.-S. Lee[@lee2010; @lee2011; @lee2012], which also constructs the bulk theory by modifying an RG procedure of the boundary theory. Similar to Vasiliev theory, what is obtained for the $O(N)$ vector model in Ref. [@lee2010] is an interacting theory that describes the $O(N)$ singlet sector. Some more thoughts on the space-time geometry {#sec:generals} --------------------------------------------- In the approach so far, we have considered a fixed tree-like background network, and defined a distance on this network through two-point correlation functions. There are apparently many open questions in this scheme. Take the spatial distance defined in Eq. (\[distancedef\]) as an example.
To make this distance $d_{\bf xy}=-\xi \log\frac{I_{\bf xy}}{I_0}$ a legitimate distance function, the triangle inequality needs to be satisfied, which means the mutual information should satisfy $$I_{\bf xy}I_{\bf yz}\leq I_0\,I_{\bf xz}$$ for any three points in the bulk. Apparently this is not always true for a generic state. Physically this condition requires some locality of correlation and entanglement, [*i.e.*]{}, that the correlation between two farther-away points ${\bf x}$ and ${\bf z}$ is mediated by a third point ${\bf y}$ on the shortest path between ${\bf x}$ and ${\bf z}$. The identification of correlation/entanglement with geometrical distance should only be made in the large-scale (long-distance) limit, but it is not clear how to define this condition more quantitatively. In general, there is no reason to view the bulk geometry as a static classical background. The distance function calculated in this work should be considered as an average distance in a certain coordinate choice, for a fluctuating bulk geometry. For the same boundary theory, it is possible that different EHMs can be defined, each of which leads to a local bulk theory. The equivalence between such bulk theories can be viewed as a large symmetry group of the bulk theory, which includes gauge symmetries and general covariance of the bulk as subgroups. It should be noted that the definition of “average distance” requires specifying a coordinate for each of the fluctuating geometries, and is therefore not generally covariant.[@maldacenaprivate] This is consistent with the fact that the “average distance” is defined for a particular EHM. The choice of EHM acts as a gauge fixing. So far we have been taking a tree-like background in defining the EHM. The tensor network can be viewed as a discretization of hyperbolic space, but the metric defined by correlation functions is generically different from that of the hyperbolic space, unless the boundary theory is critical.
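The requirement $I_{\bf xy}I_{\bf yz}\leq I_0 I_{\bf xz}$ is, for $\xi>0$, algebraically equivalent to the triangle inequality $d_{\bf xz}\leq d_{\bf xy}+d_{\bf yz}$ with $d=-\xi\log(I/I_0)$, as a quick numerical check shows (the mutual-information values below are made up for illustration):

```python
import numpy as np

xi = 1.0
I0 = 2 * np.log(2)
dist = lambda I: -xi * np.log(I / I0)   # d = -xi log(I / I0)

# Condition satisfied  ->  triangle inequality holds:
I_xy, I_yz, I_xz = 0.3, 0.2, 0.1
assert I_xy * I_yz <= I0 * I_xz
assert dist(I_xz) <= dist(I_xy) + dist(I_yz)

# Condition violated  ->  triangle inequality fails:
I_xz_bad = 0.01
assert I_xy * I_yz > I0 * I_xz_bad
assert dist(I_xz_bad) > dist(I_xy) + dist(I_yz)
```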
One can interpret the tensor network we start with as a classical background geometry, and consider the emergent metric defined by correlation functions as a quantum correction to the geometry. (For the special states defined in MERA, the quantum correction vanishes.) Generically, the hyperbolic space we start with is not a “saddle point”, so that the correction leads to a different geometry. There is no particular reason to start from the hyperbolic space. To avoid relying on this specific starting point, one can consider EHM on a more generic network, and determine the network self-consistently. The self-consistent equation is determined by the condition $$-\xi\log C_{\bf xy}=d_{\bf xy}=d_{\bf xy}^g$$ \[SCE\] where $d_{\bf xy}$ is the distance defined from the correlation function $C_{\bf xy}$, and $d_{\bf xy}^g$ is the graph distance on the network. (We assume that the unitary transformations on all vertices of the network are identical, so that all information about the geometry is in the network itself.) An interesting question is whether the “fixed point” space-time geometry determined by the self-consistent equation (\[SCE\]) satisfies the Einstein equation. Conclusion ========== In conclusion, we have proposed an exact holographic mapping between $d$-dimensional and $d+1$-dimensional quantum many-body states. For a suitable mapping, the $d+1$-dimensional bulk theory is local and short-range entangled, and we can use the bulk correlation function to define the emergent bulk geometry. In this case, the bulk geometry is a “holographic dual description” of the boundary $d$-dimensional theory. The general idea of EHM is to find a new direct-product decomposition of the Hilbert space, in which entanglement between different sites is short-ranged even if in the original system the correlation and entanglement may be long-ranged. It is such a “quasi-local” basis that defines a “geometrized” description of the system.
For the example of $1+1$-d free fermions, we studied the bulk geometry corresponding to several different systems, including massive and massless fermions at zero temperature, and massless fermions at finite temperature. The bulk geometry obtained is qualitatively consistent with the expectation from AdS/CFT duality. In particular, for a finite temperature system we showed that the IR region of the network behaves like the near-horizon region of a black hole. As an example of time-dependent geometry, we studied the quantum quench problem in two $1+1$-d chains. In the initial state, the two chains are entangled and the bulk geometry is a wormhole geometry with two asymptotic AdS regions. After the quench the two chains are decoupled, and the wormhole first shrinks and then re-expands. This bulk-edge correspondence is also consistent with the known results in AdS/CFT, except that the system dethermalizes in a time proportional to the system size (rather than exponentially long), so that the wormhole size oscillates. This is an artifact of free fermion systems, due to the infinite number of conserved quantities. Many open questions remain to be studied in this new scheme. We discussed that EHM has a causal cone structure similar to that of MERA, which allows the numerical calculation of boundary properties corresponding to certain simple bulk states. A general question is how to understand the bulk geometry in a more complete and background-independent way. We discussed the possibility of choosing the bulk geometry self-consistently, which can be viewed as a “mean-field approximation” of a fluctuating geometry. Another open question is whether we should generalize the definition of “bulk geometry” to include information about more generic correlation functions, rather than just two-point correlations. It is also interesting to study black hole physics using this new approach.
In particular, one may wonder whether it is possible to create a black hole with Hawking radiation, such that the black hole information paradox[@hawking1976; @susskind1993; @stephens1994] can be tested. These open questions will be the topics of future research. [**Acknowledgement.**]{} We acknowledge helpful discussions with Sean Hartnoll, Chaoming Jian, Juan Maldacena, Shinsei Ryu, Brian Swingle, T. Senthil, Frank Verstraete, Xiao-Gang Wen, Edward Witten, and in particular Leonard Susskind and Guifre Vidal. This work is supported by the National Science Foundation through the grant No. DMR-1151786. The details of the EHM for the free fermion model ============================================ Bulk Green’s function --------------------- The mapping of the operators is defined by the two equations (\[holomapping\]) and (\[holomapping2\]). From these two equations it is easy to see that $$a_{jn}=\frac{1}{\sqrt{2}}\kc{a_{2j-1,n-1}+a_{2j,n-1}}=\frac12\sum_{l=4j-3}^{4j}a_{l,n-2}=...=2^{-n/2}\sum_{l=2^n(j-1)+1}^{2^nj}c_{l}.$$ Therefore $a_{jn}^\dagger$ creates a state with a square-shaped wavefunction, which is a constant at the $2^n$ sites from $2^n(j-1)+1$ to $2^nj$, and zero elsewhere. From Eq. (\[holomapping2\]) we can then obtain the bulk operator $b_{jn}$: $$b_{jn}=2^{-n/2}\sum_l\phi_{jn}(l)c_l.$$ The last step is a definition of the wavefunction $\phi_{jn}(l)$, which is known as the Haar wavelet[@haar1910]. By a Fourier transformation we obtain $$b_{jn}=\sum_{q=2\pi m/2^N,\ m=1,2,...,2^N}\phi_{jn}^*(q)c_q,\qquad \phi_{jn}(q)=2^{-N/2}\,2^{-n/2}\sum_l\phi_{jn}(l)e^{-iql}.$$ The explicit form of $\phi_{jn}(q)$ can be obtained by summing the geometric series over the two halves of the block, which gives $$\phi_{jn}(q)=2^{-N/2}\,2^{-n/2}\,e^{-iq\kc{2^n(j-1)+1}}\frac{\kc{1-e^{-iq2^{n-1}}}^2}{1-e^{-iq}}.$$ The bulk Green’s function is thus $$\left\langle b_{jn}b_{km}^\dagger\right\rangle=\sum_q\phi_{jn}^*(q)\phi_{km}(q)G_{q},$$ where the boundary Green’s function $G_q$ can be explicitly written (in $2\times 2$ matrix form) as $$G_q(\tau)=e^{-h_q\tau}\kc{1+e^{-\beta h_q}}^{-1}$$ for $\tau\in(0,\beta]$.
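The real-space content of this construction is easy to verify numerically: the average mode $a_{0N}$ together with the Haar wavelets forms an orthonormal (hence unitary) transformation, with each basis vector constant (up to sign) on dyadic blocks. A small Python sketch, with sign and ordering conventions of our own choosing:

```python
import numpy as np

def haar_basis(N):
    """Rows: the overall average mode a_{0N}, then wavelets phi_{jn} equal to
    +2^{-n/2} on the first half and -2^{-n/2} on the second half of each
    dyadic block of 2^n sites."""
    L = 2 ** N
    rows = [np.full(L, L ** -0.5)]
    for n in range(1, N + 1):
        blk = 2 ** n
        for j in range(L // blk):
            v = np.zeros(L)
            v[j * blk: j * blk + blk // 2] = 2.0 ** (-n / 2)
            v[j * blk + blk // 2: (j + 1) * blk] = -2.0 ** (-n / 2)
            rows.append(v)
    return np.array(rows)

W = haar_basis(4)                          # 2^4 = 16 boundary sites
assert W.shape == (16, 16)
assert np.allclose(W @ W.T, np.eye(16))    # the mapping is unitary (orthogonal)
```

Because the transformation is orthogonal, applying `W` to the boundary correlation matrix directly yields all bulk correlation functions of the free-fermion state.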
For the Dirac Hamiltonian $h_q=\sigma_x\sin q+B(1-\cos q)\sigma_y=\epsilon_q\hat{n}_q\cdot\vec{\sigma}$ (with $\epsilon_q\geq 0$ and $\hat{n}_q$ a unit vector) the expression can be further simplified to $$G_q(\tau)=\frac12\kd{\frac{e^{-\epsilon_q\tau}}{1+e^{-\beta\epsilon_q}}\kc{1+\hat{n}_q\cdot\vec{\sigma}}+\frac{e^{\epsilon_q\tau}}{1+e^{\beta\epsilon_q}}\kc{1-\hat{n}_q\cdot\vec{\sigma}}}$$ \[GreenDirac\] “Renormalization" of the Hamiltonian ------------------------------------ For a generic Hamiltonian, rather than the lattice Dirac model (\[HDirac\]), one can still apply the same EHM to obtain a bulk theory. Since the auxiliary fermions $a_{j,n+1}$ are obtained from a transformation of the $a_{jn}$, the low-energy quadratic Hamiltonian in the $n+1$-th layer $h_a^{(n+1)}$ is completely determined by $h_a^{(n)}$. Therefore we can write a generic iterative relation between $h_a^{(n+1)}$ and $h_a^{(n)}$, which plays the role of an RG equation. For simplicity, we consider translation invariant Hamiltonians. We start from the $a$ Hamiltonian in the $n$-th layer $$H_a^{(n)}=\sum_k a_{k,n}^\dagger h_{ak}^{(n)}a_{k,n}.$$ The transformation (\[holomapping2\]) can be Fourier transformed to $$a_{q,n+1}=2^{-(N-n-1)/2}\sum_{j=1}^{2^{N-n-1}}a_{j,n+1}e^{-iqj} =2^{-(N-n)}\sum_{j=1}^{2^{N-n-1}}\sum_{p}\kc{1+e^{-ip}}a_{p}e^{i(2p-q)j} =\frac{1+e^{-iq/2}}{2}a_{q/2}+\frac{1-e^{-iq/2}}{2}a_{q/2+\pi}.$$ Similarly $$b_{q,n+1}=2^{-(N-n)}\sum_{j=1}^{2^{N-n-1}}\sum_{p}\kc{e^{-ip}-1}a_{p}e^{i(2p-q)j} =\frac{e^{-iq/2}-1}{2}a_{q/2}-\frac{1+e^{-iq/2}}{2}a_{q/2+\pi}.$$ The Hamiltonian can be rewritten as $$H_a^{(n)}=\sum_{k\in[0,2\pi)}a_{k,n}^\dagger h_{ak}^{(n)}a_{k,n}=\sum_{q\in[0,2\pi)}\left(\begin{array}{cc}a_{q,n+1}^\dagger&b_{q,n+1}^\dagger\end{array}\right)V\left(\begin{array}{cc}h_{a,q/2}^{(n)}&0\\0&h_{a,q/2+\pi}^{(n)}\end{array}\right)V^\dagger\left(\begin{array}{c}a_{q,n+1}\\b_{q,n+1}\end{array}\right),\qquad V=\frac12\left(\begin{array}{cc}1+e^{-iq/2}&1-e^{-iq/2}\\e^{-iq/2}-1&-1-e^{-iq/2}\end{array}\right).$$ Therefore $H_a^{(n+1)}$ is determined by the upper block of the transformed Hamiltonian: $$h_{aq}^{(n+1)}=h_{a,q/2}^{(n)}\cos^2\frac{q}{4}+h_{a,q/2+\pi}^{(n)}\sin^2\frac{q}{4}.$$ \[RGE\] Eq. (\[RGE\]) plays the role of the RG equation for the Hamiltonian. Since the transformation does not act on spin, each component of the Hamiltonian satisfies this equation. It can be checked that for a Hamiltonian of the form $h_{a,q}^{(n)}=\sin qA_1+(1-\cos q)A_2$, with $A_1,A_2$ arbitrary matrices independent of $q$, we obtain $h_{aq}^{(n+1)}=\frac12h_{aq}^{(n)}$.
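The halving property quoted above can be verified numerically, assuming the weights $\cos^2(q/4)$ and $\sin^2(q/4)$ that follow from the Haar filter coefficients $(1\pm e^{-iq/2})/2$ (our reconstruction of the RG step):

```python
import numpy as np

rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))  # arbitrary q-independent matrices

def h(q):
    # Hamiltonian of the form h_q = sin(q) A1 + (1 - cos(q)) A2
    return np.sin(q) * A1 + (1 - np.cos(q)) * A2

def rg_step(q):
    # h^{(n+1)}_q = cos^2(q/4) h^{(n)}_{q/2} + sin^2(q/4) h^{(n)}_{q/2 + pi}
    return np.cos(q / 4) ** 2 * h(q / 2) + np.sin(q / 4) ** 2 * h(q / 2 + np.pi)

for q in np.linspace(0.1, 2 * np.pi - 0.1, 7):
    assert np.allclose(rg_step(q), 0.5 * h(q))   # the Hamiltonian is exactly halved
```

The check works for arbitrary (even non-commuting) $A_1,A_2$ because the RG step is linear in the Hamiltonian.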
This result shows that the lattice mapping is different from an RG flow in the continuum limit, since in the latter case the scaling dimension of the term $1-\cos k\simeq k^2/2$ would be different from that of $\sin k\simeq k$. Details of the fitting procedure =================================== The fitting of zero temperature results --------------------------------------- For the critical system, the coordinates of the sites are given by Eq. (\[horizontald\]), which is determined by scaling and translation symmetries. The geodesic distance in (Euclidean) AdS space can be written simply in the embedded coordinates $X^a,~a=1,2,3,4$. In these coordinates the AdS space is a hyperbolic surface embedded in the 4d flat space with Lorentz metric, determined by the equation $X^aX^b\eta_{ab}=R^2$. Here $\eta_{ab}={\rm diag}[1,-1,-1,-1]$ is the Lorentz metric. The relation between $X^a$ and the intrinsic coordinates $\rho,\theta,t$ is X=\[embeddedAdS\] The geodesic distance is $$d_{X_1X_2}=R\,{\rm acosh}\kc{\frac{X_1^aX_2^b\eta_{ab}}{R^2}}.$$ \[geodesicAdSembedded\] For two sites $(x,n),(y,n)$ separated horizontally, the distance reduces to formula (\[horizontald\]). Since $\xi$ is unknown, we first do a linear fitting at large $|x-y|$, $$\log\frac{I_0}{I_{xy}}\simeq P_0\log|x-y|+P_1.$$ Then we obtain $R$ from $\frac{P_1}{P_0}=-\log R$ and then obtain $\xi$ from $P_0=2R/\xi$. Now we input the $R$ value into Eq. (\[horizontald\]) to obtain the AdS distance between points $(x,1)$ and $(x,n)$ separated in the radial direction. By a linear fitting of this distance against $\frac{d}{\xi_\perp}=\log\frac{I_0}{I_{(x,1),(x,n)}}$ we can obtain the correlation length $\xi_\perp$. As is shown in Fig. \[MIcritical\] (b), $\xi_{\perp}$ is different from the $\xi$ in the horizontal direction. The distance between two points $(\rho,\theta,0)$ and $(\rho,\theta,\tau)$ is d(,)=R [acosh]{} However, there is a rescaling of the time $t$ that we need to include in comparison with the boundary system.
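The zero-temperature extraction of $R$ and $\xi$ described above ($R$ from $P_1/P_0=-\log R$, $\xi$ from $P_0=2R/\xi$) can be sketched in a few lines. The synthetic data below assume a large-separation form $d=2R\log(u/R)$, a hypothetical stand-in for Eq. (\[horizontald\]) used only to illustrate the algebra:

```python
import numpy as np

R_true, xi_true = 1.7, 0.8
u = np.arange(10.0, 200.0)                          # large separations |x - y|
y = (2 * R_true / xi_true) * np.log(u / R_true)     # "data": log(I0 / I_xy) = d / xi

P0, P1 = np.polyfit(np.log(u), y, 1)                # linear fit y = P0 log(u) + P1
R = np.exp(-P1 / P0)                                # from P1 / P0 = -log R
xi = 2 * R / P0                                     # from P0 = 2R / xi
assert np.allclose([R, xi], [R_true, xi_true])      # parameters recovered
```

The same two-step logic (fit a slope and intercept, then invert the two relations) carries over to the radial fit for $\xi_\perp$.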
At the boundary $\rho=\frac{L}{2\pi}$ ($L=2^N$ is the perimeter), and the metric reduces to $$ds^2=\kc{\frac{\rho^2}{R^2}+1}dt^2+\rho^2d\theta^2.$$ Therefore we should rescale $t\rightarrow t/ \sqrt{\frac{L^2}{4\pi^2R^2}+1}$ so that at the boundary we have the standard metric $dt^2+\rho^2d\theta^2$ (with the speed of light $c=1$). After the rescaling the geodesic distance is d(r,t)=R[acosh]{} Consider the limit 2R, tR which leads to d(r,t)2R Using this formula, the same fitting procedure as for the spatial distance leads to an independent way to determine $R$ and $\xi$. However, it should be noted that the time-direction distance is defined by the single-particle Green’s function, so one does not expect this $\xi$ to coincide with that observed in the spatial distance. The fitting of finite temperature results ----------------------------------------- A special property of gravity in three dimensions is that there is a black-hole solution, the BTZ solution, which is a quotient of the AdS space. In other words, the black-hole solution is locally equivalent to AdS$_3$. The quotient can be seen in the following parameterization of the embedded coordinates X&=&R Comparing this expression with the pure AdS$_3$ case (\[embeddedAdS\]), we see that the black-hole solution is obtained by a double Wick rotation from the pure AdS$_3$ solution, $t\rightarrow b\theta,~\theta\rightarrow\frac{bt}{R^2}$, together with the replacement $\rho\rightarrow R\sqrt{\frac{\rho^2}{b^2}-1}$. After the rotation, the time $t$ is periodic with periodicity $\beta=\frac{2\pi R^2}{b}$, and $\theta$ becomes a real number. We then compactify the $\theta$ direction by identifying the points $\theta$ with $\theta+2n\pi,~n\in\mathbb{Z}$, which can also be viewed as taking the quotient of the AdS space by a $\mathbb{Z}$ subgroup of the isometry group $SO(2,1)$. The metric in the intrinsic coordinates $\rho,\theta,t$ is $$ds^2=\frac{\rho^2-b^2}{R^2}dt^2+\frac{R^2}{\rho^2-b^2}d\rho^2+\rho^2d\theta^2.$$ $b$ has the physical meaning of the black-hole radius, which also determines the temperature.
It should be noticed that the time needs to be rescaled when compared with the boundary system, in the same way as in the zero temperature case. The rescaling is defined as $$t\rightarrow\tilde{t}=t\,\frac{\sqrt{\rho_0^2-b^2}}{R},$$ with $\rho_0$ the boundary value of $\rho$. After the rescaling, the period of the boundary time is $$\tilde{\beta}=\frac{2\pi R\sqrt{\rho_0^2-b^2}}{b},$$ \[BHtemperature\] which is the inverse temperature of the boundary system. Now we look at the time-direction distance between two points $(\rho,\theta,t=0)$ and $(\rho,\theta,t)$. The distance can still be computed by the AdS formula (\[geodesicAdSembedded\]), which gives $$d_{(\rho,\theta,0),(\rho,\theta,t)}=R\,{\rm acosh}\kd{\frac{\rho^2}{b^2}-\kc{\frac{\rho^2}{b^2}-1}\cos\frac{2\pi t}{\beta}}.$$ \[dfiniteT\] Taking $t=\beta/2$, we obtain the “maximal" distance in the time circle as $$d_{\rm max}(\rho)=R\,{\rm acosh}\kc{\frac{2\rho^2}{b^2}-1}=2R\,{\rm acosh}\kc{\frac{\rho}{b}}.$$ \[dmax\] The numerically obtained time-direction correlation function (\[TimeCor\]) shall be fitted with the analytic formula $$-\log\frac{C_{\bf x}(t)}{C_0}=\frac{d_{(\rho,\theta,0),(\rho,\theta,t)}}{\xi}.$$ The right-hand side means that $\frac{d_{(\rho,\theta,0),(\rho,\theta,t)}}\xi$ is a function of the two dimensionless parameters $\frac{R}{\xi}$ and $\frac{\rho}{b}$. The constant $C_0=C_{\bf x}(t=0)$ is the trace of the equal-time correlation function, which is $1$ as can be seen from Eq. (\[GreenDirac\]). To determine the parameters $b,R,\xi$, we first take $\frac{R}{\xi}$ as a parameter and obtain $\frac{\rho}b$ as a function of $\frac{R}\xi$ from the maximal distance in Eq. (\[dmax\]). By inputting this $\frac{\rho}{b}$ value into Eq. (\[dfiniteT\]), we obtain the distance $d_{(\rho,\theta,0),(\rho,\theta,t)}=d\kc{\frac{R}{\xi}}$ as a function of the single parameter $\frac{R}{\xi}$. Then we compare the resulting distance function with the data and determine the optimal $\frac{R}{\xi}$ by minimizing the square-averaged deviation function $$\sigma^2=\sum_{\bf x}\kd{\frac{d_{(\rho,\theta,0),(\rho,\theta,t)}}{\xi}+\log\frac{C_{\bf x}(t)}{C_0}}^2.$$ To determine the values of $b,R,\xi$ we use the boundary condition of $\rho$. By a linear extrapolation of the function $\log\kc{\rho_n/b}$ as a function of $n$, we obtain $\rho_0/b$ at $n=0$. $\rho_0$ is the boundary value of $\rho$, which should be identified with $\rho_0=L/2\pi$. This determines the value of $b$. Then we determine $R$ by $b$ and the temperature $T$ from Eq.
(\[BHtemperature\]). Once $R$ is determined, $\xi$ is obtained from the fitted value of $R/\xi$ as $\xi=R/(R/\xi)$. [^1]: There is an additional site $a_{0N}$ in the last layer, as is shown in Fig. \[fig1\] (b) and (c). For translation-invariant Hamiltonians, $a_{0N}$ decouples from the rest of the sites, but in more general systems one should include the coupling of $a_{0N}$ with $b_{\bf x}$. For simplicity, in the following we omit $a_{0N}$ and focus on translation-invariant states.
--- abstract: 'In this paper we present a tableau proof system for first order logic of proofs ${{\sf FOLP}}$. We show that the tableau system is sound and complete with respect to Mkrtychev models of ${{\sf FOLP}}$.' author: - Meghdad Ghari title: Tableaux for First Order Logic of Proofs --- Introduction ============ Artemov in [@A1995; @A2001] introduced the first propositional justification logic ${{\sf LP}}$, the Logic of Proofs (for more information about justification logics see [@A2008; @ArtemovFitting]). Later Artemov and Yavorskaya (Sidon) introduced in [@ArtemovSidon2011] the first order logic of proofs ${{\sf FOLP}}$. The language of ${{\sf FOLP}}$ extends the language of first order logic by justification terms and expressions of the form $t:_X A$, where $X$ is a set of individual variables. The intended meaning of $t:_X A$ is “$t$ justifies $A$ in which the variables in $X$ can be substituted for and cannot be quantified.” Fitting in [@Fitting2011; @Fitting2014] proposed possible world semantics and Mkrtychev semantics for ${{\sf FOLP}}$. Various tableau proof systems have been developed for the logic of proofs (see [@Finger2010; @Fitting2005; @Ghari-tableaux-2014; @Renne2004; @Renne2006]). The aim of this paper is to present a tableau proof system for ${{\sf FOLP}}$. Our tableau rules are extensions of Renne’s tableau rules [@Renne2004] for ${{\sf LP}}$. We show that our tableau proof system is sound and complete with respect to Mkrtychev models of ${{\sf FOLP}}$. The logic ${{\sf FOLP}}$ {#sec:FOLP} ======================== The language of ${{\sf FOLP}}$ is an extension of the language of first order logic by expressions of the form $t:_X A$, where $A$ is a formula, $t$ is a justification term and $X$ is a set of individual variables.
Following [@ArtemovSidon2011] we consider a first order language in which there are no constant symbols, function symbols, and identity, but of course a countable set of individual variables $Var$ (denoted by $x, y, z, \ldots$). *Justification terms* are built up from a countable set of justification variables $JVar$ and a countable set of justification constants $JCons$ by the following grammar: $$t::= p~|~c~|~t+t~|~t \cdot t~|~!t~|~gen_x(t),$$ where $p\in JVar$, $c \in JCons$, and $x \in Var$. ${{\sf FOLP}}$ formulas are constructed from a countable set of predicate symbols of any arity by the following grammar: $$A::= Q(x_1,\ldots,x_n)~|~\neg A~|~A\rightarrow A~|~\forall x A~|~\exists x A~|~ t:_X A,$$ where $Q$ is an $n$-place predicate symbol, $t$ is a justification term, and $X \subseteq Var$. Free individual variable occurrences in formulas are defined as in first order logic, with the following addition: the free individual variable occurrences in $t:_X A$ are the free individual variable occurrences in $A$, provided the variables also occur in $X$, together with all variable occurrences in $X$ itself. The set of all free individual variables of the formula $A$ is denoted by $FVar(A)$. Thus $FVar(t:_X A) = X$. The universal closure of a formula $A$ will be denoted by $\forall A$. The notion of substitution of an individual variable for another individual variable is defined as in first order logic. If $y$ is an individual variable, then $Xy$ is short for $X \cup \{y\}$, and its use carries the proviso that $y \not\in X$. \[def: FOLP axiom system\] Axiom schemes and rules of ${{\sf FOLP}}$ are:[^1] FOL. : Axiom schemes of first order logic, Ctr. : $t:_{Xy} A \rightarrow t:_X A$, provided $y\not\in FVar(A)$. Exp. : $t:_{X} A \rightarrow t:_{Xy} A$. Sum. : $s:_X A\rightarrow (s+t):_X A~,~s:_X A\rightarrow (t+s):_X A$. jK. : $s:_X (A\rightarrow B)\rightarrow(t:_X A\rightarrow (s\cdot t):_X B)$. jT. : $t:_X A\rightarrow A$. j4. : $t:_X A\rightarrow !t:_X t:_X A$.
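To fix intuitions, the two grammars and the convention $FVar(t:_X A) = X$ can be transcribed as a small sketch; all Python names below are our own illustrative choices, not part of ${{\sf FOLP}}$.

```python
# Illustrative transcription of the FOLP syntax (names are our own choices,
# not the paper's notation).  Justification terms are kept as plain strings;
# formulas follow the grammar above, with justification assertions t :_X A.
from dataclasses import dataclass
from typing import FrozenSet, Tuple


@dataclass(frozen=True)
class Pred:
    name: str
    args: Tuple[str, ...]       # Q(x1, ..., xn)


@dataclass(frozen=True)
class Neg:
    body: object                # ~A


@dataclass(frozen=True)
class Imp:
    left: object                # A -> B
    right: object


@dataclass(frozen=True)
class Forall:
    x: str                      # forall x A
    body: object


@dataclass(frozen=True)
class Exists:
    x: str                      # exists x A
    body: object


@dataclass(frozen=True)
class Just:
    t: str                      # t :_X A
    X: FrozenSet[str]
    body: object


def fvar(a) -> FrozenSet[str]:
    """Free individual variables, with FVar(t:_X A) = X as in the text."""
    if isinstance(a, Pred):
        return frozenset(a.args)
    if isinstance(a, Neg):
        return fvar(a.body)
    if isinstance(a, Imp):
        return fvar(a.left) | fvar(a.right)
    if isinstance(a, (Forall, Exists)):
        return fvar(a.body) - {a.x}
    if isinstance(a, Just):
        return frozenset(a.X)   # free variables of A outside X do not count
    raise TypeError(a)


# FVar(t:_{x} Q(x, y)) = {x}: y is free in Q(x, y) but does not occur in X.
print(fvar(Just("t", frozenset({"x"}), Pred("Q", ("x", "y")))))  # frozenset({'x'})
```

The last line illustrates the deliberate asymmetry of the definition: a variable free in $A$ but absent from $X$ is neither substitutable nor quantifiable in $t:_X A$, so it is not counted as free.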
Gen. : $t:_X A \rightarrow gen_x(t):_X \forall x A$, provided $x\not\in X$. MP. : From $\vdash A$ and $\vdash A \rightarrow B$ infer $\vdash B$. UG. : From $\vdash A$ infer $\vdash \forall x A$. AN. : $\vdash c:A$, where $A$ is an axiom instance and $c$ is an arbitrary justification constant. 1. A *constant specification* ${{\sf CS}}$ for ${{\sf FOLP}}$ is a set of formulas of the form $c:A$, where $c$ is a justification constant and $A$ is an axiom instance of ${{\sf FOLP}}$. 2. A constant specification ${{\sf CS}}$ is axiomatically appropriate if for every axiom instance $A$ there is a justification constant $c$ such that $c:A \in {{\sf CS}}$. 3. Two formulas are variable variants if each can be turned into the other by a renaming of free and bound individual variables. 4. A constant specification ${{\sf CS}}$ is variant closed provided that whenever $A$ and $B$ are variable variants, $c:A \in {{\sf CS}}$ if and only if $c:B \in{{\sf CS}}$. Let ${{\sf FOLP}}_{{\sf CS}}$ be the fragment of ${{\sf FOLP}}$ where the Axiom Necessitation rule only produces formulas from the given ${{\sf CS}}$. In the remainder of this section, we recall the definition of Mkrtychev models for ${{\sf FOLP}}$ from [@Fitting2014] (Mkrtychev models were first introduced for ${{\sf LP}}$ in [@Mkrtychev1997]). First we need the following auxiliary definition. \[def:K-formulas\] Let $K$ be a non-empty set. 1. A $K$-formula is the result of substituting some free individual variables in an ${{\sf FOLP}}$ formula with members of $K$. 2. A $K$-formula is closed if it contains no free occurrences of individual variables. 3. For a $K$-formula $A$, let $K(A)$ be the set of all members of $K$ that occur in $A$. 4. For a formula $F(\vec{x})$ and $\vec{a}\in K$, by $F(\vec{a})$ we mean that all free occurrences of the individual variables in $\vec{x}$ have been replaced with corresponding occurrences in $\vec{a}$. This is sometimes denoted by $F \{\vec{x} / \vec{a} \}$.
\[def:FOLP-models\] A Mkrtychev model ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$ for ${{\sf FOLP}}_{{\sf CS}}$ (or ${{\sf FOLP}}_{{\sf CS}}$-model, for short) consists of: - A non-empty set ${{\mathcal D}}$, called the domain of the model. The definitions of (closed) ${{\mathcal D}}$-formulas and ${{\mathcal D}}(A)$, for a ${{\mathcal D}}$-formula $A$, are similar to Definition \[def:K-formulas\], where $K$ is replaced by ${{\mathcal D}}$. - The interpretation ${{\mathcal I}}$ assigns to each $n$-place predicate symbol some $n$-ary relation on ${{\mathcal D}}$. - The admissible evidence function ${{\mathcal E}}$ assigns to each justification term a set of ${{\mathcal D}}$-formulas meeting the following conditions: ${{\mathcal E}}1.$ : $c:A\in{{\sf CS}}$ implies $A\in{{\mathcal E}}(c)$. ${{\mathcal E}}2.$ : $A\r B\in{{\mathcal E}}(s)$ and $A\in{{\mathcal E}}(t)$ implies $B\in{{\mathcal E}}(s\cdot t)$. ${{\mathcal E}}3.$ : ${{\mathcal E}}(s)\cup {{\mathcal E}}(t)\subseteq{{\mathcal E}}(s+t)$. ${{\mathcal E}}4.$ : $A\in{{\mathcal E}}(t)$ implies $t:_X A\in{{\mathcal E}}(!t)$, where ${{\mathcal D}}(A) \subseteq X \subseteq {{\mathcal D}}$. ${{\mathcal E}}5.$ : $A\in {{\mathcal E}}(t)$ implies $\forall x A \in {{\mathcal E}}(gen_x(t))$. ${{\mathcal E}}6.$ : $a \in {{\mathcal D}}$ and $A(x) \in {{\mathcal E}}(t)$ implies $A(a) \in {{\mathcal E}}(t)$. Condition ${{\mathcal E}}6$ is called the Instantiation Condition in [@Fitting2014]. \[def:forcing relation\] For an ${{\sf FOLP}}_{{\sf CS}}$-model ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$ and a closed ${{\mathcal D}}$-formula we define when the formula is true in ${{\mathcal M}}$ as follows: 1. ${{\mathcal M}}\Vdash Q(\vec{a})$ if[f]{} $\vec{a} \in {{\mathcal I}}(Q)$, for $n$-place predicate symbol $Q$ and $\vec{a} \in {{\mathcal D}}$. 2. ${{\mathcal M}}\Vdash \neg A$ if[f]{} ${{\mathcal M}}\not\Vdash A$. 3. 
${{\mathcal M}}\Vdash A\r B$ if[f]{} ${{\mathcal M}}\not\Vdash A$ or ${{\mathcal M}}\Vdash B$. 4. ${{\mathcal M}}\Vdash \forall x A(x)$ if[f]{} ${{\mathcal M}}\Vdash A(a)$ for every $a \in {{\mathcal D}}$. 5. ${{\mathcal M}}\Vdash \exists x A(x)$ if[f]{} ${{\mathcal M}}\Vdash A(a)$ for some $a \in {{\mathcal D}}$. 6. ${{\mathcal M}}\Vdash t:_X A$ if[f]{} $A\in{{\mathcal E}}(t)$ and ${{\mathcal M}}\Vdash \forall A$. If ${{\mathcal M}}\Vdash F$ then it is said that $F$ is true in ${{\mathcal M}}$ or ${{\mathcal M}}$ satisfies $F$. A sentence $F$ is ${{\sf FOLP}}_{{\sf CS}}$-valid if it is true in every ${{\sf FOLP}}_{{\sf CS}}$-model. For a set $S$ of sentences, ${{\mathcal M}}\Vdash S$ provided that ${{\mathcal M}}\Vdash F$ for all formulas $F$ in $S$. Note that given a constant specification ${{\sf CS}}$ for ${{\sf FOLP}}$, and a model ${{\mathcal M}}$ of ${{\sf FOLP}}_{{\sf CS}}$ we have ${{\mathcal M}}\Vdash {{\sf CS}}$ (in this case it is said that ${{\mathcal M}}$ respects ${{\sf CS}}$). The proofs of the soundness and completeness theorems for ${{\sf FOLP}}$ are given in [@Fitting2014]. \[Sound Compl JL\] Let ${{\sf CS}}$ be an axiomatically appropriate and variant closed constant specification for ${{\sf FOLP}}$. Then a sentence $F$ is provable in ${{\sf FOLP}}_{{\sf CS}}$ if[f]{} $F$ is ${{\sf FOLP}}_{{\sf CS}}$-valid. Tableaux ======== Tableau proof systems for the logic of proofs are given in [@Fitting2005; @Renne2004; @Renne2006]. In this section we extend them and present tableaux for ${{\sf FOLP}}$. Let $Par$ be a denumerable set of new individual variables, i.e. $Par \cap Var = \emptyset$. The members of $Par$ are called *parameters*, with typical members denoted ${{\sf u}}, {{\sf v}}, {{\sf w}}$. Parameters are never quantified. The definitions of (closed) $Par$-formulas, $Par$-instance of a formula, and $Par(A)$, for a $Par$-formula $A$, are similar to Definition \[def:K-formulas\], where $K$ is replaced by $Par$.
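As a concrete illustration of Definition \[def:forcing relation\], clause 6 in particular, the truth definition can be run over a tiny two-element model. The tuple encoding and all names below are our own assumptions; substitution is simplified so that it stops at justification assertions, which suffices for this toy example.

```python
# A toy evaluator (illustrative assumptions, not the paper's code) for the
# truth definition over a finite Mkrtychev model M = (D, I, E).  Formulas
# are nested tuples; the comments refer to the clauses of the definition.
D = {"a", "b"}
I = {("Q", ("a",)), ("Q", ("b",))}        # Q holds of every element of D
E = {"t": {("pred", "Q", ("x",))}}        # admissible evidence for term t

def subst(f, x, d):
    """Replace free occurrences of variable x by domain element d."""
    k = f[0]
    if k == "pred":
        return ("pred", f[1], tuple(d if a == x else a for a in f[2]))
    if k == "neg":
        return ("neg", subst(f[1], x, d))
    if k == "imp":
        return ("imp", subst(f[1], x, d), subst(f[2], x, d))
    if k in ("forall", "exists"):
        return f if f[1] == x else (k, f[1], subst(f[2], x, d))
    if k == "just":       # simplification: no substitution inside t:_X A
        return f
    raise ValueError(f)

def free(f):
    """Free variables: anything in an argument position that is not in D."""
    k = f[0]
    if k == "pred":
        return {a for a in f[2] if a not in D}
    if k == "neg":
        return free(f[1])
    if k == "imp":
        return free(f[1]) | free(f[2])
    if k in ("forall", "exists"):
        return free(f[2]) - {f[1]}
    return set(f[2]) - D                   # FVar(t:_X A) = X

def sat(f):
    k = f[0]
    if k == "pred":                        # clause 1
        return (f[1], f[2]) in I
    if k == "neg":                         # clause 2
        return not sat(f[1])
    if k == "imp":                         # clause 3
        return (not sat(f[1])) or sat(f[2])
    if k == "forall":                      # clause 4
        return all(sat(subst(f[2], f[1], d)) for d in D)
    if k == "exists":                      # clause 5
        return any(sat(subst(f[2], f[1], d)) for d in D)
    if k == "just":                        # clause 6: evidence + closure
        _, t, X, body = f
        cl = body
        for x in free(body):
            cl = ("forall", x, cl)         # universal closure of A
        return body in E.get(t, set()) and sat(cl)
    raise ValueError(f)

# M |= t:_{x} Q(x): Q(x) is in E(t) and forall x Q(x) holds in M.
print(sat(("just", "t", ("x",), ("pred", "Q", ("x",)))))   # True
```

Note how clause 6 mixes a syntactic test (membership of $A$ in ${{\mathcal E}}(t)$) with a semantic one (truth of $\forall A$); this is exactly what makes Mkrtychev models single-world yet non-trivial.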
Notice that closed $Par$-formulas may contain free parameters but do not contain free individual variables. Tableau proofs will be of sentences of ${{\sf FOLP}}$ but will use closed $Par$-formulas. An ${{\sf FOLP}}_{{\sf CS}}$-tableau for a sentence is a binary tree labeled by closed $Par$-formulas, with the negation of that sentence at the root, constructed by applying ${{\sf FOLP}}$ tableau rules from Table \[table:tableau rules FOLP\]. An ${{\sf FOLP}}_{{\sf CS}}$-tableau branch closes if one of the following holds: 1. Both $A$ and $\neg A$ occur in the branch, for some closed $Par$-formula $A$. 2. $\neg c: A$ occurs in the branch, where $c:A\in{{\sf CS}}$. A tableau closes if all branches of the tableau close. An ${{\sf FOLP}}_{{\sf CS}}$-tableau proof for a sentence $F$ is a closed tableau beginning with $\neg F$ (the root of the tableau) using only ${{\sf FOLP}}$ tableau rules. An ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite set $S$ of closed $Par$-formulas begins with a single branch whose nodes consist of the formulas of $S$. [Table \[table:tableau rules FOLP\]: tableau rules for ${{\sf FOLP}}$. The first order logic rules comprise the propositional rules together with the quantifier rules $(T\forall)$ and $(F\exists)$, in which ${{\sf u}}$ is any parameter, and $(T\exists)$ and $(F\forall)$, in which ${{\sf u}}$ is a new parameter. The justification logic rules are $(T:)$, $(F+)$, $(F\cdot)$, $(F!)$, $(Ctr)$, $(Exp)$, $(Ins)$, and $(gen_x)$.] We give an ${{\sf FOLP}}_{{\sf CS}}$-tableau proof of the sentence $$p: \forall x A(x) \r \forall x (c \cdot p):_{\{x\}} A(x),$$ where $p \in JVar$ and ${{\sf CS}}$ contains $c:(\forall x A(x)\r A(x))$. An axiomatic proof of this sentence is given in [@ArtemovSidon2011]. This sentence is an explicit counterpart of the Converse Barcan Formula $\Box \forall x A(x) \r \forall x \Box A(x)$. Formulas 2 and 3 are from 1 by rule $(F\r)$, 4 is from 3 by rule $(F\forall)$, where ${{\sf u}}$ is a new parameter, 5 and 6 are from 4 by rule $(F\cdot)$, 7 is from 5 by rule $(Ins)$, and 8 and 9 are from 6 and 7, respectively, by rule $(Exp)$. Let us now show the soundness of the ${{\sf FOLP}}$ tableau system.
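The two branch-closure conditions amount to a simple membership test. The following sketch makes them concrete under an encoding that is entirely our own: formulas are plain strings, negation is a leading `~`, and a constant specification entry $c:A$ is written `"c:A"`.

```python
# A minimal sketch (hypothetical string encoding) of the two closure
# conditions for an FOLP_CS-tableau branch.
def branch_closes(branch, cs):
    fmls = set(branch)
    for f in fmls:
        # condition 1: both A and ~A occur on the branch
        if "~" + f in fmls:
            return True
        # condition 2: ~c:A occurs on the branch, where c:A is in CS
        if f.startswith("~") and f[1:] in cs:
            return True
    return False

cs = {"c:(forall x A(x) -> A(x))"}
print(branch_closes(["p:A", "~c:(forall x A(x) -> A(x))"], cs))   # True
print(branch_closes(["p:A", "~q:A"], cs))                         # False
```

Condition 2 is what makes the constant specification enter the proof system: refuting a ${{\sf CS}}$-entry is immediately contradictory.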
\[def:Par-satisfiable\] A $Par$-formula $A({{\sf u}}_1, \ldots, {{\sf u}}_n)$, where ${{\sf u}}_1, \ldots, {{\sf u}}_n$ are all parameters of $A$, is satisfiable in an ${{\sf FOLP}}_{{\sf CS}}$-model ${{\mathcal M}}= ({{\mathcal D}}, {{\mathcal I}}, {{\mathcal E}})$, denoted by ${{\mathcal M}}\Vdash A({{\sf u}}_1, \ldots, {{\sf u}}_n)$, if ${{\mathcal M}}\Vdash A(a_1, \ldots, a_n)$ for some $a_1, \ldots, a_n \in {{\mathcal D}}$. A tableau branch is satisfiable in a model ${{\mathcal M}}$ if every formula of the branch is satisfiable in ${{\mathcal M}}$. \[lem: soundness lemma\] Let $\pi$ be any branch of an ${{\sf FOLP}}_{{\sf CS}}$-tableau and ${{\mathcal M}}$ be an ${{\sf FOLP}}_{{\sf CS}}$-model that satisfies all the formulas occurring in $\pi$. If an ${{\sf FOLP}}$ tableau rule is applied to $\pi$, then it produces at least one extension $\pi'$ such that ${{\mathcal M}}$ satisfies all the formulas occurring in $\pi'$. Suppose that a tableau branch $\pi$ is satisfiable in the model ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$, and $\pi'$ is obtained by applying an ${{\sf FOLP}}$ tableau rule to $\pi$. To prove the lemma, we consider each rule in turn. The cases for the propositional logic rules are standard. Hence, we need to consider only the rules for quantifiers and the ${{\sf FOLP}}$ rules. Suppose that the rule $(T\forall)$ is applied $$\AXC{$\forall x A(x, \vec{{{\sf w}}})$}\RightLabel{$(T\forall)$} \UIC{$A({{\sf u}}, \vec{{{\sf w}}})$} \DisplayProof$$ where ${{\sf u}},\vec{{{\sf w}}} \in Par$. Since ${{\mathcal M}}\Vdash \forall x A(x, \vec{{{\sf w}}})$, we have ${{\mathcal M}}\Vdash \forall x A(x, \vec{b})$ for some $\vec{b} \in {{\mathcal D}}$. Thus, ${{\mathcal M}}\Vdash A(a, \vec{b})$ for every $a \in {{\mathcal D}}$. Then, obviously ${{\mathcal M}}\Vdash A({{\sf u}}, \vec{{{\sf w}}})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. The case of the rule $(F\exists)$ is similar.
Suppose that the rule $(T\exists)$ is applied $$\AXC{$\exists x A(x, \vec{{{\sf w}}})$}\RightLabel{$(T\exists)$} \UIC{$A({{\sf u}}, \vec{{{\sf w}}})$} \DisplayProof$$ where $\vec{{{\sf w}}} \in Par$ and ${{\sf u}}$ is a new parameter in $\pi$. Since ${{\mathcal M}}\Vdash \exists x A(x, \vec{{{\sf w}}})$, we have ${{\mathcal M}}\Vdash \exists x A(x,\vec{b})$ for some $\vec{b} \in {{\mathcal D}}$. Thus, ${{\mathcal M}}\Vdash A(a, \vec{b})$ for some $a \in {{\mathcal D}}$. Then, obviously ${{\mathcal M}}\Vdash A({{\sf u}}, \vec{{{\sf w}}})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. The case of the rule $(F\forall)$ is similar. Suppose that the rule $(T:)$ is applied $$\AXC{$t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$} \RightLabel{$(T:)$} \UIC{$\forall \vec{x} A(\vec{{{\sf w}}},\vec{x})$} \DP$$ Since ${{\mathcal M}}\Vdash t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$, we have ${{\mathcal M}}\Vdash t:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$ for some $\vec{a}, \vec{b} \in {{\mathcal D}}$. Thus ${{\mathcal M}}\Vdash \forall \vec{x} A(\vec{a},\vec{x})$. Then, obviously ${{\mathcal M}}\Vdash \forall \vec{x} A(\vec{{{\sf w}}},\vec{x})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Suppose that the rule $(F+)$ is applied $$\AXC{$\neg t+s:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$}\RightLabel{$(F+)$} \UIC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$}\noLine \UIC{$\neg s:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$} \DisplayProof$$ Since ${{\mathcal M}}\Vdash \neg t+s:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$, we have ${{\mathcal M}}\Vdash \neg t+s:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$ for some $\vec{a}, \vec{b} \in {{\mathcal D}}$. Thus either $A(\vec{a},\vec{x}) \not\in {{\mathcal E}}(t+s)$ or ${{\mathcal M}}\not\Vdash \forall \vec{x} A(\vec{a},\vec{x})$.
In the former case we have $A(\vec{a},\vec{x}) \not\in {{\mathcal E}}(t) \cup {{\mathcal E}}(s)$, and hence ${{\mathcal M}}\not\Vdash t :_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$ and ${{\mathcal M}}\not \Vdash s :_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$. We get the same results in the latter case. In either case ${{\mathcal M}}\Vdash \neg t :_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$ and ${{\mathcal M}}\Vdash \neg s :_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Suppose that the rule $(F\cdot)$ is applied $$\AXC{$\neg s\cdot t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} B(\vec{{{\sf w}}},\vec{x})$} \RightLabel{$(F\cdot)$} \UIC{$\neg s:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} (A(\vec{{{\sf w}}'},\vec{{{\sf v}}'},\vec{y})\rightarrow B(\vec{{{\sf w}}},\vec{x})) | \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}'},\vec{{{\sf v}}'},\vec{y})$} \DisplayProof$$ where $\{\vec{{{\sf w}}'} \} \subseteq \{ \vec{{{\sf w}}} \}$ and $\{\vec{{{\sf v}}'} \} \subseteq \{ \vec{{{\sf v}}} \}$. Since ${{\mathcal M}}\Vdash \neg s\cdot t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} B(\vec{{{\sf w}}},\vec{x})$, we have ${{\mathcal M}}\Vdash \neg s\cdot t:_{\{\vec{a}, \vec{b}\}} B(\vec{a},\vec{x})$ for some $\vec{a}, \vec{b} \in {{\mathcal D}}$. Thus either $B(\vec{a},\vec{x}) \not\in {{\mathcal E}}(s\cdot t)$ or ${{\mathcal M}}\not\Vdash \forall \vec{x} B(\vec{a},\vec{x})$. 
In the former case we have either $A(\vec{a'},\vec{b'},\vec{y}) \rightarrow B(\vec{a},\vec{x}) \not\in {{\mathcal E}}(s)$ or $A(\vec{a'},\vec{b'},\vec{y}) \not\in {{\mathcal E}}(t)$, where $A(\vec{a'},\vec{b'},\vec{y}) = A(\vec{{{\sf w}}'},\vec{{{\sf v}}'},\vec{y}) \{ \vec{{{\sf w}}}/\vec{a} , \vec{{{\sf v}}}/\vec{b}\}$, and hence either ${{\mathcal M}}\not\Vdash s :_{\{\vec{a}, \vec{b}\}} (A(\vec{a'},\vec{b'},\vec{y}) \r B(\vec{a},\vec{x}))$ or ${{\mathcal M}}\not\Vdash t :_{\{\vec{a}, \vec{b}\}} A(\vec{a'},\vec{b'},\vec{y})$. In the latter case, either ${{\mathcal M}}\not\Vdash \forall A$ and hence ${{\mathcal M}}\not\Vdash t :_{\{\vec{a}, \vec{b}\}} A(\vec{a'},\vec{b'},\vec{y})$, or ${{\mathcal M}}\Vdash \forall A$ and hence ${{\mathcal M}}\not\Vdash s :_{\{\vec{a}, \vec{b}\}} (A(\vec{a'},\vec{b'},\vec{y}) \r B(\vec{a},\vec{x}))$, since ${{\mathcal M}}\not\Vdash \forall (A \r B)$. Thus, in both cases we have either ${{\mathcal M}}\Vdash \neg s :_{\{\vec{a}, \vec{b}\}} (A(\vec{a'},\vec{b'},\vec{y}) \r B(\vec{a},\vec{x}))$ or ${{\mathcal M}}\Vdash \neg t :_{\{\vec{a}, \vec{b}\}} A(\vec{a'},\vec{b'},\vec{y})$. Therefore either ${{\mathcal M}}\Vdash \neg s :_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} (A(\vec{{{\sf w}}'},\vec{{{\sf v}}'},\vec{y}) \r B(\vec{{{\sf w}}},\vec{x}))$ or ${{\mathcal M}}\Vdash \neg t :_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}'},\vec{{{\sf v}}'},\vec{y})$. Suppose that the rule $(F!)$ is applied $$\AXC{$\neg !t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$} \RightLabel{$(F!)$} \UIC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$} \DP$$ Since ${{\mathcal M}}\Vdash \neg !t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$, we have ${{\mathcal M}}\Vdash \neg !t:_{\{\vec{a}, \vec{b}\}} t:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$ for some $\vec{a}, \vec{b} \in {{\mathcal D}}$. 
Thus either $t:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x}) \not\in {{\mathcal E}}(!t)$ or ${{\mathcal M}}\not\Vdash t :_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$. In the former case we have $A(\vec{a},\vec{x}) \not\in {{\mathcal E}}(t)$, and hence ${{\mathcal M}}\Vdash \neg t :_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$. In either case ${{\mathcal M}}\Vdash \neg t :_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Suppose that the rule $(Ctr)$ is applied $$\AXC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$} \RightLabel{$(Ctr)$} \UIC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{x})$} \DP$$ Since ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$, we have ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$ for some $\vec{a}, \vec{b} \in {{\mathcal D}}$. Thus either $A(\vec{a},\vec{x}) \not\in {{\mathcal E}}(t)$ or ${{\mathcal M}}\not\Vdash \forall A$. From this it follows that ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b},d\}} A(\vec{a},\vec{x})$ for an arbitrary $d \in {{\mathcal D}}$. Therefore ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{x})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Suppose that the rule $(Exp)$ is applied $$\AXC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{x})$} \RightLabel{$(Exp)$} \UIC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$} \DP$$ where ${{\sf u}}\not\in Par(A)$. Since ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{x})$, we have ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b},d\}} A(\vec{a},\vec{x})$ for some $\vec{a}, \vec{b},d \in {{\mathcal D}}$. Thus either $A(\vec{a},\vec{x}) \not\in {{\mathcal E}}(t)$ or ${{\mathcal M}}\not\Vdash \forall A$. 
From this it follows that ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{x})$. Therefore ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{x})$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Suppose that the rule $(Ins)$ is applied $$\AXC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{y},{{\sf u}})$} \RightLabel{$(Ins)$} \UIC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{y},x)$} \DP$$ Since ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{y},{{\sf u}})$, we have ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b},d\}} A(\vec{a},\vec{y},d)$ for some $\vec{a}, \vec{b},d \in {{\mathcal D}}$. Thus either $A(\vec{a},\vec{y},d) \not\in {{\mathcal E}}(t)$ or ${{\mathcal M}}\not\Vdash \forall \vec{y} A(\vec{a},\vec{y},d)$. Thus, by the Instantiation Condition $({{\mathcal E}}6)$, either $A(\vec{a},\vec{y},x) \not\in {{\mathcal E}}(t)$ or ${{\mathcal M}}\not\Vdash \forall x\forall \vec{y} A(\vec{a},\vec{y},x)$. Thus ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b},d\}} A(\vec{a},\vec{y},x)$. Therefore ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}},{{\sf u}}\}} A(\vec{{{\sf w}}},\vec{y},x)$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Suppose that the rule $(gen_x)$ is applied $$\AXC{$\neg gen_x(t):_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} \forall x A$} \RightLabel{$(gen_x)$} \UIC{$\neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A$} \DP$$ We consider the case where $A=A(\vec{{{\sf w}}},\vec{y},x)$, i.e. $x \in FVar(A)$. The case that $x$ is not free in $A$ is treated similarly. Since ${{\mathcal M}}\Vdash \neg gen_x(t):_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} \forall x A(\vec{{{\sf w}}},\vec{y},x)$, we have ${{\mathcal M}}\Vdash \neg gen_x(t):_{\{\vec{a}, \vec{b}\}} \forall x A(\vec{a},\vec{y},x)$ for some $\vec{a}, \vec{b} \in {{\mathcal D}}$. 
Thus either $\forall x A(\vec{a},\vec{y},x) \not\in {{\mathcal E}}(gen_x(t))$ or ${{\mathcal M}}\not\Vdash \forall \vec{y} \forall x A(\vec{a},\vec{y},x)$. Hence either $A(\vec{a},\vec{y},x) \not\in {{\mathcal E}}(t)$ or ${{\mathcal M}}\not\Vdash \forall A$. In either case ${{\mathcal M}}\Vdash \neg t:_{\{\vec{a}, \vec{b}\}} A(\vec{a},\vec{y},x)$, and therefore ${{\mathcal M}}\Vdash \neg t:_{\{\vec{{{\sf w}}}, \vec{{{\sf v}}}\}} A(\vec{{{\sf w}}},\vec{y},x)$. Hence ${{\mathcal M}}\Vdash \pi'$ as desired. Let $A$ be a sentence of ${{\sf FOLP}}$. If $A$ has an ${{\sf FOLP}}_{{\sf CS}}$-tableau proof, then it is ${{\sf FOLP}}_{{\sf CS}}$-valid. If the sentence $A$ is not ${{\sf FOLP}}_{{\sf CS}}$-valid, then there is an ${{\sf FOLP}}_{{\sf CS}}$-model ${{\mathcal M}}$ such that ${{\mathcal M}}\Vdash \neg A$. Then, by Lemma \[lem: soundness lemma\], there is no closed ${{\sf FOLP}}_{{\sf CS}}$-tableau beginning with $\neg A$. Therefore, $A$ does not have an ${{\sf FOLP}}_{{\sf CS}}$-tableau proof. Next we will prove the completeness theorem by making use of maximal consistent sets. Suppose $\Gamma$ is a set of closed $Par$-formulas. 1. $\Gamma$ is tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent if there is no closed ${{\sf FOLP}}_{{\sf CS}}$-tableau beginning with any finite subset of $\Gamma$. 2. $\Gamma$ is maximal if it has no proper tableau consistent extension (w.r.t. closed $Par$-formulas). 3. $\Gamma$ is $E$-complete (with members of $Par$ as witnesses) if $\bullet$ : $\exists x A(x) \in \Gamma$ implies $A({{\sf u}})\in \Gamma$ for some ${{\sf u}}\in Par$. $\bullet$ : $\neg \forall x A(x) \in \Gamma$ implies $\neg A({{\sf u}})\in \Gamma$ for some ${{\sf u}}\in Par$. By making use of the Henkin construction it is not hard to show the following result. Every tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set of sentences of ${{\sf FOLP}}$ can be extended to a tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent, maximal and $E$-complete set of closed $Par$-formulas. 
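The Henkin-style extension behind this lemma can be caricatured as an enumeration loop. The following sketch is purely schematic: the string encoding is ad hoc, and genuine tableau ${{\sf FOLP}}_{{\sf CS}}$-consistency is replaced by a trivial direct-contradiction check. It only illustrates the shape of the argument: add each formula whose addition keeps the set consistent, and give every admitted existential a fresh-parameter witness, which is what $E$-completeness demands.

```python
# A toy Lindenbaum-style extension loop (illustrative only).  Formulas are
# strings with "~" for negation; "consistency" here is just the absence of
# a direct contradiction A / ~A, standing in for tableau consistency.
def consistent(s):
    return not any(("~" + f) in s for f in s)

def extend(base, enumeration, witness):
    gamma = set(base)
    for f in enumeration:
        if consistent(gamma | {f}):
            gamma.add(f)
            if f.startswith("exists "):    # E-completeness: add a witness A(u)
                gamma.add(witness(f))
    return gamma

g = extend({"exists x Q(x)"},
           ["exists x Q(x)", "~Q(u1)", "Q(u1)"],
           lambda f: "Q(u1)")
print(sorted(g))    # ['Q(u1)', 'exists x Q(x)'] -- the witness blocks ~Q(u1)
```

The point of enumerating all closed $Par$-formulas is maximality: any formula not added could not have been consistently added, which is exactly the property the saturation lemma below exploits.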
It is easy to show that $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent sets are closed under ${{\sf FOLP}}_{{\sf CS}}$-tableau rules. For a non-branching rule like $$\di{\frac{\alpha}{\alpha_1 \;\; \alpha_2}}$$ this means that if $\alpha$ is in an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set $\Gamma$, then both $\alpha_1\in\Gamma$ and $\alpha_2\in\Gamma$. For a branching rule like $$\di{\frac{\beta}{\beta_1 | \beta_2}}$$ this means that if $\beta$ is in an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set $\Gamma$, then $\beta_1\in\Gamma$ or $\beta_2\in\Gamma$. For the rule $(F\cdot)$ this means that if $\neg s \cdot t :_X B\in\Gamma$, then for every formula $A$ such that $Par(A) \subseteq X$ either $\neg s:_X (A\r B) \in\Gamma$ or $\neg t:_X A \in\Gamma$. \[lem:downward saturated\] Suppose $\Gamma$ is an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set of closed $Par$-formulas. Then $\Gamma$ is closed under ${{\sf FOLP}}_{{\sf CS}}$-tableau rules. The proofs for the rules $(F\neg)$, $(F\r)$, and $(T\r)$ are standard. We detail the proofs for the other tableau rules. $(T\forall)$ : Suppose $\forall x A \in \Gamma$ and ${{\sf u}}$ is an arbitrary parameter. We want to show that $A({{\sf u}})\in \Gamma$. If this is not the case then, since $\Gamma$ is maximal, $\Gamma \cup \{A({{\sf u}})\}$ is not tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent. Hence there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite subset, say $\Gamma_0 \cup \{A({{\sf u}})\}$. But $\Gamma_0 \cup \{\forall x A\}$ is a finite subset of $\Gamma$ and, using rule $(T\forall)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. The case of $(F\exists)$ is similar. $(T\exists)$ : Suppose $\exists x A \in \Gamma$. Since $\Gamma$ is $E$-complete, $A({{\sf u}}) \in \Gamma$ for some parameter ${{\sf u}}$. The case of $(F\forall)$ is similar. $(F\cdot)$ : Suppose $\neg s \cdot t :_X B\in\Gamma$.
Suppose towards a contradiction that for some formula $A$ such that $Par(A) \subseteq X$ we have $\neg s:_X (A\r B)\not\in\Gamma$ and $\neg t:_X A\not\in\Gamma$. Since $\Gamma$ is maximal, $\Gamma \cup \{ \neg s:_X (A\r B) \}$ and $\Gamma \cup \{ \neg t:_X A\}$ are not tableau consistent. Thus there are closed tableaux for finite subsets, say $\Gamma_1 \cup \{ \neg s:_X (A\r B) \}$ and $\Gamma_2 \cup \{ \neg t:_X A\}$. But $\Gamma_1 \cup \Gamma_2 \cup \{\neg s \cdot t :_X B \}$ is a finite subset of $\Gamma$ and, using rule $(F\cdot)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. $(F!)$ : Suppose $\neg !t:_X t:_{X} A \in \Gamma$. We want to show that $\neg t:_{X} A \in \Gamma$. If this is not the case then, since $\Gamma$ is maximal, $\Gamma \cup \{\neg t:_{X} A\}$ is not tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent. Hence there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite subset, say $\Gamma_0 \cup \{\neg t:_{X} A\}$. But $\Gamma_0 \cup \{\neg !t:_X t:_{X} A\}$ is a finite subset of $\Gamma$ and, using rule $(F!)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. The cases of rules $(T:)$ and $(F+)$ are similar. $(Ctr)$ : Suppose $\neg t:_{X} A \in \Gamma$. We want to show that $\neg t:_{X{{\sf u}}} A \in \Gamma$. If this is not the case then, since $\Gamma$ is maximal, $\Gamma \cup \{\neg t:_{X{{\sf u}}} A\}$ is not tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent. Hence there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite subset, say $\Gamma_0 \cup \{\neg t:_{X{{\sf u}}} A\}$. But $\Gamma_0 \cup \{\neg t:_{X} A\}$ is a finite subset of $\Gamma$ and, using rule $(Ctr)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. $(Exp)$ : Suppose $\neg t:_{X{{\sf u}}} A \in \Gamma$ and ${{\sf u}}\not\in Par(A)$. We want to show that $\neg t:_{X} A \in \Gamma$.
If this is not the case then, since $\Gamma$ is maximal, $\Gamma \cup \{\neg t:_{X} A\}$ is not tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent. Hence there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite subset, say $\Gamma_0 \cup \{\neg t:_{X} A\}$. But $\Gamma_0 \cup \{\neg t:_{X{{\sf u}}} A\}$ is a finite subset of $\Gamma$ and, using rule $(Exp)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. $(Ins)$ : Suppose $\neg t:_{X} A({{\sf u}})\in \Gamma$. We want to show that $\neg t:_{X} A(x) \in \Gamma$. If this is not the case then, since $\Gamma$ is maximal, $\Gamma \cup \{\neg t:_{X} A(x)\}$ is not tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent. Hence there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite subset, say $\Gamma_0 \cup \{\neg t:_{X} A(x)\}$. But $\Gamma_0 \cup \{\neg t:_{X} A({{\sf u}})\}$ is a finite subset of $\Gamma$ and, using rule $(Ins)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. $(gen_x)$ : Suppose $\neg gen_x(t):_{X} \forall x A \in \Gamma$. We want to show that $\neg t:_{X} A \in \Gamma$. If this is not the case then, since $\Gamma$ is maximal, $\Gamma \cup \{\neg t:_{X} A\}$ is not tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent. Hence there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for a finite subset, say $\Gamma_0 \cup \{\neg t:_{X} A\}$. But $\Gamma_0 \cup \{\neg gen_x(t):_{X} \forall x A\}$ is a finite subset of $\Gamma$ and, using rule $(gen_x)$, there is a closed ${{\sf FOLP}}_{{\sf CS}}$-tableau for it, contra the tableau consistency of $\Gamma$. \[def:canonical model tableau\] Given an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set $\Gamma$ of closed $Par$-formulas, the canonical model ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$ with respect to $\Gamma$ and ${{\sf CS}}$ is defined as follows: - ${{\mathcal D}}=Par$.
- ${{\mathcal I}}(Q) = \{ ({{\sf u}}_1, \ldots, {{\sf u}}_n) \in {{\mathcal D}}~|~Q({{\sf u}}_1, \ldots, {{\sf u}}_n) \in \Gamma \}$, for every $n$-place predicate symbol $Q$. - ${{\mathcal E}}(t) = \{A~|~\neg t:_{Par(A)} A \not\in \Gamma\}$. Given an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set $\Gamma$ of closed $Par$-formulas, the canonical model ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$ with respect to $\Gamma$ and ${{\sf CS}}$ is an ${{\sf FOLP}}_{{\sf CS}}$-model. Suppose $\Gamma$ is an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set, and ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$ is the canonical model with respect to $\Gamma$ and ${{\sf CS}}$. We will show that the admissible evidence function ${{\mathcal E}}$ satisfies ${{\mathcal E}}1$-${{\mathcal E}}6$ from Definition \[def:FOLP-models\]. $({{\mathcal E}}1)$ : Suppose that $c:A\in{{\sf CS}}$. Then $Par(A) = \emptyset$. We have to show that $A\in{{\mathcal E}}(c)$. Since $\Gamma$ is tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent, $\neg c:A\not\in\Gamma$ and hence $\neg c:_{Par(A)} A\not\in\Gamma$. Thus $A\in{{\mathcal E}}(c)$. $({{\mathcal E}}2)$ : Suppose that $A\in{{\mathcal E}}(t)$ and $A\r B\in{{\mathcal E}}(s)$. We have to show that $B\in{{\mathcal E}}(s\cdot t)$. Let $X=Par(A \r B)=Par(A) \cup Par(B)$. By the definition of ${{\mathcal E}}$, $\neg t:_{Par(A)} A\not\in\Gamma$ and $\neg s:_X (A\r B)\not \in\Gamma$. Since $\Gamma$ is closed under rule $(Exp)$, $\neg t:_X A \not\in\Gamma$. Since $\Gamma$ is closed under rule $(F\cdot)$, $\neg s\cdot t:_X B\not\in\Gamma$. Since $\Gamma$ is closed under rule $(Ctr)$, $\neg s\cdot t:_{Par(B)} B\not\in\Gamma$. Hence, by the definition of ${{\mathcal E}}$, $B\in{{\mathcal E}}(s\cdot t)$. $({{\mathcal E}}3)$ : Suppose that $A\in{{\mathcal E}}(s)\cup {{\mathcal E}}(t)$. We have to show that $A\in{{\mathcal E}}(s+t)$.
If $A\in{{\mathcal E}}(s)$, then $\neg s:_{Par(A)} A \not\in \Gamma$. Since $\Gamma$ is closed under rule $(F+)$, $\neg s+t:_{Par(A)} A\not\in\Gamma$. Therefore, $A\in{{\mathcal E}}(s+t)$. The case that $A\in{{\mathcal E}}(t)$ is similar.

$({{\mathcal E}}4)$ : Suppose that $A\in{{\mathcal E}}(t)$ and ${{\mathcal D}}(A)=Par(A) \subseteq X$. First consider the case that $X \neq Par(A)$. We have to show that $t:_X A \in {{\mathcal E}}(!t)$. By the definition of ${{\mathcal E}}$, $\neg t:_{Par(A)} A \not\in \Gamma$. Since $\Gamma$ is closed under rule $(Exp)$, $\neg t:_X A \not\in \Gamma$. Since $\Gamma$ is closed under rule $(F!)$, $\neg !t:_X t:_X A\not\in\Gamma$. Therefore, $t:_X A\in{{\mathcal E}}(!t)$. The case that $X = Par(A)$ is similar.

$({{\mathcal E}}5)$ : Suppose that $A\in {{\mathcal E}}(t)$. We have to show that $\forall x A\in{{\mathcal E}}(gen_x(t))$. By the definition of ${{\mathcal E}}$, $\neg t:_{Par(A)} A \not\in \Gamma$. Since $\Gamma$ is closed under rule $(gen_x)$, $\neg gen_x(t) :_{Par(A)} \forall x A\not\in\Gamma$. Therefore, $\forall x A\in{{\mathcal E}}(gen_x(t))$.

$({{\mathcal E}}6)$ : Suppose that $A(x) \in{{\mathcal E}}(t)$ and ${{\sf u}}\in {{\mathcal D}}=Par$. We have to show that $A({{\sf u}}) \in {{\mathcal E}}(t)$. Let $X = Par(A(x))$. By the definition of ${{\mathcal E}}$, $\neg t:_{X} A(x) \not\in \Gamma$. We distinguish two cases. (1) Suppose ${{\sf u}}\not\in X$. Since $\Gamma$ is closed under rule $(Exp)$, $\neg t:_{X{{\sf u}}} A(x) \not\in \Gamma$. Since $\Gamma$ is closed under rule $(Ins)$, $\neg t:_{X{{\sf u}}} A({{\sf u}}) \not\in \Gamma$. Therefore, $A({{\sf u}}) \in {{\mathcal E}}(t)$. (2) Suppose ${{\sf u}}\in X$. Since $\Gamma$ is closed under rule $(Ins)$, $\neg t:_{X} A({{\sf u}}) \not\in \Gamma$. Therefore, $A({{\sf u}}) \in {{\mathcal E}}(t)$.
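As a concrete illustration of how the canonical model reads ${{\mathcal I}}$ and ${{\mathcal E}}$ directly off $\Gamma$, here is a minimal Python sketch. All encodings are hypothetical and purely illustrative (atoms as tuples, the signed formula $\neg t:_X A$ as a quadruple, and a two-element toy `Gamma`); it is not an implementation of the tableau system itself.

```python
from itertools import product

# Hypothetical toy encodings (not from the paper): the atom Q(u1,...,un) is the
# tuple ("Q", "u1", ..., "un"); the signed formula "not t :_X A" is ("neg", t, X, A).

Par = {"u1", "u2"}  # the domain D = Par

# Fragment of a toy maximal consistent set Gamma -- only what the construction reads:
Gamma = {
    ("Q", "u1"),                     # Q(u1) is in Gamma
    ("neg", "c", frozenset(), "P"),  # "not c :_{} P" is in Gamma, so P is not in E(c)
}

def interpretation(Q, arity):
    """I(Q) = { (u1,...,un) in D^n | Q(u1,...,un) in Gamma }."""
    return {us for us in product(sorted(Par), repeat=arity) if (Q, *us) in Gamma}

def evidence(t, candidates):
    """E(t) = { A | "not t :_{Par(A)} A" not in Gamma }; candidates are (A, Par(A)) pairs."""
    return {A for A, X in candidates if ("neg", t, frozenset(X), A) not in Gamma}

print(interpretation("Q", 1))                       # {('u1',)}
print(evidence("c", [("P", set()), ("R", set())]))  # {'R'}
```

On this toy $\Gamma$, the evidence function omits $P$ from ${{\mathcal E}}(c)$ precisely because $\neg c:_{\emptyset} P \in \Gamma$, mirroring the definition; verifying ${{\mathcal E}}1$-${{\mathcal E}}6$ would then amount to the closure properties of $\Gamma$ used in the proof above.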
Suppose $\Gamma$ is an $E$-complete maximally tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set of closed $Par$-formulas and ${{\mathcal M}}=({{\mathcal D}},{{\mathcal I}},{{\mathcal E}})$ is the canonical model with respect to $\Gamma$ and ${{\sf CS}}$. Then for every closed $Par$-formula $F$:

1. $F \in \Gamma$ implies ${{\mathcal M}}\Vdash F$.
2. $\neg F \in \Gamma$ implies ${{\mathcal M}}\not\Vdash F$.

By induction on the complexity of $F$. The base case and the propositional and quantified inductive cases are standard. The proof for the case that $F=t:_X A$ is as follows. Note that $Par(A) \subseteq X$.

Assume $t:_X A \in\Gamma$. Since $\Gamma$ is ${{\sf FOLP}}_{{\sf CS}}$-consistent, $\neg t:_X A \not\in \Gamma$. If $X=Par(A)$, then $A \in {{\mathcal E}}(t)$. If $X \neq Par(A)$, then since $\Gamma$ is closed under rule $(Ctr)$, $\neg t:_{Par(A)} A \not\in \Gamma$. Thus $A \in {{\mathcal E}}(t)$. On the other hand, since $t:_X A \in\Gamma$ and $\Gamma$ is closed under rule $(T:)$, $\forall A \in \Gamma$. Let $FVar(A) = \{\vec{x}\}$. Thus $\forall \vec{x} A(\vec{x}) \in \Gamma$. Since $\Gamma$ is closed under rule $(T\forall)$, $A(\vec{{{\sf u}}}) \in \Gamma$ for any $\vec{{{\sf u}}} \in Par$. Hence, by the induction hypothesis, ${{\mathcal M}}\Vdash A(\vec{{{\sf u}}})$ for any $\vec{{{\sf u}}} \in Par={{\mathcal D}}$, and thus ${{\mathcal M}}\Vdash \forall A$. Therefore, ${{\mathcal M}}\Vdash t:_X A$.

Assume $\neg t:_X A\in\Gamma$. If $X=Par(A)$, then $A \not\in {{\mathcal E}}(t)$. On the other hand, if $X \neq Par(A)$, then since $\Gamma$ is closed under rule $(Exp)$, $\neg t:_{Par(A)} A \in \Gamma$, and hence $A \not\in {{\mathcal E}}(t)$. In either case ${{\mathcal M}}\not\Vdash t:_X A$.

\[thm:completeness tableaux\] Let $A$ be a sentence of ${{\sf FOLP}}$. If $A$ is ${{\sf FOLP}}_{{\sf CS}}$-valid, then it has a ${{\sf FOLP}}_{{\sf CS}}$-tableau proof.
If $A$ does not have a ${{\sf FOLP}}_{{\sf CS}}$-tableau proof, then $\{\neg A\}$ is a tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent set and can be extended to a tableau ${{\sf FOLP}}_{{\sf CS}}$-consistent, maximal and $E$-complete set $\Gamma$ of closed $Par$-formulas. Since $\neg A\in\Gamma$, by the Truth Lemma, ${{\mathcal M}}\not\Vdash A$, where ${{\mathcal M}}$ is the canonical model of ${{\sf FOLP}}_{{\sf CS}}$ with respect to $\Gamma$ and ${{\sf CS}}$. Therefore $A$ is not ${{\sf FOLP}}_{{\sf CS}}$-valid.

[**Acknowledgments**]{}\
This research was in part supported by a grant from IPM (No. 95030416).

[^1]: Ctr and Exp are abbreviations for Contraction and Expansion, respectively. The rule AN is called Axiom Necessitation.